An unprecedented partnership between OpenAI and two well-known media organizations, The Atlantic and Vox Media, has prompted journalists to raise concerns about the ethical implications of AI-generated content. The collaboration has sparked debate within the journalism community: some applaud the innovation, while others question the authenticity and integrity of the generated content. As the line between human and AI-generated journalism blurs, the industry faces a crossroads, grappling with the technology's impact on the core principles of journalism.
Unions Express Apprehension Over AI Training Agreements
Axios recently reported that OpenAI had entered into agreements with The Atlantic and Vox Media that allow the ChatGPT maker to use their editorial content to further train its language models. Writers at these publications, along with their unions, were taken aback by the announcements and have raised concerns: two unions released statements expressing “dismay” and “concern” over the agreements.
The Atlantic Union conveyed its unease with the OpenAI deal, citing a lack of transparency from management about the agreement's implications for members' work. Similarly, the Vox Media Union, which represents several publications under Vox Media, voiced serious concerns about the partnership's potential adverse effects on its members and about the broader ethical questions surrounding generative AI.
Usage of Licensed Content for AI Training Sparks Controversy
OpenAI has previously acknowledged using copyrighted data from publications like these to train AI models such as GPT-4, which powers the ChatGPT assistant, while asserting that the practice is legitimate. The company has also licensed training content from publishers such as Axel Springer and from platforms like Reddit and Stack Overflow, deals that drew backlash from users of those platforms.
Under the multi-year agreements with The Atlantic and Vox, OpenAI gains access to the publishers’ archived content, dating back to 1857 in The Atlantic’s case, as well as current articles to train AI language models like ChatGPT. In return, the publishers receive undisclosed financial compensation and the opportunity to leverage OpenAI’s technology to develop new journalism products.
Journalists and Unions React to the Partnerships
The news of the agreements caught journalists and their unions off guard. Vox reporter Kelsey Piper expressed frustration that writers were not consulted before the announcement, though she noted assurances from the editor-in-chief about protecting their work. Journalists at The Atlantic and Vox responded with critical articles, voicing skepticism and apprehension about what partnering with OpenAI means for journalistic integrity and the broader digital ecosystem.
Ongoing Legal Battles Over AI Training Practices
While some publications have embraced collaborations with OpenAI, others, like The New York Times, have taken a different stance. The Times is suing OpenAI over the scraping of its content for AI training, accusing the company of unauthorized use of its articles to train AI models; OpenAI defends its practices as fair use. The lawsuit remains unresolved, and transparency about how external entities use writers' creative output remains a key concern for The Atlantic Union.
Ultimately, the intersection of AI technology and journalism raises complex ethical and practical questions that demand careful consideration and transparency to uphold the values of journalistic integrity and intellectual property rights.