News Organizations Call for Consent Before AI Uses Their Content
On Wednesday, a consortium of news organizations said that artificial intelligence firms should be required to seek permission before using copyrighted text and images to generate content.
Media organizations including AFP, Getty Images and the Associated Press said in an open letter that AI tools could crush their business models, flood the web with misinformation and violate copyright laws.
AI tools such as the chatbot ChatGPT and the image generators DALL-E 2, Stable Diffusion and Midjourney exploded in popularity last year thanks to their ability to generate rich content from short text prompts.
However, the companies behind the tools, including OpenAI and Stability AI, are already facing lawsuits from artists, writers and others who claim their work has been plagiarized.
Wednesday’s open letter from news organizations including the European Pressphoto Agency and Gannett/USA TODAY is the latest attempt to influence the debate by organizations that have a lot to lose if AI companies continue to scrape material from the Internet without restriction.
“Generative AIs and large language models enable any actor, regardless of their intent, to produce and distribute synthetic content at a scale far beyond our previous experience,” the organizations wrote.
They listed potential problems, including copyright infringement, a flood of false or biased content, and a vicious circle in which media groups are no longer able to fund the journalism that provides reliable information.
The organizations said they wanted to be part of the solution and called for discussions to ensure legal access to content.
AI companies have attracted billions of dollars in investment, and major tech firms, led by Google and Microsoft, have embraced the technology.
Major companies in the field have formed several coalitions – including the Partnership on AI and the Frontier Model Forum – and are largely calling for self-regulation.
They pledged in July to take steps such as watermarking AI content and have broadly committed to combating misinformation, but without specific timetables or measures.
However, there has been little talk about copyright, which is likely to be a much more difficult and possibly more expensive issue.
Google used a public consultation in Australia earlier this year to argue for a “fair dealing” exception to copyright law specifically to enable data mining for artificial intelligence.