OpenAI might get sued by The New York Times for using its intellectual property, a report alleges. (REUTERS)

OpenAI Facing Potential Lawsuit from The New York Times over Copyright Infringement by ChatGPT

For a while now, there has been an ongoing conflict between artificial intelligence (AI) and people in creative industries. In the United States, Hollywood actors and writers have gone on strike seeking to restrict the use of AI, fearing potential job losses. Authors have joined the movement as well, sending an open letter to companies involved in AI development and urging them to fairly compensate writers for the use of their work as "fuel" for AI. Now, it has been reported that The New York Times might pursue legal action against OpenAI, the creator of the widely used AI chatbot ChatGPT.

According to an NPR report, "Lawyers for the newspaper are exploring whether to sue OpenAI to protect intellectual property rights related to its reporting, according to two people with direct knowledge of the discussions." The report also noted that the two parties have been in tense negotiations over a licensing agreement under which OpenAI would pay the newspaper an agreed amount for using its articles to train AI models.

However, unnamed sources cited in the report said the publication is losing patience with negotiations that appear to be going nowhere and is now weighing the legal route.

NYT could sue OpenAI

If NYT does indeed take the legal route, it could become the biggest intellectual property legal battle yet involving artificial intelligence. The central point of contention is that ChatGPT can act as a direct competitor to the publication by answering questions based on its reporting.

This is not an isolated case either. The Authors Guild, a US writers' organization, said in a press release accompanying an open letter signed by more than 10,000 authors: "AI companies like to say that their machines simply 'read' the texts they have been trained on. However, this is an inaccurate anthropomorphization. Instead, they copy the texts themselves into the software and then reproduce them over and over again."

As AI is an emerging technology, policymakers are still trying to understand its scope as they draft regulations for its use. So far, the European Union has drafted AI legislation that would require AI companies to be transparent about the sources of the data used to train their models and to seek consent from the parties the data comes from. However, the law is not yet in force, and this part of the regulation has been deferred for now, with more discussions expected in the near future.
