
YouTube case in US Supreme Court could change ChatGPT and AI protections: Here’s how

WASHINGTON: When the US Supreme Court decides in the coming months whether to weaken the powerful shield protecting internet companies, the ruling could also have implications for fast-developing technologies such as the artificial intelligence chatbot ChatGPT.

The court is due to decide by the end of June whether Alphabet Inc’s YouTube can be sued over its video recommendations to users. The case tests whether a US law that shields technology platforms from legal liability for content their users post online also applies when companies use algorithms to target users with recommendations.

What the court decides on these issues is relevant beyond social media platforms. Its ruling could affect the emerging debate over whether companies that develop generative AI chatbots, such as ChatGPT from OpenAI, in which Microsoft Corp is a major investor, or Bard from Alphabet’s Google, should be protected from legal claims such as defamation or invasion of privacy, according to technology and legal experts.

That’s because the algorithms that power generative AI tools like ChatGPT and its successor, GPT-4, operate in a somewhat similar way to those that suggest videos to YouTube users, the experts added.

“The debate is really about whether the organization of information available through recommendations is so significant in shaping content that it becomes responsible,” said Cameron Kerry, a visiting fellow at the Brookings Institution think tank in Washington and an expert on artificial intelligence. “You have similar problems with a chatbot.”

Representatives of OpenAI and Google did not respond to requests for comment.

During arguments in February, Supreme Court justices expressed uncertainty about whether to weaken the protections enshrined in the law, known as Section 230 of the Communications Decency Act of 1996. Although the case does not directly involve generative AI, Justice Neil Gorsuch noted that AI tools that generate “poetry” and “polemics” would likely not enjoy such legal protection.

The case is just one aspect of the emerging debate over whether Section 230 immunity should apply to AI models that are trained on existing internet data but are capable of producing original works.

Section 230 protections generally apply to third-party content from users of a technology platform, not to information a company helped develop. Courts have yet to weigh in on whether a response from an AI chatbot would be covered.

“THE CONSEQUENCES OF THEIR OWN ACTIONS”

Democratic Sen. Ron Wyden, who helped draft the law while in the House of Representatives, said liability protections should not apply to generative AI tools because such tools “create content.”

“Section 230 is about protecting users and sites for hosting and organizing user speech. It should not protect companies from the consequences of their own actions and products,” Wyden said in a statement to Reuters.

The technology industry has pushed to preserve Section 230 despite bipartisan opposition to the immunity. Industry advocates say tools like ChatGPT operate much like search engines, directing users to existing content in response to a query.

“AI doesn’t really create anything. It takes existing content and puts it in a different way or in a different format,” said Carl Szabo, vice president and general counsel at technology industry trade group NetChoice.

Szabo said a weakened Section 230 would present an impossible task for AI developers, threatening to expose them to a flood of lawsuits that could stifle innovation.

Some experts predict that courts may choose a middle ground and examine the context in which the AI model generated a potentially harmful response.

In cases where an AI model appears to paraphrase existing sources, the protections may still apply. But chatbots like ChatGPT have been known to invent fictitious responses that appear to have no connection to information found elsewhere online, a scenario experts said would likely not be protected.

Hany Farid, a technologist and professor at the University of California, Berkeley, said it stretches the imagination to argue that AI developers should be immune from lawsuits over the models they “program, train and deploy.”

“When companies are held accountable in civil lawsuits for the harms of the products they produce, they produce safer products,” Farid said. “And when they’re not responsible, they produce less safe products.”

The case before the Supreme Court involves an appeal by the family of Nohemi Gonzalez, a 23-year-old California college student who was fatally shot in the 2015 rampage by Islamist militants in Paris, of a lower court’s decision to dismiss the family’s lawsuit against YouTube.

The lawsuit accused Google of providing “material support” for terrorism, alleging that YouTube, through the platform’s algorithms, unlawfully recommended videos by the Islamic State militant group, which claimed responsibility for the Paris attacks, to certain users.
