OpenAI CEO Altman proposed the formation of a U.S. or global agency that would license the most powerful AI systems

OpenAI CEO Sam Altman says AI should be regulated

The head of the AI company that makes ChatGPT told Congress on Tuesday that government intervention is critical to reducing the risks of increasingly powerful AI systems.

“As technology develops, we understand that people are concerned about how it could change the way we live. So are we,” OpenAI CEO Sam Altman said at the Senate hearing.

Altman proposed the creation of a US or global agency that would license the most powerful AI systems and have the authority “to take away that license and ensure compliance with safety standards.”

His San Francisco-based startup exploded in popularity after launching ChatGPT late last year, a free chatbot tool that answers questions with convincingly human-like responses.

What began as a panic among teachers over the use of ChatGPT to cheat on homework has expanded into wider concerns about the ability of the latest “generative AI tools” to mislead people, spread falsehoods, violate copyright protections and upend some jobs.

And while there is no immediate sign that Congress will enact sweeping new AI rules, as European lawmakers are doing, societal concerns brought Altman and other tech leaders to the White House earlier this month and have prompted U.S. agencies to vow to crack down on harmful AI products that violate applicable civil rights and consumer protection laws.

Sen. Richard Blumenthal, the Connecticut Democrat who chairs the Senate Judiciary Committee’s Subcommittee on Privacy, Technology and the Law, opened the hearing with a recorded speech that sounded like the senator but was actually a voice clone trained on Blumenthal’s speeches, reciting an opening statement written by ChatGPT after he asked the chatbot to compose it.

The result was impressive, Blumenthal said, but added, “What if I had asked it, and what if it had endorsed the surrender of Ukraine or the leadership of (Russian President) Vladimir Putin?”

Blumenthal said AI companies should be required to test their systems and disclose known risks before releasing them, expressing particular concern about how future AI systems could destabilize the labor market.

Pressed on his own worst fears about artificial intelligence, Altman mostly avoided specifics, saying only that the industry could cause “significant damage to the world” and that “if this technology goes wrong, it can go quite wrong.”

But he later suggested that the new regulatory agency should introduce safeguards to prevent AI models that could “self-replicate and self-exfiltrate into the wild” – hinting at futuristic concerns about advanced AI systems that could manipulate humans into giving up control.

OpenAI, co-founded by Altman in 2015 with backing from technology billionaire Elon Musk, has evolved beyond its origins as a nonprofit research laboratory with a safety-focused mission. Its other popular AI products include the image generator DALL-E.

Microsoft has invested billions of dollars in the startup and integrated its technology into its own products, including its search engine Bing.

Altman also plans to embark on a global tour this month, visiting national capitals and major cities on six continents to discuss the technology with policymakers and the public. On the eve of his Senate testimony, he dined with dozens of US lawmakers, several of whom told CNBC they were impressed by his comments.

Also testifying were Christina Montgomery, IBM’s privacy and trust chief, and Gary Marcus, a New York University professor emeritus who was part of a group of AI experts that called on OpenAI and other tech companies to pause development of more powerful AI models for six months to give society more time to consider the risks.

The letter was in response to the March release of OpenAI’s latest model, GPT-4, which is described as more powerful than ChatGPT.

Sen. Josh Hawley of Missouri, the panel’s ranking Republican, said the technology has big implications for elections, jobs and national security. He said Tuesday’s hearing was a “critical first step toward understanding what Congress should do.”

Several tech executives have said they welcome some form of AI oversight, but have warned against overly heavy-handed regulations.

Altman and Marcus both called for an AI-focused regulator, preferably an international one, with Altman citing the precedent set by the UN nuclear agency and Marcus comparing it to the US Food and Drug Administration. But IBM’s Montgomery asked Congress to take a “precision regulation” approach.

“We believe AI should be regulated fundamentally at the point of risk,” Montgomery said, calling for rules that govern specific uses of AI rather than the technology itself.
