Sam Altman of OpenAI Calls for International AI Regulatory Body to Monitor Risks
During a visit to the United Arab Emirates on Tuesday, Sam Altman, the CEO of OpenAI who is on a world tour to discuss artificial intelligence, cautioned that the technology could pose an “existential risk” to humanity. He proposed that an international body, modeled on the International Atomic Energy Agency, supervise the revolutionary technology.
Altman, 38, said: “The world’s challenge is how do we manage these risks and make sure we still have these huge advantages. Nobody wants to destroy the world.”
OpenAI’s ChatGPT, a popular chatbot, has captured the world’s attention for providing article-like responses to user prompts. In May, Altman was among the industry leaders who signed a statement warning that “reducing the risk of AI-induced extinction should be a global priority, alongside other societal risks such as pandemics and nuclear war.”
Altman cited the International Atomic Energy Agency, the United Nations’ nuclear watchdog, as an example of how the world has come together to monitor nuclear energy. The agency was established in the years after the United States dropped atomic bombs on Japan at the end of World War II.
“Let’s make sure we come together as a planet — and I hope this place plays a real role in that,” Altman said. “We talk about the IAEA as a model where the world said, ‘OK, very dangerous technology, let’s put in some guardrails,’ and I think we can do both.
“I think in this case it’s a nuanced message, because it’s saying it’s not that dangerous today but it could get dangerous quickly. But we can thread that needle.”
Lawmakers around the world are also examining artificial intelligence. The 27-nation European Union is pursuing an AI law that could become the de facto global standard for the technology. Testifying before the US Congress in May, Altman said government intervention would be critical to managing the risks posed by artificial intelligence.
But the United Arab Emirates, an autocratic federation of seven hereditary sheikhdoms, illustrates another side of the dangers of artificial intelligence. Speech remains tightly controlled there, and rights groups warn that the UAE and other Gulf states regularly use spyware to monitor activists, journalists and others. Those restrictions affect the flow of accurate information, the same details that machine-learning systems like ChatGPT rely on to deliver their responses to users.
Among the speakers opening for Altman at the event, held at the Abu Dhabi Global Market (ADGM), was Andrew Jackson, CEO of the Inception Institute of AI, which is described as a G42 company.
G42 is linked to Sheikh Tahnoun bin Zayed Al Nahyan, Abu Dhabi’s powerful national security adviser and deputy ruler. G42’s CEO, Peng Xiao, for years ran Pegasus, a subsidiary of DarkMatter, an Emirati security firm that has come under scrutiny for hiring former CIA and NSA staffers, as well as others from Israel. G42 also owns a video and voice calling app that reportedly served as a spying tool for the Emirati government.
Onstage, Jackson described himself as a “representative of the AI ecosystem of Abu Dhabi and the United Arab Emirates.”
“We are a political force,” he said, “and we will be central to the regulation of AI globally.”