OpenAI’s Board Has Authority to Override CEO Sam Altman’s Decisions on AI Model Releases
Artificial intelligence company OpenAI has laid out a framework for addressing safety in its most advanced models, including giving its board of directors the power to reverse safety decisions, according to a plan posted on its website Monday.
Backed by Microsoft, OpenAI will deploy its latest technology only if it is deemed safe in specific areas such as cybersecurity and nuclear threats. The company is also establishing an advisory group that will review safety reports and send them to the company’s executives and board of directors. Although executives make the decisions, the board can overturn them.
Since the launch of ChatGPT a year ago, the potential dangers of AI have been on the minds of both AI researchers and the general public. Generative AI technology has dazzled users with its ability to write poems and essays, but it has also raised safety concerns over its potential to spread disinformation and manipulate people.
In April, a group of AI leaders and experts signed an open letter calling for a six-month pause in the development of systems more powerful than OpenAI’s GPT-4, citing potential societal risks. According to a May Reuters/Ipsos poll, more than two-thirds of Americans are concerned about the potential negative effects of artificial intelligence, and 61 percent believe it could threaten civilization.