OpenAI Proposes Licensing for Advanced AI Systems
An internal OpenAI policy memo shows the company endorses the idea that anyone seeking to develop advanced artificial intelligence systems should obtain a government license. The document also indicates the company is willing to disclose the data it uses to train image generators.
The creator of ChatGPT and DALL-E outlined a series of AI policy commitments in an internal document following a May 4 meeting between White House officials and technology leaders, including OpenAI CEO Sam Altman. “We are committed to working with the US government and policymakers around the world to support the development of licensing requirements for future generations of the most capable foundation models,” the San Francisco-based company said in a draft.
The idea of a government licensing system co-developed by AI heavyweights like OpenAI sets the stage for a potential clash with startups and open-source developers, who may see it as an attempt to make it harder for newcomers to break into the space. This isn’t the first time OpenAI has floated the idea: at a US Senate hearing in May, Altman backed the creation of an agency that, he said, could issue licenses for AI products and revoke them if anyone broke the rules.
The policy document comes just as Microsoft Corp., Alphabet Inc.’s Google, and OpenAI are expected on Friday to publicly commit to safeguards in developing the technology, as urged by the White House. According to people familiar with the plans, the companies will pledge to develop and deploy artificial intelligence responsibly.
OpenAI cautioned that the ideas presented in the internal policy document differ from those the White House will soon announce jointly with tech companies. Anna Makanju, the company’s vice president of global affairs, said in an interview that the company is not “pushing” for licenses so much as it considers such permits a “realistic” way for governments to keep track of emerging systems.
“It’s important that governments are aware if super-powerful systems with potential adverse effects are coming,” she said, and “there are very few ways to make sure governments are aware of these systems if someone is not willing to self-report the way we do.”
Makanju said that OpenAI supports licensing only for AI models more powerful than its current GPT-4 and wants to ensure that smaller startups are spared excessive regulatory burdens. “We don’t want to stifle the ecosystem,” she said.
OpenAI also said in the internal policy document that it is willing to be more transparent about the data it uses to train image generators like DALL-E, committing to “incorporating a provenance approach” by the end of the year. Data provenance, the practice of documenting where data comes from and holding developers accountable for being transparent about it, is seen by policymakers as critical to preventing AI tools from spreading misinformation and bias.
The commitments outlined in OpenAI’s memo closely follow some of Microsoft’s policy proposals announced in May. OpenAI has stated that despite receiving a $10 billion investment from Microsoft, it remains an independent company.
The company revealed in the document that it is researching watermarking, a method for tracing the authenticity and copyright status of AI-generated images, as well as techniques for detecting and disclosing AI-generated content. It plans to publish its findings.
The company also said in the document that it is open to external red-teaming, that is, allowing outsiders to probe its systems for vulnerabilities on several fronts, including offensive content, manipulation and misinformation, and bias. It also stated in the memo that it supports the establishment of an information-sharing center for cybersecurity collaboration.
In the memo, OpenAI appears to acknowledge the risks that artificial intelligence systems pose to the labor market and to inequality. The company said in the draft that it would conduct research and make recommendations to policymakers on protecting the economy from potential “disruptions.”