Joe Biden Warns of Potential Dangers of Artificial Intelligence as Tech Companies Pledge Safeguards
President Joe Biden emphasized the need for the United States to remain vigilant against the potential risks posed by artificial intelligence. He outlined new safeguards agreed to by leading AI companies and pledged further governmental action to keep pace with advances in the field.
“These commitments are real and concrete. They help the industry fulfill its fundamental responsibility to the American people to develop safe, secure and reliable technologies that benefit society and uphold our values,” Biden said Friday.
Executives from Amazon.com Inc., Alphabet Inc., Meta Platforms Inc., Microsoft Corp., OpenAI, Anthropic and Inflection AI, all of which committed to implementing the transparency and security measures, joined Biden at the White House for the announcement.
Biden said the companies’ actions are just a first step and promised to take executive action while working with Congress to enact AI rules and regulations. “We have to be sharp-eyed and alert to threats,” he said.
The companies have agreed to subject new AI systems to internal and external testing before release and to allow outside groups to probe them for security flaws, discriminatory tendencies, or risks to consumer privacy, health information or security. The companies have also promised to share information with governments, civil society and academia, and to report vulnerabilities.
“These commitments, which companies will implement immediately, emphasize three fundamental principles: safety, security and trust,” Biden said.
Friday’s commitments are the result of months of behind-the-scenes lobbying. Biden and Vice President Kamala Harris met in May with several of the leaders present at Friday’s event, warning them that the industry has a responsibility to ensure the safety of its products.
Biden aides say artificial intelligence has been a top priority for the president, who frequently raises the issue in meetings with advisers. He has also directed cabinet secretaries to explore how the technology could intersect with their agencies.
The package of safeguards formalizes and expands some measures the large AI companies have already implemented, but the commitments are only voluntary. The guidelines do not require approval from specific third-party groups before companies can release AI technologies, and companies are required only to report, not eliminate, risks such as potential misuse or bias.
“It’s a moving target,” White House Chief of Staff Jeff Zients said in an interview. “Not only do we need to implement and enforce these commitments, but we need to figure out the next round of commitments as technology changes.”
Zients and other administration officials have said it will be difficult to keep pace with new technologies without legislation from Congress that imposes stricter rules and includes special funding for regulators.
Biden said Friday that new laws, regulations and oversight will be needed.
Evolving technology
AI companies said ahead of the meeting that the commitments would help them better manage the risks of the rapidly developing technology, public interest in which has exploded in recent months.
In a statement on Friday, Nick Clegg, Meta’s president of global affairs, called the voluntary commitments “an important first step in ensuring that responsible guardrails are created for AI and set a model for other governments to follow.”
Microsoft President Brad Smith said Friday’s commitments “will help ensure that the promise of AI stays ahead of the risks.” He said Microsoft supports additional measures to govern the most capable AI models, including a licensing system, “know your customer” requirements and a national registry of high-risk systems.
Kent Walker, Google’s president of global affairs, tied Friday’s commitments to other international efforts by the Group of Seven and the Organisation for Economic Co-operation and Development to “maximize the benefits of AI and minimize its risks.” He said artificial intelligence is already used in many of Google’s most popular products, such as Search, Maps and Translate, and that the company is designing its systems to be “responsible from the ground up.”
The White House said it consulted with the governments of 20 other countries ahead of Friday’s announcement. But the pace of oversight is already lagging behind the development of artificial intelligence.
In Europe, the EU’s AI Act is far ahead of any counterpart in the US Congress, and leaders there have urged companies to make voluntary commitments to secure their technology before the law takes effect.