Tech Companies Join Forces With US Government To Create Responsible AI Solutions
The Joe Biden administration has struck an agreement with seven prominent artificial intelligence (AI) technology firms, including Google, OpenAI, and Meta, to implement new measures aimed at effectively addressing the potential risks linked to AI.
The measures include testing the safety of AI systems and making the results of those tests public. The seven companies are Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.
“These commitments are real and concrete. Artificial intelligence is changing people’s lives around the world. The people here are critical to shepherding that innovation responsibly and safely,” Biden said after a meeting at the White House late Friday.
“Artificial intelligence should benefit society as a whole. For this to happen, these powerful new technologies must be built and deployed responsibly,” said Nick Clegg, Meta’s President of Global Affairs.
“As we develop new AI models, technology companies should be transparent about how their systems work and work closely with industry, government, academia and civil society,” he added.
As part of the agreement, the companies have committed to security testing of their AI systems by internal and external experts before release.
They have also agreed to implement watermarks so that people can identify AI-generated content, and to report publicly on their systems’ capabilities and limitations on a regular basis.
The companies will also investigate risks such as bias, discrimination and invasion of privacy.
“This is a serious responsibility, we have to get it right. It also has huge, huge potential,” Biden said.
OpenAI said the watermarking commitments would require companies to “develop tools or APIs to determine whether certain content was created on their system.”
Google committed earlier this year to introducing similar disclosures.
Earlier this week, Meta announced that it would open-source its large language model, Llama 2, making it freely available to researchers, unlike OpenAI’s GPT-4.