G-7 Nations Call for AI Companies to Implement Watermarks and Audits
In an effort to bridge the gap between Europe and the US, the Group of Seven nations is set to ask technology companies to adhere to a set of guidelines intended to minimize the risks posed by artificial intelligence systems.
The 11 voluntary draft guidelines include external testing of AI products before they are deployed, public reports on security measures, and controls to protect intellectual property, according to a copy seen by Bloomberg News. They could be agreed upon as soon as next week in Japan. The document is still under discussion, and both its content and the timing of the announcement may change.
Still, the countries – Canada, France, Germany, Italy, Japan, Britain and the United States – are divided over whether companies' compliance should be monitored, people familiar with the matter said. While the United States opposes any kind of oversight, the European Union is pushing for a mechanism that would check compliance and publicly name companies that have broken the rules, said the people, who asked not to be identified because the talks are private.
After OpenAI’s ChatGPT service set off a race among tech companies to develop their own AI systems and applications, governments around the world have been grappling with how to put guardrails on the disruptive technology while still reaping its benefits.
The EU is likely to be the first Western government to establish mandatory rules for AI developers. Its proposed artificial intelligence law is in final negotiations with the goal of reaching an agreement by the end of the year.
The United States has urged the other G-7 countries to embrace the voluntary commitments it reached in July with companies including OpenAI, Microsoft Corp. and Alphabet Inc.’s Google. President Joe Biden’s administration has also called for artificial intelligence regulation in the United States, but the government is limited in what it can do without action from Congress.
The proposed guidelines include the following requirements:
- Conduct internal and external testing before and after product deployment to check for security vulnerabilities, including “red teaming” exercises that emulate attacks.
- Publish safety and security assessment reports and share information with governments, academia and other organizations.
- Disclose privacy and risk management practices, and implement physical security and cybersecurity controls.
- Identify content created by artificial intelligence with a watermark or other methods.
- Invest in AI security research.
- Prioritize the development of artificial intelligence systems that address global challenges, including the climate crisis, health and education.
- Adopt international standards for testing and content verification.
- Manage the data entering the systems to protect intellectual property rights and personal data.