Europe Taking the Lead in AI Regulation
On Wednesday, European lawmakers approved the world’s first comprehensive regulations for artificial intelligence, a significant milestone as governments worldwide strive to rein in AI.
Brussels’ years-long push to create artificial intelligence guardrails has grown more urgent as the rapid rise of chatbots like ChatGPT shows the benefits the new technology can bring – and the new risks it poses.
Here’s an overview of the EU’s AI law:
HOW DO THE RULES WORK?
The measure, first proposed in 2021, covers any product or service that uses an artificial intelligence system. The law classifies artificial intelligence systems according to four risk levels, from minimal to unacceptable.
Riskier applications, such as hiring tools or technologies aimed at children, face stricter requirements, including transparency and the use of accurate data.
It is the responsibility of the 27 EU member states to implement the rules. Regulators can force companies to remove their apps from the market.
In extreme cases, violations can result in fines of up to 40 million euros ($43 million), or 7% of a company’s annual global turnover, which in the case of tech companies like Google and Microsoft can reach into the billions.
WHAT ARE THE RISKS?
One of the EU’s main goals is to guard against AI threats to health and safety and to protect fundamental rights and values.
This means that some uses of AI are strictly prohibited, such as “social scoring” systems that rate people based on their behavior.
Also prohibited is artificial intelligence that exploits vulnerable people, including children, or uses subliminal manipulation that can cause harm, for example an interactive talking toy that encourages dangerous behavior.
Predictive policing tools that crunch data to forecast where and when crimes will be committed are also banned.
Lawmakers reinforced an initial proposal by the European Commission, the EU’s executive branch, by expanding the ban on real-time remote facial recognition and biometric identification in public. The technology scans passers-by and connects their face or other physical features to a database with the help of artificial intelligence.
A controversial amendment to allow law enforcement exceptions, such as finding missing children or preventing terrorist threats, did not pass.
There are strict requirements for AI systems used in categories such as employment and education that would affect the course of a person’s life, including transparency toward users and measures to assess and reduce the risk of algorithmic bias.
Most artificial intelligence systems, such as video games or spam filters, fall into the category of low or no risk, the commission says.
WHAT ABOUT CHATGPT?
The original measure barely mentioned chatbots, mostly requiring them to be labeled so users know they’re interacting with a machine. Later, negotiators added provisions to cover general-purpose AI like ChatGPT after it exploded in popularity, placing some of the same requirements on that technology as high-risk systems.
One important addition is the requirement to thoroughly document any copyrighted material used to train AI systems to generate text, images, video and music that resemble human work.
This would let content creators know whether their blog posts, digital books, scientific articles or songs have been used to train algorithms that power systems like ChatGPT. They could then decide whether their work has been copied and seek redress.
WHY ARE EU RULES SO IMPORTANT?
The European Union is not a major player in the development of cutting-edge artificial intelligence; that role belongs to the United States and China. But Brussels is often a trendsetter whose regulations tend to become de facto global standards, and it has become a frontrunner in efforts to rein in the power of big tech companies.
According to experts, the sheer size of the EU’s internal market and 450 million consumers makes it easier for companies to comply than to develop different products for different regions.
But it’s not just about cracking down. By setting common rules for artificial intelligence, Brussels also aims to grow the market by instilling user confidence.
“The fact that this is a regulation that can be enforced and companies are held accountable is significant,” as other places like the United States, Singapore and the United Kingdom have only offered “guidance and recommendations,” said Kris Shrishak, a technology expert and senior researcher at the Irish Council for Civil Liberties.
“Other countries may want to adapt and copy” EU rules, he said.
Companies and industry groups warn that Europe needs to find the right balance.
“The EU will become a leader in AI regulation, but it remains to be seen whether it will lead in AI innovation,” said Boniface de Champris, policy director at the technology lobby group Computer and Communications Industry Association.
“Europe’s new AI rules must effectively address clearly defined risks and leave enough flexibility for developers to deliver useful AI applications for the benefit of all Europeans,” he said.
Sam Altman, CEO of ChatGPT maker OpenAI, has voiced support for some AI guardrails and, along with other tech leaders, signed a warning about the risks AI poses to humanity. But he has also said it is “a mistake to impose strict regulation on the field right now”.
Others are also drawing up rules for artificial intelligence. Britain, which left the EU in 2020, is vying for leadership in AI. Prime Minister Rishi Sunak plans to host a world summit on AI safety this fall.
“I want to make the UK not only the intellectual home but also the geographical home of global AI safety regulation,” Sunak said at a technology conference this week.
WHAT NEXT?
It may be years before the rules take full effect. The next step is three-way negotiations among the member states, the Parliament and the European Commission, and the text could see further changes as the parties try to agree on the final wording.
Final approval is expected by the end of this year, after which companies and organizations often have about two years to adapt.
Brando Benifei, an Italian member of the European Parliament who is leading its work on the AI law, said lawmakers were pushing for faster adoption of rules for rapidly developing technologies such as generative artificial intelligence.
To fill the gap before the legislation takes effect, Europe and the United States are drafting a voluntary code of conduct that officials promised at the end of May would be completed within weeks and could be extended to other “like-minded” countries.