AI in Europe: Make or Break Time?
Negotiators are finalizing the details of European Union artificial intelligence rules that are being hailed as a global precedent. But the talks have grown more complicated with the unexpected rise of generative AI, which can produce work that closely resembles that of humans, adding a crucial element to the discussions and making this a decisive moment for the regulations' future.
First proposed in 2019, the EU AI law was expected to be the world's first comprehensive AI regulation, further strengthening the 27-nation bloc's position as a global leader in reining in the tech industry.
But the process has stalled in a last-minute battle over how to manage systems that support general-purpose AI services like OpenAI’s ChatGPT and Google’s Bard chatbot. Big tech companies are lobbying against what they see as over-regulation that stifles innovation, while European lawmakers want more protections for state-of-the-art artificial intelligence systems developed by companies.
Meanwhile, the US, UK, China and global alliances such as the Group of 7 major democracies have joined the race to create guardrails for the rapidly developing technology, spurred by warnings from scientists and human rights groups about the existential dangers generative AI poses to humanity, as well as its everyday risks.
“Rather than the AI Act becoming the global gold standard for AI regulation, there is a small but growing chance that it will not be agreed before the European Parliament elections,” said Nick Reiners, technology policy analyst at Eurasia Group, a political risk advisory firm.
He said “there’s simply so much to catch up on” in what officials hope will be a final round of negotiations on Wednesday. Even if negotiators work late into the night as expected, they may struggle to finish before the new year, Reiners said.
When the European Commission, the EU’s executive body, published the draft in 2021, it barely mentioned general purpose artificial intelligence systems like chatbots. The proposal to classify artificial intelligence systems according to four levels of risk – from minimal to unacceptable – was basically intended as product safety legislation.
Brussels wanted to test and verify data used by AI-powered algorithms, just like consumer safety checks on cosmetics, cars and toys.
That changed with the rise of generative artificial intelligence, which aroused wonder by composing music, creating images, and writing essays that resemble human work. It also raised fears that the technology could be used to launch massive cyber attacks or create new bioweapons.
The risks prompted EU legislators to strengthen the AI law by extending it to foundation models. Also known as large language models, these systems are trained on a huge set of written works and images scraped from the Internet.
Foundation models allow generative AI systems like ChatGPT to create something new, unlike traditional AI that processes data and performs tasks according to predetermined rules.
Chaos last month at Microsoft-backed OpenAI, which built one of the most famous foundation models, GPT-4, raised concerns among some European leaders about the dangers of allowing a few dominant AI companies to police themselves.
CEO Sam Altman was fired and quickly rehired, but some board members deeply wary of the security risks posed by AI walked away, signaling that AI’s corporate governance may be subject to boardroom dynamics.
“At least things are now clear” that companies like OpenAI are defending their companies and not the public interest, European Commission member Thierry Breton said at an artificial intelligence conference in France days after the turmoil.
Opposition to government rules on these AI systems came from an unlikely place: France, Germany and Italy. The EU's three largest economies pushed back, favoring self-regulation instead.
The change of heart was seen as a step to help domestic generative AI players such as France’s Mistral AI and Germany’s Aleph Alpha.
Behind it “is a determination not to let US companies dominate the AI ecosystem as they have done in previous technologies such as cloud (computing), e-commerce and social media,” Reiners said.
A group of influential AI researchers published an open letter warning that undermining the AI Act in this way would be a “historic failure”. Meanwhile, Mistral executives sparred online with a researcher at an Elon Musk-backed nonprofit that seeks to prevent the “existential risk” of artificial intelligence.
AI is “too important not to regulate, and too important not to regulate well,” Google lawyer Kent Walker said in a speech in Brussels last week. “The competition should be for the best AI rules, not the first AI rules.”
Foundation models, which are used for a wide range of tasks, have proved the most difficult issue for EU negotiators because regulating them “goes against the whole logic of the law, which is based on the risks posed by certain uses”, said Iverna McGowan, director of the Europe office of the nonprofit Center for Democracy and Technology.
The nature of general-purpose AI systems means “you don’t know how they will be used,” she said. At the same time, regulations are needed “because otherwise there’s no accountability further down the food chain” when other companies build services on top of them, McGowan said.
Altman has proposed a US or global agency that would license the most powerful AI systems. He suggested this year that OpenAI could leave Europe if it could not comply with EU rules, but quickly retracted those comments.
Aleph Alpha said “a balanced approach is needed” and supported the EU’s risk-based approach. But that approach is “not suitable” for foundation models, which need “more flexible and dynamic” provisions, the German AI firm said.
EU negotiators still need to resolve a few other points of contention, including a proposal to ban real-time facial recognition in public spaces altogether. Member states want an exemption so law enforcement can use the technology to find missing children or terrorists, but rights groups fear it would effectively create a legal basis for surveillance.
The EU’s three branches of government have one of their last chances to reach an agreement on Wednesday.
Even if they do, the bloc's 705 lawmakers still have to sign off on the final version. The vote must take place by April, before they start campaigning for EU-wide elections in June. Even then, the law would not take effect until after a transition period, typically two years.
If they don’t make it in time, the legislation will be delayed until later next year – after new EU leaders, who may have different views on AI, take office.
“There is a good chance that it will indeed be the last, but it is equally likely that we will need even more time for negotiations,” Dragos Tudorache, a Romanian lawmaker leading the European Parliament's negotiations on the AI law, said during a panel discussion last week.
His office said he was not available for an interview.
“The discussion is still very fluid,” he told an event in Brussels. “We’re going to keep you guessing until the last minute.”