Exploring the Global Impact of Europe’s AI Regulation
The European Union’s AI regulation is expected to inspire imitators well beyond Brussels. Almost, but not quite.
“This is the AI moment.”
So declared Doreen Bogdan-Martin, Secretary-General of the International Telecommunication Union, at the close of a UN summit held in Geneva on July 7, 2023.
At a historic UN Security Council meeting 11 days later, UN Secretary-General António Guterres agreed. So do nations and regulators.
There has been a desire from powerful quarters to protect citizens from the potential harms of AI – the problems that are known (discrimination, privacy violations, copyright theft) and those that are not. Yet.
Most countries have approached these issues by regulating artificial intelligence sector by sector – aircraft design and flight safety, for example. The infamous Boeing 737 MAX – grounded for more than 18 months after two crashes in five months that killed 346 people – is one glaring example of regulatory failure.
Other areas where AI is already proactively regulated include medicine (robotic surgery and scan analysis), automated vehicles (the yet-to-be-realized Tesla robotaxis and “Full Self Drive” [sic]), and the policing of social media networks to protect against harms such as disinformation.
Some countries, such as the United States, Japan and Great Britain, see no need for regulation to go beyond this so-called adaptive sectoral regulation, complemented by possible international agreements on speculative risks of the kind discussed in the G7 Hiroshima process.
Others want to go further.
General laws could regulate artificial intelligence across wider society. China has already released its artificial intelligence law as part of an apparatus of social control that includes internet filtering through the Great Firewall of China and a social credit scoring system.
China plans to control the use of AI as tightly as it has social media, having banned Facebook, Google and TikTok from operating within its borders (even though the latter has a Chinese parent company).
Liberal democracies will not adopt the Chinese approach, but some may go further than the US, UK and Japan. The largest consumer market, the European Economic Area, plans the so-called “AI Act”, which is formally a European regulation on artificial intelligence.
The law has been locked in negotiations within the EU for more than two years since it was proposed, and may not be settled until April 2024. But it is not possible simply to lift the EU’s AI law and transplant it to another jurisdiction: it is embedded in the wider body of EU law and institutions, and much would be lost in translation.
The adoption and adaptation of EU legislation by other countries has a name: the “Brussels effect”, after the city that hosts the EU’s headquarters.
It is most often referred to when describing the reaction to the EU’s 2018 General Data Protection Regulation (GDPR), which has been widely praised for setting a much-copied global data protection standard.
But a nuanced analysis complicates the picture of Brussels’ influence. Many countries did not copy the GDPR, instead adopting a separate instrument (Convention 108) from the Council of Europe, the Strasbourg-based, 46-member human rights organization that predates the EU.
In 2023, a multidisciplinary group of experts unanimously concluded that such a Brussels effect was either not possible or, if it did occur, would be limited.
They found that the AI Act would be part of a “digital code,” a vast body of previously agreed laws with an interlocking web of powers and authorities, all of which would need to be replicated to make sense of the additions the AI Act would provide.
Even if a Brussels effect in AI is unlikely or very limited, there is another model that nations could adopt.
The Organization for Economic Co-operation and Development (OECD) and the United Nations Educational, Scientific and Cultural Organization (UNESCO) have both agreed ethical principles for artificial intelligence, but these are not binding.
This leaves the Council of Europe in Strasbourg.
Unlike EU regulations, Council of Europe conventions do not take direct effect in national legislation. And states beyond the Council’s 46 members can sign its conventions, which operate as international agreements.
For example, the Council’s Convention 108 has 55 parties, including Canada and Latin American and African countries.
The Council of Europe has been reaching beyond its own membership for decades, notably through the 2001 Convention on Cybercrime, which Japan and the United States joined in addition to Canada.
Convention 108 is evidence of what Lee Bygrave of the University of Oslo has described as the “Strasbourg effect”, an alternative to the Brussels phenomenon.
A Strasbourg effect could also develop in artificial intelligence. The Council’s AI convention is likely to be similar to the EU’s AI law, but with key differences. It is currently being negotiated with the US, UK and Japan, and is likely to adopt a more flexible approach, with greater co-regulation involving industry and, where appropriate, independent experts.
As the Council of Europe is primarily a human rights organization, it is likely to pay more attention to the human rights implications of the introduction of artificial intelligence.
The convention also has the advantage of being drafted in mid-2023, unlike the EU law, which was drafted in 2021. That means it can better address the foundational large language models that emerged in late 2022 and early 2023, such as ChatGPT, Bard and others.
In 2024, when the EU’s AI law and the Council’s AI treaty are finalized, other liberal democracies such as Australia, the UK, Brazil, Japan and the US are expected to adopt and adapt these laws.
When the rush begins, a Strasbourg effect, with nations copying the convention, is more likely than any Brussels effect.
The AI regulatory “moment” that Bogdan-Martin declared in July has been years in the making and is an exercise in international legal coordination. It is best approached comprehensively and carefully, to ensure that the power of AI is harnessed for the benefit of humanity.