Rules on AI in the EU
Last week, European Union officials dedicated long hours to reaching a consensus on groundbreaking regulations that will govern the utilization of artificial intelligence within the 27-nation bloc. Known as the Artificial Intelligence Act, these rules represent the most recent effort to regulate technology in Europe, with implications that extend beyond its borders.
Here’s a closer look at the AI rules:
WHAT IS THE AI ACT AND HOW DOES IT WORK?
The AI Act takes a “risk-based approach” to products and services that use AI, focusing on regulating uses of the technology rather than the technology itself. The legislation is designed to protect democracy, the rule of law and fundamental rights and freedoms, while encouraging investment and innovation.
The riskier the AI application, the stricter the rules. Those that pose a limited risk, such as content recommendation systems or spam filters, would only be subject to light rules, such as disclosing that they are powered by artificial intelligence.
High-risk systems, such as medical devices, face more stringent requirements, such as using high-quality data and providing clear information to users.
Some uses of AI are banned because they are considered to pose an unreasonable risk, such as social scoring systems that guide human behavior, certain types of predictive policing and emotion recognition systems in schools and workplaces.
Members of the public cannot have their faces scanned by AI-powered “remote biometric identification systems” except in cases involving serious crimes such as kidnapping or terrorism.
The AI Act will not take effect until two years after European lawmakers give final approval, which is expected to happen in a rubber-stamp vote in early 2024. Violations can result in fines of up to 35 million euros ($38 million) or 7 percent of a company’s global revenue.
HOW WILL THE AI ACT AFFECT THE REST OF THE WORLD?
The AI Act directly affects the EU’s nearly 450 million residents, but experts say its impact could be felt far beyond, because Brussels plays a leading role in drawing up regulations that act as a de facto global standard.
The EU has played this role before with previous tech directives, most notably by mandating a common charging plug that forced Apple to abandon its in-house Lightning cable.
While many other countries are still figuring out whether and how to rein in AI, the EU’s comprehensive regulations are poised to serve as a model.
“The AI Act is the world’s first comprehensive, horizontal and binding AI regulation that will not only change the situation in Europe, but is likely to significantly increase the global momentum for AI regulation across jurisdictions,” said Anu Bradford, a Columbia Law School professor who is an expert in EU law and digital regulation.
“It gives the EU a unique position to lead the way and show the world that artificial intelligence can be governed and its development subjected to democratic oversight,” she said.
Even what the law doesn’t do can have global implications, rights groups said.
By failing to enact a full ban on real-time facial recognition, Brussels has “in effect greenlighted dystopian digital surveillance in the 27 EU member states, setting a devastating precedent globally,” Amnesty International said.
The partial ban, the group said, is a “hugely missed opportunity to stop and prevent colossal damage to human rights, civil space and the rule of law that are already under threat throughout the EU.”
Amnesty also condemned the failure of lawmakers to ban the export of AI technologies that could harm human rights — including for use in social scoring, which China does to reward obedience to the state through surveillance.
WHAT ARE OTHER COUNTRIES DOING WITH AI REGULATION?
The world’s two major AI powers, the United States and China, have also started drawing up rules of their own.
US President Joe Biden signed a sweeping executive order on artificial intelligence in October, which is expected to be strengthened by legislation and global agreements.
It requires leading AI developers to share safety test results and other information with the government. Agencies are creating standards to ensure AI tools are safe before public release, and drafting guidance for labeling AI-generated content.
Biden’s order builds on previous voluntary commitments made by tech companies such as Amazon, Google, Meta and Microsoft to ensure their products are secure before they are released.
China, meanwhile, has issued “interim measures” to manage generative artificial intelligence, which cover text, images, audio, video and other content generated for people inside China.
President Xi Jinping has also proposed the Global AI Governance Initiative, which calls for an open and fair environment for the development of artificial intelligence.
HOW WILL THE AI ACT AFFECT CHATGPT?
The spectacular rise of OpenAI’s ChatGPT showed that the technology was advancing dramatically and forced European policymakers to update their proposals.
The AI Act includes provisions for chatbots and other so-called general-purpose artificial intelligence systems that can perform a wide range of tasks, from composing poetry to creating video to writing computer code.
Officials took a two-tiered approach: most general-purpose systems face basic transparency requirements, such as disclosing details about their data governance and, in a nod to the EU’s environmental sustainability efforts, how much energy they used to train their models on vast troves of written works and images scraped from the internet.
They must also comply with EU copyright law and summarize the content they use in training.
Tougher rules are in store for the most advanced AI systems with the most computing power, which pose “systemic risks” that officials want to keep from spreading to services that other software developers build on top of them.