Discover the five key points of the EU's landmark AI rules, from the requirements on high-risk systems to outright prohibitions.
On Friday, European Union policymakers and lawmakers reached an agreement on the world's first comprehensive rules governing the use of artificial intelligence (AI), covering tools such as ChatGPT as well as biometric surveillance.
In the coming weeks, they will work out details that could change the final legislation, which is expected to enter into force early next year and apply from 2026.
Until then, companies are encouraged to sign a voluntary AI pact committing to implement the rules' key obligations ahead of time.
Here are the main points agreed:
HIGH-RISK SYSTEMS
So-called high-risk AI systems – those deemed to have the potential to significantly harm health, safety, fundamental rights, the environment, democracy, elections and the rule of law – must meet certain requirements, such as fundamental rights impact assessments, before they can access the EU market.
AI systems deemed to pose only limited risks would face very light transparency requirements, such as labels disclosing that content is AI-generated, so that users can decide how to use it.
USING AI IN LAW ENFORCEMENT
Law enforcement agencies may use real-time biometric remote identification systems in public spaces only to identify victims of kidnapping, human trafficking, and sexual exploitation, and to prevent a specific and current terrorist threat.
They may also use these systems to trace people suspected of terrorist offences, human trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organization or environmental crime.
GENERAL PURPOSE AI SYSTEMS (GPAI) AND FOUNDATION MODELS
GPAI systems and foundation models are subject to transparency requirements, such as drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries of the content used for training.
Foundation models classified as posing systemic risk, along with high-impact GPAI, must conduct model evaluations, assess and mitigate systemic risks, perform adversarial testing, report serious incidents to the European Commission, ensure cybersecurity and report on their energy efficiency.
Until harmonized EU standards are published, GPAI systems posing systemic risk may rely on codes of practice to comply with the regulation.
FORBIDDEN AI
The rules prohibit the following:
– Biometric categorization systems that use sensitive characteristics such as political, religious or philosophical beliefs, sexual orientation or race;
– Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases;
– Emotion recognition in the workplace and in educational institutions;
– Social scoring based on social behavior or personal characteristics;
– AI systems that manipulate human behavior to circumvent people's free will;
– AI systems that exploit people's vulnerabilities due to their age, disability, or social or economic situation.
PENALTIES FOR VIOLATIONS
Depending on the violation and the size of the company involved, fines start at 7.5 million euros ($8 million), or 1.5 percent of global annual turnover, and rise to 35 million euros, or 7 percent of global turnover.
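For illustration, the fine bands above can be expressed as a small calculation. This is a rough sketch only: it assumes (as is typical for EU regulations, though not spelled out in this summary) that the applicable cap is whichever is higher, the fixed amount or the turnover percentage.

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper fine bound for the most serious violations:
    35 million euros or 7% of global annual turnover
    (assumed here to be whichever is higher)."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# For a company with 1 billion euros in global turnover,
# 7% (70 million euros) exceeds the 35 million euro floor.
print(max_fine_eur(1_000_000_000))
```

For smaller companies, the fixed amount dominates: 7 percent of a 100-million-euro turnover is only 7 million euros, so the 35-million-euro figure would apply under this assumption.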
(1 dollar = 0.9293 euros)