
Landmark AI Regulation Bill Approved in Europe

BRUSSELS: European Union policymakers and lawmakers on Friday agreed on the world’s first comprehensive set of rules governing the use of artificial intelligence (AI) in tools such as ChatGPT and biometric surveillance.

In the coming weeks, they will work out details that could change the final legislation, which is expected to enter into force early next year and apply from 2026.

Until then, companies are encouraged to sign a voluntary AI agreement to implement the main obligations of the rules.

Here are the main points agreed:


So-called high-risk AI systems – systems deemed to have the potential to significantly harm health, safety, fundamental rights, the environment, democracy, elections and the rule of law – must meet certain requirements, such as undergoing a fundamental rights impact assessment, in order to access the EU market.

AI systems deemed to pose limited risks would be subject to very light transparency requirements, such as disclaimer labels stating that the content is generated by AI so that users can decide how to use it.


Law enforcement agencies may use real-time biometric remote identification systems in public spaces only to identify victims of kidnapping, human trafficking, and sexual exploitation, and to prevent a specific and current terrorist threat.

They may also use these systems to trace people suspected of terrorist offences, human trafficking, sexual abuse, murder, kidnapping, rape, armed robbery, participation in a criminal organisation or environmental offences.


General-purpose AI (GPAI) systems and the foundation models they are built on are subject to transparency requirements, such as drawing up technical documentation, complying with EU copyright law and publishing detailed summaries of the content used to train their algorithms.

Foundation models classified as high-impact GPAI posing systemic risk must conduct model evaluations, assess and mitigate risks, perform adversarial testing, report serious incidents to the European Commission, ensure cybersecurity and report on their energy efficiency.

Until harmonised EU standards are published, GPAIs posing systemic risk may rely on codes of practice to comply with the regulation.


The rules prohibit the following:

– Biometric categorisation systems that use sensitive characteristics such as political, religious or philosophical beliefs, sexual orientation, or race.

– Untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases;

– Recognizing emotions at the workplace and in educational institutions.

– Social scoring based on social behaviour or personal characteristics.

– AI systems that manipulate people's behaviour to circumvent their free will.

– AI systems that exploit people's vulnerabilities due to their age, disability, or social or economic situation.


Depending on the violation and the size of the company involved, fines range from 7.5 million euros ($8 million), or 1.5 percent of global annual turnover, up to 35 million euros, or 7 percent of global turnover.
