Get a sneak peek at the AI Act cheat sheet before its implementation, as the European Union seals a groundbreaking agreement
Europe has reached a preliminary agreement on landmark European Union (EU) rules governing the use of artificial intelligence (AI). The political accord is seen as a major milestone for the EU, and the AI Act is now expected to proceed through the remaining stages of implementation. According to reports, the bloc's main points of disagreement were the use of AI in biometric surveillance by governments and the regulation of AI systems such as ChatGPT. With the agreement now in place, it is worth understanding the essential components of the AI Act and its potential impact on the future of this developing technology.
These key points were shared on LinkedIn by Oliver Patel, AstraZeneca’s corporate director of AI. Posting an image he labelled an “AI Act cheat sheet”, he said: “Now that the dust has settled on Friday’s announcement of a political agreement on the AI Act, it’s time to dig into the details”. It should also be noted that these points are drawn only from publicly available text; as the full text becomes available, more items will be added to the list.
The most important things you need to know about AI law
With this agreement, the Artificial Intelligence Act becomes the world’s first comprehensive AI law. It is expected to enter into force in 2026. The draft law focuses on transparency and the security of information sharing, and defines how the regulatory framework interacts with the market.
First, let’s look at the basics of AI law.
- Definition of AI: aligned with the recently updated OECD definition.
- The new definition is: An artificial intelligence system is a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. Different artificial intelligence systems vary in their levels of autonomy and adaptiveness after deployment.
- Extraterritorial: the Act also applies to organizations outside the EU whose AI systems or outputs are used within the EU.
- Exceptions: areas where the AI Act does not apply, including national security, military and defence; R&D; and open source (partially).
- Compliance grace period: 6–24 months
- Risk-based: AI systems are classified by risk level. The most dangerous systems are Prohibited AI, followed by High-Risk AI, Limited-Risk AI, and Minimal-Risk AI.
- Extensive requirements for high-risk AI
- Generative AI: specific transparency and disclosure requirements have been introduced.
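The risk-based tiers above can be pictured as a simple ordered classification. The sketch below is illustrative only: the example use-case mappings are drawn from categories discussed later in this article, not from any official classification, and all names are hypothetical.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """AI Act risk tiers, ordered from most to least regulated."""
    PROHIBITED = 0
    HIGH = 1
    LIMITED = 2
    MINIMAL = 3

# Illustrative examples only (not an official classification):
# each use case maps to the tier under which this article discusses it.
EXAMPLE_TIERS = {
    "social credit scoring": RiskTier.PROHIBITED,
    "AI in medical devices": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,  # transparency duties only
    "spam filtering": RiskTier.MINIMAL,
}

# A lower tier value means stricter regulation.
strictest = min(EXAMPLE_TIERS, key=EXAMPLE_TIERS.get)
print(strictest)  # social credit scoring
```

The ordering matters because obligations scale with the tier: prohibited uses are banned outright, while minimal-risk uses face few or no new requirements.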
Banned AI systems
According to the proposed amendment to the Artificial Intelligence Act, this passage sets out the rationale for banning certain artificial intelligence systems. The text states: “In addition to the many beneficial uses of artificial intelligence, the technology can also be misused and provides new and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and the fundamental rights of the Union, including the right to non-discrimination, data protection and privacy and the rights of the child.”
The following are the areas where AI is prohibited by the AI Act, according to Patel.
- Social credit scoring systems
- Emotion recognition systems at work and in education
- AI that exploits people’s vulnerabilities (e.g. age, disability)
- Manipulating behavior and circumventing free will
- Untargeted scraping of facial images for facial recognition
- Biometric classification systems using sensitive features
- Specific predictive policing applications
- Real-time biometric identification by law enforcement in public spaces (except in limited, pre-authorized situations)
High risk artificial intelligence systems
The draft law says the following about high-risk artificial intelligence systems: “High-risk artificial intelligence systems should only be placed on the Union market, introduced or used if they meet certain mandatory requirements. Those requirements should ensure that high-risk artificial intelligence systems that are available in the Union, or whose outputs are otherwise used in the Union, do not pose unreasonable risks to important public interests of the Union recognized and protected by Union law.”
The following are areas where the use of artificial intelligence is considered high-risk, with particular attention paid to regulatory compliance.
- Medical devices
- Vehicles
- Recruitment, HR and employee management
- Education and vocational training
- Influencing elections and voters
- Access to services (e.g. insurance, bank, credit, benefits, etc.)
- Management of critical infrastructure (e.g. water, gas, electricity, etc.)
- Emotion recognition systems
- Biometric identification
- Law enforcement, border control, immigration and asylum
- Administration of justice
- Certain products and/or safety components of certain products
With that in mind, the following are the key requirements for high-risk AI highlighted in the proposed AI Act, as shared by Patel.
- Fundamental rights impact assessment and compliance assessment
- Registration in the EU public database for high-risk AI systems
- Implement a risk management and quality management system
- Data management (e.g. reduction of bias, representative training data, etc.)
- Transparency (e.g. user manuals, technical documentation, etc.)
- Human oversight (e.g. explainability, auditable logs, human-in-the-loop, etc.)
- Accuracy, robustness and cyber security (e.g. testing and monitoring)
General purpose AI
There was no mention of general-purpose artificial intelligence in the original AI Act. However, the amendment proposes the following definition: “‘General purpose AI system’ means an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed.”
The following key elements have been added for such AI systems.
- Separate requirements for general-purpose AI (GPAI) and foundation models
- Transparency requirements for all GPAI models (e.g. technical documentation, training data summaries, copyright and IP safeguards, etc.)
- Additional requirements for high-impact models with systemic risk: model evaluations, risk assessments, adversarial testing, incident reporting, etc.
- Generative AI: individuals must be informed when they interact with AI (e.g. chatbots); AI-generated content must be labelled and detectable (e.g. deepfakes)
Penalties and enforcement
The draft AI law highlights the following: “In accordance with the terms and conditions laid down in this Regulation, Member States shall lay down the sanctions to be applied if an operator infringes this Regulation and shall take all necessary measures to ensure that they are properly and effectively implemented and are aligned with the guidelines issued by the Commission and the Office of Artificial Intelligence.”
Accordingly, the following have been confirmed.
- Up to 7 percent of global annual turnover or €35 million for prohibited AI violations
- Up to 3 percent of global annual turnover or €15 million for most other violations
- For SMEs and startups, maximum fines are capped at up to 1.5 percent of global annual turnover or €7.5 million for providing incorrect information
- The European ‘AI Office’ and ‘AI Board’ have been established centrally at the EU level
- The market surveillance authorities of EU countries monitor the implementation of the AI Act
- Any individual can file a complaint about non-compliance
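As a rough illustration, the fine caps above can be computed as follows. This is a sketch only: the percentage and fixed-amount figures come from the list above, but the “whichever is higher” rule is an assumption based on public reporting of the agreement, and the function and tier names are hypothetical.

```python
def fine_cap_eur(violation: str, global_turnover_eur: float) -> float:
    """Maximum fine for a violation tier, assuming the cap is the
    higher of the percentage-of-turnover and fixed-amount figures
    (an assumption based on public reporting, not the quoted text)."""
    tiers = {
        "prohibited_ai": (0.07, 35_000_000),          # 7% or €35M
        "other": (0.03, 15_000_000),                  # 3% or €15M
        "incorrect_information": (0.015, 7_500_000),  # 1.5% or €7.5M
    }
    pct, fixed = tiers[violation]
    return max(pct * global_turnover_eur, fixed)

# A firm with €1bn global annual turnover committing a prohibited-AI
# violation: max(7% of €1bn, €35M) = €70M.
print(fine_cap_eur("prohibited_ai", 1_000_000_000))  # 70000000.0
```

For smaller firms the fixed amount dominates: at €100 million turnover, 3 percent is only €3 million, so the €15 million figure would set the cap for most other violations.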