More than a year and a half after it was initially proposed by the European Commission, the European Parliament and the European Council have agreed on a legal framework for artificial intelligence.
The rules are based on what the EU characterizes as a “future-proof definition of AI” and classify AI systems according to their level of risk, ranging from “minimal” to “unacceptable.”
Non-compliance with the AI Act will subject violators to stiff fines: €35 million or 7% of global annual turnover (whichever is higher) for violations involving banned AI applications, €15 million or 3% for violations of other obligations, and €7.5 million or 1.5% for supplying incorrect information. This structure is similar to the fines set forth by the EU’s General Data Protection Regulation (GDPR), although the AI Act’s fines are much higher.
Risk-levels as defined by the AI Act
- Unacceptable risk: The AI Act will ban systems that represent a clear threat to the safety, livelihoods and rights of people. Examples include systems or applications that “manipulate human behaviour to circumvent users’ free will.”
- High risk: AI technologies used in critical infrastructure, educational or vocational training, product safety, employment, essential private and public services, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes.
- Limited risk: AI systems that are given transparency obligations, such as informing chatbot users that they are interacting with a machine.
- Minimal risk: The EU proposal does not intervene in applications such as video games or spam filters.
“On Artificial Intelligence, trust is a must, not a nice to have,” said Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, when the proposal for the AI Act was first released in 2021. “With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” she said.
What the AI Act means with regard to piracy
Applications of AI banned by the AI Act include tactics to manipulate human behavior that are frequently used by pirates. These include “untargeted scraping of facial images from the Internet or CCTV footage to create facial recognition databases,” which can be used in deepfakes. Other such banned applications include “AI systems that manipulate human behaviour to circumvent their free will,” and “AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).”
General-purpose AI (GPAI) systems, and the GPAI models they are based on, will also have to adhere to transparency requirements as initially proposed by Parliament, including compliance with EU copyright law and disseminating detailed summaries about the content used for AI training.
According to a statement by the European Council, an AI Office will be set up within the Commission, tasked with overseeing these most advanced AI models, contributing to the development of standards and testing practices, and enforcing the common rules in all member states. A scientific panel of independent experts will advise the AI Office about general-purpose AI (GPAI) models, contributing to the development of methodologies for evaluating the capabilities of foundation models.
The Council also described an AI Board, comprising member states’ representatives, which will remain a coordination platform and an advisory body to the Commission and will give member states an important role in implementing the regulation, including the design of codes of practice for foundation models. Finally, an advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society, and academia, will be set up to provide technical expertise to the AI Board.
High-risk systems will be subject to strict obligations for quality, risk assessment, activity logging, documentation, human oversight and robustness. All remote biometric identification systems are considered high risk and “Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where strictly necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence).”
Updated Machinery Regulation parameters have been issued for mechanical products, covering “an extensive range of consumer and professional products, from robots to lawnmowers, 3D printers, construction machines, industrial production lines.”
Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI. Press release. December 9, 2023. European Parliament
Commission welcomes political agreement on Artificial Intelligence Act. Press release. December 9, 2023. European Commission
Artificial Intelligence Act: Council and Parliament strike a deal on the first rules for AI in the world. Press release. December 9, 2023. Council of the European Union
(The proposed Artificial Intelligence Act itself) Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence and amending certain Union legislative acts. Proposed regulation. April 21, 2021. Eur-Lex portal for European Union law
Europe fit for the Digital Age: Commission proposes new rules and actions for excellence and trust in Artificial Intelligence. Press release. April 21, 2021. European Commission
Why it matters
As an EU regulation, the AI Act will apply uniformly across all EU member states. The EU is not alone, of course.
In October, the Biden administration issued an Executive Order to establish security and safety standards for the US. The Order is intended to “protect (consumers) from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content.”
“The EU’s AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide. So, this is a historic moment,” said Ursula von der Leyen, President of the European Commission. “The AI Act transposes European values to a new era. By focusing regulation on identifiable risks, today’s agreement will foster responsible innovation in Europe. By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU. Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI,” she said.