European Commission establishes oversight office for the AI Act


A European AI Office for governance and coordination at EU level was established in February 2024, following the political agreement reached between the European Parliament and the Council of the EU on the Artificial Intelligence Act (AI Act) in December 2023.

The AI Office will contribute to the development of novel use cases and emerging applications in Europe’s industrial ecosystems, as well as the public sector. Application areas include robotics, health, biotech, manufacturing, mobility, climate and virtual worlds.


The AI Office will make use of its expertise to support the implementation of the AI Act by:

  • Contributing to the coherent application of the AI Act across the Member States, including setting up advisory bodies at EU level and facilitating support and information exchange
  • Developing tools, methodologies and benchmarks for evaluating the capabilities and reach of general-purpose AI models, and classifying models with systemic risks
  • Drawing up state-of-the-art codes of practice to flesh out the rules, in cooperation with leading AI developers, the scientific community and other experts
  • Investigating possible infringements of rules, including evaluations to assess model capabilities, and requesting providers to take corrective action
  • Preparing guidance and guidelines, implementing and delegated acts, and other tools to support effective implementation of the AI Act and monitor compliance with the regulation
  • Strengthening the development and use of trustworthy AI, including advising on and enabling access to AI sandboxes and real-world testing
  • Fostering international cooperation by promoting the EU approach and contributing to a global approach to AI
  • Cooperating with institutions, experts and stakeholders

About the AI Act

The AI Act is in the process of being adopted and will come into full effect over a period of three years following its publication in the EU’s Official Journal. The initial proposal for the AI Act was published in April 2021.

The Act introduces dedicated rules for general-purpose AI models to help ensure transparency along the value chain. Very powerful models that could pose systemic risks will face additional binding obligations, including managing risks, monitoring serious incidents, performing model evaluation and conducting adversarial testing. These new obligations will be operationalised through codes of practice developed by industry, the scientific community, civil society and other stakeholders together with the Commission.

The Regulatory Framework of the EU’s AI Act defines four levels of risk. Source: European Commission

The Act segments AI risks into four categories

Unacceptable Risk: The Act will ban AI systems considered a clear threat to people’s fundamental rights. These include AI systems or applications that manipulate human behaviour to circumvent users’ free will (such as voice-assisted toys that encourage dangerous behaviour in minors), systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited.

High Risk: AI technology used in critical infrastructure, product safety, law enforcement, justice and democratic processes, essential services, employment and worker management, and educational or vocational training. AI systems used in these applications will be subject to strict obligations covering risk assessment and mitigation, activity logging, data-set quality, robustness, human oversight and detailed documentation.

Limited Risk: The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust. For instance, when using AI systems such as chatbots, humans should be made aware that they are interacting with a machine.

Minimal or no risk: The AI Act allows the free use of minimal-risk AI. This includes applications such as AI-enabled video games or spam filters. The vast majority of AI systems currently used in the EU fall into this category.

(The above are taken from the EU’s summary)

Non-compliance

Just as there are stiff penalties for companies that don’t comply with the EU’s General Data Protection Regulation (GDPR), companies that do not comply with the rules under the AI Act will also be fined.

Fines range from €35 million or 7% of global annual turnover (whichever is higher) for violations of banned AI applications, to €15 million or 3% for violations of other obligations, and €7.5 million or 1.5% for supplying incorrect information. More proportionate caps are foreseen for administrative fines imposed on SMEs and start-ups in case of infringements of the AI Act.

The research guidelines focus on ensuring that generative AI plays a positive role in improving research practices and on preventing misuse. While one goal is for the scientific community to use AI technology responsibly, this is not the work of policymakers alone: universities, research organisations, funding bodies, research libraries, learned societies, publishers and researchers are all essential in shaping the discussion on AI and how it can serve the public interest in research.

Further reading

AI Act. Overview of the legal framework and links to supporting documentation. Accessed March 20, 2024. European Commission

Proposal for a Regulation laying down harmonised rules on artificial intelligence. Policy and Legislation. April 21, 2021. Directorate-General for Communications Networks, Content and Technology. European Commission

Coordinated Plan on Artificial Intelligence 2021 Review. Policy and Legislation. April 21, 2021. Directorate-General for Communications Networks, Content and Technology. European Commission

Why it matters

“Artificial intelligence is already changing our everyday lives. And this is just the beginning. Used wisely and widely, AI promises huge benefits to our economy and society,” said Ursula von der Leyen, President of the European Commission. “By focusing regulation on identifiable risks, today’s agreement will foster responsible innovation in Europe. By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU. Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI.”
