Final Vote on EU AI Act Approaches

Key Takeaways

  • The EU AI Act introduces a risk-based approach to regulating the use of artificial intelligence, including prohibitions on certain AI systems deemed to pose an unacceptable risk.

  • Risk management, data governance and protection, and human oversight are essential components for regulating high-risk AI systems.

  • The European Parliament approved amendments in June 2023 to ensure that the regulations stay up to date with changing technology.

Legislation for the EU AI Act is currently in the final phase of its development, known as the ‘trilogue’ phase. Here we summarize the legislation’s progress to date and the current sticking points for the co-legislators.

The EU AI Act

The EU AI Act is a legislative initiative in the EU to support and encourage AI innovation while safeguarding human rights. A global first, this legislation will affect any business operating in or interacting with the EU market, no matter where that business is headquartered.

The EU AI Act takes a risk-based approach, classifying AI activities as minimal, limited, high, or unacceptable risk. Businesses’ obligations become more stringent depending on which risk category their AI activities fall into.

Progress of legislation 

The legislation is currently in the ‘trilogue’ phase, with negotiations ongoing between the European Parliament, the European Commission, and the Council of the European Union (the Council) to finalize the law.

Matters still to be resolved include:

  • The definition of AI. The European Parliament favors the broadest definition of AI, emphasizing the rights and interests of EU citizens. The European Commission and the Council favor narrower definitions of AI that would be more permissive of AI innovation.
  • Which kinds of AI activity will be prohibited.
  • Enforcement. This includes disagreement on the size of fines for non-compliance.

Aligning with other jurisdictions

EU institutions recognize that the overall effectiveness of the EU AI Act will depend significantly on steps taken in other jurisdictions. US and EU authorities agree on some matters, such as taking a risk-based approach, the principles of trustworthy AI, and the need for international standardization, but there are also key differences in their approaches. The White House has introduced a Blueprint for an AI Bill of Rights, but it is non-binding and requires individual federal agencies to come up with their own AI action plans. Consequently, there is potential for the regulatory approach across the US to become disjointed and scattered. Discussions between EU and US authorities are ongoing to try to harmonize their respective approaches to AI regulation.

Next steps

The three EU institutions have signaled an intention to finalize matters by November this year, in which case the AI Act would become law by the end of 2023. If that timeline slips, they intend to pass the law by the European Parliament elections in June 2024 at the very latest.

Drew Donnelly, PhD

Drew is a regulatory expert specializing in AI regulation and compliance.

FAQ

What is the EU AI Act?

The EU AI Act is the world’s most extensive legal framework for artificial intelligence. It outlines a general approach to AI that classifies AI systems according to their risk profile, and bans certain AI activities and products deemed to pose an unacceptable risk.

What is the European AI Alliance?

The European AI Alliance is a key industry body enabling stakeholders to share their views and collaborate on the new EU AI Act.

What are the penalties for non-compliance?

While the fines are yet to be confirmed, they are currently proposed to range from €10 million to €40 million, or 2% to 7% of a company’s annual global turnover.
