The EU AI Act

Key Takeaways

  • The EU AI Act introduces a risk-based approach to regulating the use of artificial intelligence, including outright prohibitions on AI practices that pose an unacceptable risk.

  • Risk management, data governance and protection, and human oversight are essential components for regulating high-risk AI systems.

  • The European Parliament approved amendments in June 2023 to ensure that the regulations stay up to date with changing technology.

As artificial intelligence (AI) continues to advance and reshape our world, the need for regulatory safeguards becomes increasingly evident. The European Union’s (EU) AI Act is a groundbreaking legislative measure that sets the stage for a global ripple effect, influencing AI regulations and standards across the globe. 

In this in-depth guide, we look at how the EU AI Act works, and how it balances innovation with security and privacy concerns. 

An Overview of the EU AI Act 

The AI Act, proposed by the European Commission in April 2021, is the world’s first comprehensive legislative measure designed to oversee the use of artificial intelligence, including generative AI systems.

This legislation is currently working its way through the EU legislative process and is intended to become law by the end of 2023. 

This groundbreaking legislation adopts a risk-based approach, classifying AI systems into four risk tiers so that higher-risk technologies receive proportionately greater oversight and regulation.

Businesses must undertake several steps to ensure compliance with the AI Act, including:

  • Risk evaluation of their AI systems

  • Enhancing awareness

  • Fostering ethical systems

  • Assigning accountability

  • Staying current

  • Setting up robust AI governance structures.

The subsequent sections delve into the classification of AI systems, crucial provisions of the AI Act, and repercussions for non-adherence.

1. Classification of AI Systems

The AI Act introduces a four-tier risk classification model, dividing AI technologies into the following categories:

  1. Unacceptable risk

  2. High risk

  3. Limited risk

  4. Minimal risk

This classification system acknowledges that AI’s potential impact on society varies, and that definitions drawn too broadly or too narrowly could hamper the law’s effectiveness and impede AI advancement. The European Data Protection Board may provide guidance on defining AI systems more precisely.

High-risk AI systems include those used in critical infrastructure, in scenarios that threaten life or health, and in applications capable of mass surveillance or discrimination against vulnerable populations. Examples of high-risk AI systems in the financial sector include models used for creditworthiness assessments, risk premium evaluations, biometric identification of natural persons, and employee management.

Certain AI systems, particularly those that manipulate behaviour or result in unfair treatment of individuals or groups, are prohibited under the AI Act.
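To make the tiering concrete, here is a minimal Python sketch of the four categories with an illustrative lookup. The use-case labels and the mapping are hypothetical simplifications; the actual legal test turns on Article 6 and Annex III, not on a dictionary:

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict testing, documentation, oversight
    LIMITED = "limited"            # lighter transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping from use cases (drawn from the Act's examples) to tiers.
USE_CASE_TIERS = {
    "behavioural_manipulation": RiskTier.UNACCEPTABLE,
    "creditworthiness_assessment": RiskTier.HIGH,
    "biometric_identification": RiskTier.HIGH,
    "employee_management": RiskTier.HIGH,
}

def classify(use_case: str) -> RiskTier:
    """Return the tier for a use case, defaulting to minimal risk."""
    return USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)

print(classify("creditworthiness_assessment"))  # RiskTier.HIGH
```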

2. High-Risk AI Provisions

One of the AI Act’s core components is a set of stringent testing, documentation, and human-monitoring requirements for high-risk AI systems. General-purpose AI systems can be applied to many different situations, and their associated risks can differ significantly. As a result, high-risk AI systems are subject to more rigorous requirements, as outlined in Article 6 and Annex III of the AI Act.

Both developers and users of high-risk AI systems must strictly follow these regulations, which require rigorous testing, accurate documentation of data quality, and an accountability framework that provides for human oversight. This ensures that high-risk AI systems are developed and deployed responsibly, minimizing potential harm to users and society.

We discuss provisions relating to high-risk AI in detail further below. 

3. Penalties for Non-Compliance

Failure to comply with the AI Act can result in severe penalties. Under the European Parliament’s June 2023 text, fines are tiered from €10 million or 2% of a company’s global annual turnover up to €40 million or 7%, depending on the severity of the infringement.

The European Commission’s original proposal had capped penalties at €30 million or 6% of total worldwide annual turnover; the Parliament’s amendments raised that ceiling. These figures underscore the importance of adhering to the AI Act’s provisions and ensuring responsible AI development and deployment.
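Fine ceilings of this kind are generally expressed as the greater of a fixed amount and a share of worldwide turnover. The snippet below is a back-of-the-envelope sketch under that assumption, not legal guidance:

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of a fine tier: the greater of the fixed cap and a
    percentage of worldwide annual turnover (assumed reading, not legal advice)."""
    return max(fixed_cap_eur, pct * turnover_eur)

# Top tier under the Parliament's June 2023 text: EUR 40m or 7% of turnover.
print(fine_ceiling(turnover_eur=2_000_000_000, fixed_cap_eur=40_000_000, pct=0.07))
# -> 140000000.0  (7% of EUR 2bn exceeds the EUR 40m floor)
```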

High-Risk AI Systems

The EU Artificial Intelligence Act imposes supplementary regulatory requirements on high-risk AI systems, encompassing risk management processes, human oversight, and data governance standards. These stricter regulations aim to ensure that high-risk AI systems are developed and deployed responsibly, with due consideration for safety, ethics, and fundamental rights.

The upcoming sections delve into the following topics:

  1. Sectors governed by high-risk AI system regulation

  2. The significance of risk management and human oversight

  3. Requisite data governance standards for these systems.

1. Regulated Sectors

The AI Act regulates sectors where high-risk AI systems may have significant implications for human rights or safety. Examples of regulated sectors include:

  • Aviation

  • Motor vehicles

  • Boats

  • Elevators

  • Toys

  • Medical devices subject to EU product safety legislation

  • Financial services

By regulating these sectors, the AI Act aims to ensure that AI technologies are developed and used in a manner that respects human rights and safety while fostering innovation.

2. Risk Management and Human Oversight

High-risk AI system regulation heavily relies on risk management and human oversight. To comply with the AI Act, companies must implement a risk management process, adhere to increased data standards, document their systems in detail, systematically record their actions, provide users with information regarding their function, and enable human oversight and continual monitoring.
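As a minimal sketch of what enabling human oversight can look like in practice, the snippet below routes low-confidence model decisions to a human reviewer. The threshold and names are illustrative assumptions, not requirements lifted from the Act:

```python
def needs_human_review(score: float, threshold: float = 0.8) -> bool:
    """Route low-confidence decisions to a human reviewer — one simple way
    to realize a human-oversight requirement (threshold is illustrative)."""
    return score < threshold

def decide(application_id: str, model_score: float) -> str:
    if needs_human_review(model_score):
        return f"{application_id}: queued for human review"
    return f"{application_id}: auto-approved (score {model_score:.2f})"

print(decide("loan-123", 0.65))  # loan-123: queued for human review
```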

The risk management process for high-risk AI systems involves:

  • Conducting appropriate risk assessment and mitigation

  • Establishing a risk management system

  • Keeping records of risk assessment and mitigation measures

  • Including trustworthiness considerations in the development process

Additionally, data governance and management are necessary to ensure the responsible use of AI systems.
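A minimal sketch of the record-keeping side of such a risk management system might look like the following; all names and fields are illustrative, since the Act does not prescribe a schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class RiskEntry:
    """One assessed risk and its mitigation, retained to satisfy a
    record-keeping duty (field names are illustrative, not from the Act)."""
    hazard: str
    severity: str          # e.g. "low" / "medium" / "high"
    mitigation: str
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class RiskManagementLog:
    system_name: str
    entries: list[RiskEntry] = field(default_factory=list)

    def record(self, hazard: str, severity: str, mitigation: str) -> None:
        self.entries.append(RiskEntry(hazard, severity, mitigation))

log = RiskManagementLog("credit-scoring-model")
log.record("biased training data", "high", "rebalance dataset; fairness audit")
```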

3. Data Governance and Protection

In the regulation of high-risk AI systems, data governance assumes a significant role. It involves:

  • Managing and protecting data employed in AI model training

  • Establishing standards for data collection, storage, and utilization

  • Ensuring that data is secure and adheres to applicable laws and regulations

The AI Act outlines specific data governance requirements for high-risk AI systems in Article 10, such as securely collecting and using data in compliance with the General Data Protection Regulation (GDPR).
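As an illustration of what this bookkeeping could look like in code, the sketch below tracks provenance and a GDPR lawful basis per training dataset. The schema is a hypothetical example, not something Article 10 mandates:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DatasetRecord:
    """Provenance metadata a provider might keep per training dataset.
    Field names are illustrative; the Act does not prescribe a schema."""
    name: str
    source: str                   # where the data was collected
    lawful_basis: str             # GDPR basis, e.g. "consent"
    contains_personal_data: bool
    retention_days: int           # delete after this period

def pre_training_check(record: DatasetRecord) -> list[str]:
    """Flag obvious governance gaps before a dataset is used for training."""
    issues = []
    if record.contains_personal_data and not record.lawful_basis:
        issues.append("personal data without a documented GDPR lawful basis")
    if record.retention_days <= 0:
        issues.append("no retention period set")
    return issues
```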

Remote Biometric Identification

Remote biometric identification systems, which identify individuals from a distance by comparing unique biometric attributes, such as facial features or fingerprints, are subject to the EU AI Act’s stringent regulation. These systems can be employed for surveillance, law enforcement purposes, or other scenarios where identification is necessary without direct physical contact. As these systems handle sensitive biometric data, the AI Act addresses privacy concerns by regulating their use and balancing security needs with privacy rights.

Upcoming sections will discuss:

  • The application of biometric identification systems in public spaces and law enforcement

  • The use of facial recognition databases for serious crimes

  • The AI Act’s strategies to balance security and privacy.

1. Use in Public Spaces and Law Enforcement

Aiming to address privacy concerns, the AI Act prohibits the real-time use of remote biometric identification systems in publicly accessible areas; the center-right European People’s Party attempted to introduce exemptions to this prohibition, albeit unsuccessfully. Where retrospective use of such systems is permitted at all, it must be necessary and proportionate, and any data collected must be securely stored and deleted when no longer needed.

The regulation of biometric identification systems extends to their use in law enforcement, where they can be employed to identify potential suspects in serious criminal cases, such as homicide, sexual assault, and terrorism. It is crucial that their use is strictly regulated and limited to ensure the protection of privacy and civil liberties.

2. Facial Recognition Databases and Serious Crimes

Facial recognition databases can be used for various applications, including law enforcement. The AI Act regulates their use in serious crime investigations, ensuring that data is collected and used in an authorized manner, kept accurate and up to date, and not repurposed beyond the aim for which it was collected.

The AI Act also provides safeguards to protect the privacy of individuals whose data is collected and used.

3. Balancing Security and Privacy

The AI Act prioritizes balancing security and privacy concerns when utilizing remote biometric identification systems. By implementing security mechanisms, such as encryption and access controls, and adhering to privacy regulations and principles in data collection, storage, and use, the AI Act aims to ensure that both security and privacy are upheld.

The AI Act also requires organizations to conduct regular risk assessments and to develop and implement appropriate security measures.
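As one concrete, purely illustrative example of such security mechanisms, a provider might encrypt stored biometric templates at rest. The sketch below uses the widely used Python cryptography package, which is one common choice rather than anything the Act mandates:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Encrypt a biometric template at rest; only holders of the key
# (gated by separate access controls) can recover the plaintext.
key = Fernet.generate_key()              # store in a secrets manager, not code
cipher = Fernet(key)

template = b"facial-embedding-bytes..."  # placeholder biometric payload
encrypted = cipher.encrypt(template)

# Later, an authorized service decrypts for a permitted comparison:
assert cipher.decrypt(encrypted) == template
```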

The Global Impact of the EU AI Act 

The EU AI Act’s global impact extends far beyond the European Union’s borders. The Brussels Effect, the concept that the EU AI Act will influence AI regulations and standards worldwide, demonstrates the far-reaching consequences of this groundbreaking legislation.

In addition to the Brussels Effect, the AI Act aims to encourage the development of trustworthy AI systems worldwide, while also acknowledging potential unintended consequences.

This section will explore the Brussels Effect, the AI Act’s contribution to fostering trustworthy AI development, and potential unintended consequences of the legislation.

1. The Brussels Effect

The Brussels Effect refers to the EU’s remarkable capacity to establish global standards through its regulations. In the case of the AI Act, however, its formal reach is limited: the Act directly binds only the EU and its member states.

Despite this limitation, the AI Act’s influence on AI policymaking and regulation worldwide should not be underestimated.

2. Encouraging Trustworthy AI Development

The AI Act aims to foster the global creation of reliable AI systems by ensuring that:

  • Safety, security, privacy, and data protection are prioritized during their development and deployment

  • AI systems are transparent and comprehensible

  • AI systems are subject to human supervision

Through these measures, the AI Act sets the stage for a global shift toward more responsible and ethical AI development.

3. Potential Unintended Consequences

The potential unintended consequences of the AI Act include:

  • Producing biased outcomes

  • Conflicting with existing business models and applications

  • Impeding access to essential low-risk AI systems

  • Inhibiting innovation and advancement

  • Complex implementation

  • Potential for overregulation

  • Potential loopholes in the legislation

As the AI Act continues to evolve, it is crucial to address these potential issues to ensure its effectiveness in promoting responsible AI development and deployment.

The Future of AI Regulation

The future of AI regulation in the EU involves the European AI Alliance, coordination with other EU institutions, and possible amendments and revisions to the AI Act. As AI technology advances and its applications continue to evolve, it is essential to ensure that AI regulations remain relevant and effective in protecting the public interest and fostering innovation.

The following sections will delve into the European AI Alliance’s role, its coordination with other EU institutions, and potential future amendments and revisions to the AI Act.

1. The Role of the European AI Alliance

The European AI Alliance supports the AI Act’s implementation by offering a platform for stakeholders to:

  • Cooperate and contribute to AI-related policies and regulations

  • Foster a forum for discussions, consultations, and the exchange of best practices

  • Ensure that AI systems are developed and deployed in a manner that is safe, ethical, and respects fundamental rights.

The Alliance also works to ensure that AI is used responsibly and that its potential is maximized.

2. Coordination with Other EU Institutions

Through the Coordinated Plan on Artificial Intelligence, the European AI Alliance collaborates with other EU institutions. This plan is designed to:

  • Bolster investment in AI

  • Harmonize AI policies

  • Execute AI strategies and programs

  • Ensure that AI is developed with human-centricity and trustworthiness

  • Position the EU as a leading hub for AI.

By working together, the European AI Alliance and the EU institutions can help ensure that AI is developed and deployed in a trustworthy, human-centric manner.

3. Possible Amendments and Revisions

As technology and its applications evolve, the AI Act could be subject to amendments and revisions. On June 14, 2023, the European Parliament approved amendments to the AI Act, which included changes to the criteria for “high-risk” AI systems and the prohibition of certain AI practices and systems.

As AI continues to advance, it is crucial to revisit and update the AI Act to ensure its relevance and effectiveness in regulating AI systems.

EU AI Act — Final Take

The EU AI Act is a pioneering piece of legislation that has set the stage for a global shift in AI policymaking and regulation. With its risk-based approach, stringent provisions for high-risk AI systems, and focus on balancing security and privacy concerns in biometric identification systems, the AI Act aims to ensure the responsible development and deployment of AI technologies. As the first comprehensive AI regulatory framework in the world, the AI Act has the potential to influence AI regulations and standards worldwide, shaping the future of AI globally.

In conclusion, the EU AI Act is a testament to the EU’s commitment to fostering responsible and ethical AI development while protecting the safety, security, and privacy of its citizens. As AI continues to advance and reshape our world, it is crucial for regulators, businesses, and stakeholders to work together to ensure that AI technologies are developed and deployed in a manner that upholds human rights, promotes innovation, and respects fundamental rights.

Drew Donnelly, PhD

Drew is a regulatory expert specializing in AI regulation and compliance.

FAQ

What is the EU AI Act?

The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence, initially proposed by the European Commission in April 2021. It outlines a risk-based approach that analyses and classifies AI systems according to their risk level and provides users with clear requirements and obligations regarding specific uses of AI. The EU AI Act is designed to ensure that AI systems are safe, reliable, and trustworthy. It also sets out rules for the development, deployment, and use of AI systems, including requirements for data protection, transparency, and human oversight.

How does the EU AI Act affect financial services?

In the financial sector, the AI Act treats models used for creditworthiness assessments, risk premium evaluations, and biometric identification as high-risk, making the Act a crucial part of the ongoing discourse around AI governance in financial services.

What is the European AI Alliance?

The European AI Alliance is a key platform for the development of EU-wide AI policies and regulations, enabling stakeholders to share their views and collaborate.

What are the penalties for non-compliance?

Non-compliance with the AI Act can result in hefty fines, tiered from €10 million or 2% of a company’s global annual turnover up to €40 million or 7%, depending on the infringement.
