Conformity Assessments Under the EU AI Act: Ensuring Safety and Compliance
Key Takeaways
Risk-Based Categorization: Conformity assessments are most stringent for high-risk AI systems like those in critical infrastructure and law enforcement.
Self-Assessment for Certain AI Systems: Low- and minimal-risk AI systems may undergo self-assessment for conformity.
Third-Party Assessment Required: High-risk AI systems must undergo assessment by an independent third party or notified body.
Ongoing Compliance: AI systems must continuously comply with the requirements, necessitating regular monitoring and reporting.
The European Union’s approach to regulating Artificial Intelligence (AI) is embodied in the landmark proposal of the EU AI Act. With the goal of ensuring the safety and compliance of AI systems, the Act brings forth a comprehensive legal framework aimed at managing the risks associated with the use of AI within the Member States. The regulation focuses on establishing uniform requirements for AI systems, which stand not just to protect the rights of individuals but also to foster trust and encourage investment in AI innovation across the region.
Central to the EU AI Act is the concept of conformity assessments, a set of procedures that AI system providers must undergo to demonstrate that their products meet the Act’s stringent requirements. The assessments are designed to vary in rigor, reflective of the perceived level of risk posed by the AI system in question. These procedures serve as a crucial checkpoint before AI products can enter the EU market, ensuring that they comply with predefined safety, transparency, and accountability standards.
Providers of high-risk AI systems face the most stringent conformity assessments, which generally involve a thorough examination of technical documentation, risk management processes, and the AI system’s lifecycle. These assessments can be conducted either internally, for lower-risk applications, or externally by notified bodies for high-risk AI systems. The role of these assessments is pivotal in building a harmonized AI ecosystem within the EU, striving to create an environment where AI technologies can be confidently and safely deployed.
The European Union’s Artificial Intelligence Act is a pivotal regulatory framework, aiming to manage AI technology and its deployment within the EU. It seeks a balance between innovation and the safeguarding of fundamental rights.
The EU Artificial Intelligence Act, proposed by the European Commission, pursues several critical goals:
Safety: ensuring that AI systems placed on the EU market are safe and respect existing law on fundamental rights and Union values.
Legal Certainty: providing a clear framework that facilitates investment and innovation in AI.
Governance: strengthening the enforcement of existing law applicable to AI systems.
Trust: facilitating the development of a single market for lawful, safe, and trustworthy AI applications.
The scope and application of the EU Artificial Intelligence Act are outlined as follows: it applies to providers placing AI systems on the market or putting them into service in the EU, regardless of where those providers are established; to users of AI systems located within the EU; and to providers and users located outside the EU where the output produced by the AI system is used in the EU.
The conformity assessment procedures are critical for ensuring that high-risk AI systems comply with the EU’s stringent standards before being placed on the market or put into service.
Conformity assessments are processes that evaluate whether high-risk AI systems meet specific requirements laid out by the EU AI Act. They underscore compliance and are pivotal for maintaining consumer safety and trust in AI technologies by ensuring that these systems do not pose undue risks to fundamental rights.
Notified bodies are independent organizations designated by an EU Member State to carry out conformity assessments. They are instrumental in the certification process, as they review technical documentation and conduct audits to verify the compliance of high-risk AI systems with the EU AI Act. Notified bodies must possess the necessary expertise and impartiality to conduct their assessments effectively.
Compliance verification entails a comprehensive evaluation, often including testing and inspection of high-risk AI systems to ensure adherence to the predefined regulatory requirements. If an AI system is found to be compliant, it is awarded the CE marking, which indicates that the product has been assessed by the manufacturer and deemed to meet EU safety, health, and environmental protection requirements. This marking is crucial for AI systems to move freely within the European market.
The EU AI Act categorizes AI systems based on the level of risk they pose, with specific criteria delineated for those considered high-risk.
An AI system is classified as high-risk if it meets the criteria set out in Article 6 and Annexes II and III of the EU AI Act. This includes AI intended to be used as a safety component of products covered by specific EU harmonisation legislation, as well as standalone systems used in areas such as critical infrastructure, education, law enforcement and the administration of justice, workers’ management and access to self-employment, essential private and public services, and democratic processes.
The EU AI Act introduces a risk-based approach to the regulation of AI systems, classifying them into several levels:
Unacceptable Risk: AI systems that manipulate human behavior to circumvent users’ free will (e.g., subliminal techniques), exploit vulnerabilities of specific groups of persons due to their age, physical or mental disability, or systems that allow for social scoring by public authorities.
High-Risk: Refer to the Criteria for High-Risk section above.
Limited Risk: AI systems that require specific transparency obligations, such as chatbots.
Minimal Risk: AI systems that are not categorized as high or unacceptable risk and can operate with minimal regulatory requirements. This constitutes the majority of AI applications. A simplified sketch of how these tiers map to conformity routes follows below.
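To make the routing concrete, here is a minimal illustrative sketch in Python of how these risk tiers might map to conformity obligations. The `RiskTier` enum, the `OBLIGATIONS` table, and the `conformity_route` helper are hypothetical names invented for this example, and the one-line summaries are simplifications of the Act, not a legal decision procedure.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Simplified mapping of risk tier to the obligation described in the Act.
# Illustrative only -- real classification requires legal analysis.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market",
    RiskTier.HIGH: "Stringent conformity assessment, typically by a notified body",
    RiskTier.LIMITED: "Transparency obligations (e.g., disclose that a chatbot is AI)",
    RiskTier.MINIMAL: "No mandatory assessment; voluntary codes of conduct",
}


def conformity_route(tier: RiskTier) -> str:
    """Return the (simplified) conformity route for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    print(conformity_route(RiskTier.HIGH))
    # -> Stringent conformity assessment, typically by a notified body
```

In practice, determining the tier itself is the hard part: it depends on legal analysis of the system’s intended purpose, not on a lookup table.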
There are certain uses of AI that the EU AI Act specifically bans, considering them to have an unacceptable risk. These systems can’t be placed on the European Union market or be put into service. The Act prohibits AI systems that:
Deploy subliminal techniques to materially distort a person’s behavior in a way that causes or is likely to cause physical or psychological harm.
Exploit the vulnerabilities of specific groups, such as those related to age or disability, to materially distort their behavior in a harmful way.
Enable social scoring by public authorities that leads to detrimental or unfavorable treatment of individuals.
Perform ‘real-time’ remote biometric identification in publicly accessible spaces for law enforcement purposes, except in a narrow set of explicitly defined situations.
These prohibitions reflect the EU’s commitment to safeguarding fundamental rights in the age of AI.
High-risk AI systems in the EU are subject to stringent oversight to ensure alignment with fundamental rights, safety, transparency, and robust data governance and protection.
High-risk AI systems must respect fundamental rights, including but not limited to non-discrimination, privacy, and data protection. They are required to:
Implement a documented risk management system covering the entire lifecycle of the AI system.
Ensure effective human oversight so that outputs can be understood, reviewed, and, where necessary, overridden.
Apply measures that minimize the risk of biased or discriminatory outcomes for affected individuals.
For high-risk AI systems, EU regulations prioritize both safety and transparency. These systems should:
Achieve an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently throughout their lifecycle.
Be accompanied by clear instructions for use, including information on the system’s capabilities, limitations, and intended purpose.
Automatically record events (logging) so that the system’s operation remains traceable.
Data governance and protection are critical components in the deployment of high-risk AI systems. These systems must:
Be trained, validated, and tested on datasets that are relevant, representative, and as free of errors and as complete as possible.
Follow appropriate data management practices, including examination of datasets for possible biases.
Process any personal data in line with EU data protection law, including the GDPR.
A rough sketch of what such dataset checks can look like in code follows below.
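As a rough illustration of what data-governance checks can involve in practice, the following Python sketch screens a training dataset for missing values and for imbalance across a protected attribute. The `screen_dataset` function, the `age_group` field, and the thresholds are all hypothetical choices for this example; real compliance work would follow documented data-governance procedures, not a short script.

```python
from collections import Counter


def screen_dataset(records, protected_field="age_group", max_missing=0.01):
    """Toy data-governance check: flag missing values and group imbalance.

    records: list of dicts, one per training example.
    protected_field: hypothetical attribute to check for representation.
    max_missing: maximum tolerated fraction of records with missing values.
    """
    total = len(records)
    missing = sum(1 for r in records if None in r.values())
    groups = Counter(
        r[protected_field] for r in records if r.get(protected_field) is not None
    )

    findings = []
    if total and missing / total > max_missing:
        findings.append(f"{missing}/{total} records contain missing values")
    if groups:
        smallest, largest = min(groups.values()), max(groups.values())
        if smallest < 0.5 * largest:  # hypothetical imbalance threshold
            findings.append(f"group counts are imbalanced: {dict(groups)}")
    return findings


# Example usage with toy records
data = [
    {"feature": 1.0, "age_group": "18-30"},
    {"feature": 2.0, "age_group": "18-30"},
    {"feature": None, "age_group": "60+"},
]
print(screen_dataset(data))
```

Checks like these are only a starting point; the Act expects bias examination and data quality to be assessed and documented as part of a broader governance process.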
The EU AI Act institutes robust procedures for market surveillance and enforcement to ensure that AI systems comply with set regulations. This section addresses the responsibilities and powers of the authorities in enforcing compliance and the structure of penalties for non-conformity.
The designated authorities are armed with extensive powers to oversee the proper implementation of the AI Act. These powers include:
Requesting access to the data, documentation, and technical information needed to assess compliance.
Carrying out evaluations of AI systems already placed on the market.
Requiring corrective measures, and ordering the withdrawal or recall of non-compliant AI systems.
By ensuring compliance with the AI Act, surveillance authorities enhance market transparency and protect consumers by keeping non-compliant AI applications off the market.
The regulation sets out clear consequences for failure to adhere to the rules. Fines for non-compliance are structured as follows:
| Non-Compliance Aspect | Fine Range |
|---|---|
| Minor Infringements | Up to €10 million or 2% of global annual turnover |
| Major Infringements | Up to €20 million or 4% of global annual turnover |
| Supplying Incorrect, Incomplete, or Misleading Information to Authorities | Up to €10 million or 2% of global annual turnover |
The enforcement mechanism is prepared to impose hefty fines to ensure entities take the necessary steps for compliance. These financial penalties are designed to be proportionate to the severity of the infringement and act as a deterrent against violations. Regulations demand that all market actors adhere strictly to the defined requirements to avoid penalties.
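As a back-of-the-envelope illustration of how these caps scale with company size, the sketch below computes the applicable upper bound of a fine. It assumes, as in other EU penalty regimes such as the GDPR, that the higher of the fixed amount and the turnover percentage applies; the `fine_cap` helper is invented for this example, and readers should confirm the exact mechanism against the final text of the Act.

```python
def fine_cap(fixed_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    """Upper bound of a fine, assuming the higher of the two limits applies."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)


# Major infringement caps from the table above: EUR 20 million or 4% of turnover.
# For a company with EUR 2 billion in global annual turnover:
cap = fine_cap(20_000_000, 0.04, 2_000_000_000)
print(f"Maximum fine: EUR {cap:,.0f}")  # -> Maximum fine: EUR 80,000,000
```

Under this reading, the percentage-based limit dominates for large companies, which is what makes the caps a meaningful deterrent regardless of company size.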
The European Union’s AI Act introduces robust structures for governance and accountability to oversee compliance with ethical standards in artificial intelligence. These structures are vital for maintaining trust and ensuring that AI systems adhere to regulations designed to protect EU citizens’ fundamental rights.
The European Artificial Intelligence Board (EAIB) serves as a key component in the governance framework. It is made up of representatives from each EU member state and the European Commission. The board’s primary responsibilities include:
Advising and assisting the European Commission on matters relating to the implementation of the Act.
Coordinating national supervisory authorities to promote consistent application across Member States.
Issuing opinions, recommendations, and guidance on questions raised by the Act’s implementation.
National Supervisory Authorities (NSAs) represent the decentralized aspect of the framework, tasked with:
Supervising the application and implementation of the Act at the national level.
Conducting market surveillance and handling complaints concerning non-compliant AI systems.
Reporting to the European Commission and the EAIB on enforcement activities in their Member State.
The EU AI Act sets a precedent with considerable influence on international trade and global norms. It creates a regulatory landscape for AI that actors outside the EU must navigate to participate in the European market.
The EU AI Act exhibits a strong extraterritorial effect, impacting entities outside the European Union. Non-EU companies must comply with the Act if their AI systems are used within the EU. For instance, if a South Korean company intends to export AI-driven medical devices to the EU, it must adhere to the EU AI Act’s requirements, irrespective of its domestic regulations. This has the potential to raise compliance costs and affect global product strategies.
The EU AI Act seeks harmonization with international norms, but it may also set new global standards through its influence.
There may be tension between the EU’s regulatory regime and other international frameworks or national laws. How this tension resolves could reshape regulatory approaches worldwide.
While the EU AI Act is still moving through its final legislative phases, businesses need to start preparing now for the new obligation to demonstrate, through conformity assessments, that they comply with the Act.
For more information about how you can demonstrate compliance with the EU AI Act, get in touch.
Drew is a regulatory expert specializing in AI regulation and compliance.
Conformity assessments are procedures set out in the EU AI Act to ensure that AI systems meet specific safety, transparency, and accountability standards before being put into service, especially for high-risk applications.
For high-risk AI systems, conformity assessments are typically conducted by independent third parties or notified bodies. However, for lower-risk AI systems, the assessment can often be carried out internally by the developers or providers of the AI system.
If an AI system fails to meet the required standards in the conformity assessment, it cannot be marketed or put into service in the EU. The AI system must be modified and reassessed to ensure compliance with the necessary requirements of the EU AI Act.