April 15, 2026 · 12 min read

EU AI Act High-Risk Categories: Annex III Complete Guide

The EU AI Act does not apply equally to all AI: most systems face few or no obligations under it. But Annex III lists 8 specific categories where your AI must meet strict requirements before deployment. Here is exactly what each category covers, who it affects, and what compliance looks like.

Annex III compliance deadline: August 2, 2026

High-risk AI providers and deployers in Annex III categories must be compliant by August 2, 2026, when enforcement by national market surveillance authorities begins. Fines reach €15 million or 3% of global annual turnover, and up to €35 million or 7% for prohibited practices.

What Is Annex III?

The EU AI Act uses a risk-based framework. Most AI systems fall into “limited risk” or “minimal risk” categories with few or no obligations. But Annex III lists specific high-risk categories where the stakes are high enough that detailed requirements apply.

If your AI system falls into one of these 8 categories, you have significant obligations as a provider (if you build it) or as a deployer (if you use it in your operations). The European Commission can add or modify use cases within these areas over time, but the 8 areas below are fixed in the Act itself.

How to read these categories

For each category, an AI system is high-risk only if it is used as intended in that domain. A general-purpose AI model (like ChatGPT) is not inherently high-risk — it becomes high-risk when specifically deployed for one of these purposes. Using ChatGPT to draft emails is not high-risk. Using it to score loan applications is.
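To make that boundary concrete, here is a minimal sketch in Python. The purpose labels and the mapping below are invented for illustration; the Act itself classifies by a system's intended purpose, not by any fixed vocabulary like this.

```python
# Illustrative only: classification turns on a system's *intended purpose*,
# not on the underlying model. These purpose labels are invented.
ANNEX_III_PURPOSES = {
    "loan_scoring": "Category 5: access to essential services",
    "cv_screening": "Category 4: employment",
    "exam_proctoring": "Category 3: education and vocational training",
}

def annex_iii_category(intended_purpose: str) -> str | None:
    """Return the Annex III category for a purpose, or None if not listed."""
    return ANNEX_III_PURPOSES.get(intended_purpose)

# The same general-purpose model lands in different tiers by deployment:
print(annex_iii_category("email_drafting"))  # None: not high-risk
print(annex_iii_category("loan_scoring"))    # Category 5: high-risk
```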

The 8 Annex III High-Risk Categories

Category 1: Biometric Identification & Categorisation

Examples of AI systems that fall here:

  • Facial recognition in public spaces (mostly prohibited under Article 5)
  • Remote biometric identification of persons
  • AI that categorises people by sensitive attributes inferred from biometric data (inferring race, political opinions, or sexual orientation this way is prohibited outright)

Who is typically affected:

Law enforcement, border control, private companies using facial recognition in access control

Key boundary: Real-time remote biometric identification in publicly accessible spaces by law enforcement is prohibited except under narrow exceptions (child trafficking, terrorism, imminent threat to life). Post-hoc biometric identification is high-risk.

Category 2: Critical Infrastructure

Examples of AI systems that fall here:

  • AI managing water, gas, electricity, or heating supply
  • AI managing road traffic and railway transport
  • AI in digital infrastructure safety components

Who is typically affected:

Energy companies, water utilities, transport operators, smart grid operators

Key boundary: Only AI that is a safety component of critical infrastructure. A scheduling tool for a utility is not high-risk; an AI that decides when to cut power is.

Category 3: Education & Vocational Training

Examples of AI systems that fall here:

  • AI that determines access to educational institutions
  • AI that evaluates students in educational institutions
  • AI that monitors students during exams
  • AI that recommends educational pathways

Who is typically affected:

Universities, schools, online learning platforms, exam proctoring services, EdTech companies

Key boundary: AI tutoring tools are not high-risk. AI that gates access to education or evaluates performance for grades/degrees is.

Category 4: Employment, Workers' Management & Access to Self-Employment

Examples of AI systems that fall here:

  • AI for CV screening and candidate ranking
  • AI that makes or influences hiring/promotion/dismissal decisions
  • AI that monitors performance of employees
  • AI that allocates tasks based on personality profiling

Who is typically affected:

HR departments, ATS vendors, workforce management platforms, gig economy platforms

Key boundary: This is the category most SaaS HR companies need to worry about. Any AI that filters candidates or influences employment decisions is high-risk.

Category 5: Access to Essential Private and Public Services

Examples of AI systems that fall here:

  • AI in credit scoring and loan assessment
  • AI for risk assessment and pricing in life and health insurance
  • AI in social benefit allocation
  • AI in emergency services dispatch prioritisation

Who is typically affected:

Banks, insurance companies, fintech lenders, public health systems, government benefit agencies

Key boundary: Credit scoring AI is explicitly listed. If your AI outputs a score that affects loan approval or insurance pricing, it is high-risk.

Category 6: Law Enforcement

Examples of AI systems that fall here:

  • AI that assesses an individual's risk of committing or re-committing criminal offences
  • AI used as a polygraph or lie-detection tool
  • AI that detects emotional state during investigations
  • AI that predicts criminal acts based on profiling (prohibited outright when based solely on profiling)
  • AI that analyses criminal evidence (e.g. CCTV footage)

Who is typically affected:

Police forces, public prosecutors, courts, private security providing AI to law enforcement

Key boundary: AI polygraph / emotion detection is high-risk regardless of accuracy claims. Most law enforcement predictive AI falls here.

Category 7: Migration, Asylum & Border Control

Examples of AI systems that fall here:

  • AI to assess risk from persons at borders
  • AI to verify documents and biometrics at borders
  • AI for asylum claim assessment
  • AI to forecast irregular migration events

Who is typically affected:

Border agencies, migration ministries, asylum processing services

Key boundary: AI screening asylum applications is high-risk. These are typically government deployments, but vendors selling to border agencies are providers of high-risk AI.

Category 8: Administration of Justice & Democratic Processes

Examples of AI systems that fall here:

  • AI to assist courts in researching facts and applying law
  • AI for alternative dispute resolution
  • AI that influences elections or voter behaviour
  • AI that analyses political advertising targeting

Who is typically affected:

Courts, legal tech companies, election authorities, political campaign tools

Key boundary: AI research tools for lawyers are lower risk. AI that influences judicial decisions or election outcomes is high-risk.

What High-Risk AI Providers Must Do

If you build an AI system that falls into an Annex III category, you are a “provider” under the EU AI Act. Your obligations before placing it on the EU market:

1. Conformity Assessment

Most Annex III systems can be self-assessed via internal control. Biometric systems (Category 1) require assessment by a third-party notified body unless harmonised standards are applied in full. Either way, you document that your AI meets the requirements.

2. Technical Documentation

Create and maintain detailed documentation: system description, training data, testing methodology, accuracy metrics, risk management measures, human oversight provisions. A rough sketch of such a record follows step 6 below.

3. EU Database Registration

Register your high-risk AI system in the EU AI database before deploying. This is a public register (with some exceptions for law enforcement).

4. CE Marking

Affix the CE mark once conformity assessment is complete. This is how you signal compliance to regulators and customers.

5. Quality Management System

Document your development process: data governance, testing procedures, post-market monitoring, incident reporting.

6. Post-Market Monitoring

Once deployed, monitor performance and report serious incidents to national market surveillance authorities.
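To make step 2 concrete, here is a rough sketch of technical documentation as a structured record. The field names are our own shorthand loosely following the themes of Annex IV, not the Act's wording, and the sample values are invented.

```python
from dataclasses import dataclass

@dataclass
class TechnicalDocumentation:
    # Field names are illustrative shorthand, not Annex IV's exact wording.
    system_description: str
    intended_purpose: str
    training_data_summary: str           # provenance, scope, known gaps
    testing_methodology: str
    accuracy_metrics: dict[str, float]
    risk_management_measures: list[str]
    human_oversight_provisions: list[str]

doc = TechnicalDocumentation(
    system_description="CV screening model v2.3",
    intended_purpose="Rank applicants for interview shortlisting only",
    training_data_summary="2019-2024 anonymised EU applications",
    testing_methodology="Holdout evaluation plus demographic bias audit",
    accuracy_metrics={"f1": 0.91, "false_positive_rate": 0.04},
    risk_management_measures=["quarterly bias re-audit", "drift alerting"],
    human_oversight_provisions=["recruiter reviews every auto-rejection"],
)
```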

What High-Risk AI Deployers Must Do

If you use a high-risk AI system in your operations (even one built by a vendor), you are a “deployer.” Your obligations:

Use it as intended

Only use the AI within the use case the provider documented. Using a hiring AI to make dismissal decisions when it was only validated for screening is a violation.

Ensure human oversight

High-risk AI requires meaningful human review. A human must be able to understand the AI's output, detect anomalies, and override it.

Conduct a DPIA

For high-risk AI processing personal data, a Data Protection Impact Assessment under GDPR is also required.

Train staff

People operating high-risk AI must have sufficient AI literacy to oversee it effectively. Training is required, not optional.

Monitor performance

Watch for drift, unexpected outputs, and bias over time. Report serious incidents to the provider and to national authorities.

Keep logs

Automatic logging is required where technically feasible. These logs must be kept for a period appropriate to the system's purpose, and for at least six months unless other EU or national law requires longer.
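As a minimal sketch of what automatic logging can look like in practice (the decorator, the JSONL format, and the toy scoring model are our own illustration; the Act does not prescribe a format):

```python
import json
import time
import uuid
from functools import wraps

def audit_log(log_path: str):
    """Append one JSON line per model call: id, timestamp, inputs, output."""
    def decorator(predict):
        @wraps(predict)
        def wrapper(*args, **kwargs):
            result = predict(*args, **kwargs)
            record = {
                "id": str(uuid.uuid4()),
                "ts": time.time(),
                "inputs": {"args": repr(args), "kwargs": repr(kwargs)},
                "output": repr(result),
            }
            with open(log_path, "a") as f:
                f.write(json.dumps(record) + "\n")
            return result
        return wrapper
    return decorator

@audit_log("decisions.jsonl")
def score_loan_application(income: float, debt: float) -> float:
    """Toy scoring model, standing in for the real high-risk system."""
    return max(0.0, min(1.0, income / (income + debt)))

score_loan_application(52_000.0, 18_000.0)  # each call appends an audit record
```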

The Fines for Non-Compliance

  • Prohibited AI practices (Article 5): up to €35 million or 7% of global annual turnover, whichever is higher
  • Non-compliance with high-risk AI obligations: up to €15 million or 3% of global annual turnover, whichever is higher
  • Supplying incorrect or misleading information to authorities: up to €7.5 million or 1% of global annual turnover, whichever is higher
  • SMEs and startups: the lower of the fixed amount and the percentage applies
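Note the direction of the caps: for most companies the higher of the two amounts applies, for SMEs the lower. A quick arithmetic sketch:

```python
def fine_cap(fixed_eur: float, pct: float, turnover_eur: float, sme: bool = False) -> float:
    """Cap is the higher of the fixed amount and pct * worldwide annual
    turnover; for SMEs and startups the lower of the two applies."""
    pick = min if sme else max
    return pick(fixed_eur, pct * turnover_eur)

# Non-compliant high-risk AI, provider with €2bn worldwide turnover:
print(fine_cap(15_000_000, 0.03, 2_000_000_000))         # 60000000.0 (€60m)
# Same violation by an SME with €10m turnover:
print(fine_cap(15_000_000, 0.03, 10_000_000, sme=True))  # 300000.0 (€300k)
```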

Find out if your AI is high-risk — free in 60 seconds

Answer 4 questions. ComplianceIQ maps your AI systems to the correct Annex III category, shows your obligations, and generates a compliance roadmap.