Is Your Agent High-Risk?
The EU AI Act Classification Guide
Getting the classification wrong has consequences in both directions. Over-classify and you build compliance overhead you do not need. Under-classify and you face fines of up to €15M or 3% of global turnover. This guide walks you through the Annex III criteria with real examples.
TL;DR — Key Takeaways
- High-risk classification is defined by Annex III: 8 categories covering specific use-case domains, not technical capabilities.
- The critical factors are: what decisions the agent influences, who is affected, and what the consequences are.
- HR screening, medical assistance, financial lending, and legal research agents are almost always high-risk.
- Internal productivity agents with no consequential decision-making are generally not high-risk.
- Wrong classification can result in €15M fines or 3% of global turnover — document your classification rationale.
The Classification Question
The EU AI Act creates a tiered compliance structure. The tier your AI agent falls into determines how many obligations apply to you. Getting this right before you build is far cheaper than retrofitting compliance after deployment.
High-Risk Classification
Compliance obligations including risk management (Art. 9), data governance (Art. 10), technical documentation (Art. 11), logging (Art. 12), transparency (Art. 13), human oversight (Art. 14), and accuracy/robustness (Art. 15).
Not High-Risk Classification
Minimal obligations: transparency requirements (disclose AI to users if applicable) and general-purpose AI rules if you are a foundation model provider. No technical documentation, audit logs, or human oversight mandates.
The Cost of Getting it Wrong
Over-classification (unnecessary compliance)
Engineering overhead, slower deployment, higher operational cost
Under-classification (missed obligations)
Up to €15M or 3% of global annual turnover, whichever is higher, plus reputational damage
Annex III: The 8 High-Risk Categories
Annex III of the EU AI Act lists the specific domains where AI systems are considered high-risk. If your agent operates in any of these categories — even partially — you must apply the high-risk compliance framework. The classification is domain-based, not technology-based.
Biometrics
Remote biometric identification, emotion recognition, biometric categorization of people into sensitive categories.
Agent Examples
Identity verification agent, facial recognition screening agent
Critical Infrastructure
Safety components of water, gas, heating, electricity networks and road transport.
Agent Examples
Power grid optimization agent, traffic management agent, network security automation agent
Education
Determining access or admission to educational institutions, evaluating learning outcomes.
Agent Examples
Admissions screening agent, automated grading agent, student performance assessment agent
Employment
Recruitment, CV screening, promotion decisions, monitoring employee performance.
Agent Examples
HR screening agent, interview scheduling agent, performance review agent, workforce planning agent
Essential Services
Credit scoring, insurance risk assessment, life insurance underwriting, emergency services dispatch.
Agent Examples
Loan decisioning agent, insurance underwriting agent, credit limit adjustment agent
Law Enforcement
Risk assessment for criminal recidivism, polygraph-like systems, evidence reliability evaluation.
Agent Examples
Criminal risk assessment agent, predictive policing agent, evidence analysis agent
Migration and Asylum
Lie detection in border control, risk assessment for irregular migration, visa and asylum application processing.
Agent Examples
Visa processing agent, asylum claim assessment agent, border document verification agent
Administration of Justice
AI in dispute resolution, assisting courts in researching and interpreting facts, applying the law.
Agent Examples
Legal research agent, case outcome prediction agent, contract dispute analysis agent
Self-Assessment Decision Tree
Work through this decision tree for your AI agent. Each step narrows the classification. Document your answers — you will need them for your classification rationale.
Step 1: Does your agent operate in one of the 8 Annex III domains?
Yes
If it falls into any of the 8 Annex III categories → proceed to Step 2.
No
If none of the 8 categories apply → likely NOT high-risk. Document your reasoning.
Step 2: Does the agent make or significantly influence consequential decisions?
Yes
If yes → likely HIGH-RISK. Proceed to Step 3.
No
If the agent is purely informational with no decision influence → may not be high-risk. Consult legal counsel.
Step 3: Are real people affected by the agent's outputs?
Yes
If yes, and those effects are material (access to services, employment, credit) → HIGH-RISK.
No
If the agent only affects internal systems with no human impact → further analysis needed.
Step 4: Could an error by the agent cause material harm to a person?
Yes
If a wrong decision could cause physical, financial, social, or legal harm to a person → HIGH-RISK.
No
If errors are easily correctable and have no real-world consequences for individuals → assess further.
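The four steps above can be sketched as a helper function. This is an illustrative sketch only, not legal advice: the domain names, the return labels, and the boolean inputs are assumptions chosen for demonstration, and a real classification requires legal review.

```python
# Illustrative sketch of the four-step self-assessment decision tree.
# Not legal advice: a real classification requires legal review.

ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_asylum", "justice",
}

def classify(domain: str,
             influences_decisions: bool,
             affects_people: bool,
             error_causes_harm: bool) -> str:
    """Walk the four steps; returns a provisional label, not a legal ruling."""
    # Step 1: outside all Annex III domains -> likely not high-risk.
    if domain not in ANNEX_III_DOMAINS:
        return "likely-not-high-risk"
    # Step 2: purely informational agents may fall outside high-risk.
    if not influences_decisions:
        return "consult-legal-counsel"
    # Steps 3-4: material effects on people, or harmful errors -> high-risk.
    if affects_people or error_causes_harm:
        return "high-risk"
    return "assess-further"

# Example: an HR screening agent that filters job candidates.
print(classify("employment", True, True, True))  # -> high-risk
```

Whatever label the sketch returns, record the inputs you gave it: those answers are exactly what the classification rationale document described later needs to contain.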
Real-World Agent Examples: High-Risk or Not?
Classification is easier with concrete examples. Here is how 8 common agent archetypes classify under the EU AI Act, with the reasoning for each.
Customer Service Agent (General)
NOT HIGH-RISK: Handles general inquiries, provides information, escalates issues. Does not make consequential decisions affecting people's rights or access to services. Standard transparency requirements apply.
HR Candidate Screening Agent
HIGH-RISK: Annex III Category 4 (Employment). Makes or influences decisions about who gets considered for a job. Even as a filtering tool, it significantly affects employment outcomes for real people.
Medical Diagnosis Assistant
HIGH-RISK: Likely high-risk via the medical device route (Article 6(1) with Annex I harmonised legislation) as well as Annex III Category 5 (Essential Services). Influences clinical decisions with direct health consequences. Full high-risk compliance plus medical device frameworks apply.
Code Generation Assistant
NOT HIGH-RISK: Generates code suggestions for developers. Does not make autonomous decisions affecting third parties. Unless deployed to generate code for critical infrastructure systems, this falls outside Annex III. Basic transparency requirements apply.
Financial Trading Agent
HIGH-RISK: Annex III Category 5 (Essential Services). Makes autonomous financial decisions with material consequences. Also subject to MiFID II and other financial regulation. Risk management and human oversight requirements are mandatory.
Internal IT Automation Agent
DEPENDS: If automating routine tasks (ticket routing, password resets) — likely NOT high-risk. If managing safety-critical infrastructure (power systems, network security controls, production deployments) — potentially Category 2 (Critical Infrastructure). Conduct a specific analysis of the systems it controls.
Content Moderation Agent
DEPENDS: Scale and context matter. For a small internal tool — likely not high-risk. For a platform making autonomous decisions about user accounts, content visibility, or access to services at scale — may trigger Category 5 or 8. The EU AI Act explicitly considers context of use.
Legal Research Agent
HIGH-RISK: Annex III Category 8 (Administration of Justice). Assists in legal research and case analysis, directly influencing legal outcomes for real people. Even as an advisory tool, the potential for harm through missed case law or incorrect legal analysis is consequential.
If High-Risk: Your 8 Compliance Obligations
If your agent is high-risk, the following eight obligations apply. All must be in place before August 2, 2026.
Risk Management System
Establish, implement, and maintain a documented risk management system throughout the lifecycle.
Data Governance
Training, validation, and testing data must be subject to appropriate governance and quality checks.
Technical Documentation
Prepare technical documentation before market placement; keep it updated for the system's lifetime.
Automatic Logging
Implement automatic record-keeping that logs operations with sufficient detail for post-deployment audit.
Transparency
Users must be informed they are interacting with a high-risk AI system with sufficient information to exercise oversight.
Human Oversight
Design the system to enable effective human oversight — including the ability to interrupt, monitor, and override.
Accuracy & Robustness
Maintain appropriate levels of accuracy and resilience to errors, including adversarial manipulation.
Conformity Assessment
Undergo conformity assessment before deployment. For some categories, this requires a notified body.
If Not High-Risk: Still Recommended Controls
Even agents that do not fall under high-risk classification benefit from runtime governance. The absence of a legal mandate does not mean the risks do not exist — it means the consequences of getting it wrong are commercial rather than regulatory. Three baseline controls are recommended for all production agents.
Tool Allowlist
Define what tools your agent is allowed to call. Block everything else. This prevents scope creep and accidental data access regardless of risk tier.
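A minimal version of this control is a gate that refuses any tool not explicitly named. A sketch, assuming hypothetical tool names; the set and handler signature are illustrative, not a specific framework's API:

```python
# Minimal tool-allowlist gate. Tool names here are hypothetical examples.
ALLOWED_TOOLS = {"search_docs", "create_ticket", "send_email"}

def call_tool(name: str, handler, *args, **kwargs):
    """Run a tool only if it is explicitly allowlisted; block everything else."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{name}' is not on the allowlist")
    return handler(*args, **kwargs)
```

The key design choice is default-deny: adding a new tool requires a deliberate edit to the allowlist, which is where a scope-creep review naturally happens.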
Audit Logging
Log every tool call with decision and timestamp. Even if not legally required, this is essential for debugging, billing reconciliation, and incident response.
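One simple shape for such a log is an append-only JSON Lines file with one record per tool call. A sketch under assumptions: the field names and file path are illustrative, not mandated by the Act.

```python
import json
import time
import uuid

def log_tool_call(tool: str, decision: str, session_id: str, detail: dict) -> str:
    """Append one structured audit record per tool call (JSON Lines format)."""
    record = {
        "id": str(uuid.uuid4()),   # unique record id for cross-referencing
        "ts": time.time(),         # timestamp for post-hoc reconstruction
        "session": session_id,
        "tool": tool,
        "decision": decision,      # e.g. "allowed" or "blocked"
        "detail": detail,
    }
    line = json.dumps(record)
    with open("agent_audit.jsonl", "a") as f:
        f.write(line + "\n")
    return line
```

Append-only JSON Lines keeps each record independently parseable, which matters when you are reconstructing an incident from a partially written log.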
Budget Cap
Set a hard cost ceiling per session and per day. Infinite loops are not a high-risk-exclusive problem — they happen to all agents and cost real money.
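A per-session cap can be as simple as a counter that refuses further spend once the ceiling is reached; a per-day cap is the same idea with a daily reset. A minimal sketch, with the limit and cost figures as placeholder values:

```python
class BudgetCap:
    """Hard per-session cost ceiling: refuses work once the budget is spent."""

    def __init__(self, limit_usd: float):
        self.limit = limit_usd
        self.spent = 0.0

    def charge(self, cost_usd: float) -> None:
        """Record a cost, or raise before the ceiling would be exceeded."""
        if self.spent + cost_usd > self.limit:
            raise RuntimeError(
                f"budget exceeded: {self.spent + cost_usd:.2f} > {self.limit:.2f}"
            )
        self.spent += cost_usd
```

Raising before the charge is booked means a runaway loop stops at the ceiling rather than one call past it.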
Documentation: Prove Your Classification
Whether you classify your agent as high-risk or not, you need to document the reasoning. A national AI supervisory authority can question your classification. Without documentation, you cannot defend a "not high-risk" determination, and the default assumption will likely not be in your favor.
Classification Rationale Document Template
Agent Name & Version
The specific system being classified, including version number and deployment date.
Use Case Description
What the agent does, what tools it has access to, and what decisions it influences.
Annex III Analysis
For each of the 8 Annex III categories: does the agent operate in this domain? Why or why not?
Affected Parties
Who interacts with or is affected by the agent's outputs? Are there third parties who are not the users?
Classification Decision
High-risk or not high-risk, with explicit reference to the Annex III category or the reason for exclusion.
Review Date
Classifications should be reviewed annually or when the agent's functionality changes materially.
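The template above can also be kept in machine-readable form, which makes the annual review diffable. A sketch assuming a hypothetical agent ("cv-screener") and illustrative field names; nothing here is a format the Act prescribes:

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class ClassificationRationale:
    """Machine-readable version of the classification rationale template."""
    agent_name: str
    version: str
    use_case: str
    annex_iii_analysis: dict   # category -> reasoning, one entry per category reviewed
    affected_parties: list
    decision: str              # "high-risk" or "not-high-risk"
    review_date: date

# Hypothetical example values for illustration only.
doc = ClassificationRationale(
    agent_name="cv-screener",
    version="1.2.0",
    use_case="Filters inbound CVs before recruiter review",
    annex_iii_analysis={"employment": "screens job applicants -> in scope"},
    affected_parties=["job applicants", "recruiters"],
    decision="high-risk",
    review_date=date(2026, 8, 1),
)
```

Serialising with `asdict(doc)` gives a plain dictionary you can version-control next to the agent's code, so the rationale changes in the same commit as the functionality that triggers re-classification.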
When to Re-Classify
A classification is not permanent. If your agent gains new tools, is deployed in a new context, or begins influencing decisions in a new domain, re-run the classification analysis. An agent that starts life as a text summarizer and evolves into an HR decision support tool crosses the high-risk threshold the moment it starts influencing employment decisions.
Frequently Asked Questions
What makes an AI system 'high-risk' under the EU AI Act?
The EU AI Act Annex III defines high-risk AI systems across 8 categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration/asylum, and administration of justice. Any AI agent operating in these domains is likely high-risk.
Are all AI agents considered high-risk?
No. AI agents used purely for text generation, creative tasks, or internal productivity without consequential decision-making are generally not high-risk. Agents that make decisions affecting people's lives, access to services, or operate in regulated industries are typically high-risk.
What happens if I wrongly classify my AI as not high-risk?
Misclassifying a high-risk system as not high-risk means missing mandatory obligations, which is itself non-compliance, and national AI supervisory authorities can take enforcement action. Penalties run up to €15M or 3% of global annual turnover, whichever is higher.