Article 9: Risk Management
Build a System That Satisfies Regulators
Article 9 is not a one-time risk assessment. It demands a living system that identifies, mitigates, monitors, and documents AI risks throughout your agent's entire operational life. Here is how to build one that actually satisfies the regulation.
TL;DR — Key Takeaways
- Article 9 requires a risk management system — not just a risk assessment. It must be ongoing, not a checkbox.
- You must identify risks, evaluate them, implement controls, monitor residual risk, and document everything.
- The five agent risk categories are: unauthorized tool execution, data exfiltration, budget abuse, prompt injection, and regulatory violations.
- DENY policies eliminate high-likelihood/high-impact risks. REQUIRE_APPROVAL manages residual risks needing human judgment.
- SupraWall's policy engine, audit logs, and compliance dashboard are a ready-made Article 9 risk management system.
Article 9 in Plain English
Article 9 of the EU AI Act requires that providers of high-risk AI systems establish, implement, document, and maintain a risk management system. The word "maintain" is doing significant work here — this is not a point-in-time assessment you complete before launch. It is a continuous process with ongoing obligations.
The 4 Lifecycle Obligations
Before deployment: identify and document all foreseeable risks. Create a risk register. Define your control architecture.
At deployment: technical controls must be active and functioning. Policies configured, monitoring live, approval queues operational.
Continuously: maintain records of the risk management system — the risks identified, controls deployed, and test results.
Post-deployment: review and update the risk management system as the AI system evolves, new risks emerge, or incidents occur.
The lifecycle requirement is what catches most teams off guard. You cannot build a risk management system, pass it through legal review, and file it away. Article 9 requires that the system remain current and that you can demonstrate it was active during the period an authority is investigating.
Risk Identification for AI Agents
Article 9 requires identification of all reasonably foreseeable risks associated with your AI system. For autonomous agents, these fall into five categories. Each must be documented in your risk register with a description, affected parties, and potential severity.
Unauthorized Tool Execution
Severity: Critical. Agent calls tools outside its intended scope — accessing unauthorized APIs, executing system commands, or operating on data it should not touch. Can result from prompt injection or planning failures.
Data Exfiltration
Severity: Critical. Agent transmits sensitive, proprietary, or personal data to external endpoints. May be deliberate (injected instruction) or accidental (overly broad tool scope).
Budget and Resource Abuse
Severity: High. Agent enters infinite loops, makes excessive API calls, or consumes tokens at a rate that causes financial harm. Denial-of-wallet attacks exploit agents with no cost controls.
Prompt Injection
Severity: High. Malicious content in the agent's environment (documents, emails, web pages) redirects the agent to act against operator instructions. Externally introduced risk.
Regulatory Violations
Severity: Severe. Agent takes actions that violate applicable law — sending unsolicited communications, processing personal data without basis, making discriminatory decisions.
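Each category above becomes an entry in the risk register. A minimal sketch of one such entry is below — the field names are illustrative, not a prescribed SupraWall or Article 9 schema:

```json
{
  "risk_id": "RISK-001",
  "category": "unauthorized_tool_execution",
  "description": "Agent calls tools outside its intended scope",
  "affected_parties": ["data subjects", "system operator"],
  "severity": "CRITICAL",
  "likelihood": "HIGH",
  "possible_sources": ["prompt_injection", "planning_failure"]
}
```

Whatever structure you choose, each entry needs the elements Article 9 expects: a description, the affected parties, and a severity assessment.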
Risk Assessment: The Likelihood × Impact Matrix
Article 9 requires that identified risks be evaluated — not just listed. The standard approach is a likelihood × impact matrix. For each risk, assess how likely it is to occur and how severe the impact would be if it did. This produces a risk level that drives your control selection.
Risk Level Matrix — Control Selection Guide
| Likelihood ↓ / Impact → | Low Impact | High Impact |
| --- | --- | --- |
| High Likelihood | Medium Risk: REQUIRE_APPROVAL | Critical Risk: DENY + document |
| Low Likelihood | Low Risk: ALLOW + log | High Risk: REQUIRE_APPROVAL |
The matrix drives your policy selection: critical risks (high likelihood + high impact) require DENY policies that prevent the action entirely. High risks (low likelihood + high impact) and medium risks (high likelihood + low impact) warrant REQUIRE_APPROVAL. Low risks can be ALLOW with logging for monitoring.
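The matrix can be captured as a small decision table that feeds directly into policy configuration. A sketch, with values taken from the matrix above (key and field names are illustrative):

```json
{
  "high_likelihood|high_impact": { "level": "CRITICAL", "action": "DENY", "note": "document the denial" },
  "high_likelihood|low_impact": { "level": "MEDIUM", "action": "REQUIRE_APPROVAL" },
  "low_likelihood|high_impact": { "level": "HIGH", "action": "REQUIRE_APPROVAL" },
  "low_likelihood|low_impact": { "level": "LOW", "action": "ALLOW", "note": "log for monitoring" }
}
```

Encoding the mapping once keeps control selection consistent across risks and gives auditors a single artifact explaining why each control was chosen.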
Risk Mitigation Controls
Article 9 requires that identified risks be mitigated by appropriate controls. For AI agents, controls must operate at the action layer — not the language layer. Here is the mapping from risk category to SupraWall control.
| Risk | Control | Implementation |
| --- | --- | --- |
| Unauthorized Tool Execution | Tool Allowlist (DENY by default) | Define an explicit allowlist. All tools not on the list return DENY automatically. |
| Data Exfiltration | Scope Isolation + PII Scrubbing | Restrict agent to specific data namespaces. Auto-redact PII from log entries and outbound calls. |
| Budget / Resource Abuse | Budget Caps (token + cost limits) | Set hard per-session and per-day limits. Agent halts when cap is reached — not just warned. |
| Prompt Injection | REQUIRE_APPROVAL on sensitive tools | Any tool that could be weaponized by injected instructions (email, file write, external HTTP) requires human sign-off. |
| Regulatory Violations | Tool Blocklist for non-compliant operations | DENY policies on tools that would constitute regulatory violations — e.g., bulk_contact_send, decision_record_write. |
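Taken together, these controls might be expressed in a single policy configuration along the following lines. This is a sketch with illustrative field names and tool identifiers — consult the SupraWall documentation for the actual policy schema:

```json
{
  "default_action": "DENY",
  "tool_allowlist": ["crm.read", "report.generate"],
  "require_approval": ["email.send", "file.write", "http.external"],
  "blocklist": ["bulk_contact_send", "decision_record_write"],
  "budget": { "per_session_tokens": 200000, "per_day_usd": 50, "on_cap": "halt" },
  "data": { "allowed_namespaces": ["crm_eu"], "pii_scrubbing": true }
}
```

Note the ordering of concerns: a default DENY establishes the allowlist posture, the approval and block lists layer on top of it, and the budget and data scoping apply regardless of which tool is called.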
Residual Risk Management
No control eliminates risk entirely. After implementing your mitigation controls, residual risks remain — and Article 9 requires that these be explicitly documented and managed. The key distinction is between accepted residual risk (documented, proportionate, with justification) and unmitigated risk (not addressed, which is non-compliant).
Unmitigated Risk (Non-Compliant)
A risk that has been identified but no control has been implemented. There is no documentation of why it was left unaddressed. Regulators will treat this as negligence — there is no excuse for a documented risk with no corresponding control.
Accepted Residual Risk (Compliant)
A risk where controls exist but cannot eliminate it fully. The residual risk is explicitly documented, its level is proportionate, the justification is recorded, and it is subject to ongoing monitoring. This is acceptable and expected.
# Residual risk documentation template
{
  "risk_id": "RISK-003",
  "name": "Novel prompt injection vectors",
  "controls_applied": ["REQUIRE_APPROVAL on email.send", "PII_scrubbing"],
  "residual_level": "LOW",
  "justification": "Approval queue creates human checkpoint before external action. Novel vectors cannot bypass REQUIRE_APPROVAL.",
  "monitoring": "block_rate_dashboard",
  "review_cycle": "monthly"
}
Continuous Monitoring Requirements
Article 9 explicitly requires ongoing monitoring as part of the risk management system. This is not just post-deployment review — the regulation expects that you have active visibility into whether risks are materializing and whether controls are functioning. SupraWall provides three monitoring mechanisms that together satisfy this requirement.
Block Rate Dashboard
Shows what percentage of tool calls are being denied. A sudden spike in block rate indicates either a policy misconfiguration or an active attack. Review weekly.
Budget Consumption Alerts
Tracks per-agent token and cost consumption against caps. Alert thresholds trigger before the cap is hit, giving operators time to investigate anomalous behavior.
Monthly Compliance Reports
Auto-generated summary of risk events, approval queue activity, and policy hit rates for the period. Provides the ongoing documentation Article 9 requires.
Monitoring Cadence Recommendation
- Daily: review block rate anomalies and approval queue backlog.
- Weekly: audit budget consumption across all active agents.
- Monthly: generate and archive the full compliance report + HOE export.
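The cadence above pairs naturally with alert thresholds so that anomalies surface between scheduled reviews. A sketch of what such a monitoring configuration could look like (illustrative field names, not the actual SupraWall schema):

```json
{
  "block_rate_alert": { "baseline_window": "7d", "spike_threshold_pct": 50, "notify": "ops_channel" },
  "budget_alerts": { "warn_at_pct": 80, "halt_at_pct": 100, "notify": "ops_channel" },
  "compliance_report": { "schedule": "monthly", "archive": true }
}
```

The warn-before-halt split matters: the 80% threshold gives operators time to investigate anomalous consumption before the hard cap stops the agent.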
Generating the Article 9 Evidence Package
When a national AI supervisory authority requests evidence of Article 9 compliance, you need to produce a structured documentation package covering the entire lifecycle of your risk management system. Here is the complete list of what to prepare.
Risk Register
Required. A complete list of all identified risks, their likelihood and impact ratings, risk level, assigned control, and residual risk assessment. Minimum one page per risk category.
Control Descriptions
Required. For each control in your risk management system: what it does, how it is configured, and which risk it mitigates. Export your SupraWall policy configuration as evidence.
Test Results
Required. Evidence that controls were tested and are functioning. For SupraWall: policy evaluation test logs showing DENY responses for blocked tools, kill-switch propagation tests.
Monitoring Reports
Required. The last 6 months of monthly compliance reports showing block rates, approval queue activity, and budget consumption. Demonstrates the system is being actively monitored.
Incident Log
If applicable. Any events where the risk management system was invoked in response to an actual risk materializing — unusual block spikes, blocked injection attempts, approval rejections.
# Generate Article 9 evidence package via SupraWall API
POST /api/v1/compliance/export
{
  "type": "article_9_risk_management",
  "period_start": "2026-01-01",
  "period_end": "2026-03-31",
  "include": [
    "risk_register",
    "policy_configuration",
    "monitoring_reports",
    "incident_log"
  ]
}
# Returns structured JSON + PDF summary
Frequently Asked Questions
What does Article 9 of the EU AI Act require?
Article 9 requires that providers of high-risk AI systems establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle. This includes identifying risks, evaluating them, implementing risk mitigation, and testing residual risks.
What is a 'risk management system' for AI agents?
For AI agents, a risk management system includes: documented identification of potential harms (data exfiltration, unauthorized actions, budget overruns), technical controls that mitigate those risks, ongoing monitoring of residual risk, and regular review and updates.
How does SupraWall constitute a risk management system?
SupraWall's policy engine maps directly to Article 9: DENY policies mitigate identified risks, REQUIRE_APPROVAL controls manage residual risks requiring human judgment, audit logs provide continuous monitoring, and the compliance dashboard generates the required documentation.