EU AI Act enforcement begins August 2, 2026.
Board-Level AI Governance

Your Company Is Using AI.
Your Board Will Ask About Compliance in 90 Days.

EU AI Act fines reach €35 million or 7% of global annual revenue, whichever is higher. Boards are already asking the compliance question; the only question is whether you have an answer.

The board conversation happening right now at Fortune 500 and Series B companies

""We've seen the EU AI Act headlines. We use AI across HR, finance, and operations. Are we compliant? If not, what's the exposure? Who is accountable?" This question is now appearing on audit committee agendas across Europe. The CEOs who can answer it are the ones who started six months ago."

The Five Questions Your Board Will Ask

Prepare your answers before they ask.

Q: Are we EU AI Act compliant?

A: This requires knowing which AI tools you use, which are High-Risk under Annex III, and whether each one has: a DPIA, an audit trail, a human oversight mechanism, and documented risk management.

Q: What is our maximum fine exposure?

A: €35 million or 7% of global revenue — whichever is higher — for prohibited AI use. €15 million or 3% for other violations. Both can apply simultaneously with GDPR fines.

Q: Who is personally accountable?

A: Under GDPR, accountability sits with the controller, with the DPO monitoring and advising. The EU AI Act adds a "provider" and "deployer" liability structure. If you deploy a third-party AI tool for a High-Risk use case, you are responsible for compliance as the deployer, not the AI vendor.

Q: Do we have an AI risk register?

A: EU AI Act Article 9 requires a documented risk management system for the full lifecycle of any High-Risk AI. Without a register, you cannot demonstrate compliance.

Q: What happens if an employee uses AI incorrectly?

A: The deployer organization — your company — is liable for how its employees use AI in professional contexts. Informal ChatGPT use by HR or clinical staff is your compliance exposure, not the employee's.

Department-by-Department Risk Map

Your exposure by function.

HR (Critical)
EU AI Act Annex III Cat. 4 + GDPR Art. 22
CV screening, performance monitoring, hiring AI

Finance (Critical)
DORA Art. 8 + EU AI Act Annex III
Credit decisions, fraud detection, trading AI

Healthcare (Critical)
EU AI Act Annex III Cat. 5 + GDPR Art. 9
Clinical AI, triage, patient benefits

Legal (High)
Professional privilege + GDPR Art. 5(1)(b)
Document AI, LLM research, discovery AI

Operations (Medium)
EU AI Act Annex III Cat. 2 (if critical infrastructure)
Logistics AI, infrastructure management AI

The 30-Day Executive Action Plan

Concrete steps. Measurable outputs.

Week 1

AI Inventory

Identify every AI tool in use across all departments. Include informal ChatGPT usage. This is your baseline.

Week 1

Risk Classification

For each tool, determine whether it triggers Annex III. Use the EU AI Act Annex III Category list as the classification framework.
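A first-pass screen of the inventory can be automated before legal review. The category keywords below are illustrative assumptions, not the Annex III legal text; any match flags a tool for proper classification, it does not replace it.

```python
# Illustrative first-pass Annex III screen; real classification needs legal review.
ANNEX_III_KEYWORDS = {
    "Cat. 4 (employment)": ["cv screening", "hiring", "performance monitoring"],
    "Cat. 5 (essential services)": ["credit scoring", "patient benefits", "triage"],
    "Cat. 2 (critical infrastructure)": ["grid management", "water supply"],
}

def screen_tool(description: str) -> list[str]:
    """Return Annex III categories whose keywords match a tool description."""
    text = description.lower()
    return [
        category
        for category, keywords in ANNEX_III_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]

hits = screen_tool("Vendor SaaS for CV screening and hiring shortlists")
# Any hit -> flag the tool for DPIA and the risk register
```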

Week 2

DPIA for High-Risk Tools

Commission a Data Protection Impact Assessment for every High-Risk AI system identified. This is a legal requirement, not optional.

Week 2

Policy Engine Deployment

Enforce which AI tools can process which categories of data via a technical policy layer — not just an acceptable use policy. Policy documents do not satisfy Article 9 requirements.
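A minimal sketch of what such a technical policy layer does at the call boundary. The tool names and data categories here are hypothetical; a production system would pull policy from configuration and classify data automatically.

```python
# Deny-by-default policy check at the point where data leaves for an AI tool.
POLICY: dict[str, set[str]] = {
    "approved-llm-gateway": {"public", "internal"},
    "public-chatgpt": {"public"},
}

class PolicyViolation(Exception):
    """Raised when a tool is asked to process a data category it is not cleared for."""

def enforce(tool: str, data_category: str) -> None:
    """Block the call unless the tool is explicitly allowed for this data category."""
    allowed = POLICY.get(tool, set())
    if data_category not in allowed:
        raise PolicyViolation(f"{tool} may not process {data_category} data")

enforce("approved-llm-gateway", "internal")  # allowed, returns silently
try:
    enforce("public-chatgpt", "special-category")  # e.g. health data: blocked
except PolicyViolation as violation:
    print(violation)
```

The point of the deny-by-default design is that a tool missing from the policy, including informal ChatGPT use, is blocked rather than silently permitted.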

Week 3

Audit Trail Activation

Implement immutable logging of all High-Risk AI decisions. SupraWall's SDK provides signed, tamper-proof audit records at the tool call boundary.
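Independent of any particular SDK, the core idea of a tamper-evident decision log can be sketched with an HMAC chain, where each record signs the event together with the previous record's signature. This is an illustration of the technique, not SupraWall's implementation, and the hard-coded key is for demonstration only.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-key-use-a-kms-in-production"  # illustration only

def append_record(log: list[dict], event: dict) -> dict:
    """Append a tamper-evident record: sign the event plus the previous signature."""
    prev_sig = log[-1]["signature"] if log else ""
    payload = json.dumps({"event": event, "prev": prev_sig}, sort_keys=True)
    signature = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    record = {"event": event, "prev": prev_sig, "signature": signature}
    log.append(record)
    return record

def verify(log: list[dict]) -> bool:
    """Recompute every signature; any edited or deleted entry breaks the chain."""
    prev_sig = ""
    for record in log:
        payload = json.dumps({"event": record["event"], "prev": prev_sig}, sort_keys=True)
        expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(record["signature"], expected):
            return False
        prev_sig = record["signature"]
    return True

log: list[dict] = []
append_record(log, {"tool": "cv-screener", "decision": "reject", "ts": 0})
append_record(log, {"tool": "cv-screener", "decision": "shortlist", "ts": 1})
assert verify(log)
log[0]["event"]["decision"] = "shortlist"  # tampering with a past decision
assert not verify(log)
```

Because each signature covers the previous one, an auditor can detect both edited and silently deleted records, which is what "immutable" means in practice for a decision log.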

Two Ways to Solve This

Choose the path that fits: implement it yourself, or speak to an expert.

Business Path (C-Suite)

Book an Executive Briefing

30-minute board-ready AI compliance assessment.

Book Executive Call

Technical Path (Developers)

See the Compliance Evidence Kit

Technical templates for your engineering team.

View Technical SDK