Building an AI Incident Response Plan
The EU AI Act requires companies to report serious incidents involving high-risk AI systems within defined timeframes. GDPR breach obligations may also apply. Most companies have cybersecurity incident response plans — but not AI-specific ones. This guide covers exactly what you need to build.
EU AI Act Article 73 — Serious incident reporting
Providers of high-risk AI systems must report serious incidents to the relevant market surveillance authority immediately after establishing a causal link between the AI system and the incident, and in any event no later than 15 days after becoming aware of it (10 days in the event of a death, 2 days for a widespread infringement or a serious disruption of critical infrastructure). Without an AI incident response plan, you will almost certainly miss these deadlines when an incident occurs.
What Counts as an AI Incident?
An AI incident is any event or near-miss involving an AI system that causes or could cause harm. The EU AI Act defines a serious incident as one that results in:
- Death or serious harm to the health of natural persons
- Serious damage to property or the environment
- Serious breach of fundamental rights
- Serious disruption to critical infrastructure
But AI incidents include a much broader range of events that require internal response even when EU AI Act reporting is not triggered:
| Incident Type | Examples | External Reporting? |
|---|---|---|
| Harmful output | AI generates discriminatory decisions, dangerous advice, or content that harms someone | Yes — if serious harm results |
| Systematic bias discovered | Audit reveals the AI denies loans to a protected class at significantly different rates | Maybe — ECOA/CFPB (US), EU AI Act (EU) |
| Drift-induced degradation | AI model accuracy degrades significantly post-deployment causing systematic errors | Maybe — if high-risk system |
| Data poisoning / adversarial attack | AI model manipulated by attackers into producing the outputs the attacker intends | Yes — GDPR (if personal data), AI Act (if high-risk) |
| Unauthorised AI use discovered | Employee uses shadow AI to process customer personal data without approval | Maybe — GDPR if personal data breach |
| Third-party AI failure | Vendor AI service causes harm to your users via your product | Yes — as the deployer, you must report |
| Near-miss | AI made a harmful decision that was caught by human review before execution | No — but must be logged and reviewed |
The Five Phases of AI Incident Response
Phase 1: Detection and Identification
- Detection triggers: automated monitoring alerts, user complaints, employee reports, media/regulator contact
- First responder documents: what AI system, what happened, when, who was affected
- Assign an incident lead within 1 hour of detection
- Open an incident record in your tracking system rather than relying on email chains (a minimal record sketch follows this list)
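As a sketch of what that record might capture, here is an illustrative Python structure. The `IncidentRecord` class and its field names are assumptions, not a prescribed schema; adapt them to your tracking system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    """First responder's intake record (illustrative fields only)."""
    ai_system: str            # which AI system, including version
    description: str          # what happened, in plain language
    detected_at: datetime     # when the incident was detected
    detected_by: str          # who detected it and how (alert, complaint, report)
    affected_parties: str     # who was or may have been affected
    incident_lead: str | None = None  # assigned within 1 hour of detection
    opened_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: open the record as soon as the incident is detected
record = IncidentRecord(
    ai_system="loan-scoring-model v2.3",
    description="Denial rates far above baseline for one region",
    detected_at=datetime.now(timezone.utc),
    detected_by="Automated drift monitor alert",
    affected_parties="Applicants scored since the last deployment",
)
record.incident_lead = "Head of Compliance"  # assigned within the 1-hour target
```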
Phase 2: Classification and Escalation
- Classify severity: P1 (life-threatening / serious harm), P2 (significant harm, regulatory trigger), P3 (limited harm), P4 (near-miss)
- Determine if personal data was involved (triggers GDPR 72-hour clock)
- Determine if this is a high-risk AI system under EU AI Act (triggers Article 73)
- Notify: P1 → DPO + Legal + CEO within 1 hour. P2 → DPO + Legal within 4 hours. P3/P4 → Compliance team within 24 hours. (This routing is sketched in code below.)
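The notification matrix above is simple enough to encode directly, which removes judgment calls under pressure. A minimal sketch, using the severity levels defined above; the role names and function are illustrative:

```python
from enum import Enum

class Severity(Enum):
    P1 = "life-threatening / serious harm"
    P2 = "significant harm, regulatory trigger"
    P3 = "limited harm"
    P4 = "near-miss"

# Notification matrix from the escalation rules above: (recipients, deadline in hours)
NOTIFY = {
    Severity.P1: (["DPO", "Legal", "CEO"], 1),
    Severity.P2: (["DPO", "Legal"], 4),
    Severity.P3: (["Compliance team"], 24),
    Severity.P4: (["Compliance team"], 24),
}

def escalate(severity: Severity, personal_data: bool, high_risk_system: bool) -> list[str]:
    """Return the notification actions triggered by the incident's classification."""
    recipients, hours = NOTIFY[severity]
    actions = [f"Notify {r} within {hours}h" for r in recipients]
    if personal_data:
        actions.append("Start GDPR Art. 33 72-hour clock (supervisory authority)")
    if high_risk_system:
        actions.append("Assess EU AI Act Art. 73 serious-incident reporting")
    return actions

print(escalate(Severity.P1, personal_data=True, high_risk_system=True))
```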
Phase 3: Containment
- Suspend or isolate the affected AI system if harm is ongoing
- Preserve all evidence: model version, input data, output data, timestamps, configuration (a preservation sketch follows this list)
- If personal data breach: preserve evidence for GDPR documentation obligations
- Notify affected parties as required (GDPR Article 34 notification to individuals if the breach poses a high risk to them)
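Evidence preservation is best made mechanical. A minimal sketch of freezing an evidence bundle at containment time; the structure and field names are illustrative, and the hash simply makes it harder to dispute later what was captured:

```python
import hashlib
import json
from datetime import datetime, timezone

def preserve_evidence(model_version: str, inputs: list, outputs: list,
                      config: dict) -> dict:
    """Freeze an evidence bundle for the incident record (illustrative structure)."""
    bundle = {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,          # raw inputs the system received
        "outputs": outputs,        # raw outputs it produced
        "configuration": config,   # deployment configuration at incident time
    }
    # Hash the serialised bundle so any later tampering is detectable
    payload = json.dumps(bundle, sort_keys=True, default=str)
    bundle["sha256"] = hashlib.sha256(payload.encode()).hexdigest()
    return bundle
```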
Phase 4: Investigation and External Reporting
- Root cause analysis: training data issue, model drift, adversarial input, deployment error, human misuse?
- EU AI Act Article 73: notify the national market surveillance authority of the serious incident within the deadlines above (15 days at the latest; 10 days for a death, 2 days for critical-infrastructure disruption)
- GDPR Article 33: notify supervisory authority within 72 hours if personal data breach
- Document investigation findings in writing — this becomes your regulatory evidence
Phase 5: Recovery and Lessons Learned
- Implement fix: retrain model, update data, add human review checkpoint, or decommission system
- Verify fix effectiveness before returning the AI system to production (a verification gate is sketched after this list)
- Conduct post-incident review within 2 weeks: what worked, what failed, what to improve
- Update AI incident response plan, monitoring, and detection mechanisms based on findings
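Verification works better as an explicit gate than as a judgment call. A sketch of such a gate, assuming your plan defines minimum metric thresholds; the metric names here are hypothetical:

```python
def fit_for_production(metrics: dict[str, float],
                       thresholds: dict[str, float]) -> bool:
    """Gate the return to production on post-fix evaluation metrics.

    `metrics` come from re-running the fixed model on a held-out evaluation
    set; `thresholds` are whatever minimums your plan defines. Both are
    illustrative here.
    """
    failures = {name: value for name, value in metrics.items()
                if value < thresholds.get(name, 0.0)}
    if failures:
        print(f"Redeploy blocked, metrics below threshold: {failures}")
        return False
    return True

# Example: the fix is verified only if accuracy and the fairness ratio recover
ok = fit_for_production(
    metrics={"accuracy": 0.94, "demographic_parity_ratio": 0.97},
    thresholds={"accuracy": 0.92, "demographic_parity_ratio": 0.95},
)
```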
Regulatory Reporting Obligations and Timelines
| Regulation | Trigger | Deadline | Who to Notify |
|---|---|---|---|
| EU AI Act Art. 73 | Serious incident involving high-risk AI system | 15 days at the latest (10 days for death, 2 days for critical infrastructure) | National market surveillance authority |
| GDPR Art. 33 | Personal data breach (risk to individuals) | Within 72 hours of awareness | Lead supervisory authority |
| GDPR Art. 34 | Personal data breach with high risk to individuals | Without undue delay | Affected individuals directly |
| FDA MDR (US medical) | AI medical device malfunction causing death or serious injury | 30 calendar days (5 work days if remedial action is needed to prevent public-health harm) | FDA |
| SEC Cyber Disclosure (US public) | Material cybersecurity incident | Within 4 business days of materiality determination | SEC Form 8-K |
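These deadlines are concrete enough to compute the moment an incident is classified. A sketch of a deadline clock based on the table above; the trigger labels and function are illustrative:

```python
from datetime import datetime, timedelta, timezone

# Reporting windows from the table above, keyed by trigger (illustrative labels)
REPORTING_WINDOWS = {
    "ai_act_serious_incident": timedelta(days=15),       # general Art. 73 ceiling
    "ai_act_death": timedelta(days=10),
    "ai_act_critical_infrastructure": timedelta(days=2),
    "gdpr_art33_breach": timedelta(hours=72),
}

def reporting_deadlines(aware_at: datetime, triggers: list[str]) -> dict[str, datetime]:
    """Compute the latest permissible notification time for each triggered obligation."""
    return {t: aware_at + REPORTING_WINDOWS[t] for t in triggers}

# Example: awareness at 09:00 UTC starts both the AI Act and GDPR clocks
aware = datetime(2025, 3, 3, 9, 0, tzinfo=timezone.utc)
for obligation, deadline in reporting_deadlines(
        aware, ["ai_act_serious_incident", "gdpr_art33_breach"]).items():
    print(f"{obligation}: report by {deadline.isoformat()}")
```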
Who Needs to Be in Your AI Incident Response Team
Incident Lead
Owns the response. Decision authority on containment, escalation, and external notifications. Usually: Head of Compliance or DPO.
Technical Investigator
Analyses model behaviour, retrieves logs, identifies root cause. Usually: AI/ML engineer or data scientist.
Legal / DPO
Assesses regulatory reporting obligations. Drafts notifications to authorities. Coordinates with external counsel if needed.
Communications
Manages internal communications and, if needed, external/media communications. Prevents premature disclosure.
Product / Business Owner
Decision authority on suspending the AI system if that impacts customers or product commitments.
Executive Sponsor
Receives P1 escalations within 1 hour. Approves decisions with material business or legal impact.
What to Document During an AI Incident
Documentation is your legal protection. If regulators investigate, they will ask for an incident log. Capture the following in real time, not after the fact (a completeness check is sketched after the list):
- Date and time of initial detection
- Who detected the incident and how
- Which AI system and version was involved
- What inputs the system received
- What outputs the system produced
- Who was affected and how
- What containment actions were taken and when
- Who was notified internally and when
- What root cause was identified
- What fix was implemented and when
- Who approved return to production
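A simple completeness check keeps the log honest before an incident is closed. A minimal sketch, with field names mirroring the list above; the exact names are illustrative:

```python
# Fields from the documentation list above (illustrative identifiers)
REQUIRED_LOG_FIELDS = [
    "detected_at", "detected_by", "ai_system_version", "inputs", "outputs",
    "affected_parties", "containment_actions", "internal_notifications",
    "root_cause", "fix_implemented", "return_approved_by",
]

def missing_fields(incident_log: dict) -> list[str]:
    """List required fields that are absent or empty before the incident is closed."""
    return [f for f in REQUIRED_LOG_FIELDS if not incident_log.get(f)]

# Example: an incomplete log cannot be closed
log = {"detected_at": "2025-03-03T09:00Z", "ai_system_version": "scoring v2.3"}
print(missing_fields(log))  # everything still to be captured
```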
Building the Plan: Minimum Viable AI Incident Response
For a company with limited resources, the minimum viable AI incident response plan is a single document covering:
- Incident classification criteria (P1–P4 with examples relevant to your AI systems)
- First notification chain (who calls whom for each severity level, with phone numbers)
- Containment decision authority (who can take an AI system offline and when)
- Regulatory reporting checklist (EU AI Act Art. 73, GDPR Art. 33 — with templates)
- Incident log template (the fields listed in the section above)
- Post-incident review schedule (within 2 weeks of incident close)
This document should be tested with at least one tabletop exercise per year — walk through a hypothetical incident using the plan. You will find gaps before a real incident does.
AI Incident Tracking Built In
ComplianceIQ includes AI incident logging, regulatory reporting checklists, and post-incident review workflows — so your team has the infrastructure to respond correctly when an incident occurs.
Get Your Incident Response Template