Incident Response April 17, 2026 · 12 min read

Building an AI Incident Response Plan

The EU AI Act requires companies to report serious incidents involving high-risk AI systems within defined timeframes. GDPR breach obligations may also apply. Most companies have cybersecurity incident response plans — but not AI-specific ones. This guide covers exactly what you need to build.

EU AI Act Article 73 — Serious incident reporting

Providers of high-risk AI systems must report serious incidents to the national market surveillance authority without undue delay: immediately once a causal link between the AI system and the incident is established (or is reasonably likely), and in any event no later than 15 days after becoming aware of the incident. Shorter deadlines apply for the most severe cases (10 days where the incident caused a death, 2 days for a widespread infringement). Without an AI incident response plan, you will almost certainly miss these deadlines when an incident occurs.

What Counts as an AI Incident?

An AI incident is any event or near-miss involving an AI system that causes or could cause harm. The EU AI Act defines a serious incident as one that directly or indirectly results in:

  • the death of a person, or serious harm to a person's health
  • a serious and irreversible disruption of the management or operation of critical infrastructure
  • an infringement of obligations under Union law intended to protect fundamental rights
  • serious harm to property or the environment

But AI incidents include a much broader range of events that require internal response even when EU AI Act reporting is not triggered:

| Incident Type | Examples | External Reporting? |
|---|---|---|
| Harmful output | AI generates discriminatory decisions, dangerous advice, or content that harms someone | Yes — if serious harm results |
| Systematic bias discovered | Audit reveals AI denies loans to a protected class at significantly different rates | Maybe — ECOA/CFPB (US), EU AI Act (EU) |
| Drift-induced degradation | AI model accuracy degrades significantly post-deployment, causing systematic errors | Maybe — if high-risk system |
| Data poisoning / adversarial attack | AI model manipulated by attackers to produce attacker-intended outputs | Yes — GDPR (if personal data), AI Act (if high-risk) |
| Unauthorised AI use discovered | Employee uses shadow AI to process customer personal data without approval | Maybe — GDPR if personal data breach |
| Third-party AI failure | Vendor AI service causes harm to your users via your product | Yes — as the deployer, you must still report |
| Near-miss | AI made a harmful decision that was caught by human review before execution | No — but must be logged and reviewed |

The Five Phases of AI Incident Response

Phase 1: Detection and Identification

  • Detection triggers: automated monitoring alerts, user complaints, employee reports, media/regulator contact
  • First responder documents: what AI system, what happened, when, who was affected
  • Assign an incident lead within 1 hour of detection
  • Open incident record in your tracking system (do not rely on email chains)

Phase 2: Classification and Escalation

  • Classify severity: P1 (life-threatening / serious harm), P2 (significant harm, regulatory trigger), P3 (limited harm), P4 (near-miss)
  • Determine if personal data was involved (triggers GDPR 72-hour clock)
  • Determine if this is a high-risk AI system under EU AI Act (triggers Article 73)
  • Notify: P1 → DPO + Legal + CEO within 1 hour. P2 → DPO + Legal within 4 hours. P3/P4 → Compliance team within 24 hours.
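The escalation rules above can be encoded as a simple lookup so that nobody has to remember them under pressure. This is an illustrative sketch: the severity labels, recipients, and deadlines come from this plan, and the function name is hypothetical.

```python
from datetime import timedelta

# Escalation table mirroring the notification rules in Phase 2.
# Recipients and deadlines are this plan's policy, not a legal requirement.
ESCALATION = {
    "P1": (["DPO", "Legal", "CEO"], timedelta(hours=1)),
    "P2": (["DPO", "Legal"], timedelta(hours=4)),
    "P3": (["Compliance"], timedelta(hours=24)),
    "P4": (["Compliance"], timedelta(hours=24)),
}

def escalation_for(severity: str) -> tuple:
    """Return (recipients, internal notification deadline) for a severity level."""
    if severity not in ESCALATION:
        raise ValueError(f"Unknown severity: {severity!r}")
    return ESCALATION[severity]
```

Wiring this into your incident tracking system means the notification deadline is stamped on the record the moment the incident is classified.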

Phase 3: Containment

  • Suspend or isolate the affected AI system if harm is ongoing
  • Preserve all evidence: model version, input data, output data, timestamps, configuration
  • If personal data breach: preserve evidence for GDPR documentation obligations
  • Notify affected parties as required (Article 34 GDPR notification to individuals if high risk)

Phase 4: Investigation and External Reporting

  • Root cause analysis: training data issue, model drift, adversarial input, deployment error, human misuse?
  • EU AI Act Article 73: notify national market surveillance authority (serious incident, without undue delay)
  • GDPR Article 33: notify supervisory authority within 72 hours if personal data breach
  • Document investigation findings in writing — this becomes your regulatory evidence

Phase 5: Recovery and Lessons Learned

  • Implement fix: retrain model, update data, add human review checkpoint, or decommission system
  • Verify fix effectiveness before returning AI system to production
  • Conduct post-incident review within 2 weeks: what worked, what failed, what to improve
  • Update AI incident response plan, monitoring, and detection mechanisms based on findings

Regulatory Reporting Obligations and Timelines

| Regulation | Trigger | Deadline | Who to Notify |
|---|---|---|---|
| EU AI Act Art. 73 | Serious incident involving a high-risk AI system | Immediately; no later than 15 days of awareness (10 days for a death, 2 days for widespread infringement) | National market surveillance authority |
| GDPR Art. 33 | Personal data breach (risk to individuals) | Within 72 hours of awareness | Lead supervisory authority |
| GDPR Art. 34 | Personal data breach with high risk to individuals | Without undue delay | Affected individuals directly |
| FDA MDR (US medical) | AI medical device malfunction causing harm | Within 30 days (death or serious injury), 5 days (urgent public health issue) | FDA |
| SEC Cyber Disclosure (US public) | Material cybersecurity incident | Within 4 business days of materiality determination | SEC (Form 8-K) |
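Because several reporting clocks can start from the same moment of awareness, it helps to compute the due dates mechanically. The sketch below covers only the GDPR Art. 33 and EU AI Act Art. 73 deadlines from the table, in simplified form; confirm the exact rules against the current legal texts before relying on it.

```python
from datetime import datetime, timedelta

def reporting_deadlines(awareness: datetime,
                        personal_data_breach: bool,
                        high_risk_ai: bool,
                        involves_death: bool = False) -> dict:
    """Compute external reporting due dates from the moment of awareness.

    Simplified: GDPR Art. 33 gives 72 hours; EU AI Act Art. 73 gives at
    most 15 days (10 days where the incident caused a death).
    """
    deadlines = {}
    if personal_data_breach:
        deadlines["GDPR Art. 33"] = awareness + timedelta(hours=72)
    if high_risk_ai:
        days = 10 if involves_death else 15
        deadlines["EU AI Act Art. 73"] = awareness + timedelta(days=days)
    return deadlines
```

Note that an incident can trigger both clocks at once, which is exactly why the Phase 2 classification step asks the personal-data and high-risk questions separately.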

Who Needs to Be in Your AI Incident Response Team

Incident Lead

Owns the response. Decision authority on containment, escalation, and external notifications. Usually: Head of Compliance or DPO.

Technical Investigator

Analyses model behaviour, retrieves logs, identifies root cause. Usually: AI/ML engineer or data scientist.

Legal / DPO

Assesses regulatory reporting obligations. Drafts notifications to authorities. Coordinates with external counsel if needed.

Communications

Manages internal communications and, if needed, external/media communications. Prevents premature disclosure.

Product / Business Owner

Decision authority on suspending the AI system if that impacts customers or product commitments.

Executive Sponsor

Receives P1 escalations within 1 hour. Approves decisions with material business or legal impact.

What to Document During an AI Incident

Documentation is your legal protection. If regulators investigate, they will ask for an incident log. Capture the following in real time, not after the fact:

  • Timeline: detection time, each decision, and each notification, all with timestamps
  • AI system details: system name, model version, and configuration at the time of the incident
  • What happened: inputs, outputs, and the harm caused or risked
  • Who was affected: the number and categories of affected individuals
  • Decisions taken: containment actions, who authorised them, and when
  • Notifications made: internal escalations and any external regulatory reports
  • Evidence preserved: logs, data snapshots, and supporting records
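A minimal incident log entry can be modelled as a small record type. The field names below are illustrative (they are not prescribed by any regulation); adapt them to your tracking system.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class IncidentLogEntry:
    """One real-time entry in the AI incident log (illustrative schema)."""
    timestamp: datetime            # when this entry was recorded
    ai_system: str                 # system name and model version
    description: str               # what happened, observed facts only
    affected_parties: str          # who was or may have been harmed
    severity: str                  # P1-P4 per the classification criteria
    personal_data_involved: bool   # True starts the GDPR 72-hour clock
    decision: str = ""             # decision taken and by whom
    evidence_refs: list = field(default_factory=list)  # logs, data snapshots, configs
```

Appending a new entry for every decision, rather than editing one record after the fact, is what makes the log credible as regulatory evidence.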

Building the Plan: Minimum Viable AI Incident Response

For a company with limited resources, the minimum viable AI incident response plan is a single document covering:

  1. Incident classification criteria (P1–P4 with examples relevant to your AI systems)
  2. First notification chain (who calls who for each severity level, with phone numbers)
  3. Containment decision authority (who can take an AI system offline and when)
  4. Regulatory reporting checklist (EU AI Act Art. 73, GDPR Art. 33 — with templates)
  5. Incident log template (the fields listed in the section above)
  6. Post-incident review schedule (within 2 weeks of incident close)

This document should be tested with at least one tabletop exercise per year — walk through a hypothetical incident using the plan. You will find gaps before a real incident does.

AI Incident Tracking Built In

ComplianceIQ includes AI incident logging, regulatory reporting checklists, and post-incident review workflows — so your team has the infrastructure to respond correctly when an incident occurs.

Get Your Incident Response Template