EU AI Act Post-Market Monitoring: Requirements for High-Risk AI Systems
EU AI Act compliance does not end at deployment. Article 72 requires every high-risk AI system to have a documented post-market monitoring plan, and Article 73 requires serious incidents to be reported to market surveillance authorities within strict deadlines: no later than 15 days in the general case, and as little as 2 days for incidents disrupting critical infrastructure. Deployment is the beginning of compliance, not the end.
What the EU AI Act Requires After Deployment
Article 72 — Post-Market Monitoring
High-risk AI system providers must establish and document a post-market monitoring system proportionate to the nature of the AI technology and the risks of the AI system. The monitoring system must actively and systematically collect, document, and analyse data on the performance of high-risk AI systems throughout their lifetime.
Applies to: All providers of high-risk AI systems (Annex III use cases and safety components of products covered by Annex I)
Article 73 — Serious Incident Reporting
Providers and deployers must report serious incidents and malfunctions to the relevant national market surveillance authority without undue delay. Reports must include information about the incident, corrective measures taken, and, where applicable, measures taken to notify affected users and deployers.
Deadline: Immediately once a causal link is established or reasonably likely, and in any event no later than 15 days after becoming aware; 10 days where the incident involves a death, and 2 days for a widespread infringement or a serious disruption of critical infrastructure.
Provider vs Deployer: Different Obligations
Post-market monitoring obligations apply differently depending on your role in the AI supply chain:
Provider (AI system developer/vendor): establishes and documents the post-market monitoring system (Article 72), keeps it current in the technical documentation, and reports serious incidents to market surveillance authorities (Article 73).
Deployer (company using high-risk AI): uses the system in accordance with the provider's instructions, monitors its operation, retains the automatically generated logs, and informs the provider and, for serious incidents, the market surveillance authority without undue delay (Article 26).
What a Post-Market Monitoring Plan Must Include
The EU AI Act does not prescribe an exact format for post-market monitoring plans, but the required content can be derived from Articles 72, 10, and 9. A compliant plan covers six areas:
1. Performance metrics
Define key performance indicators (KPIs) that will be tracked in production. These must include accuracy, error rates, and bias indicators across demographic subgroups.
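As a rough sketch of how per-subgroup KPIs might be tracked in production (the record schema, the `group` attribute, and the function name are illustrative, not prescribed by the Act):

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute the error rate per demographic subgroup.

    records: iterable of dicts with illustrative keys
    'group', 'prediction', 'actual'. Returns {group: error_rate}.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["actual"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative production records
records = [
    {"group": "A", "prediction": 1, "actual": 1},
    {"group": "A", "prediction": 0, "actual": 1},
    {"group": "B", "prediction": 1, "actual": 1},
    {"group": "B", "prediction": 1, "actual": 1},
]
print(subgroup_error_rates(records))  # {'A': 0.5, 'B': 0.0}
```

The baseline values for each KPI should come from the validation run recorded in the technical documentation, so that production numbers can be compared against a documented reference.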
2. Data drift detection
Monitor whether the real-world data the model receives in production diverges from the training distribution. Significant drift can degrade model performance and introduce new biases.
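One common way to quantify drift on a numeric feature is the Population Stability Index (PSI). The sketch below is a minimal from-scratch implementation; the binning scheme is a simplification and the usual 0.2 alert threshold is an industry rule of thumb, not a legal standard:

```python
import math

def psi(expected, observed, bins=10):
    """Population Stability Index between a baseline (training) sample
    and a production sample of a single numeric feature."""
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))
    width = (hi - lo) / bins or 1.0

    def frac(sample, i):
        left = lo + i * width
        right = lo + (i + 1) * width
        if i == bins - 1:                       # last bin includes the right edge
            count = sum(1 for x in sample if left <= x <= hi)
        else:
            count = sum(1 for x in sample if left <= x < right)
        return max(count / len(sample), 1e-6)   # floor avoids log(0)

    return sum(
        (frac(observed, i) - frac(expected, i))
        * math.log(frac(observed, i) / frac(expected, i))
        for i in range(bins)
    )

baseline = [0.1 * i for i in range(100)]
identical = list(baseline)
print(psi(baseline, identical))  # 0.0
```

A typical practice is to alert when PSI exceeds roughly 0.2 on any monitored feature; the exact threshold is a policy choice that belongs in the monitoring plan.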
3. Feedback loop
Mechanism for collecting information about actual model outcomes — did the AI decision lead to the correct or intended result? Required to identify performance degradation over time.
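A minimal sketch of such a feedback loop, pairing each logged decision with the real-world outcome observed later (the class design and field names are this sketch's own):

```python
class FeedbackLog:
    """Minimal outcome-feedback store pairing each AI decision with the
    real-world result observed later (illustrative design)."""

    def __init__(self):
        self.entries = {}  # decision_id -> (prediction, outcome or None)

    def record_decision(self, decision_id, prediction):
        self.entries[decision_id] = (prediction, None)

    def record_outcome(self, decision_id, outcome):
        prediction, _ = self.entries[decision_id]
        self.entries[decision_id] = (prediction, outcome)

    def agreement_rate(self):
        """Share of closed cases where the AI decision matched the outcome."""
        closed = [(p, o) for p, o in self.entries.values() if o is not None]
        return sum(p == o for p, o in closed) / len(closed) if closed else None

log = FeedbackLog()
log.record_decision("d1", "approve")
log.record_decision("d2", "reject")
log.record_outcome("d1", "approve")  # decision confirmed by the outcome
log.record_outcome("d2", "approve")  # decision contradicted by the outcome
print(log.agreement_rate())  # 0.5
```

A falling agreement rate over time is exactly the kind of performance degradation the post-market monitoring plan is meant to surface.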
4. Bias and fairness monitoring
Continuous assessment of whether the AI system produces disparate outcomes across protected characteristics. The data governance and bias requirements of Article 10 extend into production monitoring.
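For instance, a disparate impact ratio across groups can serve as one continuously tracked fairness indicator. The 0.8 "four-fifths" alert level below is a heuristic borrowed from US employment practice, not an EU AI Act threshold:

```python
def disparate_impact_ratio(positive_rates):
    """Ratio of the lowest to the highest positive-outcome rate across
    demographic groups; 1.0 means identical rates for all groups."""
    rates = list(positive_rates.values())
    return min(rates) / max(rates)

# Illustrative positive-outcome rates per group
rates = {"group_a": 0.50, "group_b": 0.35}
ratio = disparate_impact_ratio(rates)
print(round(ratio, 2))  # 0.7, below the 0.8 heuristic, so a fairness review would be triggered
```

Which characteristics count as protected, and which fairness metric is appropriate, depends on the use case and should be fixed in the monitoring plan rather than left to ad-hoc choices.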
5. Incident tracking
Log, categorise, and track all incidents — including near-misses, user complaints, and operational anomalies. Required for both Article 73 serious incident reporting and post-market plan documentation.
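An incident tracker might map each incident category to its reporting window; the category names and the `None` convention for internal-only logging are this sketch's own, while the day counts follow the Article 73 maximum deadlines:

```python
from datetime import date, timedelta

# Maximum reporting windows under Article 73, in days after becoming aware.
REPORTING_DEADLINES_DAYS = {
    "death": 10,                    # Art. 73(4): death of a person
    "critical_infrastructure": 2,   # Art. 73(3): widespread infringement / infrastructure disruption
    "serious_other": 15,            # Art. 73(2): general case
    "near_miss": None,              # internal log only, no mandatory filing
}

def reporting_due_date(category, became_aware):
    """Latest date by which the incident report must be filed, or None
    when the category only requires internal logging."""
    days = REPORTING_DEADLINES_DAYS[category]
    return None if days is None else became_aware + timedelta(days=days)

print(reporting_due_date("serious_other", date(2025, 3, 1)))  # 2025-03-16
```

Note that Article 73 also requires reporting immediately once a causal link is established or suspected; the dates above are outer limits, not targets.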
6. Model update management
Process for retraining, updating, or rolling back the AI model. A substantial modification to a high-risk system triggers a new conformity assessment (Article 43(4)), so the plan should define the criteria for when an update counts as substantial.
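A simple re-assessment trigger might compare each tracked metric against its validation baseline. The 2% tolerance below is purely illustrative; the legal test is whether the modification is substantial under Article 43(4):

```python
def requires_new_conformity_assessment(baseline_metrics, updated_metrics,
                                       tolerance=0.02):
    """Flag a model update for conformity re-assessment when any tracked
    KPI moves by more than `tolerance` from its validation baseline
    (illustrative trigger, not the legal definition of 'substantial')."""
    return any(
        abs(updated_metrics[k] - baseline_metrics[k]) > tolerance
        for k in baseline_metrics
    )

validated = {"accuracy": 0.91, "false_positive_rate": 0.05}
candidate = {"accuracy": 0.86, "false_positive_rate": 0.05}
print(requires_new_conformity_assessment(validated, candidate))  # True
```

Encoding the trigger as an automated gate in the release pipeline makes it auditable, which is easier to defend than a case-by-case judgment.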
What Counts as a "Serious Incident" Under Article 73
The EU AI Act defines a serious incident as any incident that directly or indirectly leads to:
The death of a person, or serious harm to a person's health, caused by an AI decision or AI failure
A serious and irreversible disruption of the management or operation of critical infrastructure
An infringement of obligations under Union law intended to protect fundamental rights
Serious harm to property or the environment
Important: "Directly or indirectly"
The phrase "directly or indirectly leads to" means the AI system does not need to be the sole cause of the harm. If an AI decision contributed to a harmful outcome — even alongside human error — it may qualify as a serious incident.
Incident Reporting Timeline
Serious incident involving the death of a person: immediately once a causal link is established or suspected, and no later than 10 days after becoming aware
Serious incident (other, including serious harm to health): no later than 15 days after becoming aware
Widespread infringement or serious disruption of critical infrastructure: immediately, and no later than 2 days after becoming aware
Near-serious incidents and malfunctioning with safety implications: log internally under the post-market monitoring plan; report only if they escalate into serious incidents
Operating Log Retention Requirements
EU AI Act Article 12 requires high-risk AI systems to automatically generate logs, and Articles 19 and 26(6) require providers and deployers to retain them. The minimum retention periods are:
General high-risk AI systems
6 months minimum, from the date of each logged operation or decision
AI in critical infrastructure
1 year minimum; infrastructure operators may face stricter sectoral requirements
AI in recruitment and employment
Duration of the employment relationship plus the post-termination period under applicable labour law; GDPR retention limits apply concurrently
AI in credit, insurance, or financial services
Per sector-specific regulator guidance, typically 3–7 years; MiFID II, DORA, and Solvency II may require longer retention
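A retention policy like the one above can be enforced with a small category-to-period mapping. The day counts below are illustrative approximations of the stated minimums; only the 6-month floor comes directly from the AI Act (Articles 19 and 26(6)):

```python
from datetime import date, timedelta

# Illustrative minimum retention periods in days; the 6-month statutory
# floor is approximated as 183 days, sectoral periods are policy choices.
RETENTION_DAYS = {
    "general_high_risk": 183,
    "critical_infrastructure": 365,
    "financial_services": 7 * 365,
}

def earliest_deletion_date(category, logged_on):
    """First date on which a log entry in this category may be deleted."""
    return logged_on + timedelta(days=RETENTION_DAYS[category])

print(earliest_deletion_date("general_high_risk", date(2025, 1, 1)))  # 2025-07-03
```

Wiring this check into automated archival (rather than manual clean-up) is what makes the "automated archival" item on the checklist below verifiable in an audit.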
Post-Market Monitoring Implementation Checklist
Post-market monitoring plan documented and included in technical documentation
Performance KPIs defined with baseline measurements from validation
Data drift monitoring configured and alert thresholds set
Bias monitoring in place across all protected characteristics relevant to the use case
Incident classification matrix documented (serious / near-serious / minor)
Regulatory notification process documented with responsible owner identified
Operating log retention policy implemented with automated archival
Feedback loop from deployers to provider established (or internal if you are both)
Model update change management process includes re-assessment trigger criteria
Annual post-market monitoring review scheduled in compliance calendar
Track AI Compliance Monitoring in One Place
ComplianceIQ tracks compliance score drift, regulatory changes, and AI system monitoring obligations across all your jurisdictions — with alerts when action is required.
Run a Free Risk Assessment