April 17, 2026 · 12 min read

EU AI Act Post-Market Monitoring: Requirements for High-Risk AI Systems

EU AI Act compliance does not end at deployment. Article 72 requires every high-risk AI system to have a documented post-market monitoring plan. Article 73 requires serious incidents to be reported to regulators within strict deadlines: no later than 15 days after becoming aware, and as little as 2 days for the most severe cases. Deployment is the beginning of compliance, not the end.

What the EU AI Act Requires After Deployment

Article 72 — Post-Market Monitoring

High-risk AI system providers must establish and document a post-market monitoring system proportionate to the nature of the AI technology and the risks of the AI system. The monitoring system must actively and systematically collect, document, and analyse data on the performance of high-risk AI systems throughout their lifetime.

Applies to: All providers of high-risk AI systems (Annex III + Annex I)

Article 73 — Serious Incident Reporting

Providers and, where applicable, deployers must report serious incidents to the market surveillance authority of the Member State where the incident occurred. Reports must describe the incident, the corrective measures taken or planned, and, where applicable, the steps taken to notify affected users and deployers.

Deadline: Immediately once a causal link is established, and no later than 15 days after becoming aware (10 days in the event of death; 2 days for widespread infringement or critical-infrastructure disruption)

Provider vs Deployer: Different Obligations

Post-market monitoring obligations apply differently depending on your role in the AI supply chain:

Provider (AI system developer/vendor)

Art.72 Monitoring: Must have a post-market monitoring plan covering the entire lifecycle of the AI system.
Art.73 Reporting: Must report serious incidents and malfunctions to the market surveillance authority within applicable timeframes.
Documentation: Technical documentation must include a description of the post-market monitoring plan.

Deployer (company using high-risk AI)

Art.72 Monitoring: Must monitor performance of the AI system in their specific use context. Must report performance issues to the provider.
Art.73 Reporting: Must report incidents to the provider and to competent authorities if health, safety, or fundamental rights are impacted.
Documentation: Operating logs must be retained for a minimum of 6 months for general high-risk AI; longer in specific sectors.

What a Post-Market Monitoring Plan Must Include

The EU AI Act does not prescribe an exact format for post-market monitoring plans (a Commission template is foreseen under Article 72(3)), but the required content can be derived from Article 72 together with the risk management (Article 9) and data governance (Article 10) requirements. A compliant plan covers six areas:

1. Performance metrics

Define key performance indicators (KPIs) to track in production. At a minimum these should cover accuracy, error rates, and bias indicators across demographic subgroups.

Examples

Prediction accuracy by demographic group
False positive / false negative rates
Confidence score distributions
Output drift over time
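
A minimal sketch of how these KPIs could be computed from production records, in Python. The record format and function name are illustrative, not something the Act prescribes:

```python
from collections import defaultdict

def per_group_metrics(records):
    """Accuracy, FPR, and FNR per demographic group.

    `records` is an iterable of (group, y_true, y_pred) tuples
    with binary labels; this shape is an illustrative assumption.
    """
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "tn": 0, "fn": 0})
    for group, y_true, y_pred in records:
        key = ("tp" if y_true else "fp") if y_pred else ("fn" if y_true else "tn")
        counts[group][key] += 1
    metrics = {}
    for group, c in counts.items():
        total = sum(c.values())
        pos = c["tp"] + c["fn"]  # actual positives
        neg = c["fp"] + c["tn"]  # actual negatives
        metrics[group] = {
            "n": total,
            "accuracy": (c["tp"] + c["tn"]) / total,
            "fpr": c["fp"] / neg if neg else None,
            "fnr": c["fn"] / pos if pos else None,
        }
    return metrics
```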

2. Data drift detection

Monitor whether the real-world data the model receives in production diverges from the training distribution. Significant drift can degrade model performance and introduce new biases.

Examples

Statistical tests on input feature distributions
Population stability index (PSI)
Covariate shift detection
Concept drift monitoring
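
The population stability index listed above has a standard formulation: PSI = Σ (actualᵢ − expectedᵢ) · ln(actualᵢ / expectedᵢ) over buckets i. A minimal sketch, assuming a continuous input feature and a baseline sample retained from validation:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline sample and a production sample.

    Common rule of thumb: < 0.1 stable, 0.1–0.25 moderate shift,
    > 0.25 significant shift worth investigating.
    """
    # Derive bin edges from the baseline so both samples are bucketed identically.
    edges = np.unique(np.quantile(expected, np.linspace(0, 1, bins + 1)))
    # Clip production values into the baseline range so outliers land in edge buckets.
    actual = np.clip(actual, edges[0], edges[-1])
    eps = 1e-6  # avoids log(0) for empty buckets
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual) + eps
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))
```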

3. Feedback loop

Mechanism for collecting information about actual model outcomes — did the AI decision lead to the correct or intended result? Required to identify performance degradation over time.

Examples

User feedback on AI decisions
Downstream outcome tracking
Human reviewer decision overrides
Complaint correlation analysis
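
One inexpensive signal from this list is the human override rate: how often reviewers overturn the AI recommendation. A sketch, assuming decisions and final outcomes are logged with the illustrative fields below:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DecisionOutcome:
    decision_id: str
    model_output: str    # what the AI recommended
    final_decision: str  # what was decided after human review
    recorded_at: datetime

def override_rate(outcomes: list[DecisionOutcome]) -> float:
    """Share of AI recommendations overturned by human reviewers.

    A rising override rate is an early warning of performance
    degradation worth feeding into the post-market monitoring plan.
    """
    if not outcomes:
        return 0.0
    overridden = sum(o.final_decision != o.model_output for o in outcomes)
    return overridden / len(outcomes)
```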

4. Bias and fairness monitoring

Continuous assessment of whether the AI system produces disparate outcomes across protected characteristics. The bias examination duties of Article 10 carry over into production monitoring.

Examples

Disparate impact ratios by group
Equal opportunity metrics
Demographic parity tracking
Regular bias audit schedule
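
As an illustration, the disparate impact ratio is straightforward once per-group favourable-outcome rates are tracked. The 0.8 alert line in the example comes from the informal "four-fifths rule" used in US employment practice, not from the AI Act, which sets no numeric threshold:

```python
def disparate_impact_ratio(positive_rates: dict[str, float]) -> float:
    """Lowest favourable-outcome rate divided by the highest, across groups."""
    rates = list(positive_rates.values())
    if not rates or max(rates) == 0:
        return 1.0
    return min(rates) / max(rates)

# Hypothetical selection rates from a hiring screen:
ratio = disparate_impact_ratio({"group_a": 0.42, "group_b": 0.31})
print(f"{ratio:.2f}")  # 0.74, below the 0.8 review threshold
```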

5. Incident tracking

Log, categorise, and track all incidents — including near-misses, user complaints, and operational anomalies. Required for both Article 73 serious incident reporting and post-market plan documentation.

Examples

Incident severity classification matrix
Root cause analysis process
Corrective action tracking
Regulatory reporting log
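
A severity classification matrix can start as a lookup table keyed to the Article 3(49) harm categories. The category names and the default-to-near-miss rule here are illustrative design choices, not regulatory terms:

```python
from enum import Enum

class Severity(Enum):
    SERIOUS = "serious"      # reportable under Article 73
    NEAR_MISS = "near_miss"  # document internally, reassess against Art. 73
    MINOR = "minor"          # routine monitoring log entry

# Illustrative mapping of harm categories to severity.
SEVERITY_MATRIX = {
    "death_or_serious_health_harm": Severity.SERIOUS,
    "critical_infrastructure_disruption": Severity.SERIOUS,
    "fundamental_rights_infringement": Severity.SERIOUS,
    "property_or_environmental_harm": Severity.SERIOUS,
    "harm_averted_by_intervention": Severity.NEAR_MISS,
    "operational_anomaly": Severity.MINOR,
}

def classify(harm_category: str) -> Severity:
    # Default to NEAR_MISS so unclassified events get investigated, not ignored.
    return SEVERITY_MATRIX.get(harm_category, Severity.NEAR_MISS)
```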

6. Model update management

Process for retraining, updating, or rolling back the AI model. An update that amounts to a substantial modification triggers a new conformity assessment under Article 43(4), unless the change was pre-determined and assessed in the initial documentation.

Examples

Model version control
A/B test framework for updates
Rollback triggers and procedures
Change impact assessment process
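
Rollback triggers work best as explicit, documented values rather than ad hoc judgment. A sketch in which the 3-percentage-point accuracy drop and the 0.25 PSI ceiling are placeholder assumptions a real plan would set per KPI:

```python
def should_roll_back(current: dict, baseline: dict,
                     max_accuracy_drop: float = 0.03,
                     max_psi: float = 0.25) -> bool:
    """Trigger a rollback on accuracy regression or significant input drift.

    `current` and `baseline` hold the KPI values tracked by the
    post-market monitoring plan, e.g. {"accuracy": 0.91, "psi": 0.08}.
    """
    accuracy_drop = baseline["accuracy"] - current["accuracy"]
    return accuracy_drop > max_accuracy_drop or current["psi"] > max_psi
```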

What Counts as a "Serious Incident" Under Article 73

The EU AI Act (Article 3(49)) defines a serious incident as an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

The death of a person, or serious harm to a person's health

A serious and irreversible disruption of the management or operation of critical infrastructure

The infringement of obligations under Union law intended to protect fundamental rights

Serious harm to property or the environment

Important: "Directly or indirectly"

The phrase "directly or indirectly leads to" means the AI system does not need to be the sole cause of the harm. If an AI decision contributed to a harmful outcome — even alongside human error — it may qualify as a serious incident.

Incident Reporting Timeline

1

Serious incident involving the death of a person

Deadline: Immediately once a causal link (or its reasonable likelihood) is established, and no later than 10 days after becoming aware
To: National market surveillance authority
Action: Notify and begin investigation

2

Widespread infringement, or serious and irreversible disruption of critical infrastructure

Deadline: Immediately, and no later than 2 days after becoming aware
To: National market surveillance authority
Action: Initial notification, corrective action plan to follow

3

Any other serious incident

Deadline: Immediately once a causal link (or its reasonable likelihood) is established, and no later than 15 days after becoming aware
To: National market surveillance authority
Action: An incomplete initial report may be filed first, followed by a complete report

4

Near-misses and malfunctions below the serious-incident threshold

Deadline: No statutory deadline; set an internal assessment window
To: Internal documentation; authority if requested
Action: Investigate, document, reassess against the serious-incident definition
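
Because the ceilings above are fixed calendar windows counted from the date of awareness, the filing deadline can be computed mechanically. A sketch with illustrative incident-type keys; the day counts follow Article 73:

```python
from datetime import date, timedelta

# Maximum reporting windows under Article 73, in days from awareness.
REPORTING_WINDOWS = {
    "death_of_a_person": 10,
    "widespread_infringement_or_critical_infrastructure": 2,
    "other_serious_incident": 15,
}

def reporting_deadline(incident_type: str, aware_on: date) -> date:
    """Latest date the initial Article 73 report may be filed.

    The Act still requires reporting immediately once a causal link
    (or its reasonable likelihood) is established; these are hard ceilings.
    """
    return aware_on + timedelta(days=REPORTING_WINDOWS[incident_type])

print(reporting_deadline("other_serious_incident", date(2026, 4, 17)))  # 2026-05-02
```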

Operating Log Retention Requirements

EU AI Act Article 12 requires high-risk AI systems to automatically record events (logs) throughout their lifetime. Under Article 26(6), deployers must retain the logs under their control for the following minimum periods:

General high-risk AI systems

6 months minimum

From the date of each operation or decision logged

AI in critical infrastructure

1 year minimum

Infrastructure operators may face stricter sectoral requirements

AI in recruitment and employment

Duration of employment relationship + post-termination period under applicable labour law

GDPR retention limitations apply concurrently

AI in credit, insurance, or financial services

Per sector-specific regulator guidance; typically 3–7 years

MiFID II, DORA, and Solvency II may require longer retention
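
Retention floors like these can be encoded directly into the archival policy so that deletion jobs cannot run early. A sketch with illustrative category keys; the periods mirror the table above, taking the conservative end of any range:

```python
from datetime import date, timedelta

# Illustrative minimum retention periods in days; sectoral rules
# (MiFID II, DORA, Solvency II, labour law) may extend them.
RETENTION_DAYS = {
    "general_high_risk": 183,        # Article 26(6): at least six months
    "critical_infrastructure": 365,
    "financial_services": 7 * 365,   # top of the typical 3-7 year range
}

def earliest_deletion_date(system_category: str, logged_on: date) -> date:
    """First date a log entry may be purged under the retention policy."""
    return logged_on + timedelta(days=RETENTION_DAYS[system_category])
```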

Post-Market Monitoring Implementation Checklist

Post-market monitoring plan documented and included in technical documentation

Performance KPIs defined with baseline measurements from validation

Data drift monitoring configured and alert thresholds set

Bias monitoring in place across all protected characteristics relevant to the use case

Incident classification matrix documented (serious / near-serious / minor)

Regulatory notification process documented with responsible owner identified

Operating log retention policy implemented with automated archival

Feedback loop from deployers to provider established (or internal if you are both)

Model update change management process includes re-assessment trigger criteria

Annual post-market monitoring review scheduled in compliance calendar

Track AI Compliance Monitoring in One Place

ComplianceIQ tracks compliance score drift, regulatory changes, and AI system monitoring obligations across all your jurisdictions — with alerts when action is required.

Run a Free Risk Assessment