What Happens If You Don't Comply with the EU AI Act?
Fines of up to €35M. Market withdrawal orders. Civil liability. Reputational damage. Here is what the consequences actually look like — and how enforcement will realistically work.
The penalty tiers
The EU AI Act has three tiers of administrative fines, structured similarly to GDPR:
- Tier 1 (prohibited AI practices): up to €35M or 7% of global annual turnover, whichever is higher
- Tier 2 (violations of most other obligations, including the high-risk requirements): up to €15M or 3% of global annual turnover
- Tier 3 (supplying incorrect, incomplete, or misleading information to authorities): up to €7.5M or 1% of global annual turnover
For SMEs and startups, the fines are proportionally capped. The thresholds are designed so that a startup with €1M in revenue does not face the same absolute fine as a company with €10B. But "proportional" still means 7% of revenue for the most serious violations — for a €5M ARR startup, that is €350,000.
How violations are discovered
Understanding how enforcement actually works is important. Regulators do not audit every company from day one. EU AI Act enforcement will realistically begin in a few ways:
Market surveillance sweeps
National Market Surveillance Authorities (MSAs) will conduct sector-by-sector audits. Expect the first sweeps to focus on the highest-risk sectors: hiring tools in major employment markets, credit scoring AI, healthcare diagnostic AI, and law enforcement-adjacent tools.
Complaints from individuals
Any person who believes they were affected by an AI system that violates the EU AI Act can file a complaint with their national MSA. For hiring AI specifically, expect rejected candidates to file complaints when they believe an algorithm unfairly screened them out.
Competitor complaints
Competitors can report non-compliant AI systems to regulators. This is already common in GDPR enforcement. A compliant company may report a non-compliant competitor to level the playing field.
Media investigations
Investigative journalists have successfully triggered GDPR enforcement through public reporting. The same pattern will emerge with AI. An article showing that Company X's hiring AI discriminates against women can trigger immediate regulatory investigation.
Self-disclosure
Some violations will come to light through self-reporting, because serious incident notification is mandatory. When your AI system causes or contributes to a serious incident, the EU AI Act requires providers to notify the relevant national authority. Proactive disclosure typically results in lower fines.
What happens during an enforcement action
Based on the GDPR enforcement model, which the EU AI Act's enforcement process closely mirrors:
- Investigation opened: You receive notice that the national MSA is investigating. You must cooperate and provide documentation.
- Request for information: You will be asked to provide your technical documentation, risk management records, conformity assessment, and logs.
- On-site inspection: The MSA may conduct an on-site inspection of your facilities and systems.
- Preliminary findings: The MSA issues preliminary findings and you have the right to respond.
- Decision: The MSA issues a decision with remediation requirements and potentially a fine.
- Remediation period: If you are ordered to fix the issue, you have a time period to do so. Failure to remediate can result in market withdrawal orders — banning you from operating in the EU.
This process typically takes 6–18 months for significant cases. GDPR investigations have sometimes taken several years. Do not expect overnight enforcement actions for most violations.
Consequences beyond the fine
The administrative fine is not always the worst outcome:
- Market withdrawal orders: You can be banned from offering the AI system in the EU until you comply. For a SaaS company with EU customers, this effectively means zero EU revenue until you fix the problem.
- Civil liability: Individuals who suffer harm from a non-compliant high-risk AI system can sue the provider under national liability law, and a documented EU AI Act violation strengthens those claims. Representative (class-style) actions are possible.
- Reputational damage: GDPR fines became news. EU AI Act enforcement actions will become news — especially for named companies. "Company X's hiring AI found non-compliant" is the kind of headline that affects enterprise sales.
- Contract termination: Enterprise customers are increasingly including AI compliance warranties in contracts. Non-compliance may trigger contract terminations.
- Insurance impacts: Cyber insurance policies are beginning to include AI compliance provisions. Non-compliance may void coverage.
Who will actually be fined first
Regulatory resources are limited. Expect the early enforcement to focus on:
- Large companies with resources to pay meaningful fines
- Clear and public violations (an AI system that is provably discriminating)
- High-profile sectors: hiring, credit, healthcare
- Companies that previously had GDPR violations (pattern of non-compliance)
- Prohibited practices violations — the most serious category with the highest penalties
Startups and SMEs using off-the-shelf AI tools for low-risk purposes are unlikely to be targeted in the first wave of enforcement. However, anyone deploying high-risk AI is exposed, regardless of company size.
The cost of compliance vs. non-compliance
EU AI Act compliance for a mid-market company with high-risk AI typically costs:
- $10,000–$50,000 in software and tools
- 40–120 hours of staff time
- $5,000–$20,000 in legal review
- Ongoing: 5–10 hours/month for monitoring and documentation
Compare that to Tier 2 fines of up to €15M for serious violations. The math strongly favors compliance. Even for minimal-risk AI with no real compliance obligations, a brief audit that confirms (and documents) your minimal-risk classification costs almost nothing.
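The cost ranges above can be turned into a quick back-of-the-envelope first-year estimate. This sketch uses the article's figures; the blended hourly staff rate is a hypothetical assumption.

```python
HOURLY_RATE_USD = 150  # hypothetical blended staff rate, not from the article

# Cost ranges from the article (USD, or hours where noted)
software = (10_000, 50_000)      # software and tools
staff_hours = (40, 120)          # one-time staff effort
legal = (5_000, 20_000)          # legal review
monthly_hours = (5, 10)          # ongoing monitoring and documentation

def year_one_cost(pick) -> int:
    """Total first-year compliance cost, taking one end of each range
    (pass `min` for the low estimate, `max` for the high estimate)."""
    return (pick(software)
            + pick(staff_hours) * HOURLY_RATE_USD
            + pick(legal)
            + pick(monthly_hours) * 12 * HOURLY_RATE_USD)

low, high = year_one_cost(min), year_one_cost(max)
print(f"Year-one compliance: ${low:,} - ${high:,}")  # $30,000 - $106,000
```

Even the high end is roughly three orders of magnitude below the €15M Tier 2 ceiling.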
Know your exposure before enforcement begins
ComplianceIQ calculates your compliance risk and the potential penalty exposure for your specific AI systems — so you know where to focus before an investigator does.
Calculate your penalty exposure →