5 AI Compliance Mistakes That Cost Companies Millions
The most expensive AI compliance failures are not caused by ignorance of the law. They are caused by structural assumptions that regulators — and courts — have consistently rejected. Here is what keeps happening, and how to avoid it.
AI regulation enforcement is still in its early stages globally, but the actions taken so far already reveal patterns. The companies that have faced the largest fines and most disruptive regulatory actions share recognisable failure modes. Understanding these patterns is more useful than any checklist.
Mistake 1: Assuming "we use a vendor" transfers the liability
The vendor doesn't absorb your compliance obligations.
What went wrong
In 2023, online tutoring company iTutorGroup paid $365,000 to settle EEOC charges after its AI hiring tool automatically rejected female applicants over 55 and male applicants over 60. iTutorGroup's defence — that the tool was built by a vendor — was not accepted.
The EEOC's position is explicit in its 2023 technical guidance: "Title VII and the other civil rights laws apply to employers, not just to the entities that develop AI and other software tools." If your AI hiring tool discriminates, you are liable, regardless of who built it.
This pattern appears in every jurisdiction with AI regulation. Under the EU AI Act, deployers of high-risk AI systems have their own compliance obligations that cannot be passed to the provider. ECOA and FCRA apply to lenders using AI credit models regardless of model provenance. NYC Local Law 144 applies to the employer, not the ATS vendor.
How to avoid this mistake
- Before deploying any third-party AI, request bias audit results and technical documentation from the vendor.
- Include contractual representations that the tool meets applicable compliance requirements for your use case.
- Run your own disparate impact testing on any AI used for hiring, lending, or housing decisions (a minimal sketch follows this list).
- Under the EU AI Act, ask your high-risk AI providers for their declaration of conformity — they are required to provide it.
- Keep records of your due diligence. If enforcement comes, documented vendor evaluation is meaningful evidence.
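A minimal sketch of what the disparate impact testing in the third bullet could look like in practice, using the EEOC's four-fifths rule as a screening threshold. The column names, the pandas-based approach, and treating a ratio below 0.8 as a red flag are illustrative assumptions, not a prescribed methodology or a legal standard.

```python
# Minimal disparate-impact screen using the EEOC "four-fifths" rule.
# Assumes a DataFrame with illustrative columns: "group" (e.g. sex or age band)
# and "selected" (1 if the AI advanced the applicant, else 0).
import pandas as pd

def four_fifths_check(df: pd.DataFrame, group_col: str = "group",
                      outcome_col: str = "selected") -> pd.DataFrame:
    # Selection rate per group: share of applicants the tool advanced.
    rates = df.groupby(group_col)[outcome_col].mean()
    # Impact ratio: each group's rate relative to the most-selected group.
    ratios = rates / rates.max()
    result = pd.DataFrame({"selection_rate": rates, "impact_ratio": ratios})
    # Ratios below 0.8 are a conventional screening flag, not a legal conclusion.
    result["flag"] = result["impact_ratio"] < 0.8
    return result

# Example with synthetic data:
# df = pd.DataFrame({"group": ["F", "F", "M", "M", "M"],
#                    "selected": [0, 1, 1, 1, 0]})
# print(four_fifths_check(df))
```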
Mistake 2: Building compliance documentation after the fact
Regulators can tell the difference between a governance program and a compliance filing.
What went wrong
Amazon's widely reported abandonment of its AI hiring tool in 2018 — after discovering it systematically downgraded resumes mentioning "women's" colleges and other female-coded language — is often cited as an AI ethics case. It is equally an AI compliance case.
The system was built without documented bias testing procedures, without an equity review at the design stage, and without human oversight checkpoints. When the bias was discovered internally, Amazon had to discard a system it had spent years building, because there was no documented compliance framework against which to remediate it.
The pattern recurs. When regulators request documentation of an AI governance program — as the FTC has in its AI inquiries, as CFPB examiners do for credit models, as EU data protection authorities require — a document produced the week before the audit is immediately suspect. It lacks version history. The dates don't match operational records. It contradicts other documentation.
The EU AI Act explicitly requires that technical documentation be "drawn up before" a high-risk AI system is placed on the market. ECOA compliance is an ongoing obligation, not a one-time filing. Governance programs need to predate enforcement, not respond to it.
How to avoid this mistake
- Start an AI registry today: a simple spreadsheet listing every AI system you use, what decisions it affects, what data it uses, and who approved it (see the sketch after this list).
- For any AI used in consequential decisions, document your bias testing methodology before deployment — not after.
- Version-control your AI documentation. Regulators look at whether your governance process has a history.
- For EU AI Act high-risk systems: technical documentation must exist before deployment. Draft it as you build.
- Treat AI compliance the way you treat financial reporting: ongoing, documented, auditable — not reactive.
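The registry in the first bullet does not need special tooling. Here is a hedged sketch of one possible format, kept as a version-controlled CSV so the record accumulates the history regulators look for; the field names are illustrative assumptions, not a mandated schema.

```python
# A minimal AI registry entry: one record per system, appended to a
# version-controlled CSV so the governance record has a visible history.
import csv
import datetime as dt
from dataclasses import dataclass, asdict, field

@dataclass
class AISystemRecord:
    name: str                 # e.g. "resume-ranker-v2"
    decision_affected: str    # e.g. "hiring shortlist"
    data_used: str            # e.g. "applicant CVs, assessment scores"
    vendor: str               # "internal" if built in-house
    approved_by: str
    bias_test_doc: str        # path or link to the pre-deployment test write-up
    added_on: str = field(default_factory=lambda: dt.date.today().isoformat())

def append_to_registry(record: AISystemRecord, path: str = "ai_registry.csv") -> None:
    row = asdict(record)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:  # write a header only when the file is new or empty
            writer.writeheader()
        writer.writerow(row)
```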
Mistake 3: Missing the "solely automated" threshold
Having a human in the loop is not the same as meaningful human oversight.
What went wrong
The Dutch tax authority's child benefit scandal, which brought down a government and created a €3.7 billion remediation fund, is the most extreme example of what happens when "human oversight" exists on paper but not in practice. The system flagged families as potentially fraudulent based on algorithmic scoring; human reviewers were pressured to approve the AI's decisions without independent review.
GDPR Article 22 prohibits decisions "based solely" on automated processing for decisions with significant effects. The Dutch system had humans technically in the loop — but the humans were rubber-stamping the AI, not reviewing it. The EDPB guidance on Article 22 is clear: "A human reviewing the decision must genuinely influence the outcome."
This comes up constantly. A credit model's output goes to an underwriter who approves it in seconds without real review — that may still be "solely automated" in regulators' eyes. An AI résumé ranker narrows 10,000 applications to 20 for human review — the 9,980 applicants no human ever saw have been rejected by a solely automated decision.
The EU AI Act's human oversight requirement for high-risk systems makes the same point in more detail: humans must be able to understand, monitor, and override AI output — and this must be built into the system design, not added as a checkbox.
How to avoid this mistake
- Map where in your decision process AI outputs reach humans. Is the human genuinely reviewing, or merely approving? (A monitoring sketch follows this list.)
- Design human review to present the AI's reasoning, not just its conclusion. Reviewers cannot override what they cannot see.
- For hiring: ensure humans see a reasonable applicant pool, not only AI-filtered finalists.
- For credit: adverse action notices must state the specific reasons behind the model's decision — "algorithmic risk score" is not sufficient under CFPB guidance.
- For EU high-risk AI: document how human oversight is implemented technically, not just as policy.
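One hedged way to answer the first bullet's question with data rather than intuition: screen review logs for the combination of near-total agreement with the AI and very short review times. The log fields and thresholds below are assumptions to calibrate against your own process, not regulatory criteria.

```python
# Screen human-review logs for signs of rubber-stamping: near-100% agreement
# with the AI combined with very short review times. Field names and thresholds
# are illustrative assumptions.
from statistics import mean

def flag_rubber_stamping(reviews: list[dict],
                         min_seconds: float = 30.0,
                         max_agreement: float = 0.98) -> dict:
    # Each review dict is assumed to contain:
    #   "reviewer", "seconds_spent", "ai_decision", "final_decision"
    by_reviewer: dict[str, list[dict]] = {}
    for r in reviews:
        by_reviewer.setdefault(r["reviewer"], []).append(r)

    flagged = {}
    for reviewer, items in by_reviewer.items():
        agreement = mean(1.0 if i["ai_decision"] == i["final_decision"] else 0.0
                         for i in items)
        avg_time = mean(i["seconds_spent"] for i in items)
        if agreement >= max_agreement and avg_time < min_seconds:
            flagged[reviewer] = {"agreement": agreement, "avg_seconds": avg_time}
    return flagged
```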
Mistake 4: Ignoring sector-specific obligations while focusing on AI law
AI compliance doesn't replace existing sector law — it adds to it.
What went wrong
In 2024, Workday faced a class action alleging that its AI hiring tools violated both the Fair Credit Reporting Act and the federal anti-discrimination laws the EEOC enforces. The case alleged that Workday's AI acted as a "consumer reporting agency" — gathering and processing information about applicants' backgrounds and using it in employment decisions — without providing the required FCRA disclosures.
This case illustrates the mistake many compliance teams make: they focus on dedicated AI law (EU AI Act, state AI bills) while overlooking how existing sector law interacts with AI systems. FCRA was not written for AI, but it applies to AI systems that function like consumer reporting. HIPAA was not written for AI, but it applies to AI that processes protected health information. ECOA was not written for AI, but it applies to AI credit scoring models.
In practice, a financial institution's AI compliance burden includes ECOA, FCRA, CFPB guidance, state consumer protection law, and the EU AI Act — roughly in that order of urgency, because ECOA enforcement is active now, while the EU AI Act's high-risk requirements become enforceable in August 2026.
How to avoid this mistake
- For each AI system, identify not just the applicable AI law, but the sector-specific laws that govern the decision domain (a simple mapping sketch follows this list).
- Credit AI: ECOA adverse action notices, FCRA applicability, CFPB model risk management guidance.
- Healthcare AI: HIPAA BAAs with all vendors, FDA SaMD classification if clinical, EU AI Act high-risk requirements.
- Hiring AI: Title VII (enforced by the EEOC), FCRA (if acting as a CRA), NYC LL144, Illinois AIVIA, Colorado SB 205.
- Insurance AI: state unfair discrimination rules, Colorado HB 23-1267, EU AI Act.
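The mapping above can live as a maintained artefact rather than tribal knowledge. A minimal sketch, simply restating the list; the entries are not exhaustive and should be reviewed by counsel for your actual decision domains.

```python
# A minimal lookup from decision domain to the sector-specific obligations named
# above. Intended as a living compliance artefact, not an exhaustive legal list.
SECTOR_OBLIGATIONS: dict[str, list[str]] = {
    "credit": ["ECOA adverse action notices", "FCRA applicability",
               "CFPB model risk management guidance"],
    "healthcare": ["HIPAA BAAs with all vendors",
                   "FDA SaMD classification if clinical",
                   "EU AI Act high-risk requirements"],
    "hiring": ["Title VII (EEOC)", "FCRA (if acting as a CRA)",
               "NYC LL144", "Illinois AIVIA", "Colorado SB 205"],
    "insurance": ["State unfair discrimination rules",
                  "Colorado HB 23-1267", "EU AI Act"],
}

def obligations_for(domain: str) -> list[str]:
    # An unknown domain should trigger a review, not a silent empty result.
    if domain not in SECTOR_OBLIGATIONS:
        raise KeyError(f"No sector mapping for '{domain}' - add one before deployment")
    return SECTOR_OBLIGATIONS[domain]
```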
Mistake 5: Underestimating the EU AI Act's reach
The EU AI Act applies wherever your AI affects EU residents — not just where your company is registered.
What went wrong
OpenAI's ChatGPT was suspended in Italy in March 2023 after the Italian data protection authority (the Garante) found an insufficient legal basis for processing Italian users' data and no age verification mechanism. OpenAI had to implement changes, including age checks and a new transparency notice, within roughly 30 days to restore access. The suspension affected all Italian users and created significant commercial disruption.
Clearview AI was fined €20M in Italy, €20M in France, £7.5M in the UK, and €30.5M in the Netherlands — all for processing EU residents' facial recognition data without legal basis. Clearview operated from the United States. The EU's extra-territorial jurisdiction applied because EU residents were affected.
The EU AI Act follows the same extra-territorial principle as GDPR. The law applies to "providers placing AI systems on the EU market" and to "deployers of AI systems located in the EU" — but also to "providers and deployers of AI systems located in a third country, where the output produced by the AI system is used in the Union." If your AI affects EU residents, the Act applies to you, regardless of where your company is incorporated.
US companies frequently make the mistake of treating EU compliance as an afterthought — something they'll address if they ever "expand to Europe." If their SaaS product has EU users, EU regulation already applies.
How to avoid this mistake
- Check whether you have EU users. If yes, GDPR already applies. Then check which of your AI systems affect EU residents (a triage sketch follows this list).
- For AI systems affecting EU users: review the EU AI Act risk classification. If a system is high-risk, August 2, 2026 is your deadline.
- US companies: appoint an EU authorised representative if required (similar to the GDPR representative requirement).
- The ban on prohibited AI practices applies regardless of location — if your AI function is banned under the EU AI Act, you cannot offer it to EU users, period.
- General-purpose AI model providers: if your model is available in the EU (including via API), you have transparency obligations under the Act regardless of where you are incorporated.
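A hedged sketch of the triage question in the first two bullets: does a given system trigger EU obligations at all, and if so, which track? The fields and the branching are simplifications for illustration; real classification follows the EU AI Act's annexes and legal review.

```python
# Rough triage for EU reach, mirroring the checklist above. The categories and
# questions are simplified assumptions, not legal analysis.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    affects_eu_residents: bool      # is the output used in the Union?
    is_prohibited_practice: bool    # e.g. social scoring, as defined by the Act
    is_high_risk_use_case: bool     # e.g. hiring or credit, per Annex III
    is_gpai_model: bool             # general-purpose model offered in the EU

def eu_triage(system: AISystem) -> list[str]:
    actions: list[str] = []
    if not system.affects_eu_residents:
        return ["No EU users identified - re-check whenever the user base changes."]
    actions.append("GDPR applies to EU users' personal data.")
    if system.is_prohibited_practice:
        actions.append("Prohibited practice: cannot be offered to EU users.")
    if system.is_high_risk_use_case:
        actions.append("High-risk: technical documentation and conformity work "
                       "due before the August 2, 2026 deadline.")
    if system.is_gpai_model:
        actions.append("GPAI transparency obligations apply regardless of where "
                       "the provider is incorporated.")
    return actions
```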
The common thread
These five mistakes share a single underlying cause: treating AI compliance as a legal filing exercise rather than an operational governance program. The companies that avoid them share a different mental model — they treat AI governance the way mature organisations treat financial controls: documented, ongoing, auditable, and embedded in how the system actually operates.
This is not a high bar. It does not require a dedicated compliance team or enterprise software. It requires an AI inventory, documented testing procedures, a meaningful human review process, and honest assessment of which regulations apply based on what your AI does — not where you are incorporated.
Check your AI compliance risk — free
ComplianceIQ maps your AI systems against 108+ jurisdictions and tells you exactly what you need to do — and in what order.
Get my free risk report