AI Compliance Audit: How to Prepare and What to Expect
AI compliance audits are increasing. EU AI Act market surveillance is active. Data protection authorities (DPAs) are asking about AI in GDPR audits. The CFPB and the EEOC are reviewing AI use in regulated industries. Here is how these audits work, what regulators examine, and how to be ready before one starts.
Types of AI compliance audits
Regulatory inspection
Regulatory inspections focus on whether you are meeting your legal obligations. Inspectors examine documentation, technical systems, and governance processes, and may interview staff.
Contractual audit right
A customer exercises an audit right under a data processing agreement or contract. The focus is usually on your data handling and security practices, with a narrower scope than a regulatory audit.
Self-assessment / certification audit
You are pursuing certification against a standard such as ISO/IEC 42001. Auditors assess whether your AI management system meets the standard's requirements. Findings lead to certification, observations, or non-conformities.
Bias audit (NYC Local Law 144 and similar)
An independent auditor receives your AI system, training data, or system outputs and tests for disparate impact across protected characteristics. The result is a published audit summary.
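The disparate-impact test at the core of an LL144-style bias audit can be sketched in a few lines. This is a minimal illustration, assuming a simple list of (group, selected) outcome pairs; real audits follow the auditor's own methodology and data formats.

```python
from collections import Counter

def impact_ratios(outcomes):
    """Selection rate per group divided by the highest group's rate --
    the impact-ratio metric published in NYC Local Law 144 bias audits.

    outcomes: list of (group, was_selected) pairs (illustrative format).
    """
    totals = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, chosen in outcomes if chosen)
    rates = {g: selected[g] / totals[g] for g in totals}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

# 80% of group A selected vs 60% of group B: B's impact ratio is ~0.75,
# below the 0.8 "four-fifths" threshold often used as a screening rule.
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 60 + [("B", False)] * 40)
ratios = impact_ratios(sample)
```

A ratio near 1.0 for every group is what "suspiciously good" results look like; a ratio well below 0.8 for any group is what ends up in the published summary.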
How an AI compliance audit unfolds
Pre-audit preparation (2–4 weeks)
Identify what systems are in scope
Create an inventory of all AI systems used in your business. For a regulatory audit, in-scope systems are typically those that process the personal data or make the types of decisions the authority regulates. For an EU AI Act audit, in-scope systems are high-risk AI systems.
Gather documentation for each in-scope system
For each system: system record, risk assessment/DPIA, bias testing records, explainability documentation, and monitoring logs. Check that all documents are current (not created this week, but also not 3 years old).
Interview your own team
Before auditors do. Ask the product team how human oversight actually works. Ask operations how they handle explanation requests. Ask engineering how they detect model drift. Audit findings often come from gaps between what the documentation says and what staff actually do.
Fix what you find
If your self-review finds gaps — missing DPIA, outdated bias test, explanation process that exists on paper but not in practice — fix them before the audit. Document that you fixed them and when. Do not wait for auditors to find problems that you identified yourself.
Document review (days 1–5 of audit)
What auditors look for in AI documentation
- EU AI Act: technical documentation (Article 11 content), risk assessment, human oversight procedures, post-market monitoring logs, conformity assessment.
- GDPR: DPIA, Records of Processing Activities (Article 30 RoPA), consent mechanisms, data subject rights procedures.
- US federal (CFPB): model risk management documentation, adverse action reason code documentation, disparate impact analysis.
Red flags that prompt deeper investigation
- Documentation created recently but claiming to cover past periods.
- Identical DPIAs for different systems (copy-paste).
- Bias testing results that are suspiciously good (directionally uniform performance across all groups is rare).
- Monitoring logs that show no alerts for 12+ months on a high-volume system.
- Policies that reference "the team" without naming a specific accountable person.
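The monitoring-log red flag is easy to check yourself before auditors do. A minimal self-check, assuming your monitoring system can yield a list of past alert dates:

```python
from datetime import date

def alert_gap_flag(alert_dates, today, max_quiet_days=365):
    """Flag a system whose monitoring log shows no alerts for 12+
    months -- the same red flag an auditor would notice.

    alert_dates: dates of past alerts (assumed log format).
    """
    if not alert_dates:
        return True  # no alerts ever recorded is the strongest flag
    quiet_days = (today - max(alert_dates)).days
    return quiet_days >= max_quiet_days

today = date(2025, 6, 1)
stale = alert_gap_flag([date(2024, 1, 15)], today)   # ~17 quiet months
fresh = alert_gap_flag([date(2025, 4, 2)], today)    # recent alert
```

Run something like this across the whole inventory; any flagged system deserves either a fixed alerting pipeline or a documented explanation before the audit starts.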
Technical inspection (days 3–8 of audit)
What technical inspection covers
Regulators can request:
- access to the AI system in a controlled environment
- sample outputs for specific input cases
- demonstration of human override capability
- demonstration of data subject rights fulfillment (can you export an individual's data, can you delete it)
They will typically NOT request model weights, proprietary training code, or trade secrets without specific cause.
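The rights-fulfillment demonstration can be rehearsed against a sketch like the following; the in-memory `store` is a stand-in for your real databases, and the function names are illustrative.

```python
import json

# Stand-in for the real data stores holding personal data.
store = {
    "user-42": {"name": "A. Person", "score": 0.81, "decision": "declined"},
}

def export_subject_data(subject_id):
    """Access request: return everything held about one individual."""
    return json.dumps(store.get(subject_id, {}), indent=2)

def delete_subject_data(subject_id):
    """Erasure request: remove the individual's data, confirm success."""
    return store.pop(subject_id, None) is not None

exported = export_subject_data("user-42")   # full record as JSON
deleted = delete_subject_data("user-42")    # record removed
```

In production both operations span every system holding the data, which is exactly why auditors ask to see them demonstrated rather than described.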
Testing the human oversight claim
A common audit technique: present a case where the AI produces a clearly wrong or biased output, and observe whether the human reviewer catches and overrides it. If your documentation says humans can override the AI but the interface makes override difficult or audit-logged differently from approvals, this becomes a finding.
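One way to avoid that finding is to log overrides as their own event type, so they are as easy to record and count as approvals. A minimal sketch, with illustrative names:

```python
from datetime import datetime, timezone

audit_log = []

def record_review(reviewer, case_id, ai_decision, human_decision):
    """Log every human review; overrides get their own event type so
    they can be counted separately from rubber-stamp approvals."""
    event = "override" if human_decision != ai_decision else "approval"
    audit_log.append({
        "event": event,
        "reviewer": reviewer,
        "case_id": case_id,
        "ai_decision": ai_decision,
        "human_decision": human_decision,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return event

record_review("j.doe", "case-1", "decline", "decline")   # approval
record_review("j.doe", "case-2", "decline", "approve")   # override

overrides = [e for e in audit_log if e["event"] == "override"]
```

An override rate of exactly zero across a large volume of reviews is itself evidence that oversight exists on paper only.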
Staff interviews (days 5–10)
Who gets interviewed
Usually: the person responsible for AI governance (DPO, CISO, VP Engineering, or whoever is named in documentation), the product manager or engineer responsible for the AI system, a front-line staff member who uses AI outputs in their work. The interview compares what they say to what the documentation says.
Questions you should be able to answer
- "What do you do when the AI makes a decision you disagree with?"
- "How do you know if the AI is performing worse than expected?"
- "If a customer says the AI decision about them was wrong, what happens?"
- "When did you last review the bias testing results for this system?"
- "Who approved deploying this AI system?"
Findings and remediation (post-audit)
Types of findings
- Critical finding: a systemic violation that must be remediated immediately or face sanction. Regulators may require a corrective action plan with deadlines.
- Major finding: a significant compliance gap that must be remediated within a defined period (often 30–90 days).
- Minor finding / observation: a documented gap or improvement opportunity. A response is not mandatory but recommended.
How to respond to findings
- Acknowledge immediately. Do not dispute findings by email on the day of the audit report.
- Request a meeting to discuss remediation timelines if the standard deadline is not achievable.
- Provide a written corrective action plan with specific steps, responsible persons, and dates.
- Document your remediation as you go; regulators may request evidence of completion.
Pre-audit preparation checklist
Documentation
- AI system inventory: complete, current list of all AI systems in production
- System records: one per AI system with purpose, inputs, outputs, accountability
- DPIA / risk assessment: for all high-risk systems, dated and reviewed within 12 months
- Bias testing: most recent test results, methodology, auditor name if external
- Monitoring logs: at least 6 months of performance monitoring entries
- Explanation templates: what you provide to individuals who request explanation
- Data subject rights procedures: written, not just a policy statement
Governance
- Named AI accountability: a specific person is responsible for AI governance (not a team)
- Approval process: documentation of who approved deploying each AI system
- Review schedule: evidence that governance reviews actually happen (meeting minutes, dated records)
- Vendor agreements: DPA or AI-specific agreements with all AI vendors
- Incident response: a documented process for responding to AI errors or discrimination findings
Technical controls
- Human override is technically possible and creates a separate audit log
- Data subject rights can be fulfilled technically: individual data export and deletion work
- AI outputs are logged with sufficient context to reconstruct any decision
- Model version tracking: you know which model version made which historical decisions
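The last two controls can be rehearsed together: log each AI output with its model version and inputs, then show you can reconstruct any historical decision. A minimal sketch, with illustrative field names:

```python
decision_log = []

def log_decision(case_id, model_version, inputs, output):
    """Log each AI output with enough context to reconstruct it later:
    which model version saw which inputs and produced which output."""
    decision_log.append({
        "case_id": case_id,
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    })

log_decision("case-7", "2.1.0",
             {"income": 52000, "tenure_months": 18}, "approve")

def reconstruct(case_id):
    """Answer the audit question: which model, which inputs, which output?"""
    return next(e for e in decision_log if e["case_id"] == case_id)
```

With this in place, "which model version made this decision in March?" is a lookup, not an archaeology project.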
Staff readiness
- AI system owners can explain how their system works and how oversight is performed
- Front-line staff know what to do when they disagree with an AI recommendation
- Your DPO (if appointed) has reviewed AI documentation in the past 12 months
- You have a contact list for your AI vendors in case auditors request vendor information
The difference between audit-ready and not
Not audit-ready
- ✕ No AI system inventory
- ✕ DPIAs exist for some systems but not the high-risk ones
- ✕ Bias testing done once at launch, not since
- ✕ Documentation created after receiving audit notice
- ✕ "Human oversight" means a dashboard exists, not that anyone reviews it
- ✕ Explanation process is a policy statement, not an actual mechanism
Audit-ready
- ✓ Complete AI inventory with version history
- ✓ DPIA for every system processing personal data
- ✓ Annual (minimum) bias testing with published or available results
- ✓ Documentation predates deployment by months
- ✓ Monitoring logs show actual alerts and human responses
- ✓ Explanation requests are tracked and fulfilled in documented time
Do not try to conceal problems from auditors
Regulators bring technical staff to AI audits. They understand bias statistics, model risk, and documentation patterns. Auditors frequently say that the worst outcomes (the highest fines and the most extensive remediation orders) come not from the underlying compliance gaps, but from attempts to conceal them.
If you have a compliance gap, disclose it proactively with a remediation plan. GDPR enforcement practice shows that proactive disclosure and good-faith remediation consistently receive lower fines than concealment followed by mandatory remediation. The same principle applies under the EU AI Act.
Know your compliance gaps before auditors do
ComplianceIQ maps your AI systems against 108+ jurisdictions and identifies the documentation and testing gaps before a regulatory audit finds them.
Get my free risk report