
How to Document AI Decision-Making for Compliance

EU AI Act, GDPR, ECOA, and US state AI laws all require that you document how AI decisions are made. Not the algorithm. Not the model weights. The governance record — what the system does, how you tested it, how you monitor it, and how you explain it. Here is exactly what to document, in what format, with templates.

Updated April 2026 · by ComplianceIQ Editorial

Why documentation is checked before features

Regulators cannot audit your model weights or training code. What they audit is your documentation. An organisation with poor documentation but a good model looks non-compliant. An organisation with thorough documentation and a mediocre model looks structured and accountable. Documentation is not proof of compliance — but its absence is strong evidence of non-compliance.

1. AI System Record

EU AI Act (Article 11) · GDPR Article 30 · Colorado SB 205

Establishes what AI system is in use, its purpose, and its deployment context. The foundation of all other documentation.

What to include

  • System name, version, and vendor (if third-party)
  • Intended use case and intended users
  • Deployment date and environment (production, staging)
  • Data inputs: what data sources feed the system
  • Data outputs: what decisions or outputs the system produces
  • How outputs are used in business processes
  • Who is accountable for the system's performance
  • Review schedule (at minimum annual)

Template / example entry

System Name: Customer Credit Risk Model v3.2
Vendor: Internal / [External vendor name if applicable]
Purpose: Predict creditworthiness of loan applicants; outputs probability of default (0–1) used in underwriting decision
Deployment Date: 2025-09-14
Inputs: Application data (income, employment, debt), credit bureau data (Equifax)
Outputs: Risk score 0–100, recommendation (Approve / Review / Decline)
Usage: Score → underwriter review → final decision (no automatic approval for scores <40)
Accountable: Head of Credit Risk, [Name], credit-risk@company.com
Review Schedule: Annual or when regulatory requirements change

Practical note: For EU AI Act high-risk systems, the technical documentation (Article 11) must also include design methodology, training data description, accuracy metrics, known limitations, and standards applied.
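A system record like the one above can also be kept as structured data, which makes it easy to validate and version-control alongside the prose document. A minimal sketch — the class and field names are illustrative, not a regulatory schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative structure for an AI system record; field names are
# examples chosen to mirror the template above, not a mandated format.
@dataclass
class AISystemRecord:
    name: str
    version: str
    vendor: str
    purpose: str
    deployment_date: date
    inputs: list
    outputs: list
    usage: str
    accountable_owner: str
    review_interval_days: int = 365  # "at minimum annual"

    def next_review_due(self, last_review: date) -> date:
        """Compute when the next scheduled review falls due."""
        return last_review + timedelta(days=self.review_interval_days)

record = AISystemRecord(
    name="Customer Credit Risk Model",
    version="3.2",
    vendor="Internal",
    purpose="Predict probability of default for loan underwriting",
    deployment_date=date(2025, 9, 14),
    inputs=["application data", "credit bureau data"],
    outputs=["risk score 0-100", "recommendation"],
    usage="Score feeds underwriter review; no automatic approval",
    accountable_owner="Head of Credit Risk",
)
print(record.next_review_due(date(2026, 1, 1)))  # 2027-01-01
```

Storing the record this way lets a script flag systems whose review date has passed, rather than relying on someone remembering the annual cycle.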

2. Risk Assessment / DPIA

GDPR Article 35 (DPIA) · EU AI Act (fundamental rights impact assessment) · Colorado SB 205

Demonstrates that you assessed the risks of the AI system before deployment — not after a problem occurred.

What to include

  • Description of the processing and its purpose
  • Assessment of necessity and proportionality
  • Identification of risks to individuals' rights and freedoms
  • Mitigation measures for each identified risk
  • Residual risk assessment after mitigation
  • Consultation with DPO (if applicable)
  • Date of assessment and next review date

Template / example entry

RISK: AI credit model may proxy for protected characteristics (race, national origin via postal code data)
LIKELIHOOD: Medium — postal code correlates with race in many jurisdictions
IMPACT: High — credit denial has significant life impact; potential ECOA/FHA violation
MITIGATION:
  1. Remove postal code from model inputs
  2. Run disparate impact analysis quarterly by race, sex, national origin
  3. Document adverse action reason codes at individual level
  4. External audit of model annually
RESIDUAL RISK: Low-Medium (mitigation reduces but doesn't eliminate risk)
REVIEWED BY: [DPO name], [Date]

Practical note: GDPR DPIAs are required when processing is "likely to result in a high risk." Most AI systems processing personal data at scale, involving profiling, or affecting vulnerable individuals meet this threshold.

3. Bias and Fairness Testing Record

NYC Local Law 144 (annual bias audit) · Colorado SB 205 · EU AI Act Article 10 · ECOA/FHA (lending AI) · EEOC guidance (hiring AI)

Demonstrates that you tested the AI system for discriminatory outcomes — a key differentiator between adequate and inadequate AI governance.

What to include

  • Test date and tester (internal team or external auditor)
  • Protected characteristics tested: race, sex, age, national origin, etc.
  • Methodology: which statistical tests were used (disparate impact ratio, 4/5ths rule, etc.)
  • Test dataset: what data was used, how representative it is
  • Results by group: selection rates, approval rates, or outcome rates for each protected group
  • Whether any significant disparities were found
  • Actions taken if disparities found
  • Next test date

Template / example entry

Test Date: 2026-01-15
Tester: [External auditor firm name] — independent as required by NYC LL144
Protected Characteristics Tested: Sex, Race/Ethnicity
Methodology: Disparate Impact Ratio (DIR) analysis; 4/5ths rule; regression analysis controlling for legitimate factors

RESULTS:
  Sex: Female DIR = 0.87 (87% of male selection rate) — within 4/5ths threshold (0.80 minimum)
  Race: Black/African American DIR = 0.79 — BELOW 4/5ths threshold — REQUIRES REMEDIATION

ACTIONS REQUIRED:
  1. Investigate root cause of racial disparity
  2. Document investigation and remediation steps
  3. Retest within 90 days

Next Scheduled Test: 2027-01-15 (or earlier if remediation requires a retest)

Practical note: NYC Local Law 144 requires that bias audits be conducted by an "independent auditor" — not internal staff. External audit reports must be published on the employer's website.
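The disparate impact ratio and 4/5ths check used in the methodology above are simple to compute and worth automating so every test run applies the same rule. A minimal sketch — the group names and selection rates are hypothetical:

```python
def disparate_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Selection rate of a group divided by the reference (highest-rate) group."""
    return group_rate / reference_rate

# Hypothetical selection rates (selected / applicants) per group
selection_rates = {"Group A": 0.46, "Group B": 0.40, "Group C": 0.36}
reference = max(selection_rates.values())

for group, rate in selection_rates.items():
    dir_ = disparate_impact_ratio(rate, reference)
    # 4/5ths rule: a ratio below 0.80 is treated as evidence of
    # adverse impact and flagged for remediation
    flag = "OK" if dir_ >= 0.80 else "REQUIRES REMEDIATION"
    print(f"{group}: DIR = {dir_:.2f} — {flag}")
```

Note that the 4/5ths rule is a screening heuristic, not a legal safe harbor; a ratio above 0.80 does not by itself establish the absence of disparate impact, which is why the template also calls for regression analysis.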

4. Automated Decision Explainability Record

GDPR Article 22 · LGPD Article 20 · Colorado SB 205 · CFPB adverse action guidance · EU AI Act Article 26

Documents how you explain AI decisions to affected individuals. Required for credit decisions (ECOA adverse action), EU/Brazil data subject requests, and high-risk AI under EU AI Act.

What to include

  • What information the system provides when it makes a decision
  • How the system identifies the key factors that drove the decision
  • What a data subject receives when they request an explanation
  • Response time for explanation requests
  • Who handles explanation requests
  • Template of the explanation you provide

Template / example entry

ADVERSE ACTION NOTICE TEMPLATE (ECOA/FCRA compliant):

Your application was not approved. The principal reasons are:
  1. Debt-to-income ratio too high (your ratio: 58%; threshold: 43%)
  2. Length of credit history below threshold (your history: 2 years; threshold: 5 years)
  3. Number of recent credit inquiries: 4 in 12 months

If you believe this decision was based on inaccurate information, you have the right to:
  - Request a free copy of your credit report from the bureau(s) used
  - Dispute inaccurate information directly with the bureau
  - Reapply after addressing the factors above

Credit bureau used: [Bureau name and contact]
Decision date: [Date]
This decision was made using [automated/automated with human review] processing.

Practical note: Under GDPR Article 22, individuals have the right to "obtain human intervention" when subject to solely automated decisions. You must have a documented process for this request — not just a policy that it exists.
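For a simple linear scoring model, the "principal reasons" in an adverse action notice can be derived by ranking each input's contribution to the risk score. A minimal sketch, assuming a weighted-sum model — the weights, baselines, and feature names are made up for illustration, and more complex models need attribution methods such as SHAP instead:

```python
# Hypothetical linear risk model: each feature's contribution is
# weight * (applicant value - population baseline). The largest
# positive contributions push the risk score up the most and become
# the adverse action reason codes.
weights = {"debt_to_income": 1.2, "credit_history_years": -0.8, "recent_inquiries": 0.5}
baselines = {"debt_to_income": 0.35, "credit_history_years": 8.0, "recent_inquiries": 1.0}

def top_reasons(applicant: dict, n: int = 3) -> list:
    """Return the n features that contributed most to an adverse score."""
    contributions = {
        feat: weights[feat] * (applicant[feat] - baselines[feat])
        for feat in weights
    }
    return sorted(contributions, key=contributions.get, reverse=True)[:n]

applicant = {"debt_to_income": 0.58, "credit_history_years": 2.0, "recent_inquiries": 4}
print(top_reasons(applicant))
# → ['credit_history_years', 'recent_inquiries', 'debt_to_income']
```

Whatever attribution method you use, the explainability record should document it, so that the reason codes on individual notices can be reproduced on request.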

5. Model Performance Monitoring Log

EU AI Act Article 72 (post-market monitoring) · CFPB model risk management · Colorado SB 205 (annual review)

Demonstrates that you monitor AI performance after deployment. Regulators distinguish between companies that test once at deployment and those that monitor continuously.

What to include

  • Performance metrics tracked and their acceptable ranges
  • Monitoring frequency (real-time, daily, weekly, monthly)
  • Who reviews the monitoring data
  • Threshold for escalation or model review
  • History of alerts and responses
  • Model drift detection methodology

Template / example entry

MONITORING SCHEDULE: Customer Churn Prediction Model
Frequency: Weekly automated report, monthly human review

METRICS TRACKED:
  - Precision / Recall (alert if drops >5% from baseline)
  - Disparate impact ratio by age group (alert if any group <0.80)
  - Feature distribution drift (alert if input distributions shift >2 standard deviations)
  - Decision volume anomalies (alert if decisions/hour exceeds 3× baseline)

ESCALATION:
  - <5% performance drop: Document, monitor weekly
  - 5-10% performance drop: Human review within 5 business days
  - >10% performance drop OR disparate impact alert: Suspend model, human decisions only until reviewed

LOG: 2026-02-14 — Precision drop from 0.82 to 0.79 (within threshold). Monitoring increased to daily.

Practical note: The EU AI Act's post-market monitoring requirement (Article 72) is often overlooked. Providers and deployers of high-risk AI must collect and analyze performance data after deployment, not only before.
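The escalation thresholds in the template above can be encoded directly, so the same rule fires every time rather than depending on a reviewer's judgment in the moment. A minimal sketch — the function name is illustrative, and the thresholds mirror the template:

```python
def escalation_action(baseline: float, current: float,
                      disparate_impact_alert: bool = False) -> str:
    """Map a performance drop (and any disparate impact alert) to an escalation tier."""
    drop = (baseline - current) / baseline  # fractional drop from baseline
    if disparate_impact_alert or drop > 0.10:
        return "Suspend model; human decisions only until reviewed"
    if drop >= 0.05:
        return "Human review within 5 business days"
    return "Document; continue weekly monitoring"

# Entry from the log above: precision dropped from 0.82 to 0.79 (~3.7%)
print(escalation_action(0.82, 0.79))
# → Document; continue weekly monitoring
```

Logging the computed tier alongside each alert gives you exactly the alert-and-response history the monitoring record calls for.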

How to organise your AI documentation

Create a documentation folder for each AI system, named clearly: ai-systems/[system-name]/. Inside, keep these five document types — or one combined document with five sections. Version-control it (Git, SharePoint version history, or a dated naming convention).

  • system-record.md — AI System Record: what the system is and does
  • risk-assessment.md — Risk Assessment / DPIA: what risks you identified and mitigated
  • bias-testing/ — Bias Testing Records: one file per test date, with full results
  • explainability.md — Explainability Record: how you explain decisions and to whom
  • monitoring-log.md — Monitoring Log: ongoing performance data and alerts

Each document should have a header showing: date created, date last reviewed, and who reviewed it. This version history is what demonstrates to regulators that governance is ongoing, not reactive.
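A short script can check that every system folder contains the five document types, which is useful as a pre-audit sweep. A minimal sketch, assuming the ai-systems/ naming convention described above:

```python
from pathlib import Path

# The five document types described above: four files plus the
# bias-testing/ folder of per-test records.
REQUIRED_FILES = ["system-record.md", "risk-assessment.md",
                  "explainability.md", "monitoring-log.md"]
REQUIRED_DIRS = ["bias-testing"]

def missing_docs(system_dir: Path) -> list:
    """Return the required documents/folders absent from one system's folder."""
    missing = [f for f in REQUIRED_FILES if not (system_dir / f).is_file()]
    missing += [d for d in REQUIRED_DIRS if not (system_dir / d).is_dir()]
    return missing

# Sweep every system under ai-systems/ and report gaps
for system in sorted(Path("ai-systems").glob("*/")):
    gaps = missing_docs(system)
    status = "complete" if not gaps else f"missing: {', '.join(gaps)}"
    print(f"{system.name}: {status}")
```

Running a check like this on a schedule (or in CI, if the documentation lives in Git) turns "is our documentation complete?" from an annual scramble into a standing report.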

Documentation mistakes regulators flag

Know which documents you actually need

ComplianceIQ tells you which regulations apply to your AI systems — and which documentation each regulation requires. Free risk report, no signup.
