Explainability · April 17, 2026 · 13 min read

AI Explainability Requirements: What the Law Requires and How to Deliver It

GDPR Article 22, EU AI Act Article 13, the CPRA, and Colorado SB 24-205 all require that AI decisions be explainable, but each regulation defines "explainability" differently. Here is what each law requires, and which technical approaches satisfy each legal standard.

What Each Regulation Requires

Explainability requirements are not uniform. The trigger, standard, and format differ significantly across jurisdictions:

GDPR Article 22


Applies to: Automated decisions with significant effects on individuals

What it requires: Right to obtain meaningful information about the logic involved, as well as the significance and envisaged consequences of such processing.

Legal standard: Meaningful information about the logic — not necessarily a complete technical explanation. The EDPB has said it requires at least the factors used and their approximate weight.

When triggered: Request from data subject after an automated decision

EU AI Act Article 13


Applies to: High-risk AI systems

What it requires: Instructions for use must include the system's capabilities and limitations, its level of accuracy, and any reasonably foreseeable misuse. High-risk systems must also automatically log events (under the Act's record-keeping rules) to enable post-hoc explanation.

Legal standard: Information proportionate to the role and expertise of the deployer, sufficient for them to interpret the system's output. Requires audit trails enabling post-hoc reconstruction of decisions.

When triggered: Pre-deployment documentation requirement; ongoing for audit

CPRA (California)


Applies to: Automated decision-making with significant effects on California consumers

What it requires: On request, an explanation of the logic involved in the automated decision-making, including what personal information was used, where it came from, and the consequences for the consumer.

Legal standard: Must be specific to the individual's case; generic descriptions of how the AI works are not sufficient.

When triggered: Consumer right exercised on request

Colorado SB 24-205


Applies to: Consequential decisions about Colorado residents

What it requires: Notice of the AI system's role in the decision, a right to appeal and request human review, and an explanation of the principal reasons for an adverse decision.

Legal standard: Plain-language explanation accessible to a layperson.

When triggered: Adverse AI decision affecting Colorado residents

ECOA / FCRA (US)


Applies to: Credit decisions using AI

What it requires: An adverse action notice stating the specific principal reasons for the decision. Regulation B guidance treats up to four principal reasons as sufficient, and where a credit score drives the decision, FCRA requires disclosure of the key factors that adversely affected the score.

Legal standard: Specific, actionable reasons that the consumer can address. "Score" alone is insufficient if the score is AI-derived.

When triggered: Any adverse action on credit application

What Does "Meaningful Explanation" Actually Mean?

The GDPR requires "meaningful information about the logic involved" — but does not define "meaningful." EDPB guidance and enforcement cases provide the clearest picture:

Not the algorithm itself

Courts have consistently held that a full technical explanation — the weights, the architecture, the training process — is not required. Trade secret protection can justify limiting technical disclosure. What matters is the practical explanation the individual can act on.

Source: EDPB guidelines on Art.22

The factors and their approximate weight

The explanation must name the factors that influenced the decision and give a sense of their relative importance. "Your credit score was below our threshold" is more meaningful than "our AI model evaluated your application."

Source: EDPB and French CNIL guidance

Actionable for the individual

A meaningful explanation should tell the individual what they can do. "Your application was rejected because your debt-to-income ratio exceeded our threshold. You could improve your eligibility by reducing outstanding debt or increasing income." This is the gold standard.

Source: Industry best practice and CPRA regulations

Case-specific, not generic

A generic statement — "Our AI considers many factors including credit history" — is not sufficient. The explanation must be specific to the individual's case. The CPPA ADMT regulations explicitly require individual-level explanation.

Source: CPPA ADMT regulations (2024)

Technical Explainability Methods: Which Satisfy Which Law

SHAP (SHapley Additive exPlanations)

What it does: Assigns each feature a contribution value for a specific prediction — shows which inputs pushed the model toward or away from a decision.

Legal fit: Excellent for GDPR and CCPA explanations — provides feature-level attribution per individual decision.

Limitation: Computationally expensive for real-time decisions; the fast, exact variants (such as TreeSHAP) need access to model internals, while the model-agnostic KernelSHAP is slower still.

Best for: Credit scoring, insurance underwriting, loan approvals — anywhere feature attribution is the natural explanation unit.
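To make this concrete, here is a minimal sketch of per-decision feature attribution with the shap library, using a synthetic stand-in for a tabular credit model. The feature names and the model are illustrative assumptions, not a reference implementation.

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for a tabular credit dataset; feature names are illustrative.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
features = ["debt_to_income", "credit_utilisation", "account_age", "late_payments", "income"]
X = pd.DataFrame(X, columns=features)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer gives exact per-feature attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]                        # one row = one automated decision
shap_values = explainer.shap_values(applicant)[0]

# The "factors and their approximate weight" for this individual decision.
ranked = sorted(zip(features, shap_values), key=lambda fv: abs(fv[1]), reverse=True)
for name, weight in ranked:
    print(f"{name}: {weight:+.3f}")
```

Sorting by absolute contribution is what turns raw attributions into the "factors and approximate weight" framing that the GDPR guidance describes.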

LIME (Local Interpretable Model-agnostic Explanations)

What it does: Approximates the model behaviour locally around a specific data point using a simpler, interpretable model.

Legal fit: Good for post-hoc explanations where SHAP is not feasible; produces human-readable feature importance.

Limitation: Less stable than SHAP — running LIME twice on the same instance may produce different explanations.

Best for: Text and image AI where SHAP is not directly applicable; customer service AI decisions.
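As a rough illustration of the local-surrogate idea, the sketch below runs LIME on one tabular instance. The data, feature names, and class names are invented for the example.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for tabular decision data; names are illustrative only.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["tenure_months", "monthly_spend", "support_tickets", "late_payments"]
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["approve", "decline"],
    mode="classification",
)

# Fit a local surrogate around one instance and read off its feature weights.
# Because LIME samples randomly, repeated runs can give slightly different weights.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())
```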

Counterfactual explanations

What it does: "What would you need to change for a different outcome?" Provides the minimal change to input data that would have resulted in a different AI decision.

Legal fit: Ideal for GDPR Art.22 — directly answers "what could I do differently?" Actionable for consumers.

Limitation: Can reveal sensitive model thresholds and invite gaming of the model; requires careful implementation.

Best for: Credit decisions, hiring AI — anywhere the consumer can take corrective action.
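The idea can be sketched as a naive single-feature search, shown below under the assumption of a scikit-learn-style model exposing predict(); dedicated libraries such as DiCE implement counterfactual generation far more robustly, with constraints on plausibility.

```python
import numpy as np

def simple_counterfactual(model, x, feature_ranges, steps=25):
    """Smallest single-feature change that flips the model's decision, if any.

    model          -- any object with a scikit-learn-style predict() (assumed)
    x              -- 1-D numpy array holding one individual's features
    feature_ranges -- list of (low, high) bounds per feature to search over
    """
    original = model.predict(x.reshape(1, -1))[0]
    best = None  # (feature_index, new_value, size_of_change)
    for i, (low, high) in enumerate(feature_ranges):
        for candidate in np.linspace(low, high, steps):
            x_cf = x.copy()
            x_cf[i] = candidate
            if model.predict(x_cf.reshape(1, -1))[0] != original:
                change = abs(candidate - x[i])
                if best is None or change < best[2]:
                    best = (i, candidate, change)
    # e.g. (0, 0.34, 0.07) -> "changing feature 0 to 0.34 would change the outcome"
    return best
```

In practice the search should be restricted to features the individual can actually change, never to immutable or protected attributes.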

Attention mechanisms (NLP)

What it does: For transformer-based language models, attention weights show which parts of the input the model focused on.

Legal fit: Useful for transparency but not sufficient alone for legal explanations — attention does not equal causation.

Limitation: Research shows attention weights do not always correlate with model decisions; treat as supplementary.

Best for: Text classification, sentiment analysis, document review AI — as a supplementary explanation layer.
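For illustration, the sketch below pulls attention weights out of a Hugging Face BERT encoder. It only shows how to surface the weights; as noted above, they should be treated as a supplementary signal, not a legal explanation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

text = "The claim was denied because the supporting documentation was missing."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.attentions: one tensor per layer, shape (batch, heads, tokens, tokens).
# Average the last layer's heads and read the [CLS] token's attention over the input.
last_layer = outputs.attentions[-1][0]        # (heads, tokens, tokens)
cls_attention = last_layer.mean(dim=0)[0]     # (tokens,)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, weight in zip(tokens, cls_attention):
    print(f"{token:>15s}  {weight.item():.3f}")
```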

Decision tree surrogate models

What it does: Train a simple decision tree to approximate the behaviour of a complex model. The tree's rules are inherently interpretable.

Legal fit: Good for generating rule-based explanations that regulators can audit; less accurate than SHAP per instance.

Limitation: Global surrogate, not per-instance — accuracy decreases as the complex model behaviour diverges from the tree.

Best for: Regulatory documentation and audit; understanding model behaviour at population level.
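A minimal sketch of the surrogate idea, assuming a tree-ensemble "black box" and invented feature names: fit a shallow tree to the complex model's own predictions, export its rules, and report how faithfully it mimics the original.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Fit the surrogate on the black-box model's predictions, not on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

feature_names = ["debt_to_income", "credit_utilisation", "account_age", "late_payments"]
print(export_text(surrogate, feature_names=feature_names))

# Always report fidelity: how often the surrogate agrees with the real model.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.1%}")
```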

Black-Box AI: Can You Use It for High-Risk Decisions?

Deep learning models and large foundation models are often called "black boxes" because their internal logic is opaque. Are they compatible with explainability requirements?

The short answer: maybe, with safeguards

The EU AI Act does not prohibit black-box models, but it requires that their outputs be explainable. SHAP, LIME, and counterfactual methods can generate post-hoc explanations that satisfy legal requirements even when the model internals are opaque. However, the explanations must be validated as accurate representations of the model's actual reasoning, which is a technically non-trivial requirement.

The practical approach for high-risk AI using complex models: combine SHAP for feature attribution, counterfactuals for the consumer-facing explanation, and audit logs for regulatory review. Document which approach was used, why, and what validation was done.
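One way to picture that combination is a per-decision record that bundles the attributions, the counterfactual, and the metadata needed to reconstruct the explanation later. The sketch below is an illustrative schema only; none of the regulations above prescribes these field names.

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision_id: str
    timestamp: str
    model_version: str
    decision: str
    feature_attributions: dict  # e.g. SHAP values per feature
    counterfactual: str         # consumer-facing "what would change the outcome"
    explanation_method: str     # which technique was used and how it was validated

record = DecisionRecord(
    decision_id=str(uuid.uuid4()),
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="credit-risk-2026-04",          # illustrative version tag
    decision="declined",
    feature_attributions={"debt_to_income": 0.41, "late_payments": 0.22},
    counterfactual="Reducing debt-to-income below 0.35 would change the outcome.",
    explanation_method="SHAP (TreeExplainer) + single-feature counterfactual",
)
print(json.dumps(asdict(record), indent=2))
```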

Explainability Compliance Checklist

Identify all AI systems making decisions that trigger explainability requirements (GDPR Art.22, EU AI Act Art.13, CCPA, Colorado)

Choose explainability technique per use case (SHAP for tabular data; counterfactuals for consumer-facing; decision trees for audit)

Validate that chosen technique accurately represents model behaviour

Build consumer-facing explanation generation into your AI decision pipeline, not as a manual post-hoc step (a minimal sketch of this follows the checklist)

Test explanations for accessibility: plain language, no jargon, actionable

Build right-to-explanation fulfillment process: how does a consumer request an explanation? Who generates it? In what timeframe?

Log all AI decisions with sufficient detail to reconstruct an explanation retrospectively

Include explanation format in AI technical documentation (EU AI Act Art.13 requirement)

Train staff who respond to explanation requests to use the same terminology as the AI system

Periodically audit that explanations given are consistent with actual model behaviour (explanations should not drift from model)
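As referenced in the pipeline item above, here is a minimal sketch of turning ranked feature attributions into a case-specific, plain-language explanation. The wording templates and feature names are illustrative assumptions, not regulator-approved text.

```python
# Map internal feature names to plain-language phrases (illustrative templates).
PLAIN_LANGUAGE = {
    "debt_to_income": "your debt is high relative to your income",
    "late_payments": "your account history shows recent late payments",
    "credit_utilisation": "you are using a large share of your available credit",
}

def explain(decision: str, attributions: dict, top_n: int = 2) -> str:
    """Build a consumer-facing explanation from per-feature attribution values."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = [PLAIN_LANGUAGE.get(name, name.replace("_", " ")) for name, _ in ranked[:top_n]]
    return (
        f"Your application was {decision} mainly because "
        + " and ".join(reasons)
        + ". You can request a human review of this decision."
    )

print(explain("declined", {"debt_to_income": 0.41, "late_payments": 0.22, "income": -0.05}))
```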

Track Your Explainability Obligations

ComplianceIQ maps which explainability requirements apply to each of your AI systems — with task tracking and evidence collection for GDPR, EU AI Act, and US state law compliance.

Run a Free Risk Assessment