Documentation · EU AI Act Art. 13 · April 17, 2026 · 11 min read

AI Model Cards: What They Are, What to Include, and When You Need One

A model card is a short document that describes an AI system's intended use, performance, limitations, and ethical considerations. First proposed at Google in 2018, model cards are now an industry standard, and a well-constructed card satisfies key EU AI Act transparency requirements.

What Is a Model Card?

A model card is a concise, standardised document that accompanies an AI model and describes key information that users and decision-makers need to understand and use the model appropriately. The concept was introduced by Mitchell et al. at Google in 2018 and has become the de facto standard for AI transparency documentation.

Model cards differ from technical documentation in that they are designed to be readable by non-technical stakeholders (business owners, compliance officers, regulators), not just engineers. A model card answers the questions a responsible user must be able to answer before deploying an AI system.

EU AI Act alignment

EU AI Act Articles 11 and 13 require technical documentation and instructions for use for high-risk AI systems. Model cards, when comprehensive, satisfy the transparency and disclosure requirements of Article 13 — and provide the starting point for the Article 11 technical documentation file.

Who Needs a Model Card?

High-risk AI system deployers (EU AI Act)

Legally required

Article 13 requires transparency documentation equivalent to a model card. Model cards produced by providers must be made available to deployers. Deployers should produce their own for each deployment context.

GPAI model providers (EU AI Act)

Legally required

Article 53 requires technical documentation for GPAI models that downstream providers need to comply with their own obligations. A model card is the standard format for communicating this.

AI systems in regulated sectors (healthcare, finance, HR)

Legally required

FDA guidance on AI/ML-based software, FCA model risk management guidance, and the EU MDR all impose transparency documentation requirements that a well-constructed model card can satisfy.

Any organisation deploying AI that affects people

Best practice. Model cards are the industry standard for responsible AI documentation. They demonstrate due diligence and provide a record for internal governance and external accountability.

Model Card Template: 7 Required Sections

A complete model card contains seven sections. For each section below, the EU AI Act requirement it satisfies is noted.

Model details

Basic facts about the model

  • Model name and version
  • Model type (classifier, regressor, generative, etc.)
  • Input and output types
  • Model architecture (at high level)
  • Training date and last update
  • Developer / responsible organisation
  • Contact for questions

EU AI Act: Article 11(1) and Annex IV — technical documentation must include a general description of the system, design choices, and intended purpose

Intended use

What the model is designed to do — and not do

  • Primary intended use cases
  • Primary intended users
  • Out-of-scope uses (what the model must NOT be used for)
  • Prohibited use cases

EU AI Act: Article 13(3)(b) — Instructions for use must include intended purpose and use conditions

Training data

What data the model was trained on

  • Training data sources
  • Training data size and characteristics
  • Preprocessing steps
  • Known biases or limitations in the training data
  • Whether personal data was used (and basis for use)

EU AI Act: Article 10(2)–(4) — Data governance requirements; Article 11 technical documentation

Performance metrics

How the model performs overall and across subgroups

  • Evaluation datasets and methodology
  • Overall performance metrics (accuracy, F1, AUC, etc.)
  • Performance by subgroup (demographic, domain, etc.)
  • Confidence thresholds and calibration
  • Performance in edge cases

EU AI Act: Article 15 — Accuracy, robustness, and cybersecurity; Article 10(2)(f) — examination for possible biases
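The subgroup reporting above can be sketched in a few lines of plain Python; the groups, labels, and numbers below are illustrative placeholders, not results from any real evaluation:

```python
from collections import defaultdict

def subgroup_accuracy(records):
    """Compute overall and per-subgroup accuracy.

    `records` is an iterable of (subgroup, y_true, y_pred) tuples --
    a hypothetical evaluation log, not a real dataset.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    overall = sum(correct.values()) / sum(total.values())
    by_group = {g: correct[g] / total[g] for g in total}
    return overall, by_group

# Illustrative records: two subgroups with different error rates
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]
overall, by_group = subgroup_accuracy(records)
print(overall)              # 0.625 overall accuracy
print(by_group["group_a"])  # 0.75
print(by_group["group_b"])  # 0.5
```

A gap like the one between group_a and group_b is exactly what the Article 10 bias examination is meant to surface, and it belongs in this section of the card rather than hidden behind a single headline number.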

Limitations and risks

What the model does not do well

  • Known failure modes
  • Out-of-distribution performance
  • Potential biases or fairness concerns
  • Sensitivity to input distribution shift
  • Security vulnerabilities (adversarial examples, prompt injection, etc.)

EU AI Act: Article 13(3)(b)(v) — Foreseeable misuse, unintended outcomes, and risks

Ethical considerations

How the model was evaluated for ethical risks

  • Ethical review process conducted
  • Protected characteristics examined for bias
  • Human rights considerations
  • Environmental impact (if relevant)

EU AI Act: Article 9 — risk management system; Article 27 — fundamental rights impact assessment

Human oversight recommendations

How humans should supervise model use

  • Minimum human oversight requirements
  • Cases where human review is mandatory before acting on model output
  • Escalation criteria
  • Override mechanisms

EU AI Act: Article 14 — Human oversight measures; Article 13(3)(d) — human oversight capability description
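One way to keep the seven sections consistent across deployments is to hold the card as structured data and render it on demand. The sketch below uses nothing beyond the standard library; the field names and placeholder values are illustrative, not a mandated schema:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Minimal model card skeleton mirroring the seven sections above.

    Field names are illustrative, not a standard schema.
    """
    model_details: dict
    intended_use: dict
    training_data: dict
    performance_metrics: dict
    limitations_and_risks: list
    ethical_considerations: list
    human_oversight: list

    def render(self) -> str:
        """Render the card as plain text, one heading per section."""
        lines = []
        for section, content in vars(self).items():
            lines.append(section.replace("_", " ").title())
            if isinstance(content, dict):
                lines.extend(f"  {k}: {v}" for k, v in content.items())
            else:
                lines.extend(f"  - {item}" for item in content)
            lines.append("")
        return "\n".join(lines)

# Placeholder content for a hypothetical internal classifier
card = ModelCard(
    model_details={"name": "demo-classifier", "version": "1.2.0"},
    intended_use={"primary": "triage support", "out_of_scope": "sole decision-maker"},
    training_data={"sources": "internal tickets (2020-2024)", "personal_data": "no"},
    performance_metrics={"accuracy_overall": 0.91, "accuracy_by_group": "see appendix"},
    limitations_and_risks=["degrades on out-of-distribution inputs"],
    ethical_considerations=["bias review across protected characteristics"],
    human_oversight=["human review required before any adverse action"],
)
print(card.render())
```

Keeping the card as structured data makes it straightforward to version it alongside the model and export it into whatever format a regulator or the Article 11 technical documentation file requires.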

Real-World Examples to Reference

Google · Model Cards for Model Reporting (2018)

The original model card paper by Mitchell et al. at Google established the format now used industry-wide.

Hugging Face · Model Hub requirement

Requires model cards for all models on the Hub. Standard template built into the platform. Over 200,000 model cards published.

OpenAI · GPT-4 Technical Report

Extensive model card covering capabilities, limitations, safety evaluations. Available publicly.

Anthropic · Claude Model Cards

Publishes model cards for each Claude version including safety evaluations, CBRN testing, and red-teaming results.

Common Pitfalls

Pitfall: One model card for all deployments

Fix: A model card describes a model in a specific deployment context. A base model card plus deployment-specific addenda for each use case is better practice.

Pitfall: Never updated after initial release

Fix: Model cards should be versioned. When the model is retrained, when limitations are discovered, or when the use case changes — update the card.

Pitfall: Performance metrics without subgroup breakdown

Fix: Overall accuracy alone is insufficient for compliance. EU AI Act Article 10 requires an examination for possible biases, so performance must be reported by relevant subgroup, including demographic groups.

Pitfall: Too technical — unreadable by compliance or legal

Fix: Model cards must be understandable by users, deployers, and regulators — not just ML engineers. Plain language in the limitations and ethical sections is a requirement, not a nice-to-have.

Generate model card templates in ComplianceIQ

ComplianceIQ generates model card templates pre-mapped to EU AI Act Article 13 requirements for each AI system in your inventory.

Start free