EU AI Act and General Purpose AI (GPAI) Models: What Providers Must Know
The EU AI Act contains a dedicated chapter (Chapter V) for General Purpose AI models, the foundation models that power most of today's AI products. GPAI obligations have been in force since August 2, 2025. If you develop, fine-tune, or distribute a foundation model to EU customers, this applies to you.
What Is a General Purpose AI Model?
The EU AI Act defines a GPAI model as an AI model trained on large amounts of data using self-supervision at scale, that displays significant generality, and that is capable of performing a wide range of distinct tasks. Key points:
GPAI models are distinct from AI systems: a model is the underlying trained artefact, while an AI system integrates a model into a product or service.
The scope is intentionally broad — large language models, multimodal models, code generation models, and image generation models all fall within scope.
Open-source GPAI models released under a free and open-source licence receive a partial exemption: the copyright compliance policy and public training-data summary still apply, but the technical documentation obligations are waived, unless the model poses systemic risk.
The threshold for "systemic risk" designation is training using more than 10²⁵ FLOPs (floating point operations) — currently the territory of frontier labs.
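The threshold check itself is simple arithmetic. Below is a minimal sketch; the helper names and the ~6 × parameters × tokens compute estimate are illustrative assumptions, not anything defined in the Act:

```python
# Hypothetical helper names; the 6 * params * tokens rule of thumb is a
# common rough estimate of training compute, not an AI Act formula.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # Article 51 presumption threshold

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def is_presumed_systemic_risk(training_flops: float) -> bool:
    """True if cumulative training compute exceeds 10^25 FLOPs."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

# A 70B-parameter model trained on 15T tokens lands at ~6.3e24 FLOPs,
# below the 10^25 presumption threshold.
print(is_presumed_systemic_risk(estimate_training_flops(70e9, 15e12)))  # → False
```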
Two Tiers of GPAI Obligation
All GPAI model providers
In force: August 2, 2025. Applies to: all GPAI model providers placing models on the EU market, regardless of parameter count.
- Technical documentation — sufficient information for downstream providers to comply with their own EU AI Act obligations
- Information about training data including summary of copyrighted content used
- Compliance with EU copyright law for training data (Directive 2019/790 Article 4)
- Policy for complying with copyright opt-outs (robots.txt / TDM opt-out signals)
- Summary of training content made publicly available, using the template provided by the EU AI Office
GPAI models with systemic risk
In force: August 2, 2025. Applies to: models trained with >10²⁵ FLOPs; current examples include GPT-4, Claude 3+, and Gemini Ultra. The assessment may extend to smaller models in future.
- All obligations of standard GPAI providers PLUS:
- Model evaluation and adversarial testing (red-teaming)
- Incident and near-miss reporting to the AI Office
- Cybersecurity protection measures for model weights
- Energy efficiency reporting
- Governance and accountability measures for systemic risk identification
Are You a GPAI Provider? Five Scenarios
You develop a foundation model from scratch
Full provider obligations apply: technical documentation, training-data summary, and copyright compliance, plus notification to the Commission if the model meets the systemic-risk threshold.
You fine-tune an open-source foundation model (Llama, Mistral, Falcon)
If the fine-tuned model is made available to others, you may become a provider of the modified model. The open-source base-model exemption covers only the original publisher, not fine-tuners who redistribute.
You build an application on top of a GPAI model (API-based)
You are subject to high-risk AI obligations if applicable, but not GPAI-specific obligations. The GPAI provider must give you sufficient technical documentation to comply.
You distribute a third-party GPAI model under a private label
Rebranding or white-labelling a GPAI model makes you a provider with full obligations.
You use a GPAI model internally (not distributed)
Internal use without making the model available to others falls under deployer obligations, not GPAI provider obligations.
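The five scenarios above can be condensed into a rough decision sketch. Everything here (function name, flags, role labels) is an illustrative assumption, not a legal test:

```python
# Illustrative only: a crude mapping from the scenarios above to roles.
def gpai_role(builds_or_finetunes: bool, makes_available: bool,
              rebrands_third_party: bool) -> str:
    if rebrands_third_party:
        return "GPAI provider (white-label: full obligations)"
    if builds_or_finetunes and makes_available:
        return "GPAI provider"
    if builds_or_finetunes:
        return "deployer (internal use only)"
    return "downstream provider (no GPAI-specific obligations)"

# Fine-tuning an open model and redistributing it:
print(gpai_role(builds_or_finetunes=True, makes_available=True,
                rebrands_third_party=False))  # → GPAI provider
```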
Training Data and Copyright: The Critical Obligation
One of the most operationally significant GPAI obligations is training data documentation and copyright compliance. Three requirements apply:
Training data summary
A sufficiently detailed summary of the content used for training, made publicly available using the AI Office's template. This must be detailed enough for downstream providers to assess copyright risk.
Copyright compliance policy
GPAI providers must have a policy to comply with EU copyright law — specifically Directive 2019/790. This includes respecting rights holders' opt-out signals.
TDM opt-out compliance
Under Article 4 of Directive 2019/790, rights holders can opt out of text and data mining (TDM) for AI training via machine-readable signals (typically robots.txt). Providers must honour these.
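A minimal sketch of the robots.txt part of such a policy, using Python's standard `urllib.robotparser`; the crawler name `ExampleAIBot` is hypothetical, and real TDM opt-outs may also be expressed through other machine-readable signals:

```python
from urllib import robotparser

def may_crawl_for_training(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True only if robots.txt permits this user agent to fetch the URL."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)

# A site that opts its content out for one (hypothetical) AI crawler:
robots = """\
User-agent: ExampleAIBot
Disallow: /
"""

print(may_crawl_for_training(robots, "ExampleAIBot", "https://example.com/a"))  # → False
print(may_crawl_for_training(robots, "OtherBot", "https://example.com/a"))      # → True
```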
Pending copyright cases against major AI labs (Getty Images, New York Times, etc.) may alter how these obligations are interpreted. Legal counsel should advise on your training data documentation and copyright compliance strategy.
The GPAI Code of Practice
The EU AI Act mandates a Code of Practice to operationalise GPAI obligations. The AI Office developed it through working groups with stakeholder input:
- The final version of the GPAI Code of Practice was published in July 2025
- Providers who sign and adhere to the Code can rely on it to demonstrate compliance
- Drafting was led by independent experts, with major AI labs and other stakeholders contributing through the working groups
- Non-participation is not penalised, but signing is the simplest route to demonstrating compliance
- The Code covers: technical documentation standards, training data summaries, red-teaming methodology, incident reporting formats
Enforcement: Who Does What
EU AI Office
Central authority for GPAI model supervision. Conducts investigations, can impose fines, coordinates with national authorities.
National Market Surveillance Authorities
Responsible for supervising AI systems (the products built on GPAI models). Different body from the AI Office.
European Commission
Can request information from GPAI providers. Overarching governance.
GPAI penalties
Non-compliance with GPAI obligations: up to €15 million or 3% of global annual turnover, whichever is higher. Providing incorrect, incomplete, or misleading information to authorities: up to €7.5 million or 1% of global annual turnover.
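The GPAI fine ceiling is a "whichever is higher" cap (Article 101), which can be sketched in one line; the function name and example turnovers are illustrative:

```python
def gpai_fine_ceiling(annual_turnover_eur: float) -> float:
    """Maximum GPAI non-compliance fine: the higher of €15m or 3% of turnover."""
    return max(15_000_000.0, 0.03 * annual_turnover_eur)

print(gpai_fine_ceiling(100_000_000))    # €15m floor applies (3% would only be €3m)
print(gpai_fine_ceiling(2_000_000_000))  # 3% of turnover: €60m
```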
Track your GPAI compliance obligations
ComplianceIQ tracks EU AI Act obligations for both GPAI providers and downstream deployers, with deadline monitoring and documentation templates.
Start free