Open Source AI and the EU AI Act: What Developers Need to Know
The EU AI Act has a specific, and limited, exemption for open-source AI models. It does not exempt the companies that deploy open-source models in products that affect EU users. Here is what the Act actually says, what it means for developers, and what you still need to do if you build on Llama, Mistral, or other open models.
The short answer
- Open-source model publishers (Meta releasing Llama weights, Mistral releasing Mistral 7B): partially exempt from general-purpose AI model transparency obligations.
- Companies deploying open-source models in EU-facing products: NOT exempt. You are the deployer/operator. All applicable obligations apply to you.
- High-risk AI systems built on open-source models: full EU AI Act high-risk obligations apply. The base model being open-source does not change the risk classification of your application.
What the EU AI Act actually says about open source
Recital 102, Article 2(12), and Article 53(2)
The EU AI Act addresses open source in two places. Article 2(12) takes AI systems released under free and open-source licences out of the Regulation's scope altogether, unless they are placed on the market as high-risk systems or fall under the prohibited-practices (Article 5) or transparency (Article 50) rules. For general-purpose AI (GPAI) models, Article 53(2), read with Recital 102, exempts providers of models "released under a free and open-source licence that allows access, use, modification, and distribution of the model, and whose parameters, including the weights, are made publicly available" from part of the GPAI transparency obligations.
The exemption covers the technical documentation duties in Article 53(1)(a) and (b). It does not cover Article 53(1)(c) and (d): even open-source providers must maintain a copyright-compliance policy and publish a summary of training content. And it falls away entirely for models with systemic risk.
EU AI Act Article 53(2), paraphrased
"The technical documentation obligations shall not apply to providers of AI models released under a free and open-source licence whose parameters, including the weights, are made publicly available, unless the model is a general-purpose AI model with systemic risk."
The "systemic risk" carve-out
Even open-source model publishers are not fully exempt if their model has "systemic risk." Under Article 51, a general-purpose AI model is presumed to have systemic risk if the cumulative compute used for its training exceeds 10^25 floating-point operations (FLOPs).
This threshold matters in practice: Llama 3 70B was trained well below it (roughly 6 × 10^24 FLOPs by the standard estimate). Llama 3.1 405B exceeds it (Meta reported about 3.8 × 10^25 FLOPs), as do GPT-4-class models. The 10^25 FLOP line is effectively the boundary between "large open-source model" and "frontier model with systemic risk."
Open-source models above the systemic risk threshold lose the exemption and must comply in full, including the Article 55 obligations: model evaluation with adversarial testing, assessment and mitigation of systemic risks, serious-incident reporting to the EU AI Office, and adequate cybersecurity protection.
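How do you estimate where a model lands? For dense transformers, the standard back-of-the-envelope figure is training compute ≈ 6 × parameters × training tokens. A minimal sketch of that arithmetic (the parameter and token counts below are public estimates, and this approximation is not the Act's official measurement method):

```python
# Rough training-compute estimate: FLOPs ~= 6 * N_params * N_tokens.
# Standard dense-transformer approximation; model figures are public
# estimates, not legal determinations.

SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51 presumption, in FLOPs

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens

models = {
    # name: (parameters, training tokens) -- public estimates
    "Llama 3 70B": (70e9, 15e12),
    "Llama 3.1 405B": (405e9, 15e12),
}

for name, (params, tokens) in models.items():
    flops = training_flops(params, tokens)
    status = ("presumed systemic risk" if flops > SYSTEMIC_RISK_THRESHOLD
              else "below threshold")
    print(f"{name}: ~{flops:.1e} FLOPs -> {status}")
```

Running this puts Llama 3 70B at about 6.3e24 FLOPs (below the line) and Llama 3.1 405B at about 3.6e25 (above it), consistent with Meta's own reported figure.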
What the open-source exemption covers — and what it does not
| Obligation | Open-source publisher | Developer deploying open-source |
|---|---|---|
| Technical documentation (Art. 11) | N/A if only publishing weights | ✓ Required if high-risk system |
| GPAI model transparency (Art. 53(1)(a)-(b)) | ✗ Exempt (below systemic risk) | N/A (falls on model providers) |
| Copyright policy & training-content summary (Art. 53(1)(c)-(d)) | ✓ Required even when open source | N/A (falls on model providers) |
| Systemic risk obligations (Art. 55) | ✓ Required if >10^25 FLOPs | N/A (falls on model providers) |
| High-risk AI system obligations (Ch. III) | N/A if only publishing weights | ✓ Required if application is high-risk |
| Human oversight mechanisms (Art. 14) | N/A if only publishing weights | ✓ Required if high-risk application |
| Post-market monitoring (Art. 72) | N/A if only publishing weights | ✓ Required if high-risk application |
| CE marking / EU declaration (Art. 48) | N/A if only publishing weights | ✓ Required for high-risk systems |
| Prohibited AI bans (Art. 5) | ✓ Must not facilitate prohibited AI | ✓ Must not deploy prohibited AI applications |
| GDPR compliance | ✓ If processing EU personal data | ✓ Almost certainly required |
✓ = obligation applies · ✗ = exempt · N/A = not applicable to this role
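The table reduces to a small decision function per role. A sketch that encodes the same logic, heavily simplified and not legal advice (the function names and category strings are mine, not the Act's):

```python
# Simplified encoding of the table above: which EU AI Act duties attach
# to which role. Categories are condensed for illustration only.

SYSTEMIC_RISK_FLOPS = 1e25

def publisher_obligations(training_flops: float) -> list[str]:
    """Duties for an open-source model publisher (weights released)."""
    duties = [
        "Art. 5: must not facilitate prohibited AI",
        "Art. 53(1)(c)-(d): copyright policy + training-content summary",
        "GDPR: if processing EU personal data",
    ]
    if training_flops > SYSTEMIC_RISK_FLOPS:
        duties.append("Art. 55: systemic-risk obligations (exemption lost)")
    return duties

def deployer_obligations(high_risk: bool) -> list[str]:
    """Duties for a developer deploying an open-source model in the EU."""
    duties = [
        "Art. 5: must not deploy prohibited AI",
        "Art. 50: disclose AI identity to users",
        "GDPR: almost certainly applies",
    ]
    if high_risk:
        duties += [
            "Art. 11: technical documentation",
            "Art. 14: human oversight",
            "Art. 72: post-market monitoring",
            "Art. 48: CE marking / declaration of conformity",
        ]
    return duties

print(deployer_obligations(high_risk=True))
```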
Practical scenarios for developers
You use Llama 3.2 to build a customer service chatbot for a retail company with EU customers
Meta publishes Llama under its own community licence; whether that licence counts as "free and open-source" for Article 53(2) is debated, since it carries use restrictions. Either way, the publisher-side question is Meta's, not yours: you are the deployer of the chatbot. If the chatbot makes any decisions with significant effects on individuals, you must assess whether it is high-risk. A retail customer service chatbot is likely low-risk under EU AI Act classification, but you still cannot deploy prohibited AI functions (manipulation, deception about AI identity) and you still have GDPR obligations for any personal data the chatbot processes.
Actions required:
- Assess chatbot risk level under EU AI Act Annex III — retail chatbot is likely low-risk.
- Implement GDPR requirements for personal data processed in conversations.
- Ensure no prohibited AI functions: no manipulation, no deception about AI identity.
- Provide transparency to users that they are interacting with AI (EU AI Act Article 50); a minimal disclosure pattern is sketched below.
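Article 50's core requirement for a chatbot is that users know they are talking to a machine. A minimal sketch of a once-per-session disclosure wrapper (the function names, disclosure text, and session handling are illustrative assumptions, not from any particular framework):

```python
# Minimal AI-identity disclosure for a chat endpoint (Article 50 sketch).
# `generate_reply` stands in for whatever model call you actually use.

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant. "
    "Ask to speak with a human agent at any time."
)

def generate_reply(user_message: str) -> str:
    # Placeholder for your actual model call (e.g. a local Llama endpoint).
    return f"(model reply to: {user_message!r})"

def handle_chat_turn(session: dict, user_message: str) -> list[str]:
    """Return the messages to show the user for one chat turn."""
    messages = []
    if not session.get("disclosed_ai_identity"):
        messages.append(AI_DISCLOSURE)        # disclose once, up front
        session["disclosed_ai_identity"] = True
    messages.append(generate_reply(user_message))
    return messages

# Example turn:
session: dict = {}
for line in handle_chat_turn(session, "Where is my order?"):
    print(line)
```

Disclosing up front, rather than only on request, is the safer reading of Article 50 for a customer-facing bot.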
You fine-tune Mistral 7B on medical records and use it to support clinical decision-making for EU hospitals
AI used to support clinical decisions is typically a medical device, which makes it high-risk under Article 6(1) and Annex I of the EU AI Act (the route that captures products already regulated under EU product-safety law such as the Medical Device Regulation); emergency healthcare triage is separately listed in Annex III point 5(d). The base model being open-source does not change your risk classification. You are deploying a high-risk AI system and must comply with Chapter III obligations: technical documentation, risk management, data governance, human oversight (a minimal oversight gate is sketched after the action list below), transparency to users, post-market monitoring, CE marking, and an EU declaration of conformity. The Medical Device Regulation (MDR) will almost certainly apply in parallel.
Actions required:
- Complete technical documentation (Article 11) for the high-risk system.
- Implement risk management system throughout the system lifecycle.
- Conduct conformity assessment — for medical AI, this typically requires external notified body involvement.
- Register the high-risk AI system in the EU AI Act database before market placement.
- Check MDR classification: AI used for diagnosis or treatment decisions may qualify as software as a medical device (SaMD).
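Article 14 requires that a human can understand, override, and reject the system's output before it takes effect. A minimal sketch of that gating pattern for clinical suggestions (all types and names here are illustrative; a real system would also log each review for the audit trail):

```python
# Human-in-the-loop gate for a clinical decision-support suggestion.
# Names are illustrative. The essential Article 14 property: nothing
# reaches the care plan without an affirmative human decision.

from dataclasses import dataclass

@dataclass
class Suggestion:
    patient_id: str
    recommendation: str
    model_confidence: float  # surfaced so the reviewer can weigh it

def clinician_review(suggestion: Suggestion) -> bool:
    """Stand-in for a real review UI; returns the clinician's decision."""
    print(f"Patient {suggestion.patient_id}: {suggestion.recommendation} "
          f"(confidence {suggestion.model_confidence:.0%})")
    return input("Approve? [y/N] ").strip().lower() == "y"

def apply_if_approved(suggestion: Suggestion) -> None:
    if clinician_review(suggestion):
        print("Approved by clinician -> recorded in care plan.")
    else:
        print("Rejected -> suggestion logged, no action taken.")
```

The design point is that rejection is a first-class outcome, not an error path: the default is no action until a clinician affirmatively approves.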
You build an internal HR tool using an open-source model that ranks job applicants
Employment AI is explicitly listed in EU AI Act Annex III as high-risk. AI used for "recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter applications, and to evaluate candidates in the course of interviews or tests" is high-risk regardless of the underlying model's open-source status. The same tool also implicates NYC Local Law 144 (if you screen New York City applicants), Title VII as enforced by the EEOC, and Colorado SB 205.
Actions required:
- Full EU AI Act high-risk compliance including technical documentation, bias testing, human oversight.
- Commission the NYC LL144 annual bias audit if you screen New York City applicants.
- Conduct a disparate impact analysis under EEOC guidelines; a minimal four-fifths-rule check is sketched below.
- Ensure HR staff can meaningfully review and override AI rankings.
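A common starting point for the disparate impact analysis is the EEOC four-fifths rule: each group's selection rate should be at least 80% of the most-selected group's rate. NYC LL144's impact ratios are computed the same way. A minimal sketch with fabricated counts:

```python
# Four-fifths (80%) rule check on selection rates by group.
# Counts below are fabricated for illustration only.

selected = {"group_a": 48, "group_b": 30}    # candidates the AI advanced
applied  = {"group_a": 100, "group_b": 100}  # candidates screened

rates = {g: selected[g] / applied[g] for g in applied}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "OK" if impact_ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.0%}, "
          f"impact ratio {impact_ratio:.2f} -> {flag}")
```

Here group_b's impact ratio is 0.62, well under 0.8, so this hypothetical system would warrant investigation before deployment. A ratio below 0.8 is a screening signal, not a legal verdict; practitioners typically follow up with significance testing.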
You build a developer tool that lets users run open-source LLMs locally on their own machines
A tool that helps developers run local LLMs (LM Studio or Ollama-type products) is generally low-risk under the EU AI Act: the tool itself does not make decisions about individuals. The prohibited-practices ban (Article 5), however, applies to anything placed on the EU market, so you cannot knowingly facilitate prohibited uses. If your tool makes it easier to build social scoring systems, manipulative AI, or real-time biometric surveillance, that exposure exists even where such uses are possible rather than intended.
Actions required:
- Assess whether your tool is likely to be used for prohibited AI applications.
- Add clear terms of service that forbid use of the tool for Article 5 prohibited practices.
- Ensure compliance with EU AI Act Article 50 transparency obligations if end users interact with AI.
- Maintain GDPR compliance for any user data your tool collects.
Common misconceptions developers have
Myth: "The model is open source, so I don't have any AI Act obligations"
Reality: The open-source exemption applies to model publishers, not to companies that deploy models in products. If you deploy open-source AI in an EU-facing product, you are the deployer and all applicable obligations apply to you.
Myth: "I'm using an API, not running the model myself, so the AI Act doesn't apply to me"
Reality: Using an API makes you a deployer of the provider's AI system. Deployers have their own obligations under the EU AI Act — including human oversight, transparency to users, and not using the AI for prohibited purposes.
Myth: "My company is US-based, so EU law doesn't apply"
Reality: The EU AI Act has the same extraterritorial reach as GDPR. If your AI affects people in the EU, the Act applies regardless of where your company is registered. GDPR enforcement shows how this plays out in practice: Italy's data protection authority temporarily banned US-based OpenAI's ChatGPT in 2023, and US-based Clearview AI has been fined in multiple EU countries.
Myth: "I'm a small startup — AI Act enforcement is for big companies"
Reality: The EU AI Act explicitly includes provisions for SMEs and startups (reduced administrative burden, support from Digital Innovation Hubs). But the obligations still apply. Enforcement priorities may focus on larger companies initially; European regulators have nonetheless shown willingness to pursue smaller entities when there is clear harm.
Summary for developers
1. The open-source exemption is for model publishers (Meta, Mistral), not for everyone who uses those models.
2. If you deploy any open-source model in an EU-facing product, you are the deployer and EU AI Act obligations apply to your application.
3. The risk classification of your application depends on what it does, not what model powers it. A high-risk hiring AI built on Llama is still a high-risk hiring AI.
4. All AI systems that interact with EU users must not use prohibited AI practices, must be transparent about AI identity, and must comply with GDPR.
5. If your application is high-risk, you need technical documentation, risk management, human oversight, conformity assessment, and EU database registration, regardless of your model's open-source status.
Check if your AI application is high-risk
ComplianceIQ classifies your AI system under EU AI Act Annex III and tells you exactly which obligations apply. Free risk report in 4 questions.
Get my free risk report