EU AI Act Implementation Guide for SMEs — Step-by-Step
The EU AI Act applies to any company using AI to serve EU customers — regardless of where you are incorporated. This step-by-step guide covers exactly what SMEs need to do, in order of urgency.
Does the EU AI Act Apply to You?
The Act applies if you are a provider (you develop/sell AI) or a deployer (you use AI in your products/services) and any of the following are true:
- You are based in the EU
- Your AI system’s output is used in the EU (even if you are based outside the EU)
- You place AI systems on the EU market or put them into service in the EU (note: processing EU residents' personal data triggers GDPR, which applies alongside the AI Act)
This means a US startup using an AI hiring tool for EU candidates, or an Australian SaaS company using an AI chatbot for EU customers, is within scope.
SME provisions
The EU AI Act includes specific SME provisions: regulatory sandboxes with priority access for SMEs and startups, simplified conformity assessment pathways, and reduced fee structures for registration. Being an SME does not exempt you — but it does give you access to support.
The Enforcement Timeline: What’s Already Live
| Date | Event | What it means | Impact |
|---|---|---|---|
| Feb 2025 | Chapters I + II in force | Prohibited AI practices banned immediately | HIGH: stop any real-time biometric surveillance, social scoring, or subliminal AI uses |
| Aug 2025 | GPAI rules apply | General-purpose AI model obligations | MEDIUM: applies if you train or distribute AI models; see the GPAI Code of Practice |
| Aug 2026 | High-risk AI rules apply (Annex III) | AI in hiring, credit, education, critical infrastructure | HIGH: registration, DPIA, human oversight, transparency |
| Aug 2027 | High-risk AI rules apply (Annex I) | AI embedded in regulated products (medical devices, vehicles) | HIGH: conformity assessment, CE marking, technical documentation |
Step 1: Build Your AI System Inventory (Week 1–2)
You cannot comply with a law you do not know applies to you. Start by listing every AI system your company uses or has built. Include:
- AI systems you provide to customers (your own products)
- AI systems you deploy internally (HR tools, credit scoring, chatbots)
- AI systems embedded in third-party services you use (automated resume screening in your ATS, AI fraud detection in your payment processor)
For each system, record: system name, vendor (if third-party), primary use case, data inputs, decision outputs, and whether it affects natural persons.
Step 2: Classify Your AI Systems by Risk (Week 2–3)
The EU AI Act uses a four-tier risk hierarchy. Your obligations depend entirely on which tier each system falls into:
Unacceptable Risk (Prohibited)
Examples: Real-time biometric surveillance in public, social credit scoring, subliminal manipulation, AI that exploits vulnerabilities of specific groups
→ Immediately stop. No exceptions.
High Risk
Examples: AI in hiring/HR, credit scoring, education assessment, critical infrastructure, biometric ID, law enforcement, migration, administration of justice
→ Conformity assessment, registration, human oversight, technical documentation required
Limited Risk (Transparency obligations)
Examples: Chatbots, AI-generated content, emotion recognition, deepfakes
→ Must disclose to users that they are interacting with an AI system
Minimal Risk
Examples: AI spam filters, AI-powered search, recommendation engines that do not affect significant decisions
→ No mandatory obligations; voluntary codes of practice available
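As a first pass, the four tiers can be triaged with a keyword screen over each system's recorded use case. This is only a rough filter under the assumption that your inventory descriptions mention these terms — borderline systems still need proper legal review:

```python
# First-pass triage; keyword lists are illustrative, not exhaustive.
PROHIBITED = {"social scoring", "real-time biometric surveillance",
              "subliminal manipulation"}
HIGH_RISK = {"hiring", "credit scoring", "education assessment",
             "biometric id", "critical infrastructure", "law enforcement",
             "migration"}
LIMITED_RISK = {"chatbot", "ai-generated content", "emotion recognition",
                "deepfake"}

def classify(use_case: str) -> str:
    """Return the risk tier suggested by keywords in a use-case description."""
    text = use_case.lower()
    if any(k in text for k in PROHIBITED):
        return "unacceptable"
    if any(k in text for k in HIGH_RISK):
        return "high"
    if any(k in text for k in LIMITED_RISK):
        return "limited"
    return "minimal"
```

For example, `classify("Credit scoring model for loan approvals")` returns `"high"`, flagging the system for the Step 3 checklist.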
Step 3: High-Risk AI Compliance Requirements
If any of your systems fall into the high-risk category (common for SMEs using AI in HR, customer assessment, or regulated products), the core obligations are:
- Risk management system maintained across the system's lifecycle (Article 9)
- Data governance: relevant, representative training, validation, and test datasets (Article 10)
- Technical documentation kept up to date (Article 11)
- Automatic event logging (Article 12)
- Transparency and instructions for use for deployers (Article 13)
- Human oversight measures (Article 14)
- Accuracy, robustness, and cybersecurity (Article 15)
- Conformity assessment and CE marking before placing on the market
- Registration in the EU database for high-risk AI systems
Step 4: Transparency Obligations (All Chatbots and AI-Generated Content)
Even if none of your systems are high-risk, you still have transparency obligations if you use:
- Chatbots: Users must be informed they are interacting with an AI system (Article 50(1))
- AI-generated content: Synthetic images, audio, and video must be marked as AI-generated in a machine-readable way (Article 50(2))
- Emotion recognition: Users must be informed when emotion recognition is being used on them (Article 50(3))
- Deepfakes: Must be clearly disclosed, with limited exceptions for satire and artistic works (Article 50(4))
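In practice, the chatbot and content-labelling duties reduce to a disclosure step in your output path. A minimal sketch — function names and the metadata field are hypothetical, and how you surface the disclosure in your UI is up to you:

```python
def with_ai_disclosure(reply: str, first_turn: bool) -> str:
    """Prepend a one-time disclosure so users know they are talking to an AI."""
    if first_turn:
        return "You are chatting with an AI assistant.\n\n" + reply
    return reply

def label_generated_media(metadata: dict) -> dict:
    """Attach a machine-readable AI-generated marker to media metadata."""
    return {**metadata, "ai_generated": True}
```

The point is architectural: disclosure belongs in the output pipeline itself, not in a terms-of-service page, so it cannot be skipped per request.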
Step 5: GDPR Intersection — What AI Means for Your Privacy Obligations
AI that processes personal data must comply with both the EU AI Act and GDPR simultaneously. The key interaction points:
- Data minimisation: AI models must be trained on minimum necessary data — GDPR principle applies to training datasets
- DPIA requirement: Article 35 GDPR requires a Data Protection Impact Assessment for AI systems that involve large-scale processing or systematic monitoring
- Article 22 right: Individuals have the right not to be subject to solely automated decisions with significant effects — requires human review pathway for high-risk AI decisions
- Legal basis for AI training: Using customer data to train AI models requires a valid GDPR legal basis, typically consent or a documented legitimate-interest assessment
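These interaction points can be turned into a rough checklist generator keyed off flags in your Step 1 inventory. The flag names below are assumptions chosen for this sketch, and the output is a to-do list, not legal advice:

```python
def gdpr_obligations(system: dict) -> list[str]:
    """Map inventory flags to GDPR action items (simplified, illustrative)."""
    actions = []
    if system.get("large_scale_processing") or system.get("systematic_monitoring"):
        actions.append("Run a DPIA (Article 35 GDPR)")
    if system.get("solely_automated_decision") and system.get("significant_effect"):
        actions.append("Provide a human-review pathway (Article 22 GDPR)")
    if system.get("trains_on_customer_data"):
        actions.append("Document legal basis: consent or legitimate-interest assessment")
    # Data minimisation applies to every system that touches personal data.
    actions.append("Check training data against the data-minimisation principle")
    return actions
```

Running this over your inventory gives a per-system privacy backlog you can track alongside the AI Act obligations.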
Penalties: What SMEs Are Actually at Risk Of
EU AI Act fines are tiered by violation severity:
- Prohibited AI practices: up to €35 million or 7% of global annual turnover, whichever is higher
- Non-compliance with other obligations (documentation, registration, human oversight): up to €15 million or 3% of global annual turnover, whichever is higher
- Supplying incorrect or misleading information to authorities: up to €7.5 million or 1% of global annual turnover, whichever is higher
For SMEs and startups, the Act caps each fine at whichever of the two amounts is lower. Even so, 3% of global turnover can be substantial for a growing company.
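The arithmetic is simple: each tier pairs a fixed cap with a turnover percentage, and the applicable ceiling is the higher of the two (the lower for SMEs). A quick sketch of that calculation, with illustrative tier names:

```python
def max_fine(turnover_eur: float, tier: str, is_sme: bool = False) -> float:
    """Maximum fine ceiling per tier; SMEs face the lower of the two caps."""
    caps = {
        "prohibited": (35_000_000, 0.07),
        "other_obligations": (15_000_000, 0.03),
        "misleading_info": (7_500_000, 0.01),
    }
    fixed_cap, pct = caps[tier]
    pick = min if is_sme else max
    return pick(fixed_cap, pct * turnover_eur)
```

So an SME with €10M turnover faces an "other obligations" ceiling of €300,000 (3% of turnover, since that is lower than the €15M fixed cap), while a large provider with €1B turnover faces a prohibited-practices ceiling of €70M.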
Get EU AI Act Compliant This Quarter
ComplianceIQ classifies your AI systems under EU AI Act automatically, generates required documentation, and tracks your compliance posture across all applicable jurisdictions.
Start Your AI Act Assessment