EU AI Act Prohibited Practices: What's Banned and When
The EU AI Act bans eight AI practices outright. These are not merely high-risk; they sit in the unacceptable-risk tier. Systems in these categories cannot be placed on the EU market, put into service, or used. The ban has been in effect since February 2, 2025.
Important: The prohibited practices ban is already in force
Article 5 (prohibited practices) applied from February 2, 2025, six months after the Act entered into force and well ahead of most of its other obligations. If your AI system falls into one of these categories, you are already in violation. The August 2, 2026 deadline is for high-risk AI obligations. The ban is now.
The eight banned AI practices
1. Subliminal manipulation (Article 5(1)(a))
What it is: AI systems that use subliminal techniques operating below a person's level of awareness, or purposefully manipulative or deceptive techniques, to materially distort behavior in ways that cause or are likely to cause significant harm. Classic example: subliminal advertising embedded in content, or AI-driven persuasion that exploits subconscious biases without users knowing.
What is not banned: Normal persuasive interfaces, recommendations, advertising, and personalization are not banned. The key word is "subliminal" — techniques the user cannot consciously perceive or resist.
2. Exploiting vulnerabilities (Article 5(1)(b))
What it is: AI that exploits specific vulnerabilities of a person or group of persons due to age, disability, or a specific social or economic situation, to materially distort their behavior in a way that causes or is likely to cause significant harm. Example: AI systems targeting elderly people with manipulative financial products, or AI targeting children with gambling mechanics.
Practical implication: AI used in marketing to vulnerable populations needs careful review. This does not ban serving products to elderly or disabled people — it bans AI that exploits those vulnerabilities specifically.
3. Social scoring (Article 5(1)(c))
What it is: AI systems that evaluate or classify natural persons or groups of persons based on their social behavior or known, inferred, or predicted personal characteristics, leading to detrimental treatment that is unrelated to the context in which the data was originally collected, or that is disproportionate to the behavior in question. Think China-style "social credit scoring." Unlike earlier drafts, the final Act bans social scoring by private companies as well as public authorities.
4. Real-time biometric identification in public spaces (Article 5(1)(h))
What it is: "Real-time" remote biometric identification of people in publicly accessible spaces by law enforcement. This bans live facial recognition by police on public streets.
Narrow exceptions exist: Targeted searches for missing persons or victims of abduction and trafficking, preventing an imminent terrorist attack or threat to life, or locating suspects of certain serious crimes, each requiring specific prior authorization from a judicial or independent administrative authority. These exceptions are narrow and subject to oversight.
What is not banned: Biometric verification (comparing one face to one stored image for authentication), "post" (retrospective) remote biometric identification with authorization, private use of facial recognition (e.g., employer time clocks), and biometric identification used for non-law-enforcement purposes. The ban specifically targets real-time mass surveillance by law enforcement.
5. Emotion recognition in workplaces and schools (Article 5(1)(f))
What it is: AI systems that infer the emotional states of people from biometric data (facial expressions, heart rate, voice tone) in the workplace or in educational institutions. Example: "employee engagement AI" that analyzes facial expressions during meetings to detect frustration or boredom.
What is not banned: Medical systems that infer emotion for health purposes, and systems for safety purposes (detecting drowsiness in vehicle operators, for instance). The ban specifically targets non-medical emotion surveillance.
6. Biometric categorization to infer sensitive attributes (Article 5(1)(g))
What it is: AI that uses biometric data to infer and categorize individuals by sensitive attributes: race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. This bans facial recognition systems trained to predict political affiliation from face images.
7. Predictive policing based on profiling (Article 5(1)(d))
What it is: AI systems used by or on behalf of law enforcement to assess or predict the risk of an individual committing a criminal offence based solely on profiling or assessment of their personality traits and characteristics. The ban targets predictions built purely on inferred characteristics; systems that support a human assessment grounded in objective, verifiable facts directly linked to criminal activity are excluded. In short, it bans systems that predict who will commit crime based on demographic or behavioral profiles.
8. Facial recognition databases scraped from the internet (Article 5(1)(e))
What it is: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage, essentially what Clearview AI does. This is now banned.
Penalties for violating the prohibited practices
Violations of Article 5 (prohibited practices) carry the highest penalties in the EU AI Act: up to €35 million or 7% of total worldwide annual turnover, whichever is higher. That exceeds even GDPR's top tier of €20 million or 4%. For a company with €1 billion in revenue, that is a potential fine of €70 million.
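To make the "whichever is higher" mechanic concrete, here is a minimal sketch in Python; the function name is ours for illustration, not from any official source:

```python
def max_article5_fine(worldwide_annual_turnover_eur: float) -> float:
    # Top penalty tier: up to EUR 35 million or 7% of total worldwide
    # annual turnover, whichever is higher.
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# At EUR 1 billion turnover, 7% (EUR 70 million) beats the EUR 35 million floor.
print(f"EUR {max_article5_fine(1_000_000_000):,.0f}")  # EUR 70,000,000
```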
Does any of this apply to you?
For most companies, the answer is no. The banned practices target specific, extreme applications. Standard chatbots, document AI, hiring tools, customer service AI, and recommendation engines are not banned — they may be high-risk, but they are not prohibited.
Check your AI systems if you work in: security technology, emotion analytics, employee monitoring, social media behavioral analysis, or law enforcement tools. These are the areas most likely to touch the prohibited practice definitions.
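If you want a starting point for that review, here is a hypothetical triage sketch; every system name, tag, and function in it is invented for illustration, and keyword overlap is no substitute for legal analysis:

```python
# Illustrative only, not legal advice: tag each AI system in your
# inventory with its use-case areas, then flag any overlap with the
# areas listed above for closer human and legal review.

REVIEW_TRIGGER_AREAS = {
    "security technology",
    "emotion analytics",
    "employee monitoring",
    "social media behavioral analysis",
    "law enforcement tools",
}

def needs_article5_review(use_case_areas: set[str]) -> bool:
    """True if any declared use-case area overlaps a trigger area."""
    return bool(use_case_areas & REVIEW_TRIGGER_AREAS)

inventory = {
    "meeting-sentiment-bot": {"emotion analytics", "employee monitoring"},
    "support-chatbot": {"customer service"},
}

for name, areas in inventory.items():
    if needs_article5_review(areas):
        print(f"{name}: flag for Article 5 review")
```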
Verify your AI systems are not in prohibited territory
ComplianceIQ scans your AI tools against the EU AI Act's prohibited practice definitions and flags any systems that need review.
Check your AI systems now →