
EU AI Act Prohibited AI Practices: Is Your System at Risk?

AI Comply HQ

The EU AI Act draws many lines. Risk tiers. Documentation requirements. Transparency obligations. But its hardest, most uncompromising line is the one most organizations still underestimate: outright bans on specific AI practices.

These are not future requirements. They are not recommendations. As of February 2, 2025, Article 5 of the EU AI Act is fully in force. Organizations found deploying prohibited AI practices face the regulation's steepest penalties, up to 35 million euros or 7% of global annual turnover, whichever is higher.

If you build, deploy, or distribute AI systems that touch the European Union in any way, you need to know exactly where these lines are drawn and whether your systems are on the wrong side of them.

This guide walks through every prohibited practice, the reasoning behind each ban, the narrow exceptions that exist, and the practical steps you should take to verify your systems are compliant.

The Eight Prohibited AI Practices Under Article 5

Article 5 of the EU AI Act identifies eight categories of AI practices that are banned outright or, in a few cases, subject only to narrow exceptions. These reflect the European Union's position that certain uses of artificial intelligence are fundamentally incompatible with democratic values and human dignity.

Here is each one, explained in practical terms.

1. Subliminal Manipulation Techniques That Cause Harm

What is banned: AI systems that deploy subliminal techniques operating beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort a person's behavior in a way that causes or is reasonably likely to cause significant harm.

What this means in practice: If your AI system uses techniques that operate below the threshold of user awareness to influence decisions, and those techniques could result in harm, you are in prohibited territory. This includes dark patterns powered by AI that manipulate users into actions against their interest, AI-driven interfaces that exploit cognitive biases through techniques the user cannot perceive, and persuasion engines that use subconscious nudges to alter behavior in harmful ways.

The key qualifier is "beyond a person's consciousness." Transparent recommendation engines and clearly labeled persuasive content are not caught by this provision. The ban targets manipulation that the subject cannot detect or resist.

2. Exploitation of Vulnerabilities

What is banned: AI systems that exploit vulnerabilities of persons due to their age, disability, or a specific social or economic situation, with the objective or effect of materially distorting their behavior in a way that causes or is reasonably likely to cause significant harm.

What this means in practice: This prohibition targets AI systems designed to take advantage of people who are less able to protect themselves. Examples include AI-powered marketing targeting elderly users with misleading financial products, systems that exploit children's developmental vulnerabilities to drive engagement or purchases, and AI that targets economically disadvantaged individuals with predatory lending or pricing algorithms.

Note the breadth of "social or economic situation." An AI system that identifies financially stressed individuals and targets them with high-interest loan products could fall squarely under this ban, even if the targeting methodology is sophisticated and the system's designers did not consciously intend exploitation.

3. Social Scoring

What is banned: AI systems that evaluate or classify individuals or groups over time based on their social behavior or known, inferred, or predicted personal characteristics, where the resulting social score leads to detrimental or unfavorable treatment in social contexts unrelated to the contexts in which the data was generated, or treatment that is unjustified or disproportionate to the behavior itself.

What this means in practice: The EU has drawn a firm line against social credit systems. Early drafts limited this prohibition to public authorities, but the final text applies regardless of who operates the system: general-purpose social scoring that evaluates individuals across multiple contexts and leads to unfavorable treatment in unrelated domains is prohibited for governments and private companies alike.

If your system aggregates behavioral data from multiple sources to assign individuals a "trustworthiness" or "reliability" score that affects their access to services, employment, or social participation, you are likely operating a prohibited system.


4. Real-Time Remote Biometric Identification in Public Spaces by Law Enforcement

What is banned: The use of real-time remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement.

What this means in practice: Live facial recognition in public spaces by police and other law enforcement agencies is prohibited as a general rule. This prohibition carries narrow, explicitly enumerated exceptions (covered below), but the default position is a ban. It covers any system that can identify natural persons at a distance, in real time, by comparing their biometric data against a reference database, which is commonly understood as live facial recognition surveillance.

This ban does not apply to post-facto identification (analyzing recorded footage after the fact), nor does it apply to biometric verification (one-to-one matching, such as unlocking a phone). It specifically targets one-to-many identification performed in real time in public spaces.
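To make the scope test concrete, here is a minimal sketch in Python of the two distinctions that matter: one-to-many identification versus one-to-one verification, and real-time versus post-facto analysis. The enums, function, and examples are our own illustration, not terminology from the regulation, and the narrow law enforcement exceptions are not modeled.

```python
from enum import Enum, auto

class Matching(Enum):
    ONE_TO_ONE = auto()   # verification, e.g. unlocking a phone
    ONE_TO_MANY = auto()  # identification against a reference database

class Timing(Enum):
    REAL_TIME = auto()    # live analysis of a camera feed
    POST_FACTO = auto()   # analysis of recorded footage after the fact

def falls_under_rbi_ban(matching: Matching, timing: Timing,
                        public_space: bool, law_enforcement: bool) -> bool:
    """Simplified screen for the real-time remote biometric identification
    ban: all four conditions must hold for the default prohibition to bite."""
    return (matching is Matching.ONE_TO_MANY
            and timing is Timing.REAL_TIME
            and public_space
            and law_enforcement)

# Unlocking a phone is one-to-one verification: not caught.
assert not falls_under_rbi_ban(Matching.ONE_TO_ONE, Timing.REAL_TIME, True, True)
# Live facial recognition by police in a public square: caught.
assert falls_under_rbi_ban(Matching.ONE_TO_MANY, Timing.REAL_TIME, True, True)
```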

5. Untargeted Scraping for Facial Recognition Databases

What is banned: AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

What this means in practice: If you are building or enriching a facial recognition database by indiscriminately collecting facial images from websites, social media, or surveillance footage, you are operating a prohibited system. This prohibition directly targets the business model pioneered by companies like Clearview AI, which built massive facial recognition databases by scraping billions of images from public websites without consent.

The word "untargeted" is significant. Collecting facial images for a specific, lawful purpose with appropriate consent is not caught by this provision. The ban targets bulk, indiscriminate harvesting.

6. Emotion Recognition in Workplaces and Educational Institutions

What is banned: AI systems that infer emotions of natural persons in the areas of workplace and education, except where the system is intended to be placed on the market for medical or safety reasons.

What this means in practice: Deploying AI to read employees' or students' emotional states is prohibited in workplace and educational settings. This covers systems that analyze facial expressions to gauge employee engagement during meetings, tools that monitor students' emotional responses during online learning, and AI-powered interview platforms that score candidates based on detected emotions.

The medical and safety exceptions are narrow. An AI system that detects driver drowsiness for safety purposes would be permitted. An AI system that monitors factory workers' stress levels to optimize productivity would not.

7. Biometric Categorization Inferring Sensitive Attributes

What is banned: AI systems that categorize natural persons based on their biometric data to infer or deduce sensitive attributes, specifically race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

What this means in practice: Using AI to look at someone's face, voice, gait, or other biometric data and infer their race, religion, political affiliation, or sexual orientation is prohibited. This covers both intentional profiling systems and systems where such inference is a byproduct.

The ban reflects deep concerns about the historical misuse of physiognomic and biometric classification and the potential for systematic discrimination. Even if your system does not explicitly set out to categorize people by these attributes, if its biometric analysis capabilities produce such inferences, you may be in violation.

8. Individual Criminal Risk Prediction Based Solely on Profiling

What is banned: AI systems that assess or predict the risk of a natural person committing a criminal offense, based solely on the profiling of that person or on assessing their personality traits and characteristics.

What this means in practice: Predictive policing systems that flag individuals as likely future criminals based on their profile, rather than on their observed behavior or involvement in specific criminal activity, are prohibited. This does not ban all AI use in law enforcement. Systems that analyze crime patterns by location, identify links between known criminal activities, or support investigations based on objective behavioral evidence remain permissible. The prohibition targets systems that treat a person's characteristics (demographics, personality traits, social connections) as predictors of future criminality.


Why These Practices Were Banned

The eight prohibitions are not arbitrary. Each reflects a deliberate judgment by EU legislators about where AI crosses from useful technology into unacceptable threat. Understanding the reasoning helps you anticipate how regulators will interpret edge cases.

Fundamental Rights as the Foundation

The EU AI Act is built on the EU Charter of Fundamental Rights. The prohibited practices directly threaten rights including human dignity (Article 1 of the Charter), respect for private life (Article 7), protection of personal data (Article 8), non-discrimination (Article 21), and the right to an effective remedy (Article 47). When legislators debated these bans, the question was not "Is this technology useful?" but "Does this technology, regardless of utility, violate rights that we consider non-negotiable?"

Democratic Values at Stake

Several prohibitions, particularly social scoring and predictive criminal profiling, reflect concerns about AI's potential to undermine democratic governance. Social scoring systems create power asymmetries that are incompatible with democratic accountability. Predictive profiling systems encode biases and create self-fulfilling prophecies that erode the presumption of innocence.

The Proportionality Principle

EU law requires that restrictions on rights and freedoms be proportionate to the objective pursued. The prohibited practices represent cases where legislators concluded that no legitimate objective could justify the intrusion. Real-time mass biometric surveillance in public spaces, for instance, was judged disproportionate even for law enforcement purposes, with only the narrowest exceptions surviving the legislative process.

Historical Precedents of AI Misuse

The prohibitions did not emerge from theoretical concerns alone. Real-world incidents informed every ban. The Dutch childcare benefits scandal, where an algorithm wrongly flagged thousands of families (disproportionately those with dual nationality) for fraud, demonstrated the harm of automated profiling. China's social credit system illustrated the dangers of state-operated behavioral scoring. Clearview AI's mass facial image scraping showed the privacy implications of unregulated biometric database creation. Cambridge Analytica's psychological profiling revealed how AI-driven manipulation could distort democratic processes. Each prohibition maps to documented harms that regulators determined must not be repeated within the EU.

The Exceptions and Nuances

Law Enforcement Exceptions for Biometric Identification

The prohibition on real-time remote biometric identification in public spaces includes three narrow exceptions where law enforcement may use such systems, subject to strict conditions:

  1. Targeted search for specific victims, including abducted children and victims of human trafficking or sexual exploitation
  2. Prevention of a specific, substantial, and imminent threat to life, or of a genuine and foreseeable threat of a terrorist attack
  3. Identification of persons suspected of serious criminal offenses listed in the regulation (punishable by a custodial sentence of at least four years)

Even when these exceptions apply, law enforcement must obtain prior judicial authorization (or, in urgent cases, authorization requested without undue delay and at the latest within 24 hours). Each use must be individually justified, time-limited, geographically constrained, and subject to oversight by a data protection authority. National law must explicitly provide for the use of these systems. The exceptions are not blanket permissions; they are tightly regulated carve-outs.
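Because these conditions are cumulative, a deployment fails the exception if any single safeguard is missing. The sketch below illustrates that conjunction; the field names are our own shorthand, and the notification and oversight duties are not modeled.

```python
from dataclasses import dataclass

# The three exception grounds listed above.
EXCEPTION_GROUNDS = {"victim_search", "imminent_threat", "serious_offense_suspect"}

@dataclass
class RBIDeployment:
    ground: str                      # which exception ground is invoked
    judicial_authorization: bool     # prior, or requested within 24 hours in urgent cases
    individually_justified: bool
    time_limited: bool
    geographically_constrained: bool
    national_law_basis: bool         # Member State law explicitly permits the use

def exception_conditions_met(d: RBIDeployment) -> bool:
    """Every safeguard must hold; failing any one means the default ban applies."""
    return (d.ground in EXCEPTION_GROUNDS
            and d.judicial_authorization
            and d.individually_justified
            and d.time_limited
            and d.geographically_constrained
            and d.national_law_basis)
```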

Research and Development Contexts

The prohibitions apply to AI systems that are "placed on the market" or "put into service." AI systems used purely for research and development purposes, before any deployment decision, are generally outside the scope of Article 5. This does not provide an indefinite safe harbor. If research crosses into real-world testing with actual subjects, the prohibitions may apply.

What "Deploying in the EU" Means for Non-EU Companies

The EU AI Act has extraterritorial reach. If your AI system's output is "used in the Union," you are within scope, regardless of where your company is headquartered or where the system is physically hosted. A company based in San Francisco that deploys an emotion recognition system used by a Berlin-based employer is subject to the prohibition. The test is where the system produces effects, not where it is built or operated.
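Expressed as a rough screen (our own simplification of the Act's territorial triggers, not its full text), scope is a disjunction: any one connection to the Union suffices.

```python
def in_scope_of_eu_ai_act(provider_established_in_eu: bool,
                          system_placed_on_eu_market: bool,
                          output_used_in_eu: bool) -> bool:
    """Simplified territorial-scope check: headquarters and hosting
    location are irrelevant if any of these EU connections exists."""
    return (provider_established_in_eu
            or system_placed_on_eu_market
            or output_used_in_eu)

# The San Francisco company above: not established in the EU, but its
# system's output is used by a Berlin-based employer, so it is in scope.
assert in_scope_of_eu_ai_act(False, False, True)
```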


How to Check If Your AI System Uses Prohibited Practices

Many organizations assume the prohibited practices are obviously extreme, things that no reasonable company would do. This assumption is dangerous. Prohibited practices can hide inside otherwise legitimate systems, and the line between "innovative feature" and "banned practice" is narrower than most teams expect.

Self-Assessment Questions

Work through these questions for each AI system in your portfolio; a minimal screening sketch follows the list:

  1. Does your system influence user behavior through techniques that operate below conscious awareness? Consider personalization engines, recommendation algorithms, and UX optimization systems. If the mechanism of influence is deliberately hidden from the user, scrutinize it against prohibition 1.

  2. Does your system differentiate its approach based on user vulnerability, such as age, disability, or economic status? Adaptive systems that change behavior based on detected vulnerability markers need careful review against prohibition 2.

  3. Does your system aggregate behavioral or personal data to produce scores that affect individuals across multiple, unrelated contexts? Loyalty scores, trust ratings, or composite behavioral indices could constitute social scoring under prohibition 3.

  4. Does your system perform real-time identification of individuals in public or semi-public spaces? Even if your use case is commercial rather than law enforcement, verify that your system is not being used by law enforcement clients and could not readily be repurposed for that use.

  5. Does your system collect or process facial images at scale from online or surveillance sources? Data pipeline practices matter. If your training data includes scraped facial images, you may be in violation of prohibition 5 regardless of your system's primary purpose.

  6. Does your system detect, infer, or analyze emotional states in workplace or educational contexts? Engagement monitoring, sentiment analysis of employee communications, and student attention tracking all require careful evaluation against prohibition 6.

  7. Does your system use biometric data to categorize individuals by race, religion, political opinion, or sexual orientation? This includes systems where such categorization is an unintended byproduct of other processing.

  8. Does your system predict individual criminal risk based on personal characteristics rather than observed behavior? This extends to adjacent use cases like predicting employee misconduct or student disciplinary risk based on profiling.
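As one way to operationalize the questionnaire, the sketch below (question keys and structure are our own, not a regulatory artifact) maps yes-answers to the prohibitions they implicate. A flag means "needs legal review," not "is definitely banned."

```python
# Maps each self-assessment question to the Article 5 prohibition it screens for.
SCREENING_QUESTIONS = {
    "influences_behavior_below_awareness": 1,
    "adapts_to_user_vulnerability": 2,
    "scores_individuals_across_contexts": 3,
    "identifies_individuals_in_public_in_real_time": 4,
    "scrapes_facial_images_at_scale": 5,
    "infers_emotions_at_work_or_school": 6,
    "categorizes_by_sensitive_attributes_from_biometrics": 7,
    "predicts_criminal_risk_from_profiling": 8,
}

def flag_prohibitions(answers: dict[str, bool]) -> list[int]:
    """Return the prohibition numbers implicated by yes-answers."""
    return sorted(SCREENING_QUESTIONS[q] for q, yes in answers.items() if yes)

answers = {q: False for q in SCREENING_QUESTIONS}
answers["infers_emotions_at_work_or_school"] = True
print(flag_prohibitions(answers))  # [6]
```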

Common Features That Might Cross the Line

Several mainstream AI capabilities sit closer to the prohibited line than their developers typically recognize:

  • Customer churn prediction that factors in economic vulnerability indicators could implicate prohibition 2
  • Employee engagement analytics that analyze facial expressions or vocal tone during meetings likely violate prohibition 6
  • AI-powered hiring tools that assess personality traits from video interviews may involve prohibited emotion recognition or biometric categorization
  • Dynamic pricing algorithms that adjust prices based on inferred economic vulnerability could constitute exploitation under prohibition 2
  • Content recommendation engines using subconscious manipulation techniques to maximize engagement could trigger prohibition 1
  • Background check AI that predicts future behavior based on personal characteristics rather than documented history could violate prohibition 8

Edge Cases to Watch

The following scenarios deserve particular attention:

  • Emotion detection in customer service: Analyzing customer sentiment during support calls to improve service quality is not in a workplace or educational context and therefore not directly caught by prohibition 6. If the same system is turned on internal staff performance monitoring, it crosses the line.
  • Credit scoring algorithms: Traditional credit scoring based on financial behavior is not social scoring. But a system that incorporates social media behavior, shopping patterns, and relationship data to assess creditworthiness starts resembling a general social scoring system.
  • Age verification systems: Using biometric analysis to estimate age for age-gating purposes is generally permissible. Using the same biometric data to infer other characteristics (ethnicity, gender, emotional state) is not.

What Happens If You Deploy a Prohibited Practice

The EU AI Act reserves its harshest enforcement mechanisms for prohibited practices. This is not an area where regulators will issue warnings and provide grace periods.

Maximum Fines

Violations of Article 5 carry the highest penalty tier in the regulation: up to 35 million euros or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. For context, the GDPR's maximum fine is 20 million euros or 4% of turnover. The EU AI Act deliberately set the penalty for prohibited practices even higher than Europe's landmark privacy regulation.

For SMEs and startups, the regulation provides that fines must be "effective, proportionate, and dissuasive," and caps each fine at the lower, rather than the higher, of the fixed amount and the turnover percentage. Smaller companies may therefore face lower absolute amounts, but the fine will still be calibrated to hurt.
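A short sketch of the arithmetic behind the standard cap (actual fines are set case by case; the SME lower-of variant is noted but not computed):

```python
def article_5_fine_cap(annual_turnover_eur: float) -> float:
    """Upper bound for an Article 5 violation: the higher of EUR 35 million
    or 7% of total worldwide annual turnover. (For SMEs the cap is instead
    the lower of the two; that variant is omitted here.)"""
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# At EUR 400m turnover, 7% (EUR 28m) sits below the fixed floor, so the
# cap is EUR 35m. At EUR 1bn, the percentage prong governs: EUR 70m.
print(article_5_fine_cap(400_000_000))    # 35000000.0
print(article_5_fine_cap(1_000_000_000))  # 70000000.0
```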

Criminal Liability Considerations

While the EU AI Act itself establishes administrative penalties, Member States may implement national laws that attach criminal liability to certain violations. Several EU countries are expected to introduce criminal sanctions for the most egregious AI violations, particularly those involving prohibited practices that cause direct harm to individuals.

Directors and officers who knowingly authorize the deployment of prohibited AI systems face personal liability in many jurisdictions.

Cease and Desist Orders

National market surveillance authorities have the power to order immediate withdrawal of prohibited AI systems from the market. This means your product can be pulled from the EU market entirely, not just fined but shut down in the region. For companies with significant EU revenue, this represents an existential business risk.

Public Disclosure

Enforcement actions under the EU AI Act include public disclosure requirements. Fines and orders are published, creating reputational damage that compounds the financial penalty. In regulated industries (financial services, healthcare, education), a public finding that your organization deployed a prohibited AI practice can trigger secondary regulatory consequences and loss of operating licenses.

Whistleblower Protections

The EU AI Act includes provisions protecting individuals who report violations. Your employees, contractors, and business partners have legal protection if they report a prohibited practice to authorities, reducing the likelihood that violations will remain hidden.


How AI Comply HQ Helps You Identify Prohibited Practices

Identifying whether your AI system falls within a prohibited category is not always straightforward. The eight prohibitions involve nuanced definitions, contextual analysis, and technical assessment that go beyond simple checklists. AI Comply HQ was built to handle exactly this complexity.

The Interview Process

Our AI-powered compliance interview walks you through a structured assessment of your system's capabilities, use cases, and deployment context. The interview is designed to surface prohibited practice risks that teams often miss in self-assessment, including edge cases where a legitimate feature could be reclassified based on its deployment context or user population.

The system asks targeted questions about your AI's data inputs, decision-making mechanisms, affected populations, and operational context. Based on your answers, it classifies your system against all eight prohibited categories and flags any areas of concern.

Automated Red-Flag Alerts

When your responses indicate capabilities that align with prohibited practices, even partially, the platform generates specific, actionable alerts. These are not generic warnings. Each alert identifies the specific prohibition at risk, the feature or capability that triggered the flag, the regulatory basis for the concern, and recommended next steps.
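As a purely hypothetical illustration of an alert carrying those four elements (this is not AI Comply HQ's actual data model or API):

```python
from dataclasses import dataclass, field

@dataclass
class RedFlagAlert:
    prohibition: int            # which of the eight Article 5 bans is at risk
    triggering_capability: str  # the feature or capability that raised the flag
    regulatory_basis: str       # why the regulation is concerned
    next_steps: list[str] = field(default_factory=list)  # recommended actions

alert = RedFlagAlert(
    prohibition=6,
    triggering_capability="Facial-expression analysis in meeting recordings",
    regulatory_basis="Emotion inference in a workplace context",
    next_steps=["Disable the feature for EU deployments",
                "Assess whether the medical or safety exception could apply"],
)
```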

This early warning system catches issues before they become enforcement actions.

Compliance Documentation

For every assessment, AI Comply HQ generates structured compliance documentation that records your system's classification, the analysis supporting that classification, and the evidence trail. If a regulator asks why you believe your system does not engage in prohibited practices, you have a documented, defensible answer, not a verbal assurance from your engineering team.

Ongoing Monitoring

AI systems evolve. Features get added. Use cases expand. A system that was compliant at launch can drift into prohibited territory as it scales. AI Comply HQ supports periodic reassessment so you can catch compliance drift before it becomes a violation.

Assess Your Compliance in Minutes

The prohibited practices provisions of the EU AI Act are already in force. Every day your AI system operates without a clear compliance assessment is a day of unquantified regulatory risk.

You do not need a six-month consulting engagement to determine whether your systems are in scope. AI Comply HQ's guided interview can classify your AI system against all eight prohibited categories in a single session. You get a clear, documented answer, and if issues are found, a concrete path to resolution.

The organizations that act now will be the ones with clean compliance records when enforcement begins in earnest. The ones that assume "it does not apply to us" will be the case studies in future regulatory guidance documents.

Start Your Free Compliance Assessment

AI Comply HQ supports compliance operations. It does not provide legal advice. Consult qualified legal counsel for advice specific to your situation.
