Regulatory rationale and purpose of prohibitions
The EU Artificial Intelligence Act (EU AI Act) establishes a risk-based regulatory framework for artificial intelligence systems placed on the market, put into service or used within the European Union. At the top of this framework are prohibited AI practices, which the Regulation identifies as posing an unacceptable risk to fundamental rights, democratic values or public safety.
Unlike high-risk AI systems, which may be deployed subject to strict safeguards, prohibited AI practices are not eligible for compliance-based mitigation. They are banned outright, reflecting the EU legislator’s assessment that certain uses of AI are inherently incompatible with Union values and legal principles.
Understanding the scope of prohibited practices is therefore essential, as it defines the absolute outer limits of lawful AI development and deployment under EU law.
Position of prohibited practices within the EU AI Act
The EU AI Act differentiates between four regulatory layers:
- Prohibited AI practices (unacceptable risk),
- High-risk AI systems,
- Limited-risk AI systems, and
- Minimal-risk AI systems.
Prohibited practices sit at the highest level of regulatory intervention. If an AI system falls within this category, it may not be placed on the market, put into service or used within the European Union, regardless of technical safeguards or governance measures.
The prohibition applies across the entire AI lifecycle and binds all actors involved, including providers, deployers and public authorities.
Core categories of prohibited AI practices
The EU AI Act identifies, in Article 5, several categories of AI practices that are deemed unacceptable due to their potential to undermine fundamental rights or exploit vulnerabilities. These prohibitions are defined by use case and effect, not by the underlying technology alone.
Manipulative and exploitative AI systems
AI systems are prohibited where they deploy subliminal techniques or other manipulative methods that materially distort human behaviour in a manner that causes or is likely to cause significant harm.
This includes AI systems designed to exploit psychological vulnerabilities based on age, disability or socio-economic circumstances. The decisive factor is not persuasion as such, but covert manipulation that impairs autonomous decision-making.
Social scoring
The EU AI Act prohibits AI systems used for social scoring, whether deployed by public authorities or private actors. Social scoring refers to the evaluation or classification of individuals based on their social behaviour or personal characteristics, where such scoring leads to detrimental or unfavourable treatment.
The prohibition reflects concerns that social scoring systems undermine principles of equality, proportionality and human dignity, particularly where scores are applied across contexts unrelated to the original data.
Certain biometric identification practices
The Regulation introduces strict limitations on the use of biometric identification systems, particularly where they operate in real time and in publicly accessible spaces.
As a general rule, the use of real-time remote biometric identification by law enforcement authorities in publicly accessible spaces is prohibited. The EU AI Act allows only narrowly defined exceptions, subject to stringent conditions and prior authorisation, reflecting the high risk such systems pose to privacy and fundamental rights.
Predictive policing based on profiling
AI systems used for predictive policing are prohibited where they assess or predict the likelihood of an individual committing a criminal offence based solely on profiling or personal characteristics, rather than on objective and verifiable facts directly linked to criminal activity.
The prohibition targets systems that infer criminal propensity from attributes such as personality traits, background or social environment, rather than from specific, case-related evidence.
Scope of the prohibitions
The prohibitions under the EU AI Act apply irrespective of:
- whether the AI system is developed within or outside the European Union,
- whether it is deployed by public or private actors, or
- whether it is offered commercially or used internally.
What matters is whether the AI system is placed on the EU market, put into service or used within the Union and whether its intended use falls within a prohibited category.
The extraterritorial reach of the EU AI Act ensures that prohibited practices cannot be lawfully circumvented through outsourcing or cross-border deployment models.
Relationship to high-risk AI systems
Prohibited AI practices must be distinguished from high-risk AI systems. While both categories address serious risks, their regulatory treatment differs fundamentally.
High-risk AI systems may be deployed if extensive compliance requirements are met. Prohibited practices, by contrast, are excluded from lawful use altogether. No conformity assessment or governance framework can legitimise a prohibited use case.
This distinction underscores the EU legislator’s view that some AI applications present risks that cannot be adequately mitigated, even through robust safeguards.
Legal consequences of non-compliance
Placing on the market, putting into service or using an AI system for a prohibited practice constitutes one of the most serious violations of the EU AI Act. Such violations may trigger:
- enforcement action by competent authorities,
- orders to withdraw or disable the AI system,
- significant administrative fines, of up to EUR 35 million or 7 % of total worldwide annual turnover, whichever is higher.
Given the severity of the sanctions, accurate assessment of whether an AI system falls within a prohibited category is of critical importance.
Regulatory significance of prohibited practices
The prohibition regime performs a normative function within the EU AI Act. It delineates the boundaries of acceptable AI use and signals the Union’s commitment to safeguarding fundamental rights in the digital age.
By identifying unacceptable practices ex ante, the Regulation aims to provide legal certainty while preventing irreversible societal harm that could arise from the unchecked deployment of certain AI technologies.
Conclusion
Prohibited AI practices under the EU AI Act represent the absolute limits of lawful artificial intelligence use within the European Union. These prohibitions target AI applications that manipulate behaviour, enable social scoring, facilitate intrusive biometric identification or rely on impermissible predictive profiling.
Unlike high-risk AI systems, prohibited practices are not subject to conditional compliance, but are excluded from lawful deployment altogether. Correct identification of prohibited use cases is therefore a foundational step in assessing the legal permissibility of AI systems under EU law.
Notice
The information provided on this page is for general informational purposes only and does not constitute legal advice.