How to Detect and Prevent Prohibited AI Uses Under the EU AI Act
Article 5 of the EU AI Act outlines categories of AI use that are prohibited outright. These are not compliance obligations that can be met with documentation or technical safeguards; they are bans. Organisations must actively identify whether any of their AI use cases fall within these categories and take immediate action if they do.
Violations of the Article 5 prohibitions carry the highest penalties in the EU AI Act: administrative fines of up to €35,000,000 or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher. This is the maximum penalty tier in the Regulation.
Article 5 sets out the complete list of prohibited AI practices. These prohibitions applied from 2 February 2025, making them the earliest obligations to come into force under the phased implementation schedule. The Article covers AI systems placed on the EU market, put into service, or used within the EU, regardless of where the provider is based.
What Article 5 Prohibits
The following AI practices are banned under Article 5. Note that some prohibitions apply specifically to law enforcement, while others apply to all operators.
- BANNED Subliminal manipulation: AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting behaviour in a way that causes or is reasonably likely to cause significant harm.
- BANNED Exploitation of vulnerabilities: AI systems that exploit vulnerabilities related to age, disability, or a specific social or economic situation to materially distort behaviour in a way that causes or is reasonably likely to cause significant harm.
- BANNED Social scoring: AI systems that evaluate or classify natural persons or groups over a period of time based on social behaviour or known, inferred, or predicted personal characteristics, where the resulting score leads to detrimental or unfavourable treatment that is unjustified, disproportionate, or unrelated to the context in which the data was generated or collected. Unlike earlier drafts, the final text does not limit this ban to public authorities.
- BANNED Real-time remote biometric identification in public spaces: Use by law enforcement of real-time remote biometric identification systems in publicly accessible spaces, with limited, narrowly defined exceptions for specific threats.
- BANNED Predictive policing based solely on profiling: AI systems that assess or predict the risk of a natural person committing a criminal offence based solely on profiling or on assessing personality traits and characteristics. (Retrospective, or "post", remote biometric identification by law enforcement is not banned by Article 5; it is instead subject to strict judicial or independent administrative authorisation requirements under Article 26(10).)
- BANNED Emotion recognition in workplaces and educational institutions: AI systems used to infer the emotions of natural persons in the context of their employment or education, with exceptions for medical or safety reasons.
- BANNED Biometric categorisation to infer sensitive attributes: AI systems that categorise natural persons individually based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.
- BANNED Untargeted scraping for facial recognition databases: AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
Step-by-Step: Detecting and Preventing Prohibited Uses
Document All AI Use Cases in Detail
The same underlying AI system may be permitted or prohibited depending entirely on how it is used. A facial recognition system used for door access control in a secure facility is subject to high-risk obligations but is not prohibited; the same system used to identify people in a public square in real time may be. Start by documenting not just what AI systems exist, but precisely what they do, in what context, against which subjects, and what decisions or outputs they produce. Vague descriptions such as "analytics tool" or "AI assistant" are insufficient. You need use-case-level detail.
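As a minimal sketch of the level of detail worth capturing, assuming a Python-based inventory (the field names are illustrative conventions, not taken from the Regulation), one record per use case might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """One record per use case, not per system: the same underlying
    model can appear several times under different contexts."""
    system_name: str              # e.g. "vendor face-matching API"
    purpose: str                  # what this specific use case does
    deployment_context: str       # where and how it runs
    data_subjects: str            # who the system is applied to
    data_types: list[str] = field(default_factory=list)  # "facial images", "voice", ...
    outputs_and_decisions: str = ""  # what the output is used to decide
    owner: str = ""               # accountable team or person

# The access-control example from the paragraph above, captured at
# use-case level rather than system level:
door_access = AIUseCase(
    system_name="vendor face-matching API",
    purpose="Verify enrolled staff identity at a turnstile",
    deployment_context="Secure facility entrance; enrolled staff only",
    data_subjects="Employees enrolled in biometric access",
    data_types=["facial images"],
    outputs_and_decisions="Unlock door; match result used for nothing else",
    owner="facilities-engineering",
)
```

A second record for the same `system_name` with a different `deployment_context` is exactly what surfaces the permitted/prohibited distinction that a system-level inventory hides.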
Map Each Use Case Against Article 5 Categories
For every AI use case documented in step 1, conduct a structured review against each Article 5 category. This should be done by a person with legal or compliance expertise who can assess edge cases. Do not rely on the AI system's vendor to determine whether a use is prohibited. Pay particular attention to: any AI touching biometric data (face, voice, gait, physiological signals), any AI that scores or rates human behaviour over time, any AI used in employment or educational contexts that analyses emotional or behavioural states, and any AI that personalises content or decisions in ways that could be considered manipulative.
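A keyword triage over the inventory can route use cases to that human review. The sketch below is illustrative only: the indicator terms are assumptions, and a match (or the absence of one) decides nothing on its own.

```python
# Illustrative triage: routes use cases to legal review. It cannot
# determine whether Article 5 actually applies.
ARTICLE_5_INDICATORS = {
    "biometric": "Check Art. 5(1)(g)/(h): categorisation, remote identification",
    "emotion": "Check Art. 5(1)(f): workplace or educational context?",
    "scoring": "Check Art. 5(1)(c): social scoring over time",
    "facial": "Check Art. 5(1)(e): untargeted scraping for face databases",
    "vulnerab": "Check Art. 5(1)(a)/(b): manipulation or exploitation",
}

def triage(description: str) -> list[str]:
    """Return review notes for every indicator found in a use-case
    description (e.g. the concatenated fields of an inventory record)."""
    text = description.lower()
    return [note for term, note in ARTICLE_5_INDICATORS.items() if term in text]

# Example: flags the biometric and facial indicators for legal review.
print(triage("Real-time facial matching of visitors using biometric templates"))
```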
Implement API-Layer Controls for High-Risk Patterns
Prohibited uses often manifest as identifiable patterns in AI API traffic. For example: requests containing biometric data to emotion analysis APIs, scoring API calls that aggregate behavioural data over time, or large-scale requests containing facial images. Implementing controls at the API layer, inspecting request payloads for indicators of prohibited use cases before they reach AI models, provides an enforceable technical control that complements policy. This is particularly important where AI capabilities are accessible to multiple development teams who may not be aware of legal restrictions.
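A minimal sketch of such a gateway hook follows, assuming a gateway that exposes the endpoint path and the parsed payload; the blocked patterns and field names are assumptions for illustration, not any product's API.

```python
# Pairs of (endpoint substring, payload key) that policy blocks outright
# until a documented Article 5 review says otherwise.
BLOCKED_PATTERNS = [
    ("emotion", "audio"),      # emotion inference on voice data
    ("emotion", "image"),      # emotion inference on facial images
    ("face-search", "image"),  # remote identification patterns
]

def check_request(endpoint: str, payload: dict) -> None:
    """Raise before the request is forwarded to the AI model."""
    for fragment, key in BLOCKED_PATTERNS:
        if fragment in endpoint and key in payload:
            raise PermissionError(
                f"Blocked by AI-use policy: {fragment!r} endpoint with "
                f"{key!r} payload requires a documented Article 5 review."
            )

# An unreviewed team calling an emotion API with audio is stopped at the
# gateway rather than in a retrospective audit.
check_request("/v1/analytics/summary", {"text": "..."})  # passes silently
try:
    check_request("/v1/emotion/detect", {"audio": b"..."})
except PermissionError as err:
    print(err)
```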
Set Up Real-Time Alerts for Suspicious AI Traffic Patterns
Passive documentation is insufficient. Configure active monitoring and alerting for API traffic patterns that suggest prohibited use cases: sudden spikes in biometric data being sent to AI APIs, unusual volumes of behavioural scoring calls, or API calls involving data types (such as images, audio, or location data) that were not expected for a given integration. Real-time visibility into AI API traffic enables you to detect policy violations as they occur, not in a retrospective audit months later.
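A sliding-window counter is one simple way to surface these spikes. The sketch below assumes per-client call metadata is available at the gateway; the window, threshold, and alert sink are placeholders to tune against your own traffic.

```python
from collections import deque
import time

WINDOW_SECONDS = 300        # sliding window length
MAX_BIOMETRIC_CALLS = 50    # alert threshold within the window
_events: dict[str, deque] = {}

def record_call(client_id: str, payload_types: set[str]) -> None:
    """Call once per AI API request with the payload's media types."""
    if not payload_types & {"image", "audio"}:
        return  # only biometric-adjacent payloads are counted here
    q = _events.setdefault(client_id, deque())
    now = time.monotonic()
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # drop events that fell out of the window
    if len(q) > MAX_BIOMETRIC_CALLS:
        alert(f"{client_id}: {len(q)} biometric payloads in the last "
              f"{WINDOW_SECONDS}s (threshold {MAX_BIOMETRIC_CALLS})")

def alert(message: str) -> None:
    # Placeholder sink: wire this into your paging or SIEM pipeline.
    print("AI-TRAFFIC ALERT:", message)
```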
Establish a Pre-Deployment Review for New AI Uses
Before any new AI system or new use case is deployed, require a documented review against Article 5 categories. This should be embedded into your AI procurement and development processes as a mandatory gate. The review should be conducted independently of the team proposing the system and should result in a written record of the assessment. If a use case is borderline, obtain legal advice before proceeding.
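One way to make the gate mechanical is a pipeline check that refuses to deploy unless a completed review record is present. The file location, field names, and accepted conclusions below are conventions invented for this sketch.

```python
import json
import pathlib
import sys

REQUIRED_FIELDS = {"use_case", "reviewer", "conclusion", "date"}

def gate(review_path: str = "compliance/article5_review.json") -> None:
    """Exit non-zero (failing the pipeline) unless a complete, positive
    Article 5 review record is checked in alongside the service."""
    path = pathlib.Path(review_path)
    if not path.exists():
        sys.exit("Deploy blocked: no Article 5 review record found.")
    record = json.loads(path.read_text())
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        sys.exit(f"Deploy blocked: review record missing fields {sorted(missing)}.")
    if record["conclusion"] not in {"permitted", "permitted-with-conditions"}:
        sys.exit("Deploy blocked: review did not conclude the use is permitted.")
```

Because the reviewer and conclusion live in version control next to the code, the gate also produces exactly the written record the previous paragraph calls for.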
Document All Decisions, Including Negative Findings
Regulators may request evidence of how prohibited use assessments were conducted. Maintain records of: which AI use cases were assessed, who conducted the assessment, what conclusion was reached, and what evidence supported the conclusion. If a use case was initially considered potentially prohibited but was found to be permissible, document the reasoning. These records are your primary defence in the event of a regulatory investigation.
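An append-only log, one entry per assessment, is a simple way to keep these records complete and tamper-evident. The schema below is illustrative; the point is that every field a regulator might ask about is captured at the time of the decision.

```python
import datetime
import json

def log_assessment(log_path: str, use_case: str, assessor: str,
                   conclusion: str, reasoning: str, evidence: list[str]) -> None:
    """Append one line per assessment, including negative findings."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "use_case": use_case,
        "assessor": assessor,
        "conclusion": conclusion,  # e.g. "not prohibited", "prohibited", "escalated"
        "reasoning": reasoning,
        "evidence": evidence,      # links to docs, vendor statements, legal advice
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```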
Common Edge Cases and Ambiguities
Emotion Recognition in Customer Service AI
AI systems that analyse voice tone, sentiment, or text to infer customer emotional states are widely deployed in contact centres. The Article 5 prohibition on emotion recognition applies specifically to workplace and educational contexts. Using emotion inference in customer service contexts (analysing customers, not workers) is not prohibited under Article 5, though it may trigger other obligations. However, if the system analyses call centre agents' emotional states as well as customers', the workplace prohibition may apply to that component of the analysis.
HR Screening and Recruitment AI
AI systems used in HR that score, rank, or categorise candidates on the basis of video interviews, written responses, or behavioural assessments sit in a grey area. They do not automatically fall under the Article 5 prohibited categories, but they typically trigger high-risk obligations under Annex III (employment, workers management and access to self-employment), and any component that infers candidates' emotions from video or voice during an interview may engage the Article 5 workplace prohibition. Review HR AI tools carefully against both Article 5 and Annex III.
LLM-Based Profiling Systems
Large language models used to summarise, categorise, or draw inferences about individuals from unstructured data could, in certain configurations, produce outputs that amount to prohibited practices. For example, a multimodal pipeline that infers political opinions or religious beliefs from voice recordings or facial images would engage the biometric categorisation ban, and text-based profiling of the same attributes may still breach the GDPR even where Article 5 does not apply. The prohibition attaches to the purpose and output, not the underlying technology. Document the intended outputs carefully.