How to Classify AI Systems by Risk Under the EU AI Act
The EU AI Act applies different obligations to different AI systems depending on their risk level. Before you can determine what compliance measures are required for any given AI system, you must classify it correctly. Misclassification, particularly under-classifying a high-risk system, can result in significant compliance gaps and regulatory exposure.
The risk classification framework is established by Articles 5–7 and Annex III. Article 5 defines prohibited practices. Article 6 establishes the classification rules for high-risk AI systems, including the safety component route and the Annex III criteria. Annex III lists the eight categories of high-risk AI use cases. Article 7 gives the Commission the power to amend Annex III. Articles 8–15 then set out the requirements that high-risk AI systems must meet.
The EU AI Act Risk Tiers
Prohibited (Article 5)
AI practices that are banned outright. These prohibitions applied from 2 February 2025, the earliest deadline in the Act. See the prohibited use guide for the full list.
High-Risk (Article 6 & Annex III)
AI systems that must comply with the full suite of Chapter III requirements: risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy and robustness, and conformity assessment. Full obligations apply from 2 August 2026.
Limited Risk (Article 50)
AI systems subject to transparency obligations only: primarily disclosing to users that they are interacting with AI (chatbots) and labelling AI-generated or manipulated content. No conformity assessment or technical documentation is required.
Minimal/No Risk
AI systems with no specific obligations under the EU AI Act. The vast majority of AI applications (spam filters, recommendation systems, AI in games) fall here. Voluntary codes of practice may apply.
Annex III High-Risk Categories
Any AI system falling within the following eight categories is classified as high-risk under Annex III, subject to certain conditions. Review each category carefully against each AI system in your inventory.
1. Biometrics: remote biometric identification, biometric categorisation, and emotion recognition systems.
2. Critical infrastructure: safety components in the management of critical digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity.
3. Education and vocational training: admission, learning-outcome assessment, level assignment, and exam proctoring.
4. Employment and workers management: recruitment, candidate screening and ranking, promotion and termination decisions, task allocation, and monitoring.
5. Access to essential private and public services: eligibility for public benefits, creditworthiness scoring, risk assessment and pricing in life and health insurance, and emergency call triage.
6. Law enforcement: individual risk assessment, evidence-reliability evaluation, and profiling in the course of criminal investigations.
7. Migration, asylum and border control management: examination of visa and asylum applications and related risk assessments.
8. Administration of justice and democratic processes: assisting judicial authorities in researching and interpreting facts and law, and influencing the outcome of elections or voting behaviour.
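If you are tooling your classification exercise, it can help to encode the eight areas as a fixed enumeration so every assessment references the same list. A minimal Python sketch; the names are illustrative, not taken from the Act or any library:

```python
from enum import Enum

class AnnexIIICategory(Enum):
    # The eight Annex III areas, numbered as in the Act.
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION_AND_VOCATIONAL_TRAINING = 3
    EMPLOYMENT_AND_WORKERS_MANAGEMENT = 4
    ESSENTIAL_SERVICES_AND_BENEFITS = 5
    LAW_ENFORCEMENT = 6
    MIGRATION_ASYLUM_BORDER_CONTROL = 7
    JUSTICE_AND_DEMOCRATIC_PROCESSES = 8
```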
Step-by-Step: Classifying Each AI System
Start with Your AI Inventory
Risk classification must be applied to every AI system in your organisation. If you have not yet completed an AI inventory, start there. See the AI inventory guide. Classification without a complete inventory produces an incomplete compliance picture. Apply the classification process systematically to each system in your inventory, not just the ones you already suspect are high-risk.
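One way to keep classification tied to the inventory is to carry the classification fields on each inventory record itself, so an unclassified system is visible at a glance. A hypothetical sketch assuming a simple in-house inventory; none of these field names are prescribed by the Act:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the organisation's AI inventory (illustrative schema)."""
    system_id: str
    name: str
    purpose: str                     # what the system actually does in deployment
    deployment_context: str          # where and on whom it is used
    owner: str                       # accountable person or team
    risk_tier: str = "unclassified"  # prohibited | high | limited | minimal
    classification_notes: list[str] = field(default_factory=list)

inventory: list[AISystemRecord] = [
    AISystemRecord("hr-001", "CV ranker", "ranks job applicants", "recruitment", "HR Ops"),
    AISystemRecord("web-004", "Support chatbot", "answers customer questions", "public website", "Support"),
]

# Classification is applied to every record, not just suspected high-risk ones.
for record in inventory:
    print(record.system_id, record.risk_tier)
```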
Apply the Article 5 Prohibited Use Check
Before assessing risk tier, determine whether any use case is prohibited under Article 5. Prohibited uses do not have a compliance pathway. They cannot be made compliant by adding safeguards. If any use case is prohibited, it must be ceased or fundamentally redesigned. Conduct this check first and document the result. Proceed to risk tier classification only for use cases confirmed as not prohibited.
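Expressed as code, the ordering constraint is simply a gate at the top of the classification routine. A minimal sketch, where `is_prohibited_use` is a hypothetical predicate standing in for your organisation's Article 5 checklist:

```python
def classify(system, is_prohibited_use) -> str:
    """Run the Article 5 gate before any risk-tier reasoning (illustrative)."""
    if is_prohibited_use(system):
        # No compliance pathway exists: the use must be ceased or redesigned.
        return "prohibited"
    # Only systems confirmed as not prohibited proceed to the
    # Article 6 / Annex III risk-tier checks.
    return "pending-risk-tier-check"
```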
Apply the Article 6 Safety Component Test
Article 6(1) classifies as high-risk any AI system that is a safety component of a product covered by the EU harmonisation legislation listed in Annex I (machinery, medical devices, lifts, pressure equipment, and similar), where that product is subject to third-party conformity assessment. If your AI system is embedded in or forms a critical part of such a regulated product, it is high-risk regardless of Annex III. This route to high-risk classification is particularly relevant for manufacturers of physical products that incorporate AI features.
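The Article 6(1) route reduces to two conditions that must both hold. A sketch under that reading; the boolean fields are hypothetical placeholders for what your product-regulatory assessment would establish:

```python
from dataclasses import dataclass

@dataclass
class ProductContext:
    # Both facts would be established with product-regulatory counsel.
    is_safety_component_of_annex_i_product: bool
    product_requires_third_party_conformity_assessment: bool

def high_risk_under_article_6_1(ctx: ProductContext) -> bool:
    """Article 6(1): high-risk if the AI system is a safety component of an
    Annex I product AND that product needs third-party conformity assessment."""
    return (ctx.is_safety_component_of_annex_i_product
            and ctx.product_requires_third_party_conformity_assessment)
```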
Check Each Annex III Category Systematically
For each AI system, work through the eight Annex III categories listed above. Apply the following test for each category: does this AI system perform a function described in this category, in the specific context described? Both the function and the context matter. An AI system that ranks job candidates is high-risk under category 4, but an AI system that ranks products in a search result is not. Pay particular attention to AI systems that could incidentally be used in a high-risk context even if not designed for it. The classification follows the use, not the design intent.
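Because both the function and the context matter, each category check is naturally a pair of questions rather than one. A sketch of the loop, with `matches_function` and `matches_context` as hypothetical predicates you would implement per category:

```python
def annex_iii_matches(system, category_tests) -> list[int]:
    """Return the Annex III category numbers the system falls under (illustrative).

    category_tests maps a category number to a pair of predicates:
    does the system perform the described function, and is it used
    in the described context? Both must hold for a match.
    """
    hits = []
    for number, (matches_function, matches_context) in category_tests.items():
        if matches_function(system) and matches_context(system):
            hits.append(number)
    return hits

# Example from the text: a CV-ranking tool matches category 4 (employment)
# on both function (ranking candidates) and context (recruitment), whereas a
# product-search ranker matches the function pattern but not the context.
```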
Be Aware of the Article 7 Amendment Process
The European Commission has the power under Article 7 to amend Annex III by adding new high-risk categories where AI applications in those areas present risks equivalent to those already listed. This means a system classified as limited or minimal risk today may become high-risk in the future. Monitor European Commission guidance and implementing acts, and plan to re-run your classification exercise when Annex III is amended. Build a periodic classification review into your AI governance calendar.
Document Classification Decisions with Reasoning
For every AI system, record: the classification reached (prohibited, high-risk, limited risk, minimal risk), the test applied (Article 5 check, Article 6(1) safety component test, or Annex III category check), the reasoning supporting the conclusion, and who conducted the assessment and when. Regulators may challenge your classification, particularly if they believe you have under-classified a system. Documented reasoning is your primary defence. For borderline cases, note the uncertainty and the factors considered.
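The four elements listed above map directly onto a record you can store per system. A hypothetical structure; the field names are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ClassificationDecision:
    """Documented outcome of one classification assessment (illustrative)."""
    system_id: str
    classification: str       # prohibited | high | limited | minimal
    test_applied: str         # e.g. "Article 6(1)" or "Annex III category 4"
    reasoning: str            # the argument supporting the conclusion
    assessed_by: str
    assessed_on: date
    borderline_notes: str = ""  # uncertainty and factors considered, if any

decision = ClassificationDecision(
    system_id="hr-001",
    classification="high",
    test_applied="Annex III category 4 (employment)",
    reasoning="Ranks job candidates in recruitment; function and context both match.",
    assessed_by="AI governance team",
    assessed_on=date(2025, 3, 1),
)
```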
Assign Compliance Obligations Based on Risk Tier
Once classification is complete, map each high-risk system to its specific compliance obligations: risk management (Article 9), data governance (Article 10), technical documentation (Article 11), logging (Article 12), transparency and instructions for use (Article 13), human oversight measures (Article 14), accuracy and robustness testing (Article 15), conformity assessment (Article 43), registration in the EU database (Article 49), and post-market monitoring (Article 72). Create a compliance checklist for each high-risk system and assign ownership for each obligation.
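That article-to-obligation mapping is straightforward to turn into a per-system checklist with an owner per row. A sketch assuming a simple dict keyed by article number; the obligation labels paraphrase the Act:

```python
# Obligations for high-risk systems, keyed by article number.
HIGH_RISK_OBLIGATIONS = {
    9: "risk management system",
    10: "data governance",
    11: "technical documentation",
    12: "logging / record-keeping",
    13: "transparency and instructions for use",
    14: "human oversight measures",
    15: "accuracy and robustness testing",
    43: "conformity assessment",
    49: "registration in the EU database",
    72: "post-market monitoring",
}

def compliance_checklist(system_id: str, default_owner: str = "unassigned") -> list[dict]:
    """Create one checklist row per obligation for a high-risk system."""
    return [
        {"system": system_id, "article": art, "obligation": text, "owner": default_owner}
        for art, text in sorted(HIGH_RISK_OBLIGATIONS.items())
    ]
```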
General Purpose AI (GPAI): Separate Obligations
General-purpose AI (GPAI) models, meaning AI models trained on broad data that can perform a wide range of tasks (large language models are the prime example), are subject to separate obligations under Articles 51–56, regardless of whether they also fall under the high-risk classification. GPAI model providers must produce technical documentation, publish summaries of training content, implement copyright policies, and, for models with systemic risk, conduct adversarial testing and report serious incidents.
If your organisation is a GPAI model provider, not merely a user of GPAI models, review Articles 51–56 separately from the high-risk classification framework. Organisations that integrate GPAI models into their products are deployers of those models and may additionally be providers of the resulting AI system, depending on how it is packaged and placed on the market.
Classification Is Not a One-Time Activity
An AI system's risk classification can change for several reasons: the system's purpose changes, the deployment context changes, the user population changes, or Annex III is updated by the Commission. Classify each system at initial deployment, and re-classify whenever a material change occurs. Schedule periodic reviews (at least annually) as part of your AI governance programme. Document each review and its conclusion, even when the classification is unchanged.
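The re-classification triggers listed here are easy to encode alongside the periodic review clock. A sketch, with the material-change flags as hypothetical inputs from your change-management process:

```python
from datetime import date, timedelta
from typing import Optional

REVIEW_INTERVAL = timedelta(days=365)  # "at least annually"

def reclassification_due(last_review: date,
                         purpose_changed: bool,
                         context_changed: bool,
                         user_population_changed: bool,
                         annex_iii_amended: bool,
                         today: Optional[date] = None) -> bool:
    """True if a material-change trigger fired or the periodic review is due
    (illustrative; every review and its conclusion should still be documented)."""
    today = today or date.today()
    material_change = (purpose_changed or context_changed
                       or user_population_changed or annex_iii_amended)
    return material_change or (today - last_review) >= REVIEW_INTERVAL
```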