The Four-Tier AI Risk Classification Under the EU AI Act
The EU AI Act establishes a four-tier risk classification framework that determines every obligation attaching to an AI system. Understanding where a system falls within this hierarchy is the precondition for all subsequent risk assessment, conformity assessment, and ongoing compliance activities. A classification error at this stage cascades into every downstream decision.
Abstract
Risk classification is the gateway to every other EU AI Act compliance obligation. The regulation establishes four tiers: prohibited practices under Article 5 that must cease immediately, high-risk systems under Annex III and Article 6 requiring the full AISDP and conformity assessment, limited-risk systems under Article 50 with transparency obligations, and minimal-risk systems requiring only a classification record. The Article 6(3) exception allows certain systems that would otherwise be high-risk to be treated as lower risk, but only when both a functional criterion and a risk criterion are satisfied simultaneously. Classification confirmation must precede every risk assessment, verifying that the system's Classification Decision Record remains current and that no reclassification triggers have been activated. Each classification decision requires documented evidence, reviewer approval, and active maintenance throughout the system lifecycle. The classification framework connects directly to the risk management system under Article 9, determining the depth and scope of every subsequent compliance activity.
What are the four risk tiers under the EU AI Act?
Regulatory Requirement
The EU AI Act establishes a four-tier risk classification framework that determines the obligations attaching to each AI system.
The EU AI Act establishes a four-tier risk classification framework that determines the obligations attaching to each AI system. Understanding where a system falls within this framework is the precondition for every subsequent risk assessment activity.
Tier 1 covers prohibited practices under Article 5. Systems deploying subliminal manipulation, exploiting vulnerabilities of specific groups, implementing social scoring by public authorities, performing untargeted facial recognition scraping, recognising emotions in workplaces or educational institutions outside narrow exceptions, assessing criminal risk solely through profiling, or performing real-time remote biometric identification in public spaces are prohibited. These systems cannot proceed through any compliance pathway; their identification triggers immediate escalation and cessation.
How does the Article 6(3) exception work?
Regulatory Requirement
The Article 6(3) exception allows certain systems that would otherwise be classified as high-risk to be treated as lower risk if two conditions are both satisfied.
The Article 6(3) exception allows certain systems that would otherwise be classified as high-risk to be treated as lower risk if two conditions are both satisfied. The functional criterion requires the system's function to fall within specified categories: performing narrow procedural tasks, improving the results of previously completed human activities, or detecting decision-making patterns without replacing human assessment.
The risk criterion requires that the system does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. Both criteria must be met; satisfying one alone is insufficient.
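The conjunctive structure of the Article 6(3) test can be sketched in code. This is a minimal illustration, not a legal determination tool: the category names and the boolean risk input are hypothetical placeholders for what would in practice be a documented, evidence-backed analysis.

```python
# Illustrative sketch of the Article 6(3) two-criterion test.
# Category names mirror the functional criterion described in the text;
# how each is evidenced in practice is outside this sketch.

FUNCTIONAL_CATEGORIES = {
    "narrow_procedural_task",
    "improves_completed_human_activity",
    "detects_patterns_without_replacing_assessment",
}

def article_6_3_exception_applies(functional_category: str,
                                  poses_significant_risk: bool) -> bool:
    """Both criteria must hold simultaneously; one alone is insufficient."""
    functional_ok = functional_category in FUNCTIONAL_CATEGORIES
    risk_ok = not poses_significant_risk
    return functional_ok and risk_ok

# A system performing a narrow procedural task that nonetheless poses
# a significant risk fails the test:
assert article_6_3_exception_applies("narrow_procedural_task", True) is False
assert article_6_3_exception_applies("narrow_procedural_task", False) is True
```

The point the assertions make is the conjunction: a qualifying function does not rescue a system that poses significant risk, and the absence of significant risk does not rescue a function outside the listed categories.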
How should classification be confirmed before risk assessment?
Engineering Approach
The risk assessment must begin with a classification confirmation.
The risk assessment must begin with a classification confirmation. Before conducting any detailed risk analysis, the assessor verifies that the system's Classification Decision Record is current, that no reclassification triggers have been activated since the CDR was approved, and that the classification rationale remains sound given the system's current deployment context. A system that has drifted from its intended purpose into a higher-risk domain since classification requires reclassification before the risk assessment proceeds.
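The confirmation gate described above can be sketched as a pre-assessment check. The record shape and field names are illustrative assumptions; the annual review window reflects the minimum reassessment cadence described elsewhere in this article.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClassificationDecisionRecord:
    # Hypothetical CDR shape for illustration only.
    approved_on: date
    tier: str
    reclassification_triggers_fired: list[str] = field(default_factory=list)

def confirm_classification(cdr: ClassificationDecisionRecord,
                           review_valid_days: int = 365) -> list[str]:
    """Return blocking findings; an empty list means the risk assessment may proceed."""
    findings = []
    if (date.today() - cdr.approved_on).days > review_valid_days:
        findings.append("CDR review window exceeded; reassess classification")
    for trigger in cdr.reclassification_triggers_fired:
        findings.append(f"Reclassification trigger active: {trigger}")
    return findings
```

Any non-empty result blocks the risk assessment until the classification is reconfirmed or the system is reclassified.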
Frequently Asked Questions
Can a system change risk tier after initial classification?
Yes. Reclassification triggers include changes to intended purpose, expansion into new deployment domains, modification of the affected population, or regulatory framework updates. A system that drifts into a higher-risk domain requires reclassification before the risk assessment proceeds.
What happens if a system is classified at the wrong tier?
A classification error cascades into every downstream compliance decision. The entire risk assessment, AISDP, and any conformity assessment built upon the wrong tier would be invalidated. Classification confirmation is a protective measure against this risk.
Who must approve reliance on the Article 6(3) exception?
Both the Legal and Regulatory Advisor and the AI Governance Lead must review and approve. The AI System Assessor documents the analysis addressing each criterion separately, treating the exception as a hypothesis to test against evidence.
Do minimal-risk systems need any documentation?
Yes. A Baseline AISDP confirming the classification rationale is required, documenting that every higher tier was considered and ruled out with stated reasoning. The classification record must be defensible.
Written by
In This Section
Prohibited (Article 5), high-risk (Annex III and Article 6), limited risk (Article 50 transparency), and minimal risk (voluntary baseline), each carrying progressively lighter AISDP and compliance obligations.
Subliminal manipulation, vulnerability exploitation, public authority social scoring, untargeted facial recognition scraping, workplace and educational emotion recognition, criminal risk assessment through profiling alone, and real-time remote biometric identification in public spaces.
Systems within eight Annex III domains (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice) or safety components under Annex I harmonisation legislation.
Both a functional criterion (narrow procedural tasks or pattern detection) and a risk criterion (no significant harm to health, safety, or rights) must be met simultaneously. Satisfying one alone is insufficient.
A Classification Decision Record documenting the system description, assessment against each tier, supporting evidence, reviewer approval, and version history demonstrating active maintenance.
High-risk systems in Annex III domains require the full AISDP with all twelve modules, conformity assessment, CE marking, and EU database registration.
Classification confirmation must precede every risk assessment, verifying the CDR is current and no reclassification triggers have been activated.
Tier 2 covers high-risk systems under Annex III and Article 6. Systems falling within the eight Annex III domains (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration and border control, and administration of justice), or constituting safety components of products governed by Annex I harmonisation legislation, require the full AISDP with all twelve modules, conformity assessment, CE marking, and EU database registration.
Tier 3 covers limited-risk systems under Article 50, triggering transparency obligations for chatbots, emotion recognition systems, biometric categorisation systems, and systems generating synthetic content. These require a Standard AISDP addressing transparency measures. Tier 4 covers minimal-risk systems that do not trigger any of the above categories, requiring only a Baseline AISDP confirming the classification rationale.
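The tier logic across the four paragraphs above can be sketched as an ordered decision, evaluated from the most to the least restrictive tier. The boolean inputs are hypothetical stand-ins for the documented determinations each tier requires.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "Article 5"
    HIGH_RISK = "Annex III / Article 6"
    LIMITED_RISK = "Article 50"
    MINIMAL_RISK = "baseline"

def classify(is_prohibited: bool,
             in_annex_iii_or_safety_component: bool,
             article_6_3_exception_applies: bool,
             triggers_transparency: bool) -> RiskTier:
    # Evaluate tiers top-down: each higher tier is considered and
    # ruled out before the next is reached.
    if is_prohibited:
        return RiskTier.PROHIBITED
    if in_annex_iii_or_safety_component and not article_6_3_exception_applies:
        return RiskTier.HIGH_RISK
    if triggers_transparency:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK
```

Note how a system within an Annex III domain that validly claims the Article 6(3) exception falls through to the transparency check rather than exiting classification entirely, mirroring the requirement that every higher tier be ruled out with stated reasoning.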
The AI System Assessor documents reliance on this exception with rigorous analysis addressing each criterion separately. Both the Legal and Regulatory Advisor and the AI Governance Lead must review and approve any claim. The risk assessment should treat the exception as a hypothesis to be tested against evidence, not as a convenient exit from compliance obligations.
Classification is not a one-time determination. The Legal and Regulatory Advisor reassesses the classification when the system's intended purpose changes, when the deployment context changes, when the provider updates the system in ways that could affect the risk profile, and at minimum annually. Each reassessment is documented with reasoning and conclusion.
CTO of Standard Intelligence. Leads platform engineering and contributes to the PIG series technical content.