How to Conduct an EU AI Act Risk Assessment (Step-by-Step)

AI Comply HQ

Risk assessment is the cornerstone of EU AI Act compliance. Every obligation under the Act (documentation, human oversight, transparency, conformity assessment, post-market monitoring) flows from a single determination: what risk tier does your AI system fall into?

Get the classification right, and the path to compliance is clear. Get it wrong, and you face one of two outcomes: you either spend resources meeting requirements that do not apply to you, or, far worse, you operate a high-risk system without the safeguards the law demands.

This guide provides a complete, step-by-step methodology for conducting an EU AI Act risk assessment. It covers the four risk tiers in detail, walks through the classification process, identifies common mistakes, and explains how to document your assessment for regulatory scrutiny.

Whether you are assessing a single AI chatbot or conducting an enterprise-wide portfolio review, this methodology applies.

Why Risk Assessment Matters More Than You Think

The EU AI Act is not a one-size-fits-all regulation. Unlike the GDPR, which applies broadly uniform obligations to all personal data processing, the AI Act calibrates its requirements based on the potential harm an AI system can cause. This risk-based architecture means that classification is not a preliminary step; it is the determinative step that shapes every subsequent compliance obligation.

Consider the practical difference:

  • A minimal-risk AI system (a spam filter, for instance) faces zero mandatory requirements under the Act
  • A high-risk AI system (an AI-powered hiring tool, for example) must comply with requirements spanning risk management, data governance, technical documentation, human oversight, accuracy, robustness, and cybersecurity, plus registration in the EU database and potentially third-party conformity assessment

The gap between these two outcomes is enormous in terms of cost, effort, and operational impact. That is why risk assessment deserves rigorous, documented analysis, not a quick judgment call in a meeting.

Your risk classification must be defensible. National market surveillance authorities have the power to review and challenge your classification. If an authority determines that you have underclassified a system, treating a high-risk system as limited or minimal risk, the consequences include enforcement action, fines, and mandatory withdrawal of the system from the EU market.

Classify Your AI Systems: Start Free Assessment

Section 1: The Four Risk Tiers Explained in Detail

The EU AI Act establishes four risk categories. Understanding each one in depth is essential before you begin classification.

Unacceptable Risk: Banned Practices

The highest tier contains AI practices that the EU considers fundamentally incompatible with European values and fundamental rights. These are outright prohibited, and no compliance pathway exists. If your system falls here, it must be decommissioned.

Prohibited practices include:

  • Subliminal manipulation. AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behavior in a manner that causes or is likely to cause significant harm
  • Exploitation of vulnerabilities. AI systems that exploit vulnerabilities related to age, disability, or social or economic situation to materially distort behavior
  • Social scoring. AI systems that evaluate or classify individuals or groups based on social behavior or known, inferred, or predicted personal characteristics, where the resulting score leads to detrimental or unfavorable treatment that is disproportionate or occurs in contexts unrelated to those in which the data was generated
  • Individual criminal risk prediction. AI systems that make risk assessments of natural persons to predict criminal offending based solely on profiling or personality traits
  • Untargeted facial image scraping. AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV
  • Emotion inference in sensitive contexts. AI systems that infer emotions of individuals in the workplace or educational institutions, except for medical or safety reasons
  • Biometric categorization for sensitive attributes. AI systems that categorize natural persons based on biometric data to infer race, political opinions, trade union membership, religious beliefs, sex life, or sexual orientation
  • Real-time remote biometric identification in public spaces. For law enforcement purposes, except under narrow, judicially authorized exceptions for serious crime

These prohibitions have been enforceable since February 2, 2025. If you are still operating any system that falls into these categories, you are already in violation.

Key nuance: The prohibition on manipulation applies to techniques "beyond a person's consciousness" or that are "purposefully manipulative or deceptive." Standard persuasion techniques in marketing AI do not necessarily fall here, but AI systems designed to exploit psychological vulnerabilities may. When in doubt, seek specialized legal analysis.

High-Risk: The Core Compliance Challenge

High-risk AI systems are the primary focus of the Act's compliance requirements. These systems are legal but heavily regulated because of their potential impact on health, safety, and fundamental rights.

The Act identifies high-risk systems in two distinct categories:

Annex I: AI as a Safety Component in Regulated Products

AI systems that serve as safety components of products already covered by EU harmonized legislation are automatically high-risk. This includes AI embedded in:

  • Medical devices and in vitro diagnostic medical devices
  • Machinery and elevators
  • Toys
  • Radio equipment
  • Personal protective equipment
  • Civil aviation products
  • Motor vehicles and their trailers
  • Marine equipment
  • Cableway installations and rail system interoperability
  • Pressure equipment

These systems face a compliance deadline of August 2, 2027 and must undergo third-party conformity assessment under the applicable product legislation.

Annex III: AI in Sensitive Domains

This is where most organizations find their compliance obligations. Annex III enumerates eight domains where AI systems are classified as high-risk:

  1. Biometrics. Remote biometric identification systems (other than the prohibited real-time law enforcement use), biometric categorization systems, and emotion recognition systems
  2. Critical infrastructure. AI systems used as safety components in the management and operation of road traffic, water supply, gas supply, heating supply, and electricity supply, as well as digital infrastructure
  3. Education and vocational training. AI used to determine access to education, evaluate learning outcomes, monitor prohibited behavior during exams, or assess the appropriate level of education for an individual
  4. Employment, worker management, and access to self-employment. AI used in recruitment (ad targeting, filtering, evaluating candidates), promotion, termination, task allocation, and performance monitoring
  5. Access to essential services. AI used in credit scoring and creditworthiness evaluation, risk assessment and pricing for life and health insurance, emergency call evaluation and dispatch, and evaluation of eligibility for public assistance benefits and services
  6. Law enforcement. Polygraphs and similar tools, assessment of the reliability of evidence, risk assessment of natural persons (victimization, offending, or re-offending), and profiling in the course of criminal investigations
  7. Migration, asylum, and border control. Risk assessment and screening tools, verification of document authenticity, and examination of applications for asylum, visa, and residence permits
  8. Administration of justice and democratic processes. AI used by judicial authorities to research, interpret, and apply the law, and AI intended to influence the outcome of elections

Important exception: An Annex III system is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. To claim this exception, the provider must demonstrate that the system: (a) performs a narrow procedural task, (b) improves the result of a previously completed human activity, (c) detects decision-making patterns without replacing human assessment, or (d) performs a preparatory task to an assessment relevant to the use cases listed. This exception must be documented and the system must still be registered.
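
For teams encoding this screening in tooling, the exception logic reduces to a simple rule. Below is a minimal sketch in Python with field names of our own choosing; note that under Article 6(3) the exception never applies where the system performs profiling of natural persons, and no sketch substitutes for legal analysis:

```python
from dataclasses import dataclass

@dataclass
class ExceptionAnalysis:
    """Illustrative record of the Article 6(3) exception criteria."""
    narrow_procedural_task: bool                 # criterion (a)
    improves_prior_human_activity: bool          # criterion (b)
    detects_patterns_without_replacing_human: bool  # criterion (c)
    preparatory_task_only: bool                  # criterion (d)
    performs_profiling: bool  # profiling of natural persons defeats the exception

def exception_applies(a: ExceptionAnalysis) -> bool:
    """At least one criterion must hold and the system must not profile
    natural persons. Even when the exception applies, the analysis must be
    documented and the system still registered."""
    if a.performs_profiling:
        return False
    return (a.narrow_procedural_task
            or a.improves_prior_human_activity
            or a.detects_patterns_without_replacing_human
            or a.preparatory_task_only)
```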

Limited Risk: Transparency Obligations

Limited-risk AI systems do not require the full compliance apparatus of high-risk systems, but they must meet transparency requirements so that individuals know they are interacting with AI.

Systems with transparency obligations include:

  • AI systems that interact directly with natural persons. Must be designed so that individuals are informed they are interacting with an AI system, unless this is obvious from the circumstances and context of use
  • AI systems that generate synthetic content. Providers must ensure that outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated. This applies to AI-generated text, audio, images, and video
  • Emotion recognition and biometric categorization systems. Deployers must inform exposed persons of the system's operation and process personal data in accordance with applicable EU law
  • Deepfakes. Persons who deploy AI to generate or manipulate image, audio, or video content that would falsely appear authentic must disclose that the content has been artificially generated or manipulated

These obligations take effect on August 2, 2026.

Minimal Risk: Voluntary Compliance

AI systems that do not fall into the above categories are classified as minimal risk. The Act imposes no mandatory requirements on these systems. Organizations are encouraged, but not required, to voluntarily adopt codes of conduct covering topics such as:

  • Environmental sustainability
  • Accessibility for persons with disabilities
  • Stakeholder participation in design
  • Diversity of development teams

Even without legal obligations, documenting your classification rationale for minimal-risk systems is prudent. If a regulator questions your classification, you will need to demonstrate why the system does not trigger higher-tier obligations.

Section 2: Step-by-Step Risk Assessment Methodology

The following methodology provides a structured, repeatable process for classifying your AI systems under the EU AI Act. Follow these steps for each AI system in your inventory.

Step 1: Define the AI System's Purpose and Deployment Context

Before classification, you must clearly articulate what the AI system does and how it is used. Ambiguity at this stage leads to misclassification downstream.

Document the following for each system:

  • System name and version. Include vendor information for third-party systems
  • Technical description. What type of AI does the system employ? (Machine learning, deep learning, rule-based, hybrid, generative, etc.)
  • Intended purpose. What specific task or function does the system perform?
  • Deployment context. Where is the system used? Who uses it? Who is affected by its outputs?
  • Input data. What data does the system process? Include data types, sources, and volumes
  • Output and decision impact. What does the system produce? Does it make or inform decisions that affect natural persons?
  • Geographic scope. In which EU member states is the system operational?
  • Integration context. Is the system standalone, or is it a component within a larger product or service?

Why this matters: The same underlying AI technology can fall into different risk tiers depending on its deployment context. A natural language processing model used as a customer support chatbot is limited risk (transparency obligation). The same model used to assess job applicants is high risk (employment domain). Context is everything.
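
For portfolio reviews, it helps to capture these fields in a machine-readable inventory from the start. A sketch, assuming a Python dataclass with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory (field names are illustrative)."""
    name: str
    version: str
    vendor: str | None           # None for internally developed systems
    technique: str               # e.g. "machine learning", "rule-based", "generative"
    intended_purpose: str
    deployment_context: str      # where used, who uses it, who is affected
    input_data: list[str]        # data types, sources, volumes
    output_impact: str           # what it produces and its decision impact
    eu_member_states: list[str]
    standalone: bool             # False if a component of a larger product

chatbot = AISystemRecord(
    name="Support Chatbot", version="2.1", vendor="ExampleVendor",
    technique="generative", intended_purpose="answer customer FAQs",
    deployment_context="public website; used by customers",
    input_data=["chat messages"], output_impact="informational responses only",
    eu_member_states=["DE", "FR"], standalone=True,
)
```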

Step 2: Map to Annex III Categories

With a clear understanding of the system's purpose and context, systematically evaluate it against each of the eight Annex III domains.

For each domain, ask:

  • Does the system's intended purpose fall within this category?
  • Could the system's reasonably foreseeable use fall within this category?
  • Does the system affect or make decisions about natural persons within this domain?

Work through all eight domains, not just the ones that seem obviously relevant. An AI system used for employee scheduling, for example, might seem like a simple operational tool. But if it monitors performance patterns or allocates tasks in ways that affect working conditions, it could fall under the employment category (domain 4).

If the system matches any Annex III domain, proceed to the detailed classification criteria. If it matches none, evaluate whether it falls under Annex I (safety component in a regulated product) or has transparency obligations (limited risk).
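
A screening sketch of this step, with enum labels as our own shorthand for the eight domains; the one rule it hard-codes is the point above, that every domain must receive an explicit answer:

```python
from enum import Enum

class AnnexIIIDomain(Enum):
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION = 3
    EMPLOYMENT = 4
    ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION_BORDER = 7
    JUSTICE_DEMOCRACY = 8

def screen_domains(answers: dict[AnnexIIIDomain, bool]) -> list[AnnexIIIDomain]:
    """Return every flagged domain. 'answers' records, per domain, whether any
    of the three questions (intended purpose, reasonably foreseeable use,
    effect on natural persons) was answered yes. All eight domains must be
    evaluated, not just the obviously relevant ones."""
    missing = [d for d in AnnexIIIDomain if d not in answers]
    if missing:
        raise ValueError(f"unanswered domains: {[d.name for d in missing]}")
    return [d for d, matched in answers.items() if matched]
```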

Map Your Systems to Annex III: Start Free Trial

Step 3: Evaluate the System's Impact on Fundamental Rights

For systems that potentially match Annex III categories, conduct a fundamental rights impact assessment. This is not an optional enhancement; it is integral to determining whether the system genuinely poses a significant risk of harm.

Evaluate the system's potential impact on:

  • Right to human dignity. Does the system treat individuals as means to an end?
  • Right to privacy and data protection. Does the system process personal data, and is such processing proportionate to its purpose?
  • Right to non-discrimination. Could the system produce outcomes that disproportionately affect protected groups?
  • Freedom of expression. Could the system restrict or chill lawful expression?
  • Right to an effective remedy. Can individuals challenge decisions made or informed by the system?
  • Rights of the child. Does the system interact with or affect minors?
  • Workers' rights. Does the system affect employment conditions, health and safety, or collective bargaining?
  • Consumer protection. Does the system affect consumer choices, access to goods/services, or complaint mechanisms?

Document both the likelihood and severity of potential adverse impacts. A system with a low probability of impact but catastrophic potential harm may still warrant high-risk classification. Conversely, a system that matches an Annex III category but demonstrably poses no significant risk to fundamental rights may qualify for the exception noted above.
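
Recording likelihood and severity on an ordinal scale makes the judgment reviewable. A sketch whose threshold is an illustrative choice of ours, not a value from the Act:

```python
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def warrants_high_risk_treatment(likelihood: Level, severity: Level) -> bool:
    """Catastrophic potential harm triggers high-risk treatment even at low
    probability, mirroring the guidance above; otherwise use a scored cutoff."""
    if severity == Level.HIGH:
        return True
    return likelihood * severity >= 4  # illustrative threshold, not from the Act
```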

Step 4: Document the Classification Rationale

Regardless of the classification outcome, document your reasoning thoroughly. Your classification documentation should include:

  • The classification determination. Which risk tier does the system fall into?
  • The analysis pathway. Which Annex III categories were evaluated and why the system does or does not match each
  • Supporting evidence. Technical specifications, deployment data, impact assessments, and any expert opinions consulted
  • The exception analysis (if applicable). If claiming an Annex III system is not high-risk, document the specific exception criteria met
  • The assessor. Who conducted the assessment, their qualifications, and the date of assessment
  • Review schedule. When will the classification be reviewed? (Recommendation: at minimum annually and upon any material change to the system)

This documentation must withstand regulatory review. National competent authorities have the power to request classification documentation and challenge classifications they consider incorrect. A well-documented rationale is your primary defense.
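
One practical pattern is to keep each rationale as a structured, versioned record rather than free text. A sketch assuming a JSON export with a schema of our own design:

```python
import json
from datetime import date

classification_record = {
    "system": "Support Chatbot v2.1",
    "determination": "limited_risk",
    "annex_iii_analysis": {
        "employment": "no match: does not evaluate or affect workers",
        # ...one entry per domain evaluated, including non-matches
    },
    "exception_claimed": None,
    "evidence": ["technical spec v3", "deployment data 2025-Q4"],
    "assessor": {"name": "J. Doe", "role": "Compliance Lead"},
    "assessed_on": date.today().isoformat(),
    "next_review": "2026-08-01",  # at minimum annually and on material change
}
print(json.dumps(classification_record, indent=2))
```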

Step 5: Identify Applicable Requirements

Once classification is complete, map the specific requirements that apply to your system based on its risk tier.

For high-risk systems, the applicable requirements include:

  1. Risk management system (Article 9). A continuous, iterative process throughout the system's lifecycle
  2. Data and data governance (Article 10). Quality criteria for training, validation, and testing datasets
  3. Technical documentation (Article 11). Comprehensive documentation per Annex IV
  4. Record-keeping (Article 12). Automatic logging of events during operation
  5. Transparency and provision of information to deployers (Article 13). Clear instructions for use
  6. Human oversight (Article 14). Measures enabling human monitoring, intervention, and override
  7. Accuracy, robustness, and cybersecurity (Article 15). Performance standards across the lifecycle
  8. Quality management system (Article 17). A systematic quality system documented in policies and procedures
  9. EU database registration (Article 49). Registration before placing the system on the market
  10. Conformity assessment (Article 43). Self-assessment or third-party assessment depending on the system type
  11. Post-market monitoring (Article 72). Systematic collection and analysis of performance data after deployment
  12. Serious incident reporting (Article 73). Notification to authorities within prescribed timeframes

For limited-risk systems, the applicable requirements include:

  1. Transparency to users (Article 50). Notification that individuals are interacting with AI
  2. Content marking (Article 50). Machine-readable labeling of AI-generated content
  3. Deepfake disclosure (Article 50). Disclosure when content has been artificially generated or manipulated
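
Both requirement sets lend themselves to a lookup table keyed by risk tier. A sketch using the article numbers listed above:

```python
REQUIREMENTS_BY_TIER = {
    "high_risk": [
        ("Art. 9", "Risk management system"),
        ("Art. 10", "Data and data governance"),
        ("Art. 11", "Technical documentation (Annex IV)"),
        ("Art. 12", "Record-keeping / automatic logging"),
        ("Art. 13", "Transparency and instructions for deployers"),
        ("Art. 14", "Human oversight"),
        ("Art. 15", "Accuracy, robustness, cybersecurity"),
        ("Art. 17", "Quality management system"),
        ("Art. 43", "Conformity assessment"),
        ("Art. 49", "EU database registration"),
        ("Art. 72", "Post-market monitoring"),
        ("Art. 73", "Serious incident reporting"),
    ],
    "limited_risk": [
        ("Art. 50", "Transparency, content marking, deepfake disclosure"),
    ],
    "minimal_risk": [],  # voluntary codes of conduct only
}

def applicable_requirements(tier: str) -> list[tuple[str, str]]:
    """Look up the requirement set for a classified system's tier."""
    return REQUIREMENTS_BY_TIER[tier]
```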

Create a compliance roadmap that maps each applicable requirement to your current state, identifies gaps, and defines remediation actions with deadlines.

Step 6: Create a Compliance Roadmap

With your requirements mapped and gaps identified, build a prioritized remediation plan:

  • Immediate actions. Address any prohibited practices. Ensure AI literacy training is in progress (already required). Verify GPAI model compliance (already required).
  • High priority (complete by June 2026). Technical documentation, risk management systems, and data governance measures for high-risk systems. These are the most time-consuming to implement and should start immediately.
  • Medium priority (complete by July 2026). Transparency mechanisms, human oversight protocols, logging capabilities, and deployment of content-marking systems for limited-risk systems.
  • Final phase (complete by August 1, 2026). EU database registration, conformity assessment completion, quality management system finalization, and post-market monitoring activation.
  • Ongoing. Incident reporting procedures, monitoring system performance, periodic risk reassessment, and regulatory change tracking.

Assign clear ownership for each action item. Compliance is a cross-functional effort, and engineering, legal, product, operations, and leadership all have roles to play.
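
Once gaps and owners are listed, the sequencing can be mechanical. A sketch with illustrative fields and dates:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Action:
    description: str
    owner: str            # clear ownership per item, as recommended above
    deadline: date
    effort_weeks: int

def prioritize(actions: list[Action]) -> list[Action]:
    """Earliest deadline first; within a deadline, start the heaviest work first."""
    return sorted(actions, key=lambda a: (a.deadline, -a.effort_weeks))

roadmap = prioritize([
    Action("Draft Annex IV technical documentation", "Engineering", date(2026, 6, 30), 8),
    Action("Register in EU database", "Legal", date(2026, 8, 1), 1),
    Action("Deploy content-marking for generated media", "Product", date(2026, 7, 31), 4),
])
```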

Section 3: Common Mistakes in Risk Assessment

Having worked with organizations at every stage of compliance preparation, we see the following errors repeatedly. Avoid them.

Mistake 1: Classifying Based on Technology, Not Use

The EU AI Act classifies based on purpose and deployment context, not on the underlying technology. A large language model is not inherently high-risk or low-risk. Its classification depends entirely on what it is used for. A customer FAQ chatbot and an AI-powered legal research tool may use the same foundation model but fall into entirely different risk tiers.

Mistake 2: Ignoring Third-Party AI Tools

Many organizations focus their risk assessment on internally developed AI systems while overlooking third-party tools. If your organization deploys an AI system, even one developed by a vendor, you bear deployer obligations under the Act. This includes SaaS tools with embedded AI features, API-based AI services, and AI-powered plugins within your software stack.

Mistake 3: Applying the Exception Too Broadly

The exception allowing Annex III systems to be classified as non-high-risk is narrow and requires affirmative documentation. It applies only when the system performs narrow procedural tasks, improves previously completed human activities, detects decision patterns without replacing human judgment, or performs preparatory tasks. Organizations that apply this exception loosely, particularly for AI systems that meaningfully influence decisions about people, face significant regulatory risk.

Mistake 4: Treating Classification as a One-Time Exercise

Risk classification must be reviewed when the system changes, when its deployment context changes, when new guidance is issued by the AI Office or national authorities, and at regular intervals regardless. An AI system's risk profile can shift with a model update, a new use case, expansion to a new geographic market, or changes in the regulatory landscape.
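
A trivial sketch of the trigger logic, assuming the annual review interval recommended in Step 4; the trigger names are our own:

```python
def needs_reclassification(system_changed: bool, context_changed: bool,
                           new_guidance_issued: bool, days_since_review: int) -> bool:
    """Any single trigger, or an elapsed annual interval, forces a review."""
    return (system_changed or context_changed or new_guidance_issued
            or days_since_review >= 365)
```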

Mistake 5: Failing to Document Minimal-Risk Determinations

Organizations often thoroughly document high-risk classifications but neglect to document why a system was classified as minimal risk. If a regulator questions your classification, an undocumented minimal-risk determination is difficult to defend. Spend a paragraph per system documenting why it does not trigger higher-tier obligations.

Mistake 6: Conflating GDPR Compliance with AI Act Compliance

GDPR compliance is necessary but not sufficient. The AI Act imposes requirements (risk management systems, conformity assessments, technical documentation, human oversight measures) that have no analogue in the GDPR. Organizations that assume their data protection framework covers AI Act obligations will find significant gaps.

Avoid Classification Mistakes: Get Expert Guidance

Section 4: How AI Comply HQ Mirrors This Process

The risk assessment methodology described above is precisely the process that AI Comply HQ automates. The platform mirrors each step:

Structured Interview = Steps 1 and 2. Our AI-guided interview asks targeted questions about your system's purpose, deployment context, affected populations, and domain of use. These questions directly map to the Annex III categories and classification criteria described above. You answer in plain language; the platform handles the regulatory mapping.

Impact Analysis = Step 3. Based on your responses, the platform evaluates potential impacts on fundamental rights and correlates those impacts with the Act's risk thresholds. The analysis is transparent, and you can see exactly why the platform reached its classification determination.

Classification Report = Step 4. The platform generates a documented classification rationale for each assessed system. This report is structured for regulatory review and includes the analysis pathway, supporting evidence from your interview responses, and the applicable risk tier determination.

Requirements Mapping = Step 5. Once classified, each system receives a tailored list of applicable requirements with specific references to the relevant articles and annexes of the Act. No ambiguity about what applies to you.

Compliance Roadmap = Step 6. The platform generates a prioritized remediation plan based on your current state and the August 2026 deadline. Actions are sequenced by dependency, effort, and deadline proximity.

The entire process, from first interview question to completed classification with documentation, takes a fraction of the time required for manual assessment. For organizations with multiple AI systems, the time savings compound dramatically.

Section 5: Getting Started

You do not need to wait for perfect conditions to begin your risk assessment. The regulatory text is final. The guidance is available. The methodology is established. The only variable is when you start.

For organizations just beginning:

  1. Start with a complete inventory of your AI systems (see our compliance checklist guide for a detailed inventory methodology)
  2. Prioritize systems that most obviously fall into Annex III categories: employment AI, credit scoring, biometric systems
  3. Work through the six-step methodology above for your highest-priority systems first
  4. Use the results to build organizational muscle before tackling ambiguous cases

For organizations mid-process:

  1. Audit your existing classifications against the methodology above. Are there gaps in documentation?
  2. Verify that you have assessed all third-party AI tools, not just internally developed systems
  3. Check that your classification rationales are documented to a standard that would satisfy regulatory review
  4. Ensure you have a review schedule for reclassification triggers

For organizations with dedicated compliance teams:

  1. Use this methodology as a standardized framework across business units and geographies
  2. Establish a central registry of classification decisions with version control
  3. Implement a change management process that triggers reclassification when systems are updated or redeployed
  4. Consider the platform assessment as a baseline that your legal team can refine and validate

Assess Your Compliance in Minutes

Risk assessment does not have to be a multi-month project consuming your legal team's bandwidth. The methodology is systematic. The criteria are defined. The classification outcomes are deterministic given the right inputs.

AI Comply HQ exists to collect those inputs efficiently, apply the classification logic accurately, and generate the documentation your organization needs, all through a guided, conversational interview that takes minutes.

Start your free risk assessment today. Answer straightforward questions about your AI systems and receive an immediate classification determination with documented rationale, applicable requirements, and a prioritized compliance roadmap.

No legal expertise required to begin. Start your free trial today.

Start Your Free Risk Assessment

AI Comply HQ supports compliance operations. It does not provide legal advice. Consult qualified legal counsel for advice specific to your situation.
