
EU AI Act Compliance Checklist for 2026: What Your Company Must Do Now


AI Comply HQ

The clock is running. On August 2, 2026, the EU AI Act's high-risk and limited-risk obligations become fully enforceable. Organizations that develop, deploy, import, or distribute AI systems in the European Union will face binding requirements backed by penalties of up to 35 million euros or 7% of global annual turnover, whichever is higher.

This is not a soft launch. The European Commission has already established the AI Office, national competent authorities are standing up enforcement capabilities, and the first prohibited-practice provisions have been in effect since February 2025. The regulatory machinery is operational.

Yet a significant share of organizations remain unprepared. A 2025 survey by the Centre for European Policy Studies found that fewer than one in three AI-deploying businesses had completed even a preliminary compliance assessment. The gap between regulatory reality and organizational readiness is wide, and the window to close it is narrowing fast.

This article provides the comprehensive, actionable checklist your organization needs to close that gap. Whether you are a startup deploying a single chatbot or an enterprise operating dozens of AI-powered systems across the EU, the steps below apply to you.

Why Every AI-Deploying Company Needs This Checklist Now

The EU AI Act is not merely a European regulation. Its extraterritorial reach means it applies to any organization whose AI systems affect people located in the EU, regardless of where the organization is headquartered. If your AI system processes data from EU residents, generates outputs used within the EU, or is made available on the EU market, you are in scope.

The consequences of non-compliance extend beyond fines. Organizations that fail to meet their obligations risk:

  • Market access restrictions. Non-compliant high-risk AI systems cannot legally be placed on the EU market
  • Reputational damage. Public enforcement actions erode customer trust and partner confidence
  • Contractual exposure. Enterprise customers increasingly require EU AI Act compliance as a procurement condition
  • Operational disruption. Remediation under regulatory pressure is far more costly and disruptive than proactive compliance

The August 2, 2026 deadline is not a starting line. It is a finish line. The work of classification, documentation, risk management, and technical adaptation takes months. Organizations that begin today will be ready. Those that wait will face rushed, expensive, and error-prone compliance efforts under time pressure.

Find Out Where You Stand: Free Assessment

Section 1: Understanding the EU AI Act Risk Tiers

The EU AI Act operates on a risk-based framework. Your obligations depend entirely on which risk tier your AI system falls into. Misclassification is the most common and most consequential compliance error. It can leave you either dangerously under-prepared or unnecessarily burdened with requirements that do not apply.

Unacceptable Risk (Prohibited)

Certain AI practices are banned outright under the Act. These prohibitions have been in effect since February 2, 2025. If your AI system falls into this category, it cannot legally operate in the EU under any circumstances.

Prohibited practices include:

  • Social scoring that evaluates or classifies people based on social behavior or personal characteristics and leads to detrimental or disproportionate treatment
  • Exploitative AI that deploys subliminal, manipulative, or deceptive techniques to distort behavior in ways that cause significant harm
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow, judicially authorized exceptions)
  • Emotion recognition in workplaces and educational institutions (except where used for medical or safety reasons)
  • Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
  • Biometric categorization systems that infer sensitive attributes such as race, political opinions, or sexual orientation
  • Predictive policing based solely on profiling or personality traits

If any of your AI systems engage in these practices, they must be decommissioned immediately. There is no transition period remaining.

High-Risk

High-risk AI systems are subject to the most demanding compliance obligations. These are systems that pose significant risks to health, safety, or fundamental rights. The Act defines high-risk systems in two ways:

Annex I systems. AI systems that are safety components of products already covered by existing EU harmonized legislation (medical devices, machinery, toys, vehicles, aviation, and others). These must comply by August 2, 2027.

Annex III systems. AI systems used in sensitive domains, including:

  • Biometric identification and categorization of natural persons
  • Management and operation of critical infrastructure (energy, water, transport, digital)
  • Education and vocational training (access, assessment, monitoring)
  • Employment, worker management, and access to self-employment (recruitment, task allocation, performance evaluation)
  • Access to essential private and public services (credit scoring, insurance, emergency dispatch)
  • Law enforcement (risk assessment, evidence evaluation, crime detection)
  • Migration, asylum, and border control (risk assessment, document verification)
  • Administration of justice and democratic processes (case research, law application)

High-risk systems must comply by August 2, 2026.

Limited Risk

Limited-risk AI systems carry transparency obligations. Users must be informed that they are interacting with an AI system. This applies to:

  • Chatbots and conversational AI. Users must know they are communicating with a machine
  • AI-generated or manipulated content. Including deepfakes, synthetic images, audio, and video
  • Emotion recognition and biometric categorization systems. Individuals must be informed when such systems are in operation
  • AI-generated text published to inform the public on matters of public interest. Must be labeled as artificially generated

These transparency obligations also take effect on August 2, 2026.

Minimal Risk

The vast majority of AI systems fall into the minimal-risk category. These face no mandatory requirements under the Act, though the European Commission encourages voluntary adherence to codes of conduct. Examples include:

  • Spam filters
  • AI-enhanced video games
  • Inventory management systems
  • Basic recommendation engines for non-essential services

Even minimal-risk systems benefit from documentation and transparency as a matter of good practice, but the Act imposes no binding obligations.

Section 2: The Pre-Compliance Checklist

The following checklist represents the essential steps every AI-deploying organization must complete before August 2, 2026. These are not optional best practices. For high-risk and limited-risk systems, they are regulatory requirements.

1. Inventory All AI Systems

You cannot comply with regulations that apply to systems you do not know about. The first and most critical step is a complete inventory.

  • Catalog every AI system your organization develops, deploys, or uses
  • Include third-party AI tools, embedded AI features in SaaS products, and AI components within larger systems
  • Document the vendor, version, deployment context, and purpose for each system
  • Identify the organizational owner responsible for each system
  • Do not overlook internal tools. HR screening software, automated code review, customer support chatbots, and analytics tools all count

Common gap: Organizations frequently undercount their AI systems by 40-60% on the first inventory pass. Conduct a second sweep specifically targeting procurement records, IT asset management systems, and departmental software licenses.
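One lightweight way to make the inventory auditable is to capture each system as a structured record from day one. The sketch below is illustrative only: the schema fields mirror the bullet points above, but the names (`AISystemRecord`, `missing_owners`, the sample entries) are our own, not anything prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the organization-wide AI inventory (illustrative schema)."""
    name: str                  # e.g. "Customer support chatbot"
    vendor: str                # vendor name, or "in-house"
    version: str
    deployment_context: str    # where, when, and by whom it is used
    purpose: str               # the system's intended purpose
    owner: str                 # accountable organizational owner
    third_party: bool = False  # embedded SaaS feature, procured tool, etc.

def missing_owners(inventory: list[AISystemRecord]) -> list[str]:
    """Flag systems with no accountable owner -- a common first-pass gap."""
    return [s.name for s in inventory if not s.owner.strip()]

inventory = [
    AISystemRecord("Support chatbot", "Acme AI", "2.1",
                   "EU customer website", "Tier-1 support triage", "CX team",
                   third_party=True),
    AISystemRecord("CV screener", "in-house", "0.9",
                   "HR recruiting pipeline", "Candidate shortlisting", ""),
]
print(missing_owners(inventory))  # systems still lacking an owner
```

Running a check like this against procurement records during the second sweep is one way to surface the internal tools that the first pass tends to miss.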

2. Classify Each System by Risk Tier

With your inventory complete, classify every system against the four risk tiers described above.

  • Map each system against the Annex III categories for high-risk classification
  • Evaluate whether any system falls under Annex I product safety legislation
  • Identify systems with transparency obligations (limited risk)
  • Document the classification rationale for each system. Regulators will want to see your reasoning
  • When classification is ambiguous, err on the side of the higher risk tier
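The decision procedure above can be sketched as a small classifier. The Annex III domain list below is abridged, and the tier names, category keys, and `classify` function are illustrative shorthand for the Act's actual tests, not an official tool; real classification requires reading the full Annex III descriptions.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Abridged Annex III domains (see the full list in the Act itself)
ANNEX_III_DOMAINS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border", "justice_democracy",
}

# Uses that trigger limited-risk transparency obligations
TRANSPARENCY_USES = {"chatbot", "deepfake", "emotion_recognition",
                     "synthetic_media"}

def classify(domain: str, use: str, prohibited: bool = False) -> RiskTier:
    """Illustrative tiering: prohibited > high > limited > minimal,
    erring toward the higher tier when categories overlap."""
    if prohibited:
        return RiskTier.PROHIBITED
    if domain in ANNEX_III_DOMAINS:
        return RiskTier.HIGH
    if use in TRANSPARENCY_USES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("employment", "cv_screening").name)      # HIGH
print(classify("customer_service", "chatbot").name)     # LIMITED
```

Note that the ordering of the checks encodes the "err toward the higher tier" rule: a chatbot used in an Annex III domain comes out high-risk, not limited-risk.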

3. Document Intended Use and Deployment Context

For every system classified as high-risk or limited-risk, create detailed documentation of:

  • The system's intended purpose and the specific tasks it performs
  • The deployment context, including where, when, and by whom it is used
  • The target user population and affected persons
  • Geographic scope of deployment
  • Any foreseeable misuse scenarios and the safeguards against them

This documentation forms the foundation of your conformity assessment and must be maintained throughout the system's lifecycle.

4. Assess Data Governance Practices

High-risk AI systems require rigorous data governance. Evaluate your current practices against these requirements:

  • Training data quality. Are your training datasets relevant, representative, and free of material errors?
  • Bias assessment. Have you tested for and mitigated biases in training, validation, and testing data?
  • Data provenance. Can you document where your training data came from and the legal basis for its use?
  • Data minimization. Are you collecting only the data necessary for the system's intended purpose?
  • Special category data. If you process data revealing race, health, political opinions, or other sensitive attributes, do you have explicit justification?

5. Review Human Oversight Mechanisms

The EU AI Act requires that high-risk AI systems be designed to allow effective human oversight. Assess whether your systems provide:

  • The ability for human operators to understand the system's capabilities and limitations
  • Mechanisms to monitor the system's operation in real time
  • The capacity to intervene in or override the system's outputs
  • A stop function or the ability to discontinue operation when necessary
  • Clearly defined roles and responsibilities for human oversight personnel

Check Your Oversight Readiness: Start Free Trial

6. Evaluate Transparency Requirements

Transparency is a cross-cutting obligation that applies differently depending on risk tier:

  • High-risk systems must provide deployers with clear instructions of use, including information about performance characteristics, known limitations, and human oversight measures
  • Limited-risk systems must inform users that they are interacting with AI
  • AI-generated content must be machine-detectable as such (watermarking or equivalent)
  • Review all user-facing interfaces, documentation, and disclosures for adequacy

7. Check Conformity Assessment Needs

High-risk AI systems must undergo a conformity assessment before they can be placed on the EU market. Determine:

  • Whether your system requires third-party assessment by a notified body (required for certain biometric systems, particularly where harmonized standards are not fully applied)
  • Whether self-assessment through an internal quality management system is sufficient
  • Whether your system is covered by existing EU harmonized standards that provide a presumption of conformity
  • Which notified body would conduct a third-party assessment if required

8. Establish Incident Reporting Procedures

Providers of high-risk AI systems must report serious incidents to national market surveillance authorities. Establish:

  • A clear definition of what constitutes a serious incident (death, serious damage to health, property, or environment, or serious and irreversible disruption in critical services)
  • An internal reporting chain with defined roles and escalation paths
  • A timeline-compliant process. Serious incidents must be reported no later than 15 days after the provider becomes aware, with shorter windows in severe cases such as death or widespread disruption of critical infrastructure
  • Documentation and record-keeping procedures for all incidents, including near-misses
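Because the reporting clock starts at awareness, it is worth computing the filing deadline mechanically rather than by hand. The helper below is a minimal sketch using the general 15-day window described above; the function name and structure are our own, and a production process would also encode the shorter windows for severe cases.

```python
from datetime import date, timedelta

# General reporting window (days after the provider becomes aware).
# Severe cases carry shorter windows under the Act.
GENERAL_DEADLINE_DAYS = 15

def report_due(awareness: date, days: int = GENERAL_DEADLINE_DAYS) -> date:
    """Latest date a serious-incident report may be filed."""
    return awareness + timedelta(days=days)

print(report_due(date(2026, 9, 1)))  # 2026-09-16
```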

9. Set Up Technical Documentation

Technical documentation is the backbone of EU AI Act compliance for high-risk systems. Your documentation must include:

  • A general description of the AI system
  • Detailed information about the development process, including design specifications and development methodology
  • Information about monitoring, functioning, and control of the system
  • A description of the risk management system
  • Documentation of data governance measures
  • Testing and validation procedures and results
  • Information about the system's performance across relevant demographic groups
  • A description of the system's logging capabilities

This documentation must be kept up to date throughout the system's lifecycle and made available to national competent authorities upon request.
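Since the documentation must stay current across the system's lifecycle, a simple completeness check run in a review script or CI pipeline can catch drift early. The section keys below paraphrase the bullets above; the check itself is our own sketch, not a format the Act mandates.

```python
# Required technical-documentation sections, paraphrasing the list above
REQUIRED_SECTIONS = {
    "general_description",
    "development_process",
    "monitoring_and_control",
    "risk_management_system",
    "data_governance",
    "testing_and_validation",
    "demographic_performance",
    "logging_capabilities",
}

def missing_sections(doc_sections: set[str]) -> set[str]:
    """Return required sections not yet present in the documentation set."""
    return REQUIRED_SECTIONS - doc_sections

draft = {"general_description", "data_governance", "logging_capabilities"}
print(sorted(missing_sections(draft)))  # sections still to draft
```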

10. Plan for Ongoing Monitoring

Compliance is not a one-time event. High-risk AI system providers must implement a post-market monitoring system that:

  • Actively and systematically collects data on system performance after deployment
  • Evaluates the system's continued compliance with requirements
  • Identifies risks that may emerge during real-world operation
  • Triggers corrective actions when necessary
  • Feeds back into the risk management system

11. Register in the EU Database

Providers and deployers of high-risk AI systems must register their systems in the EU database for high-risk AI systems before placing them on the market. Prepare the required registration information, including system identification, provider contact details, intended purpose, and conformity status.

12. Appoint an Authorized Representative (if applicable)

Organizations established outside the EU that place high-risk AI systems on the EU market must appoint an authorized representative established in the EU. This representative serves as the point of contact for national authorities and bears specific legal obligations under the Act.

Section 3: Timeline. What Is Due When

Understanding the phased enforcement timeline is critical for prioritization. Here is the complete schedule:

| Date | Milestone | Status |
|------|-----------|--------|
| August 1, 2024 | EU AI Act enters into force | Complete |
| February 2, 2025 | Prohibited AI practices banned; AI literacy obligations apply | Active |
| August 2, 2025 | General-purpose AI (GPAI) model obligations apply | Active |
| August 2, 2026 | Full obligations for high-risk AI systems (Annex III) and limited-risk transparency requirements | 4 months away |
| August 2, 2027 | High-risk AI in Annex I products (existing EU product safety legislation) must comply | Upcoming |

What this means for your planning:

  • If you operate any prohibited practices, you are already in violation. Act immediately.
  • If you provide general-purpose AI models, your obligations are already enforceable.
  • If you develop or deploy high-risk AI systems under Annex III, you have until August 2, 2026, roughly four months from this publication.
  • If your AI is embedded in regulated products (medical devices, machinery, etc.), you have until August 2, 2027, but should begin immediately given the complexity of dual regulatory compliance.

The Commission can also issue delegated acts to update the list of high-risk systems, so monitor regulatory developments continuously.
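The phased deadlines lend themselves to a simple countdown check against your remediation plan. The milestone dates below come from the Act's published timeline; the helper and the assumed April 2026 vantage point (matching this article's "roughly four months" framing) are illustrative.

```python
from datetime import date

MILESTONES = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_obligations": date(2025, 8, 2),
    "high_risk_annex_iii": date(2026, 8, 2),
    "high_risk_annex_i": date(2027, 8, 2),
}

def days_remaining(milestone: str, today: date) -> int:
    """Days until a milestone; negative means it is already enforceable."""
    return (MILESTONES[milestone] - today).days

# From an assumed vantage point of April 2, 2026:
print(days_remaining("high_risk_annex_iii", date(2026, 4, 2)))  # 122
```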

Section 4: How AI Comply HQ Automates This Checklist

Completing the checklist above manually is possible but time-consuming. It typically requires weeks of effort from cross-functional teams spanning legal, engineering, product, and operations. Many organizations spend tens of thousands of euros on external consultancies just to complete the classification and gap analysis phases.

AI Comply HQ was built to compress this timeline from weeks to hours.

Here is how the platform maps to each phase of the compliance process:

AI-Guided Interview. Instead of parsing hundreds of pages of regulatory text, our interview system asks targeted, plain-language questions about your AI systems. Your answers are automatically mapped to the Act's requirements using a classification engine trained on the full regulatory text, recitals, and European Commission guidance.

Automated Risk Classification. Based on your interview responses, the platform classifies each AI system by risk tier, maps it against Annex III categories, and identifies applicable obligations. No legal expertise required.

Compliance Documentation Generation. The platform auto-generates technical documentation, risk management frameworks, and transparency notices from your interview data. Outputs are structured to meet the format and content requirements specified in the Act.

Gap Analysis and Roadmap. After classification, the platform identifies specific gaps between your current state and full compliance, then generates a prioritized remediation roadmap with clear deadlines.

Audit-Ready Reports. Every output is designed to withstand regulatory scrutiny. When a national competent authority requests documentation, you have it ready.

The platform does not replace legal counsel for complex interpretive questions. What it does is eliminate the 80% of compliance work that is systematic, structured, and automatable, freeing your team to focus on the judgment calls that actually require human expertise.

Automate Your Compliance Checklist: Start Free

Assess Your Compliance in Minutes

The EU AI Act's August 2, 2026 deadline is four months away. Every week of delay compresses the time available for remediation, increases the risk of enforcement action, and raises the cost of compliance.

You do not need to hire a consultancy. You do not need to read 400 pages of regulatory text. You need a structured process that walks you through classification, identifies your gaps, and generates the documentation regulators expect.

That is exactly what AI Comply HQ provides.

Start your free compliance assessment today. Our AI-guided interview takes minutes, not months. You will receive your risk classification, gap analysis, and initial compliance documentation immediately.

Start Your Free Compliance Assessment

AI Comply HQ supports compliance operations. It does not provide legal advice. Consult qualified legal counsel for advice specific to your situation.
