
EU AI Act Compliance: Complete Guide for 2026 (Regulation EU 2024/1689)
Everything companies need to know about EU AI Act compliance — prohibited AI, high-risk Annex III requirements, GPAI obligations, August 2026 deadline, penalties up to €35M, and a step-by-step checklist.
EU AI Act Compliance: Complete Guide for 2026
The EU AI Act (Regulation EU 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Published in the Official Journal on 12 July 2024, it entered into force on 1 August 2024 and phases in obligations through 2027.
The most critical deadline is five months away: from 2 August 2026, high-risk AI system requirements under Annex III become fully enforceable — with fines up to €35 million or 7% of global turnover.
Whether you develop AI products, deploy AI tools from third-party vendors, or use AI models in your B2B workflows, the EU AI Act almost certainly applies to you. This guide covers the full scope, risk classification, enforcement timeline, penalty structure, and the practical steps you need to take before August 2026.
EU AI Act At a Glance
| Question | Short answer |
|---|---|
| What is it? | Regulation (EU) 2024/1689 — first horizontal AI regulation worldwide. |
| Who must comply? | Providers, deployers, importers, and distributors of AI systems used in the EU. |
| Entered into force? | 1 August 2024. |
| Prohibited AI enforced? | 2 February 2025. |
| GPAI obligations apply? | 2 August 2025. |
| High-risk AI deadline? | 2 August 2026 (Annex III systems). |
| Penalty for prohibited AI? | Up to €35M or 7% of global annual turnover. |
| Enforcement body? | EU AI Office + national competent authorities (one per Member State). |
What Is the EU AI Act?
The EU AI Act establishes a risk-based framework for AI. Rather than regulating AI technology as such, it regulates AI systems by the risk they pose to health, safety, and fundamental rights. The higher the risk, the stricter the requirements.
Four-Tier Risk Classification
| Risk Tier | Examples | What it means |
|---|---|---|
| Unacceptable (Prohibited) | Social scoring, subliminal manipulation, real-time facial recognition in public spaces | Banned outright. Already enforceable since 2 February 2025. |
| High-Risk (Annex III) | CV screening, credit scoring, biometric ID, educational assessment | Must meet quality, transparency, and oversight requirements before August 2026. |
| Limited-Risk | Chatbots, deepfake generators, emotion recognition tools | Must disclose AI nature to users (Article 50 transparency). |
| Minimal-Risk | Spam filters, AI-powered video games | No specific obligations, though voluntary codes of conduct encouraged. |
The regulation recognises that most AI systems pose minimal or limited risk and imposes the heaviest compliance burdens only where the stakes are highest.
Who Must Comply?
The AI Act defines four categories of operators:
| Role | Definition | Key obligations |
|---|---|---|
| Provider | Develops an AI system or GPAI model and places it on the market or puts it into service. | Primary compliance burden: conformity assessment, technical documentation, EU database registration. |
| Deployer | Uses an AI system under its own authority in a professional context. | Implement human oversight, monitor AI performance, conduct fundamental rights impact assessments for certain high-risk systems. |
| Importer | Brings AI systems from third countries into the EU market. | Verify that the provider has carried out the conformity assessment and drawn up the required documentation; indicate the importer's name and contact details on the system, its packaging, or accompanying documentation. |
| Distributor | Makes AI systems available on the EU market without modifying them. | Verify compliance, co-operate with authorities. |
Territorial scope: The Act applies where the AI system's output is used in the EU, regardless of where the provider is established. A US company whose AI model is deployed by EU customers falls within scope.
Chapter II: Prohibited AI Practices (In Force Since 2 February 2025)
Article 5 of the AI Act bans AI practices that pose an unacceptable risk to fundamental rights. These already carry penalties of up to €35M or 7% of global annual turnover.
The following are prohibited:
- Subliminal manipulation — AI that subliminally influences behaviour in a way that causes harm.
- Exploiting vulnerabilities — AI targeting individuals based on age, disability, or social/economic situation to distort their behaviour.
- Social scoring — Government or private evaluation of persons based on social behaviour, resulting in detrimental treatment.
- Real-time biometric identification in public spaces (law enforcement use, with narrow exceptions).
- Emotion recognition in workplaces and educational institutions.
- Biometric categorisation to infer sensitive attributes (race, political opinion, sexual orientation).
- Untargeted scraping of facial images from the internet or CCTV to build recognition databases.
- Predictive policing based solely on profiling without individual suspicion.
Note for HR and EdTech teams: Emotion recognition systems in workplaces and schools are prohibited. If you use wellness monitoring tools, engagement trackers, or proctoring software with emotion detection, review them immediately.
Chapter III & Annex III: High-Risk AI Systems (Deadline: 2 August 2026)
This is the category that affects most enterprise and B2B deployments. Annex III lists eight sectors where AI systems are presumed high-risk:
- Biometrics — Remote identification, categorisation, emotion recognition (where not prohibited).
- Critical infrastructure — Safety components of energy, water, transport, digital infrastructure.
- Education and vocational training — Determining access, assessing students, evaluating learning outcomes.
- Employment and worker management — Recruitment, selection, performance evaluation, promotion, termination.
- Access to essential private and public services — Credit scoring, insurance risk assessment, emergency services dispatch.
- Law enforcement — Risk assessment for crime, lie detection, evidence assessment.
- Migration, asylum, border control — Examination of applications, risk assessment, document authenticity.
- Administration of justice and democratic processes — AI used in legal proceedings, electoral systems.
What High-Risk Providers Must Do (Before August 2026)
| Requirement | What it means in practice |
|---|---|
| Risk management system (Art. 9) | Continuous risk assessment process throughout the AI lifecycle — identification, analysis, mitigation. |
| Data governance (Art. 10) | Training/validation/test datasets must be relevant, representative, and error-free to the extent possible. |
| Technical documentation (Art. 11 + Annex IV) | Detailed documentation of system design, intended purpose, capabilities, limitations, and performance metrics. |
| Record-keeping / logging (Art. 12) | Automatic event logs enabling traceability and auditability. |
| Transparency to deployers (Art. 13) | Instructions for use explaining capabilities, limitations, and performance characteristics. |
| Human oversight (Art. 14) | Design must allow natural persons to understand, monitor, and — where necessary — override the AI's outputs. |
| Accuracy, robustness, cybersecurity (Art. 15) | Systems must perform consistently; protect against adversarial attacks, data poisoning, and model failures. |
| Conformity assessment (Art. 43) | Self-assessment for most Annex III systems; third-party assessment for biometric identification and certain critical infrastructure uses. |
| EU database registration (Art. 49) | High-risk systems must be registered in the EU AI public database before deployment. |
| Post-market monitoring (Art. 72) | Ongoing monitoring of performance post-deployment; serious incidents must be reported to national authorities. |
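As a rough illustration of the Article 12 record-keeping requirement, the sketch below appends automatic, timestamped event records to a JSON Lines audit file so that individual AI decisions stay traceable. The field names are assumptions chosen for illustration, not terms prescribed by the Act:

```python
import json
import time
import uuid

def log_event(log_file, system_id, event_type, details):
    """Append one traceable, timestamped event record (illustrative Art. 12 sketch)."""
    entry = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,    # which high-risk AI system produced the event
        "event_type": event_type,  # e.g. "inference", "human_override", "incident"
        "details": details,        # inputs, outputs, confidence, operator id, ...
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # append-only JSON Lines log
    return entry["event_id"]
```

In a real system the log would also need integrity protection (e.g. write-once storage) and a retention period aligned with the system's intended purpose.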
Deployer Obligations for High-Risk Systems
Deployers (organisations using high-risk AI provided by a third party) have their own obligations:
- Fundamental Rights Impact Assessment (FRIA) — Required for deployers that are public bodies, private entities providing public services, and deployers of certain Annex III systems such as credit scoring and insurance risk assessment.
- Human oversight measures — Must implement the oversight procedures specified by the provider.
- Staff training — Ensure that staff using high-risk AI systems have the necessary competence.
- Incident monitoring — Monitor the system and report serious incidents to the provider and, where required, to the national authority.
Key point for SaaS buyers: If you purchase a third-party AI tool that falls under Annex III, you are a deployer — and you have legal obligations of your own. Vendor contracts should now include AI Act compliance clauses, including evidence of conformity assessment and access to technical documentation.
Chapter V: General-Purpose AI (GPAI) Models (In Force Since 2 August 2025)
Chapter V introduces a separate regime for large-scale AI models that can be used for many purposes — foundation models like GPT-4, Claude, Gemini, Llama, and Mistral.
Who Is a GPAI Provider?
Any organisation that trains a GPAI model and makes it available — including through an API — is a GPAI provider. The indicative threshold for a model to count as general-purpose is training compute above 10²³ FLOPs (floating-point operations).
Core GPAI Obligations (All Providers)
- Technical documentation — Document model architecture, training data, compute used, and performance benchmarks.
- Copyright compliance — Implement a policy for compliance with EU copyright law, including the text-and-data mining exception under Article 4 of the DSM Directive.
- Transparency to downstream providers — Publish or make available information to help downstream AI developers understand capabilities and limitations.
Additional Obligations: Systemic-Risk GPAI Models
Models trained using ≥10²⁵ FLOPs are presumed to have high-impact capabilities and are subject to systemic-risk obligations (Article 55):
- Model evaluations — Including adversarial testing ("red-teaming") before and after deployment.
- Systemic risk assessment — Assess risks that could have significant impact on the Union market.
- Cybersecurity measures — Protect the model and related infrastructure.
- Incident reporting — Report serious incidents to the EU AI Office.
- Notification — Proactively notify the EU AI Office of systemic-risk models.
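The two compute thresholds above reduce to a simple comparison. A minimal sketch, using the 10²³ FLOPs indicative GPAI criterion and the 10²⁵ FLOPs systemic-risk presumption from the text; the function and tier names are illustrative, not terms from the Act:

```python
GPAI_THRESHOLD_FLOPS = 1e23           # indicative GPAI criterion
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # systemic-risk presumption (Art. 51)

def classify_gpai(training_flops: float) -> str:
    """Bucket a model by training compute (illustrative tier names)."""
    if training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS:
        return "gpai-systemic-risk"   # Article 55 obligations also apply
    if training_flops > GPAI_THRESHOLD_FLOPS:
        return "gpai"                 # core GPAI obligations apply
    return "below-gpai-threshold"
```

Note that the 10²⁵ threshold is a rebuttable presumption: the Commission can also designate models below it as systemic-risk based on capabilities.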
The GPAI Code of Practice, published on 10 July 2025, provides practical guidance on implementing these obligations. Participation is voluntary, but adherence offers signatories a recognised, streamlined way to demonstrate compliance.
Article 50: Transparency Obligations (Limited-Risk AI)
Even if your AI system is not high-risk or prohibited, Article 50 requires disclosure in specific situations:
- Chatbots must inform users they are interacting with an AI system.
- Emotion recognition and biometric categorisation systems must notify the persons affected.
- Deepfakes — AI-generated synthetic audio, video, or images must be labelled with machine-readable watermarks.
- AI-generated text on matters of public interest must be marked as artificially generated.
These obligations apply from 2 August 2026.
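In practice, the machine-readable marking requirement means attaching provenance metadata to generated output. The sketch below shows one minimal, hypothetical shape for such a marker; the field names follow no official schema (open provenance standards such as C2PA are one real-world route):

```python
def mark_ai_generated(payload: dict, generator: str) -> dict:
    """Return a copy of `payload` with an illustrative AI-disclosure marker.

    The `ai_disclosure` key and its fields are assumptions for this sketch,
    not a schema defined by Article 50.
    """
    marked = dict(payload)  # do not mutate the caller's object
    marked["ai_disclosure"] = {
        "artificially_generated": True,
        "generator": generator,
    }
    return marked
```

For media files, the marker would live in the file's metadata or an embedded watermark rather than a JSON field, but the principle is the same: the disclosure must be detectable by software, not just visible to humans.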
Penalty Structure (Article 99 and Article 101)
| Violation | Maximum fine |
|---|---|
| Prohibited AI practices (Art. 5) | €35 million or 7% of global annual turnover |
| Non-compliance with high-risk obligations (Annex III) | €15 million or 3% of global annual turnover |
| Supplying incorrect/misleading information to authorities | €7.5 million or 1% of global annual turnover |
| GPAI providers (Commission enforcement from Aug 2026) | €15 million or 3% of global annual turnover |
| Non-cooperation with EU AI Office (GPAI) | Periodic penalty payments |
Fines apply to the higher of the fixed amount or the turnover percentage. For SMEs and start-ups, national authorities may apply the lower cap.
Context: The €35M / 7% prohibited-AI cap far exceeds the GDPR's highest tier (€20M or 4% of global turnover); the AI Act's top rate is nearly double GDPR's. The EU AI Act is designed to have real financial teeth.
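The cap rule is a simple comparison: the applicable ceiling is whichever of the fixed amount or the turnover percentage is higher, and for SMEs authorities may instead apply whichever is lower. A small sketch (percentage passed as whole points to keep the arithmetic exact):

```python
def fine_cap(fixed_eur: int, pct_points: int, turnover_eur: int,
             sme: bool = False) -> float:
    """Maximum fine under the Art. 99 cap rule (illustrative helper).

    Non-SMEs: the HIGHER of the fixed amount and pct_points% of turnover.
    SMEs/start-ups: authorities may apply the LOWER of the two.
    """
    pct_amount = turnover_eur * pct_points / 100
    return min(fixed_eur, pct_amount) if sme else max(fixed_eur, pct_amount)

# Prohibited-AI violation, €1bn global turnover: the 7% amount (€70M)
# exceeds the €35M fixed cap, so 7% of turnover is the ceiling.
```

The same helper covers the €15M / 3% and €7.5M / 1% tiers by swapping the parameters.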
Enforcement Infrastructure
EU AI Office
The European AI Office serves as the central EU authority with direct enforcement powers over GPAI models. It co-ordinates national enforcement, publishes guidance, and maintains the EU AI public database.
National Competent Authorities
Each Member State must designate at least:
- One market surveillance authority — investigates AI systems on the market.
- One notifying authority — oversees conformity assessment bodies.
National authorities were required to be in place by 2 August 2025. Enforcement powers vary by Member State. Italy's national AI Law (Law No. 132/2025), which entered into force on 10 October 2025, established fines up to €774,685 for national-level violations.
"Digital Omnibus" Extension Proposal
In late 2025, the European Commission proposed a "Digital Omnibus" package that could postpone certain Annex III high-risk obligations until December 2027. As of March 2026, this proposal has not been adopted. Prudent compliance planning should treat 2 August 2026 as the binding deadline.
August 2026 Compliance Checklist
Step 1 — AI System Inventory
- Map every AI system your organisation develops, deploys, imports, or distributes that touches EU users.
- Document system name, vendor, purpose, data inputs/outputs, and user population.
Step 2 — Risk Classification
- For each system, determine its risk tier: prohibited, high-risk (Annex III), limited-risk, or minimal-risk.
- Apply the Article 6 classification rules: Does the system fall under Annex III? Does it pose a significant risk to health, safety, or fundamental rights?
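Steps 1 and 2 can be sketched as a small inventory-and-triage script. The tier names come from this guide; the area matching below is a crude illustration only, since real classification requires legal analysis of Article 6 and Annex III:

```python
# Annex III areas from this guide, abbreviated; purely illustrative labels.
ANNEX_III_AREAS = {
    "biometrics", "critical-infrastructure", "education", "employment",
    "essential-services", "law-enforcement", "migration", "justice",
}

def classify(system: dict) -> str:
    """Assign an illustrative risk tier to one inventory record."""
    if system.get("prohibited_practice"):       # Article 5 comes first
        return "prohibited"
    if system.get("area") in ANNEX_III_AREAS:   # Annex III presumption
        return "high-risk"
    if system.get("interacts_with_humans"):     # Article 50 disclosure duties
        return "limited-risk"
    return "minimal-risk"

# Inventory records with the Step 1 fields (name, vendor, purpose, etc.).
inventory = [
    {"name": "CV screener", "vendor": "Acme", "area": "employment",
     "interacts_with_humans": False, "prohibited_practice": False},
    {"name": "Support chatbot", "vendor": "in-house", "area": None,
     "interacts_with_humans": True, "prohibited_practice": False},
]
```

A real inventory would also capture data inputs/outputs and user population per Step 1, and the Annex III presumption can be rebutted where a system performs only narrow preparatory tasks.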
Step 3 — Address Prohibited AI Immediately
- Shut down or redesign any system that violates Article 5.
- Audit emotion-recognition tools in workplace/education settings.
- Review social-scoring or behavioural-profiling systems.
Step 4 — High-Risk Compliance Programme
- Implement a risk management system (Article 9).
- Conduct data governance review for training/validation data (Article 10).
- Prepare technical documentation per Annex IV (Article 11).
- Implement logging/record-keeping (Article 12).
- Define human oversight procedures (Article 14).
- Complete conformity assessment (Article 43).
- Register in the EU AI database (Article 49).
Step 5 — Deployer Obligations
- For third-party high-risk AI tools: request and review technical documentation and conformity assessment records from providers.
- Update vendor contracts to include AI Act compliance obligations.
- Complete a Fundamental Rights Impact Assessment if required.
- Train relevant staff on AI oversight procedures.
Step 6 — GPAI Obligations (If Applicable)
- If you provide a GPAI model: implement technical documentation, copyright policy, and downstream transparency requirements.
- Assess whether your model exceeds the 10²⁵ FLOP systemic-risk threshold.
- Review and consider signing the GPAI Code of Practice.
Step 7 — Transparency (Article 50)
- Add AI disclosure notices to all chatbot interfaces.
- Implement deepfake watermarking if generating synthetic media.
- Notify users of emotion recognition where not prohibited.
How the EU AI Act Interacts with Other EU Regulations
| Regulation | Relationship |
|---|---|
| GDPR | AI systems processing personal data must meet both AI Act and GDPR requirements. The AI Act's data governance obligations (Art. 10) complement GDPR's data minimisation and accuracy principles. |
| NIS2 | High-risk AI systems in critical infrastructure must meet both AI Act (accuracy, robustness, cybersecurity — Art. 15) and NIS2 risk management measures (Art. 21). |
| DORA | Financial entities deploying AI in ICT systems face overlapping AI Act and DORA obligations. DORA's ICT risk management framework can partially address AI Act operational resilience requirements. |
| Cyber Resilience Act (CRA) | AI products with digital elements placed on the EU market must meet both CRA cybersecurity requirements and AI Act requirements. The CRA's vulnerability reporting and SBOM obligations apply from September 2026. |
How Orbiq Helps with EU AI Act Compliance
EU AI Act compliance requires documentation, evidence collection, vendor assessments, and continuous monitoring across your AI portfolio — exactly the kind of structured compliance workflow that Orbiq's ISMS Software is built for.
- AI system inventory and risk classification — Document and classify every AI system with structured evidence trails using Orbiq's ISMS Software.
- Vendor AI Act questionnaires — Send AI Act compliance questionnaires to third-party AI providers and track responses through Orbiq's Vendor Assurance Platform.
- Evidence collection and audit readiness — Collect technical documentation, conformity assessment records, and training data governance evidence automatically.
- Continuous monitoring — Orbiq's Continuous Monitoring ensures your AI Act compliance posture stays current as systems evolve or new regulations emerge.
Frequently Asked Questions
See the "EU AI Act At a Glance" table at the top of this guide for short answers to the most common questions about the EU AI Act.
Related Articles
- Cyber Resilience Act: Complete Compliance Guide
- DORA Compliance: Complete Guide
- NIS2 Compliance: How to Achieve and Maintain Compliance
- EU Compliance Software: Buyer's Guide
- GDPR Compliance: Complete Guide (Articles 28, 32, 33, 34)
Sources & References
- Regulation (EU) 2024/1689 — Official Journal — Official text of the EU AI Act, 12 July 2024
- EU AI Act Implementation Timeline — artificialintelligenceact.eu — phased enforcement schedule
- Article 5 — Prohibited AI Practices — artificialintelligenceact.eu
- Article 6 — Classification Rules for High-Risk AI Systems — artificialintelligenceact.eu
- Annex III — High-Risk AI System Categories — artificialintelligenceact.eu
- Article 55 — Obligations for Systemic-Risk GPAI Models — artificialintelligenceact.eu
- Article 99 — Penalties — artificialintelligenceact.eu — fine structure
- GPAI Code of Practice — European Commission, published 10 July 2025
- Guidelines for GPAI Model Providers — European Commission
- DLA Piper: Latest Wave of AI Act Obligations Take Effect — August 2025 enforcement milestone
- Orrick: 6 Steps Before 2 August 2026 — compliance preparation guide
- Baker Botts: What Energy Executives Should Know Before August 2026 — March 2026 enforcement guidance
- EU AI Act Penalties — Holistic AI — detailed penalty analysis
- EU AI Office — European Commission official page