The EU AI Act — Regulation (EU) 2024/1689 — is the world's first comprehensive legal framework for artificial intelligence. Adopted by the European Parliament and Council on 13 June 2024 and officially published on 12 July 2024, it establishes harmonised rules for the development, placement on the market, and use of AI systems across the European Union.
As a regulation (not a directive), the AI Act applies directly in all EU member states without requiring national transposition.
What Counts as an AI System?
Article 3 of the AI Act defines an AI system as:
A machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the inputs it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
This is a broad definition that covers:
Large language models (GPT, Claude, Gemini)
Machine learning classifiers and predictive models
Computer vision and image recognition systems
Natural language processing tools
Recommendation engines
Automated decision-making systems
Robotic process automation with adaptive capabilities
Why the EU AI Act Was Needed
| Challenge | How the AI Act Addresses It |
|---|---|
| No legal framework for AI | First comprehensive regulation specifically addressing AI risks |
| Fundamental rights risks | Bans the most harmful AI practices outright |
| Lack of transparency | Requires disclosure when people interact with AI systems |
| Unaccountable decisions | Mandates human oversight for high-risk AI decisions |
| Market fragmentation | Creates a single set of rules across all 27 EU member states |
| Unregulated GPAI models | Establishes obligations for providers of general-purpose AI models |
Who Must Comply?
The AI Act applies to a broad range of actors in the AI value chain:
| Role | Definition | Key Obligations |
|---|---|---|
| Providers | Develop or commission AI systems and place them on the EU market or put them into service | Full compliance with requirements for the applicable risk tier; conformity assessment; registration |
| Deployers | Use AI systems under their authority (not for personal non-professional use) | Human oversight; transparency to affected persons; monitoring; DPIA for high-risk systems |
| Importers | Place AI systems from non-EU providers on the EU market | Verify conformity documentation; ensure traceability and labelling |
| Distributors | Make AI systems available on the EU market without modification | Verify conformity marking; ensure storage and transport do not compromise compliance |
| Authorised representatives | Mandated by non-EU providers to act on their behalf in the EU | Maintain documentation; cooperate with authorities; act as contact point |
Extraterritorial Reach
The AI Act applies to:
Providers placing AI systems on the EU market or putting them into service in the EU — regardless of where they are established
Deployers located within the EU
Providers and deployers located outside the EU if the output of their AI system is used in the EU
Critical point: The AI Act does not only cover AI developers. If your organisation uses any AI tool — from ChatGPT for customer service to an algorithm for candidate screening — you are a deployer with regulatory obligations.
The Risk-Based Approach: Four Tiers
The AI Act classifies AI systems into four risk categories, with obligations increasing proportionally to risk:
| Risk Tier | Description | Examples | Regulatory Approach |
|---|---|---|---|
| Unacceptable risk | Practices that threaten fundamental rights | Social scoring, subliminal manipulation, mass biometric surveillance | Total ban |
| High risk | Systems in sensitive domains affecting health, safety, or fundamental rights | Employment screening, credit scoring, medical diagnostics, law enforcement | Strict obligations before market placement |
| Limited risk | Systems that directly interact with people | Chatbots, deepfake generators, emotion recognition systems | Transparency obligations |
| Minimal risk | Most AI applications | Spam filters, AI content recommendations, video games | No mandatory obligations (voluntary codes of practice) |
Practical note: Most organisations using AI tools fall into the "limited risk" category with transparency obligations. However, if you use AI for decisions that affect people — hiring, credit, access to services, insurance — you are almost certainly in the high-risk category with significantly stricter requirements.
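The tier logic above can be sketched as a first-pass triage helper. The use-case and domain sets below are illustrative stand-ins, not the Act's own lists (those live in Article 5 and Annex III), and any real classification needs legal review:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative, non-exhaustive stand-ins for the categories in the table;
# the authoritative lists are Article 5 and Annex III of the Act.
PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"employment", "credit", "education", "law_enforcement", "medical"}

def classify(use_case: str, domain: str, interacts_with_people: bool) -> RiskTier:
    """First-pass triage only; prohibitions trump everything else."""
    if use_case in PROHIBITED_USES:
        return RiskTier.UNACCEPTABLE
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    if interacts_with_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

The ordering of the checks mirrors the tiers: a prohibited use is banned regardless of domain, and a high-risk domain dominates mere user interaction.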
Prohibited AI Practices
Chapter II of the AI Act defines AI practices that are strictly prohibited because they pose an unacceptable risk to fundamental rights. These bans have been in effect since 2 February 2025.
The Eight Prohibited Practices
| # | Prohibited Practice | What It Means |
|---|---|---|
| 1 | Subliminal manipulation | Using techniques beyond a person's consciousness to materially distort behaviour in a way that causes or is likely to cause significant harm |
| 2 | Exploitation of vulnerabilities | Targeting the vulnerabilities of specific groups (age, disability, social or economic situation) to materially distort behaviour causing significant harm |
| 3 | Social scoring | Evaluating or classifying individuals based on social behaviour or personal characteristics, leading to detrimental or unfavourable treatment in unrelated contexts |
| 4 | Criminal behaviour prediction | Assessing the risk that a person will commit a criminal offence solely based on profiling or personality traits (predictive policing exceptions apply) |
| 5 | Untargeted facial image scraping | Indiscriminate scraping of facial images from the internet or CCTV to build facial recognition databases |
| 6 | Emotion recognition in workplaces and schools | Using AI to infer emotions in employment and educational settings (with narrow exceptions for medical and safety purposes) |
| 7 | Biometric categorisation for sensitive attributes | Using biometric data to infer race, political opinions, trade union membership, religious beliefs, or sexual orientation |
| 8 | Real-time remote biometric identification in public spaces | For law enforcement purposes (with strictly defined, narrow exceptions requiring judicial authorisation) |
Warning: These prohibitions are already enforceable. If your organisation uses any AI system that could fall within these categories — such as an employee emotion recognition tool or a biometric categorisation system — immediate review and remediation is required.
High-Risk AI Systems
Chapter III establishes detailed obligations for high-risk AI systems — those that pose significant risks to health, safety, or fundamental rights.
Domains Covered by High-Risk Classification
| Domain | Examples of High-Risk Systems |
|---|---|
| Biometrics | Remote biometric identification, categorisation of individuals |
| Critical infrastructure | Management of road traffic, water supply, gas, electricity, heating |
| Education | Admission to educational institutions, assessment of students, proctoring |
| Employment | Recruitment and candidate selection, performance monitoring and evaluation, task allocation |
| Access to essential services | Creditworthiness assessment, emergency services dispatch, health insurance risk assessment |
| Law enforcement | Risk assessment for victims, polygraphs, evidence evaluation, profiling |
| Administration of justice | Fact-finding, application of law to specific cases |
| Democratic processes | Systems that may influence election outcomes |
Eight Obligations for High-Risk AI Providers
| # | Obligation | Requirements |
|---|---|---|
| 1 | Risk management system | Identify, analyse, assess, and mitigate risks throughout the AI system lifecycle |
| 2 | Data governance | Ensure training, validation, and testing data is relevant, representative, and, to the best extent possible, free of errors and complete |
| 3 | Technical documentation | Prepare detailed documentation before market placement, covering system design, capabilities, limitations, and intended purpose |
| 4 | Record-keeping (logging) | Implement automatic logging of relevant events to ensure traceability of the system's operation |
| 5 | Transparency to users | Provide clear, comprehensive information to deployers, including capabilities, limitations, and instructions for use |
| 6 | Human oversight | Design the system to enable effective human oversight by qualified persons, including the ability to intervene or halt the system |
| 7 | Accuracy, robustness, and cybersecurity | Ensure appropriate levels of accuracy, robustness against errors and adversarial attacks, and cybersecurity protection |
| 8 | EU database registration | Register the high-risk system in the EU public database (Article 71) before placing it on the market |
Deployer Obligations for High-Risk Systems
If you use a high-risk AI system (as a deployer), your obligations include:
Use the system in accordance with the provider's instructions for use
Ensure human oversight by trained, qualified individuals with authority to intervene
Monitor the system's operation and report serious incidents to the provider and competent authority
Conduct a data protection impact assessment (DPIA) under GDPR where applicable
Inform affected individuals that they are subject to a high-risk AI system's decision
Keep logs generated by the system for the period specified by the provider
Expert tip: Even if you use a third-party SaaS product for candidate screening or credit assessment, you are the deployer with legal obligations. You cannot outsource regulatory responsibility to the AI vendor — the AI Act explicitly places obligations on both providers and deployers.
General-Purpose AI Models (GPAI)
Chapter V addresses general-purpose AI models — foundation models like GPT, Claude, Gemini, Llama, and Mistral that can be adapted for many different tasks.
Obligations for All GPAI Model Providers
| Obligation | Details |
|---|---|
| Technical documentation | Prepare and maintain detailed technical documentation about the model |
| Information to downstream providers | Provide information to providers of AI systems that integrate the GPAI model |
| Copyright compliance | Establish a policy to comply with EU copyright law, including the text and data mining opt-out |
| Training data summary | Publish a sufficiently detailed summary of the content used for training the model |
GPAI Models with Systemic Risk
Models trained with computational resources exceeding 10^25 FLOPs (or designated by the Commission based on other criteria) are classified as having systemic risk and face additional obligations:
| Additional Obligation | Details |
|---|---|
| Model evaluation | Perform standardised evaluations, including adversarial testing |
| Systemic risk assessment | Assess and mitigate reasonably foreseeable systemic risks |
| Incident tracking | Track, document, and report serious incidents to the European AI Office |
| Cybersecurity | Ensure adequate cybersecurity protection for the model and its infrastructure |
Context: As of early 2026, models classified as having systemic risk include the largest frontier models from providers like OpenAI, Google, Anthropic, and Meta. The GPAI model rules took effect on 2 August 2025.
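The 10^25 FLOPs threshold can be sanity-checked with the widely used rule of thumb that dense transformer training costs roughly 6 FLOPs per parameter per training token. This is an engineering approximation, not the Act's own methodology:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    # Rule-of-thumb compute estimate for dense transformer training:
    # roughly 6 FLOPs per parameter per training token.
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS
```

For example, a 70B-parameter model trained on 1.5e13 tokens lands at about 6.3e24 FLOPs under this estimate, just below the threshold, which illustrates why only the largest frontier models are presumed to carry systemic risk.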
Transparency Requirements
Article 50 establishes transparency obligations that apply to certain AI systems regardless of their risk classification:
| System Type | Transparency Obligation |
|---|---|
| AI systems interacting with people | Individuals must be informed that they are interacting with an AI system (unless this is obvious from the circumstances) |
| AI systems generating synthetic content | Outputs must be marked in a machine-readable format as AI-generated |
| Emotion recognition systems | Affected individuals must be informed of the system's operation and the categories of data processed |
| Deepfake systems | Content must be labelled as artificially generated or manipulated |
| AI-generated text published as factual | Must be labelled as AI-generated (with exceptions for editorially reviewed content) |
Practical example: If your organisation uses an AI-powered chatbot for customer support, users must be clearly informed that they are communicating with an automated system. Concealing the AI interaction violates the regulation.
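At its simplest, machine-readable marking means attaching structured metadata to generated output. The JSON envelope below is a hypothetical illustration; production systems would more likely rely on an established provenance standard such as C2PA manifests or watermarking:

```python
import json
from datetime import datetime, timezone

def mark_ai_generated(text: str, generator: str) -> str:
    """Wrap content in a machine-readable AI-generation label.

    Hypothetical envelope for illustration only; the field names are
    not prescribed by the AI Act.
    """
    return json.dumps({
        "content": text,
        "ai_generated": True,
        "generator": generator,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    })
```

Downstream consumers can then check the `ai_generated` flag programmatically rather than relying on a visible notice alone.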
AI Act Timeline: Key Dates
| Date | Milestone | Status |
|---|---|---|
| 13 June 2024 | AI Act adopted by the European Parliament and Council | Complete |
| 12 July 2024 | Published in the Official Journal of the EU | Complete |
| 1 August 2024 | AI Act entered into force | Complete |
| 2 February 2025 | Prohibited AI practices (Chapter II) and AI literacy obligations become enforceable | In effect |
| 2 August 2025 | GPAI model obligations (Chapter V) and governance provisions take effect | In effect |
| 2 February 2026 | Codes of practice for GPAI model providers expected | Upcoming |
| 2 August 2026 | Full application: high-risk AI system obligations (Chapter III), deployer obligations, penalties, and all remaining provisions | Upcoming |
| 2 August 2027 | Extended deadline for high-risk AI systems listed in Annex I (embedded in other EU product legislation) | Planned |
Key Numbers
EUR 35 million — Maximum penalty for prohibited practices
7 % — Penalty as percentage of global turnover for the most serious violations
4 — Risk tiers in the regulatory framework
8 — Prohibited AI practices
10^25 FLOPs — Threshold for GPAI models with systemic risk
Penalties and Enforcement
The AI Act introduces a three-tier penalty structure that exceeds GDPR maximums:
| Violation Type | Maximum Fine | Alternative (% of Turnover) |
|---|---|---|
| Prohibited AI practices | EUR 35 million | 7 % of global annual turnover |
| High-risk system obligations | EUR 15 million | 3 % of global annual turnover |
| Incorrect information to authorities | EUR 7.5 million | 1 % of global annual turnover |
Reduced Penalties for SMEs and Start-ups
For small and medium-sized enterprises and start-ups, the lower of the two thresholds (absolute amount vs percentage of turnover) applies — providing proportionate protection for smaller organisations.
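The cap logic (absolute amount versus percentage of global turnover, with the lower of the two for SMEs and start-ups) can be sketched as follows. The assumption that the higher of the two applies to large undertakings follows the Act's penalty provisions:

```python
# Penalty tiers from the table above: (absolute cap in EUR, percent of
# global annual turnover).
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 7),
    "high_risk_obligation": (15_000_000, 3),
    "incorrect_information": (7_500_000, 1),
}

def max_fine(violation: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Maximum applicable fine for a violation tier.

    Large undertakings face whichever cap is higher; SMEs and start-ups
    whichever is lower.
    """
    absolute_cap, percent = PENALTY_TIERS[violation]
    turnover_cap = annual_turnover_eur * percent / 100
    return min(absolute_cap, turnover_cap) if is_sme else max(absolute_cap, turnover_cap)
```

A large undertaking with EUR 1 billion turnover thus faces up to EUR 70 million for a prohibited practice, while an SME with EUR 10 million turnover faces at most EUR 700,000 for the same tier.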
Enforcement Structure
| Authority | Role |
|---|---|
| European AI Office | Oversees GPAI model compliance; coordinates cross-border enforcement; develops codes of practice |
| National competent authorities | Enforce the AI Act at member state level; conduct market surveillance; handle complaints |
| National market surveillance authorities | Monitor the market for non-compliant AI systems; conduct inspections |
| Notified bodies | Perform third-party conformity assessments for certain high-risk AI systems |
Comparison with GDPR: The AI Act's maximum penalty of EUR 35 million / 7 % of turnover for prohibited practices significantly exceeds GDPR's maximum of EUR 20 million / 4 % of turnover. This signals the EU's intent to treat AI regulation with the highest priority.
How to Achieve Compliance: 5-Step Roadmap
Step 1: Inventory All AI Systems
Identify every AI system your organisation develops, deploys, or procures:
| Question | Why It Matters |
|---|---|
| What does the system do? | Determines risk classification |
| Who are the end users and affected persons? | Determines transparency and human oversight obligations |
| What domains and sectors are involved? | Determines whether the system falls into a high-risk category |
| What data is used for training and operation? | Determines data governance and GDPR obligations |
| Is the system developed in-house or procured? | Determines whether you are a provider, deployer, or both |
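A minimal inventory record might capture the questions above as a data structure. The schema and field names are hypothetical, chosen only to mirror the table:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of an AI system inventory. Field names are illustrative."""
    name: str
    purpose: str                  # what the system does
    affected_persons: list[str]   # end users and people affected
    domain: str                   # e.g. "employment", "credit"
    data_sources: list[str] = field(default_factory=list)
    developed_in_house: bool = False

    @property
    def role(self) -> str:
        # Building the system makes you a provider; merely using it,
        # a deployer (an organisation can be both).
        return "provider" if self.developed_in_house else "deployer"
```

Keeping the inventory as structured records rather than a spreadsheet makes the later steps (classification, documentation, registration) straightforward to automate.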
Step 2: Classify Each System by Risk Tier
For each identified AI system, determine the applicable risk category:
| Classification | Action Required |
|---|---|
| Prohibited | Immediately cease use or development |
| High risk | Apply all Chapter III obligations (risk management, data governance, documentation, logging, transparency, human oversight, accuracy, registration) |
| Limited risk | Meet the transparency obligations (disclosure, content labelling) |
| Minimal risk | No mandatory action; voluntary codes of practice |
Step 3: Prepare Documentation and Conformity Evidence
For high-risk systems, prepare:
Automatic logging records (Article 12) — system event logs, input/output records, human interventions
Instructions for use (Article 13) — deployer guidance on capabilities, limitations, human oversight requirements
EU Declaration of Conformity (Article 47) — formal declaration that the system meets all applicable requirements
Conformity assessment — self-assessment or third-party assessment depending on the system category
For limited-risk systems:
Transparency notices — clear disclosure to users that they are interacting with AI
Content labelling — machine-readable marking of AI-generated content
Step 4: Establish Human Oversight
For high-risk systems, human oversight is a core requirement:
Designate qualified individuals responsible for overseeing each high-risk system
Define intervention procedures — when and how humans can override, pause, or halt the system
Ensure the technical capability to interrupt the system at any time
Document all human interventions and their rationale
Provide training to oversight personnel on the system's capabilities, limitations, and risks
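A human-in-the-loop gate implementing the steps above might look like the sketch below. `HumanOversightGate` and `review_fn` are hypothetical names; the review function stands in for a real workflow (a queue, a UI, a four-eyes check):

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("oversight")

class HumanOversightGate:
    """Sketch of a human-in-the-loop gate for a high-risk decision system."""

    def __init__(self, review_fn: Callable[[dict], bool]):
        self.review_fn = review_fn
        self.halted = False

    def halt(self, reason: str) -> None:
        # The overseer must be able to interrupt the system at any time.
        self.halted = True
        log.info("system halted: %s", reason)

    def decide(self, recommendation: dict) -> dict:
        if self.halted:
            raise RuntimeError("system halted by human overseer")
        approved = self.review_fn(recommendation)
        # Record every intervention so it can be evidenced later.
        log.info("recommendation=%r approved=%s", recommendation, approved)
        return {**recommendation, "human_approved": approved}
```

The key design point is that the AI recommendation never becomes a decision without passing through the gate, and the halt switch works regardless of the model's state.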
Step 5: Register, Monitor, and Adapt
Register high-risk systems in the EU public database (Article 71) before market placement
Establish post-market monitoring to track the system's performance after deployment
Monitor regulatory developments — delegated acts, European AI Office guidelines, and national implementation measures
Update documentation and compliance measures as the regulatory framework evolves
Conduct regular reviews of AI system performance, risk classification, and compliance status
AI Act and GDPR: How They Interact
The AI Act and GDPR are complementary regulations that frequently apply simultaneously:
| Aspect | AI Act | GDPR |
|---|---|---|
| Focus | Regulation of AI systems as products and services | Protection of personal data |
| Scope | Providers and deployers of AI systems | All organisations processing personal data |
| Risk approach | Four-tier risk classification of AI systems | Risk-based approach to data processing |
| When both apply | Whenever an AI system processes personal data (which most do) | Whenever personal data is processed |
| Supervisory authority | National AI competent authorities + European AI Office | National data protection authorities |
Practical Implications
If your AI system processes personal data, both regulations apply simultaneously
You need a lawful basis under GDPR for the personal data processed by the AI system
A Data Protection Impact Assessment (DPIA) under GDPR is typically required for high-risk AI systems
Data subject rights (access, erasure, objection, explanation of automated decisions) must be respected
AI Act transparency requirements complement GDPR's transparency obligations (Articles 13-14)
The AI Act's data governance requirements for training data align with GDPR's data quality and purpose limitation principles
Key insight: For most organisations, AI Act compliance and GDPR compliance must be planned in parallel. The data protection impact assessment, transparency requirements, and human oversight obligations overlap significantly.
AI Act and Other EU Regulations
| Regulation | Relationship with AI Act |
|---|---|
| GDPR | Complementary: AI systems processing personal data must comply with both |
| NIS2 | NIS2 cybersecurity requirements apply to AI system infrastructure; AI Act adds cybersecurity requirements for high-risk systems |
| DORA | DORA applies to financial entities using AI; AI Act adds AI-specific requirements |
| Product safety legislation | AI systems embedded in products (medical devices, vehicles, machinery) must comply with both sector-specific legislation and the AI Act |
| Digital Services Act | Online platforms using AI for content moderation or recommender systems must comply with both DSA and AI Act transparency requirements |
| Copyright Directive (EU) 2019/790 | GPAI model providers must comply with copyright rules, including text and data mining opt-outs |
Frequently Asked Questions
Does the AI Act apply to my organisation?
If you develop, place on the market, or use any AI system in the EU — yes. The regulation applies to providers, deployers, importers, and distributors. Even using third-party AI tools (ChatGPT, Copilot, etc.) makes you a deployer with regulatory obligations.
What if I only use someone else's AI system?
As a deployer, you have obligations: use the system in accordance with the provider's instructions, ensure human oversight for high-risk applications, provide transparency to affected persons, monitor the system's operation, and report serious incidents.
What are the penalties for non-compliance?
Up to EUR 35 million or 7 % of global annual turnover for prohibited practices. EUR 15 million or 3 % for high-risk system violations. EUR 7.5 million or 1 % for providing incorrect information to authorities. Lower thresholds apply to SMEs and start-ups.
When does the AI Act fully apply?
The AI Act entered into force on 1 August 2024 and is being applied in phases. Prohibited practices have been banned since 2 February 2025. GPAI model obligations apply from 2 August 2025. Full application for high-risk systems takes effect on 2 August 2026.
How does the AI Act relate to GDPR?
They are complementary. The AI Act regulates AI systems as products and services. GDPR protects personal data. If an AI system processes personal data — and most do — both regulations apply simultaneously. Plan compliance for both in parallel.
Do I need to register my AI system?
High-risk AI systems must be registered in the EU public database (Article 71) before being placed on the market or put into service. This registration requirement does not apply to limited-risk or minimal-risk systems.
What is a GPAI model and does it affect me?
A General-Purpose AI model (GPAI) is a foundation model like GPT or Claude that can be adapted for many tasks. If you provide a GPAI model, you have specific obligations (technical documentation, copyright compliance, training data summary). If you only deploy a system built on a GPAI model, the provider's obligations apply to the model layer while your deployer obligations apply to your specific use case.
What human oversight is required for high-risk systems?
High-risk systems must be designed to enable effective human oversight. This means: qualified individuals assigned to oversee the system, the ability to understand and interpret system outputs, the capability to override or halt the system, and documentation of all human interventions.
Need support with the EU AI Act? Vision Compliance helps organisations navigate AI regulation — from system inventory and risk classification through documentation, human oversight setup, and ongoing compliance.
Robert Lozo, mag. iur., is a Partner at Vision Compliance specializing in EU regulatory compliance. He advises organizations on GDPR, NIS2, AI Act, and financial regulation, delivering audit-ready documentation and compliance roadmaps across regulated industries.