AI Risk Assessment: The Complete Framework for EU AI Act Compliance (2026)
December 4, 2025
Updated: February 22, 2026
30 min read
AI Regulation
As artificial intelligence becomes embedded in every layer of business operations, from customer service chatbots to credit-scoring algorithms, the question is no longer whether to assess AI risks, but how to do it systematically. The EU AI Act, whose obligations for high-risk systems enter into application on 2 August 2026, makes risk assessment a legal obligation. But beyond compliance, organisations that master AI risk assessment gain a competitive advantage: fewer incidents, faster deployments, and greater stakeholder trust.
This guide provides a complete, practical AI risk assessment framework that satisfies EU AI Act requirements while aligning with international standards like NIST AI RMF and ISO/IEC 42001. Whether you're a compliance officer facing your first AI audit or a CTO deploying your twentieth model, you'll find actionable methodology, scoring matrices, and ready-to-use templates.
Quick Reference

| Item | Details |
| --- | --- |
| Primary regulation | EU AI Act (Regulation 2024/1689) |
| High-risk obligations apply | 2 August 2026 |
| Key frameworks | NIST AI RMF 1.0, ISO/IEC 42001:2023 |
| Risk categories | Unacceptable, High, Limited, Minimal |
| Maximum penalty | EUR 35 million or 7% of global turnover |
| Assessment frequency | Continuous + annual formal review |
| Who must comply | Providers, deployers, importers of AI systems in the EU |
| Fundamental Rights Impact Assessment | Required for high-risk public-sector deployments |
Key Takeaways
The EU AI Act mandates a risk-based approach with four tiers — AI systems classified as high-risk face the most extensive requirements
A proper AI risk assessment framework covers six phases: inventory, identification, analysis, evaluation, treatment, and monitoring
ISO/IEC 42001 provides a certifiable AI management system standard that complements EU AI Act compliance
The NIST AI Risk Management Framework offers a voluntary but widely adopted structure around Govern, Map, Measure, and Manage functions
High-risk AI providers must maintain a continuous risk management system throughout the entire AI lifecycle
Fundamental Rights Impact Assessments (FRIA) are mandatory before deploying high-risk AI in public services
Organisations should combine quantitative scoring matrices with qualitative expert judgment for comprehensive risk evaluation
Start with your highest-risk AI systems first — a phased approach is both practical and defensible to regulators
A comprehensive AI risk assessment addresses every category of risk, not just the technical ones that engineering teams naturally gravitate toward.
The EU AI Act Risk Classification Framework
The EU AI Act uses a risk-based pyramid to determine the level of regulatory obligation. Every AI system deployed in or affecting the EU market must be classified into one of four tiers.
How to Classify Your AI System
Follow this decision tree for each AI system in your inventory:
1. Is the AI practice explicitly prohibited? → UNACCEPTABLE RISK (ban)
2. Does it fall under Annex I (product safety) or Annex III (standalone)? → HIGH RISK
3. Does it interact with people, generate content, or categorise biometrically? → LIMITED RISK
4. None of the above → MINIMAL RISK
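The decision tree above can be sketched as a small function. This is an illustrative simplification, not an official API: each boolean argument stands in for a legal test that in practice requires careful analysis.

```python
def classify_ai_system(prohibited: bool, annex_i: bool, annex_iii: bool,
                       interacts_or_generates: bool) -> str:
    """Sketch of the four-step EU AI Act classification decision tree.
    Each flag is a stand-in for the corresponding legal test."""
    if prohibited:              # Article 5 prohibited practice
        return "UNACCEPTABLE"
    if annex_i or annex_iii:    # product safety component or Annex III domain
        return "HIGH"
    if interacts_or_generates:  # chatbots, generated content, biometric categorisation
        return "LIMITED"
    return "MINIMAL"

# e.g. a CV-screening tool: not prohibited, but Annex III (employment)
print(classify_ai_system(False, False, True, True))  # HIGH
```

Note that the order matters: a prohibited practice is banned outright regardless of any other property, and the Annex checks take precedence over the transparency tier.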
Risk Tier Overview

| Risk Level | Regulatory Treatment | Key Obligation | Timeline |
| --- | --- | --- | --- |
| Unacceptable | Prohibited entirely | Must not deploy or make available | 2 Feb 2025 (in force) |
| High | Strict conformity requirements | Full risk management system, CE marking | 2 Aug 2026 |
| Limited | Transparency obligations | Disclose AI nature to users | 2 Aug 2025 (in force) |
| Minimal | No specific obligations | Voluntary codes of conduct encouraged | N/A |
Important: Classification is based on the AI system's intended purpose, not the underlying technology. The same machine learning model could be minimal-risk in one application and high-risk in another.
Prohibited AI Practices (Unacceptable Risk)
The following AI practices are completely banned in the EU since 2 February 2025. If your AI system falls into any of these categories, no risk mitigation is sufficient — it must be discontinued.
| # | Prohibited Practice | Description | Why Prohibited |
| --- | --- | --- | --- |
| 1 | Social scoring | AI systems evaluating/classifying people based on social behaviour, leading to detrimental treatment | Violates human dignity and non-discrimination |
| 2 | Exploitation of vulnerabilities | AI exploiting age, disability, or social/economic situation to distort behaviour | Targets those least able to protect themselves |
| 3 | Real-time remote biometric identification | Live facial recognition in publicly accessible spaces by law enforcement (narrow, pre-authorised exceptions only) | Mass surveillance risk; chilling effect on privacy and fundamental freedoms |
The Article 6(3) "Opt-Out": When Annex III Systems Are Not High-Risk
Article 6(3) provides an important exception: an AI system listed in Annex III is not considered high-risk if it:
Performs a narrow procedural task
Improves the result of a previously completed human activity
Detects decision-making patterns without replacing human assessment
Performs a preparatory task for assessments listed in Annex III
The provider must document why the exception applies and notify the relevant authority.
Limited and Minimal Risk Categories
Limited Risk: Transparency Obligations
AI systems classified as limited risk must comply with transparency requirements (Article 50):
| System Type | Transparency Requirement | Example |
| --- | --- | --- |
| Chatbots & conversational AI | Inform users they are interacting with AI | "You are chatting with an AI assistant" |
| Emotion recognition | Inform persons being subjected to the system | Notification before emotion analysis in research |
| Biometric categorisation | Inform persons being categorised | Age estimation at retail self-checkout |
| Deepfakes / AI-generated content | Label content as artificially generated or manipulated | Watermarking AI-generated images |
| AI-generated text on matters of public interest | Disclose AI generation unless the text has undergone human editorial review | AI-written news articles must be labelled |
Minimal Risk: Voluntary Compliance
All AI systems not falling into the above categories are minimal risk. While no specific legal obligations apply, the European Commission encourages voluntary:
Codes of conduct
AI ethics guidelines
Transparency measures
Internal governance frameworks
Best practice: Even for minimal-risk AI, maintaining a basic risk register and conducting lightweight assessments protects against regulatory reclassification and builds organisational AI maturity.
General-Purpose AI (GPAI) Model Obligations
The AI Act introduced specific rules for foundation models and general-purpose AI, recognising their unique position in the AI value chain.
All GPAI Providers Must
| Obligation | Description | Key Details |
| --- | --- | --- |
| Technical documentation | Maintain comprehensive model documentation | Training methodology, data sources, capabilities, limitations |
| Downstream information | Provide adequate information to deployers | Enable deployers to meet their own AI Act obligations |
General-purpose AI models with systemic risk, i.e. models trained with cumulative compute exceeding 10^25 FLOP (or designated as such by the Commission), face additional obligations:
| Additional Obligation | Purpose |
| --- | --- |
| Model evaluations including adversarial testing | Identify and mitigate systemic risks |
| Systemic risk assessment and mitigation | Address risks to health, safety, fundamental rights, democracy |
| Serious incident tracking and reporting | Notify the AI Office and national authorities |
| Adequate cybersecurity protections | Prevent model theft, manipulation, misuse |
| Energy consumption reporting | Environmental transparency |
The Code of Practice
GPAI providers can demonstrate compliance through adherence to Codes of Practice developed by the AI Office in consultation with industry. The first General-Purpose AI Code of Practice was published in 2025, covering:
Transparency and copyright compliance
Safety and risk identification
Technical risk mitigation
Internal governance
International Frameworks: NIST AI RMF and ISO 42001
While the EU AI Act is the primary regulatory driver, two international frameworks provide valuable complementary structure for AI risk management.
NIST AI Risk Management Framework (AI RMF 1.0)
Published by the US National Institute of Standards and Technology, the NIST AI RMF is a voluntary framework widely adopted globally. It organises AI risk management around four core functions:
| Function | Purpose | Key Activities |
| --- | --- | --- |
| GOVERN | Cultivate a culture of risk management | Policies, roles, accountability, diverse perspectives, organisational commitment |
| MAP | Contextualise risks in the mission | Intended use definition, stakeholder identification, benefit-cost analysis, interdependencies |
| MEASURE | Analyse and assess risks | Metrics selection, testing, bias evaluation, tracking over time |
| MANAGE | Prioritise and act on risks | Risk treatment, resource allocation, response and recovery planning |
Why it matters for EU compliance: The AI Act's conformity assessment requirements align closely with NIST AI RMF's MEASURE function. Organisations already following NIST have a significant head start on EU compliance.
ISO/IEC 42001:2023 — AI Management System Standard
ISO/IEC 42001 is the world's first certifiable standard for AI management systems. It provides a structured approach to managing AI development and deployment within an organisation.
| ISO 42001 Element | Description | AI Act Alignment |
| --- | --- | --- |
| Context of the organisation | Understanding internal/external AI factors | Supports system inventory and classification |
| Leadership & commitment | Top management responsibility for AI governance | Aligns with AI Act governance requirements |
| Planning | AI risk assessment and treatment planning | Directly supports Article 9 risk management |
| Support | Resources, competence, awareness, communication | Supports human oversight and training requirements |
| Operation | AI system lifecycle management | Aligns with development and monitoring obligations |
| Performance evaluation | Monitoring, measurement, analysis, internal audit | Supports continuous monitoring requirements |
| Improvement | Corrective actions and continuous improvement | Aligns with post-market monitoring obligations |
Key benefit: ISO 42001 certification provides external validation of your AI management practices, which can strengthen regulatory relationships and satisfy customer due diligence requirements.
Framework Comparison
| Aspect | EU AI Act | NIST AI RMF | ISO 42001 |
| --- | --- | --- | --- |
| Nature | Mandatory regulation | Voluntary framework | Certifiable standard |
| Scope | EU market | Global (US-originated) | Global |
| Approach | Risk-based classification | Function-based management | Management system |
| Enforcement | Market surveillance + fines | None (voluntary) | Certification audits |
| Best for | Legal compliance baseline | Practical risk methodology | Organisational maturity |
| Certification | CE marking (high-risk) | Self-declaration | Accredited certification |
Recommendation: Use all three together — the AI Act defines your legal obligations, NIST AI RMF provides practical methodology, and ISO 42001 gives you an auditable management system.
The 6-Step AI Risk Assessment Framework
This section provides a complete, step-by-step methodology for conducting AI risk assessments that satisfy both EU AI Act requirements and international best practices.
Step 1: AI System Inventory and Classification
Objective: Create a comprehensive register of all AI systems and classify each under the AI Act.
What to capture for each system:
| Field | Description | Example |
| --- | --- | --- |
| System ID | Unique identifier | AI-HR-001 |
| System name | Descriptive name | Candidate Screening Engine |
| Business owner | Responsible department/person | HR Director |
| AI technique | ML type, deep learning, rules-based, etc. | Gradient-boosted classifier |
| Data inputs | What data the system processes | CVs, job descriptions, historical hiring data |
| Outputs/decisions | What the system produces | Candidate ranking score (0-100) |
| Affected persons | Who is impacted by outputs | Job applicants |
| Deployment status | Development, testing, production, retired | Production since Q2 2025 |
| Provider | Internal or third-party vendor | VendorCo v3.2 |
| AI Act classification | Risk tier determination | High Risk (Annex III — Employment) |
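One inventory row can be captured as a typed record. This is a sketch; the field and class names are illustrative choices mirroring the table above, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of the AI system inventory (fields mirror the table above)."""
    system_id: str
    name: str
    business_owner: str
    ai_technique: str
    data_inputs: list
    outputs: str
    affected_persons: str
    deployment_status: str
    provider: str
    classification: str = "UNCLASSIFIED"

record = AISystemRecord(
    system_id="AI-HR-001",
    name="Candidate Screening Engine",
    business_owner="HR Director",
    ai_technique="Gradient-boosted classifier",
    data_inputs=["CVs", "job descriptions", "historical hiring data"],
    outputs="Candidate ranking score (0-100)",
    affected_persons="Job applicants",
    deployment_status="Production since Q2 2025",
    provider="VendorCo v3.2",
    classification="High Risk (Annex III - Employment)",
)
print(record.system_id, "->", record.classification)
```

Keeping the register as structured data (rather than a spreadsheet) makes it easy to query, e.g. to list every production system still marked UNCLASSIFIED.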
Classification checklist:
Is the AI practice on the prohibited list (Article 5)? → Stop. Discontinue.
Is the system a safety component of a regulated product (Annex I)? → High Risk
Does the system fall within an Annex III domain? → High Risk (unless Article 6(3) exception applies)
Does the system interact with people or generate/manipulate content? → Limited Risk
None of the above → Minimal Risk
Step 2: Risk Identification
Objective: Systematically identify all potential risks for each AI system.
Use this comprehensive risk taxonomy to ensure nothing is overlooked:
Technical Risks
| Risk | Description | Detection Method |
| --- | --- | --- |
| Model accuracy degradation | Performance drops over time | Continuous accuracy monitoring |
| Model drift | Data distribution shifts from training | Statistical drift detection |
| Adversarial vulnerability | Susceptibility to malicious inputs | Red-team / adversarial testing |
| Data quality issues | Training on incorrect, incomplete, or stale data | Data quality audits |
| Integration failures | Errors in system-to-system connections | Integration testing, monitoring |
| Scalability limits | Performance degradation under load | Load testing, capacity planning |
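Statistical drift detection, listed above, can be as simple as a Population Stability Index (PSI) computed between the training score distribution and live traffic. A minimal pure-Python sketch; the 0.1/0.25 interpretation thresholds are a common industry rule of thumb, not a regulatory requirement.

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population Stability Index between two numeric distributions.
    Rule of thumb (assumption, not from the AI Act): < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 significant drift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # small epsilon avoids log(0) for empty bins
        return [max(c / len(xs), 1e-6) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [i / 100 for i in range(100)]                    # training distribution
live_scores = [min(i / 100 + 0.3, 0.999) for i in range(100)]   # shifted live data
print(f"PSI = {psi(train_scores, live_scores):.3f}")            # large value flags drift
```

In production this check would run on a schedule against each monitored feature and score, feeding the key risk indicator dashboard described in Step 6.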
Bias and Fairness Risks
| Risk | Description | Detection Method |
| --- | --- | --- |
| Historical bias | Training data reflects past discrimination | Demographic analysis of training data |
| Representation bias | Under-representation of population groups | Dataset coverage analysis |
| Measurement bias | Proxy variables correlate with protected characteristics | Feature correlation analysis |
| Aggregation bias | Single model for diverse populations | Subgroup performance comparison |
| Feedback loops | System outputs reinforce existing biases | Longitudinal outcome tracking |
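Subgroup performance comparison can start from something as simple as selection rates per demographic group. A minimal sketch; the four-fifths (0.8) threshold is a US employment-law heuristic used here purely as an illustrative screen, not an AI Act requirement.

```python
def selection_rates(outcomes: dict) -> dict:
    """Positive-outcome rate per demographic group (1 = selected, 0 = not)."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact_ratio(outcomes: dict) -> float:
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 (the 'four-fifths rule', used here only as an
    illustrative screen) suggest the subgroup gap needs review."""
    rates = selection_rates(outcomes).values()
    return min(rates) / max(rates)

# hypothetical screening outcomes per group
outcomes = {"group_a": [1, 1, 0, 1, 0, 1], "group_b": [1, 0, 0, 0, 1, 0]}
ratio = disparate_impact_ratio(outcomes)
print(f"DI ratio = {ratio:.2f} -> {'review' if ratio < 0.8 else 'pass'}")
```

A real bias audit would go further (confidence intervals, multiple fairness metrics, intersectional groups), but even this screen catches gross disparities early.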
Privacy and Data Protection Risks
| Risk | Description | Detection Method |
| --- | --- | --- |
| Unlawful processing | No valid legal basis under GDPR | Legal basis audit per processing activity |
| Data minimisation failure | Collecting more data than necessary | Purpose limitation review |
| Re-identification risk | Anonymised data can be re-linked to individuals | Privacy-enhancing technology assessment |
| Cross-border transfer | Data moves outside the EU without safeguards | Data flow mapping |
| Right to explanation gap | Cannot explain automated decisions to data subjects | Explainability testing |
Human Oversight Risks
| Risk | Description | Detection Method |
| --- | --- | --- |
| Automation bias | Humans over-trust AI outputs | Human-AI interaction studies |
| Deskilling | Human operators lose expertise over time | Competency assessments |
| Override inability | No mechanism to intervene or reverse AI decisions | System architecture review |
| Alert fatigue | Too many alerts lead to ignoring genuine risks | Alert volume and response analysis |
Step 3: Risk Analysis
Objective: Assess the likelihood and impact of each identified risk using a structured scoring methodology. The likelihood and impact scales, and the matrix that combines them, are detailed in the AI Risk Scoring Matrix section below.
Step 4: Risk Evaluation
Objective: Compare risk scores against acceptance criteria and prioritise treatment.
| Risk Level | Score Range | Treatment Required | Approval Authority |
| --- | --- | --- | --- |
| Critical | 20–25 | Immediate action; do not deploy until mitigated | Board / C-Suite |
| High | 12–19 | Significant mitigation required before deployment | AI Governance Committee |
| Medium | 6–11 | Mitigation recommended; accept with documented rationale | Business Unit Head |
| Low | 1–5 | Accept and monitor; implement low-cost controls | System Owner |
Decision framework:
Critical Risk → STOP deployment → Escalate to board → Redesign or discontinue
High Risk → PAUSE deployment → Implement controls → Re-assess before go-live
Medium Risk → PROCEED with caution → Implement reasonable controls → Monitor
Low Risk → PROCEED → Document acceptance → Periodic review
Step 5: Risk Treatment
Objective: Select and implement appropriate controls for each risk requiring mitigation.
| Treatment Strategy | When to Use | Examples |
| --- | --- | --- |
| Avoid | Risk is unacceptable and cannot be adequately mitigated | Discontinue the AI system or use case |
| Mitigate | Risk can be reduced to acceptable levels through controls | Bias correction, human oversight, monitoring |
| Transfer | Risk can be shared with a third party | Insurance, vendor contractual guarantees |
| Accept | Risk is within tolerance after analysis | Document rationale, set monitoring thresholds |
Technical controls catalogue:
| Control | Risk Addressed | Implementation Effort |
| --- | --- | --- |
| Continuous model monitoring | Drift, accuracy degradation | Medium |
| Adversarial robustness testing | Manipulation vulnerability | High |
| Bias testing across protected groups | Discrimination | Medium |
| Explainability tools (SHAP, LIME) | Transparency gaps | Medium |
| Human-in-the-loop decision review | Automation bias, oversight | Low–Medium |
| Data quality pipeline validation | Data integrity | Medium |
| Fallback/graceful degradation | System availability | Medium |
| Input validation and anomaly detection | Adversarial/garbage inputs | Low |
| Access controls and audit logging | Unauthorised use | Low |
| Regular retraining schedule | Model staleness | Medium |
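To illustrate the "input validation and anomaly detection" control, a minimal z-score screen over an incoming numeric feature. The baseline data and the 3-sigma threshold are illustrative assumptions; real systems would validate every feature and tune thresholds per distribution.

```python
import statistics

def zscore_anomalies(baseline: list, new_inputs: list,
                     threshold: float = 3.0) -> list:
    """Flag inputs more than `threshold` standard deviations from the
    baseline mean: a minimal input-validation control sketch."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline) or 1.0  # guard against zero variance
    return [x for x in new_inputs if abs(x - mu) / sigma > threshold]

baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]   # historical feature values
print(zscore_anomalies(baseline, [10.1, 25.0, 9.9]))  # [25.0]
```

Flagged inputs would typically be rejected or routed to human review rather than silently scored, which also mitigates the adversarial-input risk identified in Step 2.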
Step 6: Documentation and Continuous Monitoring
Objective: Create a comprehensive record and establish ongoing risk management.
Documentation requirements (EU AI Act Article 9):
Risk management policy and methodology
Complete AI system inventory with classifications
Risk identification results (all categories)
Risk analysis scoring and rationale
Risk evaluation decisions and acceptance criteria
Risk treatment plans with timelines and owners
Residual risk assessments
Monitoring arrangements and key risk indicators
Review schedule and reassessment triggers
Monitoring cadence:
| Activity | Frequency | Responsibility |
| --- | --- | --- |
| Automated model performance monitoring | Continuous (real-time) | ML Engineering |
| Bias and fairness metric review | Monthly | AI Ethics / Compliance |
| Key risk indicator dashboard review | Weekly | System Owner |
| Formal risk reassessment | Annually (minimum) | AI Governance Committee |
| Post-incident risk review | After any AI incident | Incident Response Team |
| Regulatory change impact assessment | As regulations evolve | Legal / Compliance |
AI Risk Scoring Matrix and Methodology
A structured scoring methodology is essential for consistent, defensible risk evaluation. This section provides a complete AI-specific risk scoring system.
Likelihood Scale
| Score | Level | Description | Criteria |
| --- | --- | --- | --- |
| 1 | Very Low | Rare occurrence | Less than 1% probability within assessment period |
| 2 | Low | Unlikely but possible | 1–10% probability |
| 3 | Medium | Reasonably possible | 10–50% probability |
| 4 | High | Likely to occur | 50–90% probability |
| 5 | Very High | Almost certain | >90% probability |
AI-specific likelihood factors to consider:
Model complexity (more complex = higher failure likelihood)
Data quality and freshness
Deployment environment stability
User sophistication and misuse potential
Historical incident rates for similar systems
Impact Scale
| Score | Level | Financial Impact | Rights Impact | Operational Impact |
| --- | --- | --- | --- | --- |
| 1 | Minimal | Under EUR 10K | Minor inconvenience | No service disruption |
| 2 | Low | EUR 10K–100K | Limited, easily remedied | Brief disruption, quick recovery |
| 3 | Moderate | EUR 100K–1M | Material harm to individuals | Significant service degradation |
| 4 | Severe | EUR 1M–10M | Serious rights violation, regulatory action | Extended outage, major business impact |
| 5 | Critical | >EUR 10M | Fundamental rights violation, physical harm | Existential threat to operations |
Risk Scoring Matrix
Multiply Likelihood × Impact to determine the overall risk score:
| Likelihood ↓ / Impact → | 1 Minimal | 2 Low | 3 Moderate | 4 Severe | 5 Critical |
| --- | --- | --- | --- | --- | --- |
| 5 Very High | 5 Medium | 10 Medium | 15 High | 20 Critical | 25 Critical |
| 4 High | 4 Low | 8 Medium | 12 High | 16 High | 20 Critical |
| 3 Medium | 3 Low | 6 Medium | 9 Medium | 12 High | 15 High |
| 2 Low | 2 Low | 4 Low | 6 Medium | 8 Medium | 10 Medium |
| 1 Very Low | 1 Low | 2 Low | 3 Low | 4 Low | 5 Medium |
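The matrix above can be encoded directly. One subtlety worth preserving: cells scoring 5 that sit on an extreme axis (likelihood 5 or impact 5) are rated Medium rather than Low, so this sketch includes that bump alongside the band thresholds from the risk evaluation table.

```python
def risk_score(likelihood: int, impact: int):
    """Score = Likelihood x Impact (1-5 each), banded per the evaluation
    table; scores of 5 on an extreme axis are bumped to Medium, matching
    the matrix."""
    score = likelihood * impact
    if score >= 20:
        level = "Critical"
    elif score >= 12:
        level = "High"
    elif score >= 6 or 5 in (likelihood, impact):
        level = "Medium"
    else:
        level = "Low"
    return score, level

print(risk_score(4, 4))  # (16, 'High')
```

Encoding the matrix as code keeps scoring consistent across assessors and makes the thresholds themselves auditable artefacts.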
Scoring Example: HR Candidate Screening AI
| Risk | Likelihood | Impact | Score | Level |
| --- | --- | --- | --- | --- |
| Gender bias in CV scoring | 4 (High) | 4 (Severe — discrimination) | 16 | High |
| Model drift from changing job market | 3 (Medium) | 3 (Moderate) | 9 | Medium |
| GDPR non-compliance (profiling) | 2 (Low — policies in place) | 4 (Severe — regulatory penalty) | 8 | Medium |
| System downtime | 2 (Low) | 2 (Low — manual fallback exists) | 4 | Low |
| Adversarial CV manipulation | 1 (Very Low) | 3 (Moderate) | 3 | Low |
Result: The gender bias risk (score 16) must be addressed with significant mitigation before deployment. The system falls under Annex III (Employment) and is therefore high-risk under the AI Act, requiring a full conformity assessment.
Fundamental Rights Impact Assessment (FRIA)
Article 27 of the AI Act requires deployers of high-risk AI systems to conduct a Fundamental Rights Impact Assessment before first use when:
The deployer is a body governed by public law, or
The deployer is a private entity providing public services, or
The system performs credit scoring or insurance pricing
FRIA Structure
| Section | Contents | Key Questions |
| --- | --- | --- |
| 1. Use Description | Intended purpose, scope, context | What will the AI system be used for? For how long? How frequently? |
| 2. Affected Persons | Categories, scale, vulnerability | Who is affected? How many people? Are any groups particularly vulnerable? |
| 3. Fundamental Rights Analysis | Assessment per right | How might each fundamental right be affected? |
| 4. Risk Mitigation | Measures to protect rights | What safeguards will be implemented? |
| 5. Governance | Oversight and accountability | Who is responsible? How will compliance be monitored? |
| 6. Stakeholder Input | Consultation results | Were affected groups consulted? What were their concerns? |
Fundamental Rights to Assess
| Right (EU Charter) | Article | AI-Specific Considerations |
| --- | --- | --- |
| Human dignity | Art. 1 | Automated decision-making that treats people as mere data points |
| Right to life | Art. 2 | AI in healthcare, autonomous vehicles, critical infrastructure |
| Integrity of the person | Art. 3 | AI in medical treatment decisions, genetic analysis |
| Prohibition of torture | Art. 4 | AI in detention or interrogation contexts |
| Right to liberty | Art. 6 | Predictive policing, pre-trial risk assessment |
| Private and family life | Art. 7 | Surveillance, behavioural analysis, profiling |
| Data protection | Art. 8 | All AI processing personal data |
| Non-discrimination | Art. 21 | Any AI system making decisions about individuals |
| Equality (gender) | Art. 23 | AI in hiring, credit, insurance |
| Rights of the child | Art. 24 | AI in education, content moderation, age estimation |
Practical tip: Use the EU Agency for Fundamental Rights (FRA) guidance and the ALTAI (Assessment List for Trustworthy AI) tool to structure your FRIA.
High-Risk AI System Compliance Requirements
For AI systems classified as high-risk, the EU AI Act mandates a comprehensive set of requirements. This section maps each requirement to practical implementation steps.
Article 9: Risk Management System
The risk management system must be continuous, iterative, and documented throughout the AI system's entire lifecycle.
| Requirement | What It Means in Practice |
| --- | --- |
| Identify known and foreseeable risks | Risk identification workshops, threat modelling, literature review |
| Estimate and evaluate risks from intended use | Scenario analysis, user testing, deployment context assessment |

Article 10: Data and Data Governance

| Requirement | What It Means in Practice |
| --- | --- |
| Bias examination | Demographic analysis of datasets, bias detection tools |
| Gap identification | Coverage gap analysis per intended use population |
| Data governance processes | Data lineage tracking, access controls, retention policies |
Article 11: Technical Documentation
| Document | Contents |
| --- | --- |
| General description | Intended purpose, provider details, version history |
| System elements | Architecture, algorithms, data, training methodology |
| Monitoring and control | Performance metrics, logging, human oversight design |
| Risk management | Complete risk assessment results and treatment plans |
| Changes log | All modifications throughout lifecycle |
| Standards compliance | List of harmonised standards applied |
| EU Declaration of Conformity | Formal compliance statement |
Article 14: Human Oversight
High-risk AI systems must be designed to allow effective human oversight:
| Capability | Purpose | Implementation |
| --- | --- | --- |
| Understand AI capacities and limitations | Informed human judgment | Training programme, documentation |
| Recognise automation bias tendency | Prevent over-reliance | Decision prompts, confidence intervals |
| Correctly interpret output | Prevent misapplication | Output explanation, context information |
| Decide not to use or reverse | Maintain human control | Override buttons, undo mechanisms |
| Intervene or interrupt | Emergency stop | Kill switches, circuit breakers |
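The "decide not to use or reverse" and "intervene" capabilities are often implemented as a confidence-gated human-in-the-loop pattern: outputs below a confidence threshold are routed to a reviewer instead of being applied automatically. A minimal sketch; the function names and threshold are illustrative, not prescribed by the Act.

```python
def decide(ai_score: float, threshold: float, human_review):
    """Human-in-the-loop gate (Article 14 sketch): low-confidence AI
    outputs are routed to a human reviewer who can override the model.
    `human_review` is any callable standing in for the review workflow."""
    if ai_score >= threshold:
        return {"decision": "auto-approve", "score": ai_score, "reviewed": False}
    outcome = human_review(ai_score)  # human can reject, approve, or escalate
    return {"decision": outcome, "score": ai_score, "reviewed": True}

# hypothetical reviewer that overrides the model's low-confidence output
result = decide(0.42, threshold=0.9, human_review=lambda s: "reject")
print(result)
```

Logging the `reviewed` flag alongside each decision also produces the oversight evidence that conformity assessments and audits ask for.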
Conformity Assessment
Before placing a high-risk AI system on the market, providers must complete a conformity assessment:
| Step | Description | Who |
| --- | --- | --- |
| 1. Internal compliance check | Verify all Articles 8–15 requirements are met | Provider |
| 2. Quality management system | Establish QMS covering the AI lifecycle | Provider |
| 3. Technical documentation | Prepare complete documentation package | Provider |
| 4. Conformity assessment | Self-assessment OR third-party audit (for biometrics) | Provider / Notified Body |
| 5. EU Declaration of Conformity | Formal compliance declaration | Provider |
| 6. CE marking | Affix CE mark to the system | Provider |
| 7. Registration | Register in the EU AI database | Provider |
AI Risk Assessment Template
Use this ready-to-use template structure to document your AI risk assessment. This template satisfies EU AI Act documentation requirements for high-risk systems.
Section 1: Executive Summary
Assessment Date: ____________________
Assessment Lead: ____________________
Systems Assessed: ____________________
Overall Risk Rating: [ ] Critical [ ] High [ ] Medium [ ] Low
Key Findings: ____________________
Priority Actions: ____________________
Next Review Date: ____________________
Download tip: Adapt this template into your organisation's document management system. Many GRC (Governance, Risk, Compliance) platforms now include AI-specific risk modules.
Common Pitfalls and How to Avoid Them
Based on real-world AI risk assessment projects, here are the most frequent mistakes and their solutions:
| # | Pitfall | Why It Happens | Solution |
| --- | --- | --- | --- |
| 1 | Treating assessment as one-time | Compliance deadline pressure, "check the box" mentality | Build continuous monitoring from day one; schedule quarterly reviews |
| 2 | Under-resourcing risk management | AI risk work is seen as pure cost | Quantify the cost of AI incidents to justify the risk management budget |
| 3 | Documentation that gathers dust | Assessment done for regulators, not operations | Make risk documentation a living operational tool; integrate with CI/CD |
| 4 | Ignoring downstream impacts | Assessing AI in isolation, not the broader decision chain | Map the full decision pipeline: AI output → human review → final decision → impact |
| 5 | No incident response plan | "We haven't had an AI incident yet" | Develop AI-specific incident response procedures before you need them |
Building an AI Risk Management Culture
Sustainable AI risk management goes beyond checklists. It requires embedding risk awareness into how your organisation develops, deploys, and uses AI systems.
Governance Structure
| Role | Responsibility | Reports To |
| --- | --- | --- |
| Board/C-Suite | AI strategy and risk appetite | Shareholders |
| AI Governance Committee | Policy, oversight, escalation decisions | Board |
| AI Ethics Officer | Ethical review, bias assessment, FRIA | Governance Committee |
| AI Risk Manager | Risk assessment coordination, monitoring | Governance Committee |
| System Owners | Day-to-day risk management per system | AI Risk Manager |
| ML Engineers | Technical risk controls, monitoring | System Owners |
| Legal/Compliance | Regulatory mapping, classification guidance | Governance Committee |
Maturity Model
Assess your current AI risk management maturity and target the next level:
| Level | Name | Characteristics | Target Milestone |
| --- | --- | --- | --- |
| 1 | Ad Hoc | No formal AI risk process; reactive response to incidents | Most organisations start here |
| 2 | Developing | Basic inventory exists; risk assessment for new high-risk AI only | |
| 4 | Managed | Quantitative metrics; continuous monitoring; integrated with SDLC | 12–18 months |
| 5 | Optimising | Predictive risk intelligence; automated compliance; industry-leading practice | 18–24 months |
Training and Awareness
| Audience | Training Content | Frequency |
| --- | --- | --- |
| All employees | AI policy awareness, shadow AI risks, reporting procedures | Annual + onboarding |
| AI developers | Secure AI development, bias testing, documentation requirements | Quarterly |
| Business users | AI limitations, automation bias, override procedures | Semi-annual |
| Management | AI risk governance, regulatory updates, escalation protocols | Semi-annual |
| Board members | AI strategic risk, liability exposure, industry benchmarks | Annual |
Frequently Asked Questions
Who is responsible for AI risk assessment — the provider or the deployer?
Both, but with different scopes. Providers (developers/manufacturers) must conduct comprehensive risk assessment during development and maintain the risk management system throughout the lifecycle. Deployers (organisations using the AI) must conduct their own assessment of risks in their specific deployment context, including a Fundamental Rights Impact Assessment where required. If you deploy a vendor's AI system, you cannot simply rely on the vendor's risk assessment — you must evaluate risks in your specific context.
How often should we conduct an AI risk assessment?
The EU AI Act requires continuous risk management, not periodic snapshots. In practice, this means: automated monitoring running continuously, monthly metric reviews, quarterly governance reviews, annual formal reassessment, and immediate reassessment after any significant change (new data, model update, changed context) or incident. A "set and forget" approach is explicitly non-compliant.
What's the difference between a risk assessment and a conformity assessment?
A risk assessment evaluates the risks an AI system poses — it's an analytical exercise. A conformity assessment is the broader compliance process that includes the risk assessment but also covers technical documentation, quality management, testing, and formal declaration of conformity. Think of risk assessment as one step within the larger conformity assessment.
Does ISO 42001 certification guarantee AI Act compliance?
No. ISO 42001 provides an excellent management system framework that supports compliance, but it does not guarantee it. The AI Act has specific requirements (classification, CE marking, registration, FRIA) that go beyond ISO 42001's scope. However, having ISO 42001 certification demonstrates serious commitment to AI governance and provides a strong foundation for meeting AI Act requirements.
How should we handle AI systems developed before the AI Act?
Existing AI systems are not exempt. The AI Act applies to all AI systems placed on the market or put into service in the EU, regardless of when they were developed. For high-risk systems, providers and deployers must ensure compliance by the relevant deadlines. Start with a gap analysis comparing your existing practices against AI Act requirements, then create a remediation plan prioritised by risk level.
What counts as an "AI system" under the AI Act?
The AI Act defines an AI system as a "machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments." This is deliberately broad — if you're unsure whether your system qualifies, conduct a classification assessment to be safe.
How do we assess risks for AI systems we procure from vendors?
Require your AI vendors to provide: complete technical documentation, AI Act risk classification with rationale, bias testing results, performance metrics, data governance documentation, and information about human oversight design. Include contractual clauses for right-to-audit, incident notification, and compliance warranties. Then conduct your own deployment-context risk assessment on top of the vendor's documentation.
What role does GDPR play in AI risk assessment?
GDPR and the AI Act are complementary, not overlapping. GDPR governs personal data processing (including by AI), while the AI Act governs AI system risks more broadly. For AI systems processing personal data, you must comply with both regulations. This typically means your AI risk assessment should include a Data Protection Impact Assessment (DPIA) under GDPR Article 35 alongside the AI Act requirements. The two assessments can be conducted in parallel and documented together.
AI risk assessment is where legal obligation meets genuine business value. Organisations that build robust assessment capabilities today will deploy AI faster, face fewer incidents, and build the stakeholder trust that separates responsible AI leaders from the rest.
Not sure where to start? Vision Compliance helps organisations across the EU navigate AI risk assessment, from initial inventory and classification through full conformity assessment for high-risk systems. Our team brings expertise spanning AI governance, data protection, and cybersecurity — exactly the cross-functional perspective that effective AI risk management demands.
This guide reflects the EU AI Act as of February 2026, including all provisions that have entered into application. For organisation-specific compliance advice, consult with qualified legal and technical professionals.
Robert Lozo, mag. iur., is a Partner at Vision Compliance specializing in EU regulatory compliance. He advises organizations on GDPR, NIS2, AI Act, and financial regulation, delivering audit-ready documentation and compliance roadmaps across regulated industries.