EU AI ACT ENFORCEMENT READY

EU AI Act Compliance Services

Avoid fines of up to €35M. Complete conformity assessment and deploy high-risk AI systems with regulatory confidence before August 2027 enforcement.

⚠️ €35M Maximum Fine (7% of Global Turnover)
📅 August 2027 Enforcement Deadline
🇪🇺 Extraterritorial Reach (EU + Global)

EU AI Act Risk Classification

The EU AI Act mandates different compliance requirements based on AI risk level. Enterprise boards must understand which category their systems fall under.

Prohibited AI Systems

€35M or 7% Global Turnover

AI systems that pose unacceptable risk are banned entirely. No grace period, no conformity assessment—immediate prohibition.

  • Social scoring by governments
  • Real-time remote biometric identification in publicly accessible spaces (narrow exceptions)
  • Emotion recognition in workplace/education
  • Subliminal manipulation causing harm

High-Risk AI Systems

€15M or 3% Global Turnover

AI systems that significantly impact health, safety, or fundamental rights. Require conformity assessment before market placement.

  • Credit scoring and creditworthiness
  • HR recruitment and employee evaluation
  • Insurance underwriting and pricing
  • Critical infrastructure management
  • Biometric identification and categorization
  • Law enforcement and migration

Limited-Risk AI Systems

€7.5M or 1.5% Global Turnover

AI systems with transparency obligations. Users must be informed they're interacting with AI.

  • Chatbots and conversational AI
  • Emotion recognition systems
  • Biometric categorization
  • Deepfake generation

Minimal-Risk AI Systems

No Specific Requirements

AI systems posing minimal risk. No mandatory obligations apply, but voluntary codes of conduct are encouraged.

  • AI-enabled video games
  • Spam filters
  • Inventory management systems
  • Recommendation engines (non-manipulative)
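To make the four tiers concrete, here is a deliberately simplified sketch (not legal advice; the use-case names and the mapping are illustrative, and real classification requires case-by-case analysis against Annex III):

```python
# Illustrative mapping of example use cases to the four EU AI Act risk
# tiers described above. Simplified for demonstration only: actual
# classification depends on context and Annex III analysis.
RISK_TIERS = {
    "social_scoring": "prohibited",
    "credit_scoring": "high",
    "hr_recruitment": "high",
    "insurance_underwriting": "high",
    "chatbot": "limited",
    "deepfake_generation": "limited",
    "spam_filter": "minimal",
    "video_game_ai": "minimal",
}

def risk_tier(use_case: str) -> str:
    # Unknown use cases are flagged for assessment, never given a
    # default tier: misclassification is an enforcement failure mode.
    return RISK_TIERS.get(use_case, "requires_assessment")
```

Note the fallback: a system that does not match a known category should trigger a classification exercise, not silently default to "minimal".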

Provider vs Deployer Obligations

The EU AI Act is role-based, not organization-based. Compliance obligations differ significantly depending on whether you are a Provider (developer/manufacturer), Deployer (user/operator), or both. Misclassification of role is a primary enforcement failure mode.

PROVIDER

Provider Obligations

Organizations that develop, manufacture, or substantially modify AI systems for placing on the market or putting into service under their own name or trademark.

  • Technical documentation file (Article 11, Annex IV)
  • Risk management system (Article 9)
  • Data governance framework (Article 10)
  • Quality management system (Article 17)
  • Conformity assessment (Article 43)
  • CE marking and declaration of conformity (Articles 47-48)
  • EU database registration (Article 71)
  • Post-market monitoring system (Article 72)
  • Serious incident reporting (Article 73)
DEPLOYER

Deployer Obligations

Organizations that use AI systems under their authority, except for private non-professional activity. Deployers have distinct obligations even when using third-party AI.

  • Use AI per instructions of use (Article 26(1))
  • Ensure technical compatibility (Article 26(2))
  • Human oversight measures (Article 26(3))
  • Input data monitoring (Article 26(5))
  • Fundamental Rights Impact Assessment (FRIA) (Article 27)
  • Automatically generated logs retention (Article 26(7))
  • Serious incident reporting to provider (Article 26(8))
  • Transparency obligations to affected persons (Article 26(9))

Shared Responsibilities (Articles 23-29)

Many organizations act as both providers AND deployers. For example, a bank that builds its own credit scoring AI is a provider; when it deploys that AI, it's also a deployer. Organizations using third-party AI models but substantially modifying them may become providers. We help you map your actual roles across your AI portfolio and implement appropriate controls for each obligation.

Critical: Importers, distributors, and authorized representatives also have specific obligations under Articles 23-25. We assess your full supply chain role to ensure complete compliance coverage.
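The role-based model above can be expressed as a small decision sketch. The predicate names here are ours, and a real role mapping also covers importers, distributors, and authorized representatives, which this simplification omits:

```python
def determine_roles(develops_or_brands: bool,
                    substantially_modifies: bool,
                    uses_under_own_authority: bool) -> set:
    """Simplified provider/deployer role mapping; illustrative only."""
    roles = set()
    # Developing an AI system, placing it on the market under your own
    # name or trademark, or substantially modifying a third-party system
    # makes an organization a provider.
    if develops_or_brands or substantially_modifies:
        roles.add("provider")
    # Using an AI system under your own authority makes you a deployer
    # (the private non-professional-use exception is not modeled here).
    if uses_under_own_authority:
        roles.add("deployer")
    return roles
```

The bank example above corresponds to `determine_roles(True, False, True)`: it yields both roles, so both sets of obligations apply.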

EU AI Act Enforcement Timeline

Regulatory deadlines you cannot miss—plan backwards from August 2027

| Date        | Requirement                                             | Who's Affected                                         | Status    |
|-------------|---------------------------------------------------------|--------------------------------------------------------|-----------|
| Feb 2, 2025 | Prohibited AI systems banned                            | All organizations using prohibited AI                  | ACTIVE    |
| Aug 2, 2025 | General-purpose AI model requirements                   | Providers of GPAI models (e.g., LLM providers)         | ACTIVE    |
| Aug 2, 2026 | Limited-risk AI transparency obligations                | Chatbots, deepfakes, emotion recognition               | 7 MONTHS  |
| Aug 2, 2027 | High-risk AI conformity assessment                      | Credit scoring, HR, insurance, critical infrastructure | 19 MONTHS |
| Aug 2, 2030 | High-risk AI in existing products (grandfathering ends) | Legacy high-risk AI systems deployed pre-regulation    | 55 MONTHS |
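The countdowns in the timeline above can be reproduced with simple date arithmetic. This sketch assumes the milestone dates listed and counts whole calendar months remaining:

```python
from datetime import date

# EU AI Act milestone dates from the timeline above.
MILESTONES = {
    "prohibited_ai_ban": date(2025, 2, 2),
    "gpai_obligations": date(2025, 8, 2),
    "transparency_obligations": date(2026, 8, 2),
    "high_risk_conformity": date(2027, 8, 2),
    "grandfathering_ends": date(2030, 8, 2),
}

def months_remaining(deadline: date, today: date) -> int:
    """Whole calendar months from today until deadline (0 once passed)."""
    months = (deadline.year - today.year) * 12 + (deadline.month - today.month)
    if today.day > deadline.day:
        months -= 1  # a partial month does not count as a full month
    return max(months, 0)
```

For example, counting from January 2, 2026, this yields 19 months to the August 2, 2027 conformity deadline, matching the status column above.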

EU AI Act Compliance Services

End-to-end compliance from risk classification through conformity assessment and post-market monitoring

Compliance Readiness

£12,000
4-6 weeks | Risk classification & gap analysis
  • AI system inventory across organization
  • Risk classification per EU AI Act Annex III
  • Provider vs deployer role mapping
  • Gap analysis against Articles 9-15
  • Extraterritorial scope assessment
  • Conformity pathway determination
  • Board-ready regulatory roadmap
  • Budget and timeline estimation

Post-Market Assurance

£18K/year
Annual retainer | Quarterly reviews
  • Post-market monitoring system (Article 72)
  • Named accountable owner designation
  • Serious incident reporting (Article 73)
  • Incident escalation threshold definition
  • Suspension/withdrawal decision authority
  • Technical documentation updates
  • Regulatory change monitoring
  • Quarterly compliance health checks
  • Annual conformity maintenance review
  • National competent authority liaison
  • Market surveillance engagement support

Fundamental Rights Impact Assessment (FRIA)

Mandatory for public sector deployers and high-risk AI affecting fundamental rights

Article 27 Requirement

FRIA Is Not Optional

Article 27 mandates that deployers of high-risk AI systems in the public sector (and certain private sector contexts) must conduct a Fundamental Rights Impact Assessment before putting the AI system into use. FRIA is politically sensitive and enforcement-critical. This is separate from—but integrated with—the risk management system required under Article 9.

When FRIA Is Mandatory

All public authorities or bodies deploying high-risk AI, and private entities deploying high-risk AI that affects fundamental rights (discrimination, privacy, freedom of expression, personal data, children's rights).

What FRIA Must Cover

Impact on fundamental rights, affected groups, duration and frequency of use, connection with other systems, complementary safeguards, and procedures for affected persons to lodge complaints.

How We Operationalize FRIA

We integrate FRIA with Article 9 risk management to avoid duplication. We map fundamental rights obligations, assess impacts across protected characteristics, and establish consultation mechanisms with affected stakeholders.

FRIA + Risk Management Integration

FRIA findings feed into risk treatment decisions. High fundamental rights risks may require additional human oversight, transparency measures, or even AI system modification before deployment.

High-Risk AI Conformity Assessment Process

What's required to comply with Article 43 before August 2027

1. Technical Documentation

Comprehensive technical file demonstrating compliance with all requirements (Articles 9-15, Annex IV).

2. Quality Management System

Quality management system ensuring consistent compliance throughout AI lifecycle (Article 17).

3. Conformity Assessment

Internal control (Annex VI) or third-party notified body assessment (Annex VII), depending on the system type under Article 43.

4. CE Marking

Affix the CE marking and draw up the EU declaration of conformity once the assessment is passed.

5. EU Database Registration

Register high-risk AI system in EU database before market placement (Article 71).

6. Post-Market Monitoring

Ongoing monitoring with named accountable owners, serious incident reporting with defined escalation thresholds, and technical documentation updates (Article 72). Includes engagement with national competent authorities and market surveillance bodies across Member States.

Frequently Asked Questions

Does the EU AI Act apply to us if we're based outside the EU?

Yes, if: (1) Your AI systems are placed on the EU market, (2) Your AI outputs are used in the EU, or (3) You're an EU-based user of AI systems. The EU AI Act has extraterritorial reach similar to GDPR. If you serve EU customers, have EU operations, or your AI affects EU persons, you're likely in scope—regardless of where your headquarters are located.

How do we know if our AI is "high-risk" under the EU AI Act?

High-risk AI is defined in Annex III of the regulation. Key categories include: biometric identification, critical infrastructure, education/employment, law enforcement, migration/border control, administration of justice, and democratic processes. Additionally, AI used as safety components in products (medical devices, vehicles, machinery) regulated under existing EU legislation is automatically high-risk. Our Compliance Readiness assessment (£12K) provides definitive risk classification with legal justification.

What's the difference between provider and deployer obligations?

Providers (developers/manufacturers) have obligations around technical documentation, risk management, conformity assessment, and CE marking (Articles 9-17). Deployers (users/operators) have obligations around proper use, human oversight, input data monitoring, and FRIA for public sector use (Articles 26-27). Many organizations act as both—for example, a bank that builds its own credit AI is a provider; when it uses that AI, it's a deployer. Misclassifying your role is a common enforcement failure.

Is Fundamental Rights Impact Assessment (FRIA) mandatory for us?

FRIA is mandatory under Article 27 for: (1) All public authorities or bodies deploying high-risk AI, and (2) Private entities deploying high-risk AI that affects fundamental rights (discrimination, privacy, freedom of expression). FRIA is separate from—but integrated with—risk management under Article 9. We assess whether FRIA applies to your specific AI use cases and operationalize the assessment process.

What's the difference between EU AI Act compliance and ISO 42001 certification?

EU AI Act is mandatory legal compliance for high-risk AI with specific conformity assessment requirements enforced by national authorities. ISO 42001 is a voluntary international standard for AI management systems. While ISO 42001 can help build governance foundations that support EU AI Act compliance, it doesn't substitute for conformity assessment. Many organizations pursue both: ISO 42001 for governance framework, EU AI Act compliance for legal obligation.

What are the actual penalties for non-compliance?

Fines are tiered by violation severity and capped at the greater of a fixed amount or a percentage of worldwide annual turnover: (1) €35M or 7% for prohibited AI violations, (2) €15M or 3% for high-risk AI non-compliance (failure to meet conformity requirements), (3) €7.5M or 1.5% for other violations, including transparency failures. National competent authorities and market surveillance bodies across Member States can also impose injunctions, product withdrawals, market bans, and temporary prohibitions. These are administrative fines; civil liability, private damages, and reputational harm are additional risks beyond regulatory penalties.
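As a back-of-envelope illustration of the "whichever is higher" cap (a sketch, not legal advice: actual fines are set by authorities, may be far lower, and SMEs are subject to different caps):

```python
# Maximum administrative fine caps per violation tier:
# (fixed cap in EUR, percent of worldwide annual turnover).
# The applicable cap is whichever of the two is HIGHER.
FINE_TIERS = {
    "prohibited": (35_000_000, 7.0),
    "high_risk": (15_000_000, 3.0),
    "other": (7_500_000, 1.5),
}

def max_fine_cap(tier: str, global_turnover_eur: float) -> float:
    """Upper bound on the administrative fine for a given tier."""
    fixed_cap, pct = FINE_TIERS[tier]
    return max(fixed_cap, global_turnover_eur * pct / 100)
```

For a company with €1B global turnover, a prohibited-AI violation is capped at €70M, since 7% of turnover exceeds the €35M floor.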

Do we need a notified body for conformity assessment?

It depends on your AI system's classification. Biometric systems under Annex III point 1 generally require third-party notified body assessment unless harmonised standards are fully applied; most other Annex III high-risk systems can use internal conformity assessment, provided you have a robust quality management system. Annex VI sets out the internal-control procedure; Annex VII sets out the notified-body procedure. Our Compliance Readiness assessment identifies which conformity pathway applies to your specific AI systems.

Who engages with national competent authorities if enforcement action happens?

Engagement with national competent authorities and market surveillance bodies is typically led by designated accountable owners within your organization—usually a Chief Compliance Officer, Legal Head, or designated AI Governance Lead. We support this engagement by preparing evidence packages, coordinating responses, and liaising with authorities across Member States. Post-market monitoring obligations under Article 72 include specific requirements for authority notification and cooperation.

Don't Wait Until Enforcement Deadline

Start your EU AI Act compliance journey today. High-risk conformity assessment typically takes 4-6 months; the closer you get to August 2027, the more expensive and rushed implementation becomes.

Request Compliance Roadmap

Integrate with ISO 42001 certification and our complete AI Governance services for comprehensive regulatory readiness.