Trusted AI Governance
Avoid €35M fines. Achieve conformity assessment. Deploy high-risk AI systems with regulatory confidence before August 2027 enforcement.
The EU AI Act mandates different compliance requirements based on AI risk level. Enterprise boards must understand which category their systems fall under.
Unacceptable risk: AI systems that pose unacceptable risk are banned entirely. No grace period, no conformity assessment pathway; the prohibition is immediate.
High risk: AI systems that significantly impact health, safety, or fundamental rights. These require conformity assessment before market placement.
Limited risk: AI systems subject to transparency obligations. Users must be informed they're interacting with AI.
Minimal risk: AI systems posing minimal risk. No mandatory obligations, but voluntary codes of conduct are encouraged.
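To make the tiers concrete for an internal AI inventory, here is a minimal sketch in Python of how the four categories might be recorded. The tier values, the example systems, and the inventory structure are illustrative assumptions, not classifications from the Act itself; each real system's tier must come from legal review.

```python
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers described above."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment before market placement"
    LIMITED = "transparency obligations"
    MINIMAL = "no mandatory obligations; voluntary codes encouraged"

# Hypothetical inventory entries: the tier for each real system must
# come from legal review, not automated guessing.
ai_inventory = {
    "credit-scoring-model": RiskTier.HIGH,
    "customer-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```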
The EU AI Act is role-based, not organization-based. Compliance obligations differ significantly depending on whether you are a Provider (developer/manufacturer), Deployer (user/operator), or both. Misclassification of role is a primary enforcement failure mode.
Organizations that develop, manufacture, or substantially modify AI systems for placing on the market or putting into service under their own name or trademark.
Organizations that use AI systems under their authority, except for private non-professional activity. Deployers have distinct obligations even when using third-party AI.
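Because the same organization can hold both roles, a compliance register should record roles per AI system and derive the applicable obligations from them. A minimal sketch follows; the obligation summaries paraphrase the Articles cited on this page, and the helper itself is an illustrative assumption.

```python
PROVIDER_OBLIGATIONS = [
    "risk management system (Article 9)",
    "technical documentation (Annex IV)",
    "quality management system (Article 17)",
    "conformity assessment, CE marking, EU database registration",
]
DEPLOYER_OBLIGATIONS = [
    "use in accordance with instructions, human oversight (Article 26)",
    "input data monitoring (Article 26)",
    "FRIA where applicable (Article 27)",
]

def applicable_obligations(is_provider: bool, is_deployer: bool) -> list[str]:
    """Roles stack: an organization holding both roles owes both sets."""
    obligations: list[str] = []
    if is_provider:
        obligations += PROVIDER_OBLIGATIONS
    if is_deployer:
        obligations += DEPLOYER_OBLIGATIONS
    return obligations

# The bank example from the FAQ below: builds and uses its own credit AI.
for item in applicable_obligations(is_provider=True, is_deployer=True):
    print(item)
```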
Regulatory deadlines you cannot miss—plan backwards from August 2027
End-to-end compliance from risk classification through conformity assessment and post-market monitoring
Mandatory for public sector deployers and for private deployers of high-risk AI affecting fundamental rights
Article 27 mandates that deployers of high-risk AI systems in the public sector (and certain private sector contexts) must conduct a Fundamental Rights Impact Assessment before putting the AI system into use. FRIA is politically sensitive and enforcement-critical. This is separate from—but integrated with—the risk management system required under Article 9.
All public authorities or bodies deploying high-risk AI, and private entities deploying high-risk AI that affects fundamental rights (discrimination, privacy, freedom of expression, personal data, children's rights).
Impact on fundamental rights, affected groups, duration and frequency of use, connection with other systems, complementary safeguards, and procedures for affected persons to lodge complaints.
We integrate FRIA with Article 9 risk management to avoid duplication. We map fundamental rights obligations, assess impacts across protected characteristics, and establish consultation mechanisms with affected stakeholders.
FRIA findings feed into risk treatment decisions. High fundamental rights risks may require additional human oversight, transparency measures, or even AI system modification before deployment.
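One way to operationalize the assessment elements listed above is a FRIA record that cannot be signed off until every element has a documented finding. A sketch, assuming field names and structure of our own choosing:

```python
from dataclasses import dataclass, field

# Assessment elements drawn from the FRIA scope described above.
FRIA_ELEMENTS = [
    "impact on fundamental rights",
    "affected groups",
    "duration and frequency of use",
    "connection with other systems",
    "complementary safeguards",
    "complaint procedures for affected persons",
]

@dataclass
class FriaRecord:
    system_name: str
    findings: dict[str, str] = field(default_factory=dict)

    def missing_elements(self) -> list[str]:
        return [e for e in FRIA_ELEMENTS if not self.findings.get(e)]

    def is_complete(self) -> bool:
        """Sign-off is only possible when every element has a finding."""
        return not self.missing_elements()

record = FriaRecord("benefits-eligibility-scoring")
record.findings["affected groups"] = "benefit applicants, including minors"
print(record.is_complete())       # False
print(record.missing_elements())  # the five elements still undocumented
```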
What's required to comply with Article 43 before August 2027
Comprehensive technical file demonstrating compliance with all requirements (Articles 9-15, Annex IV).
Quality management system ensuring consistent compliance throughout AI lifecycle (Article 17).
Internal control (Annex VI) or third-party notified body assessment (Annex VII), depending on your system's classification under Article 43.
Affix CE marking and draw up EU declaration of conformity once assessment successfully passed.
Register the high-risk AI system in the EU database before market placement (Article 49; database established under Article 71).
Ongoing monitoring with named accountable owners, serious incident reporting with defined escalation thresholds, and technical documentation updates (Article 72). Includes engagement with national competent authorities and market surveillance bodies across Member States.
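For teams tracking these six steps across an AI portfolio, a minimal checklist sketch follows. The step labels paraphrase this page, and treating the steps as strictly sequential is a simplifying assumption of the sketch.

```python
CONFORMITY_STEPS = [
    "technical documentation (Annex IV)",
    "quality management system (Article 17)",
    "conformity assessment (internal control or notified body)",
    "CE marking and EU declaration of conformity",
    "EU database registration",
    "post-market monitoring (Article 72)",
]

def next_step(completed: set[str]) -> str | None:
    """Return the first step not yet completed, or None when all are done."""
    for step in CONFORMITY_STEPS:
        if step not in completed:
            return step
    return None

done = set(CONFORMITY_STEPS[:2])  # documentation and QMS in place
print("Next milestone:", next_step(done))
```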
Yes, if: (1) Your AI systems are placed on the EU market, (2) Your AI outputs are used in the EU, or (3) You're an EU-based user of AI systems. The EU AI Act has extraterritorial reach similar to GDPR. If you serve EU customers, have EU operations, or your AI affects EU persons, you're likely in scope—regardless of where your headquarters are located.
High-risk AI is defined in Annex III of the regulation. Key categories include: biometric identification, critical infrastructure, education/employment, law enforcement, migration/border control, administration of justice, and democratic processes. Additionally, AI used as safety components in products (medical devices, vehicles, machinery) regulated under existing EU legislation is automatically high-risk. Our Compliance Readiness assessment (£12K) provides definitive risk classification with legal justification.
Providers (developers/manufacturers) have obligations around technical documentation, risk management, conformity assessment, and CE marking (Articles 9-17). Deployers (users/operators) have obligations around proper use, human oversight, input data monitoring, and FRIA for public sector use (Articles 26-27). Many organizations act as both—for example, a bank that builds its own credit AI is a provider; when it uses that AI, it's a deployer. Misclassifying your role is a common enforcement failure.
FRIA is mandatory under Article 27 for: (1) All public authorities or bodies deploying high-risk AI, and (2) Private entities deploying high-risk AI that affects fundamental rights (discrimination, privacy, freedom of expression). FRIA is separate from—but integrated with—risk management under Article 9. We assess whether FRIA applies to your specific AI use cases and operationalize the assessment process.
EU AI Act is mandatory legal compliance for high-risk AI with specific conformity assessment requirements enforced by national authorities. ISO 42001 is a voluntary international standard for AI management systems. While ISO 42001 can help build governance foundations that support EU AI Act compliance, it doesn't substitute for conformity assessment. Many organizations pursue both: ISO 42001 for governance framework, EU AI Act compliance for legal obligation.
Fines are tiered by violation severity: (1) €35M or 7% of global annual turnover for prohibited AI violations, (2) €15M or 3% for most other non-compliance, including failure to meet high-risk conformity requirements and transparency obligations, (3) €7.5M or 1% for supplying incorrect, incomplete, or misleading information to authorities. In each case the cap is the higher of the two figures (for SMEs, the lower). National competent authorities and market surveillance bodies across Member States can also impose injunctions, product withdrawals, market bans, and temporary prohibitions. These are administrative fines; civil liability, private damages, and reputational harm are additional risks beyond regulatory penalties.
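To illustrate the arithmetic, the sketch below computes worst-case exposure per tier from a company's turnover. The tier keys and function name are our own shorthand, not terminology from the Act.

```python
# Caps per tier as described above: (fixed amount in EUR, share of
# global annual turnover).
FINE_TIERS = {
    "prohibited_ai": (35_000_000, 0.07),
    "other_noncompliance": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_exposure(tier: str, turnover_eur: float, sme: bool = False) -> float:
    """Higher of the two caps applies; for SMEs, the lower."""
    fixed, pct = FINE_TIERS[tier]
    bound = min if sme else max
    return bound(fixed, pct * turnover_eur)

# A firm with EUR 2B global turnover facing a prohibited-AI violation:
print(f"EUR {max_exposure('prohibited_ai', 2_000_000_000):,.0f}")  # EUR 140,000,000
```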
It depends on your AI system's classification. Remote biometric systems (Annex III point 1) require third-party notified body assessment unless harmonised standards have been applied in full, in which case the provider may choose internal control instead. Most other Annex III systems (points 2-8) follow the internal control procedure, provided you have a robust quality management system. Annex VI sets out the internal control procedure; Annex VII sets out the notified body procedure based on assessment of your quality management system and technical documentation. Our Compliance Readiness assessment identifies which conformity pathway applies to your specific AI systems.
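The routing just described can be sketched as a simple decision helper. This is an illustration of Article 43's structure under our simplified reading, not a substitute for the legal classification work:

```python
def conformity_pathway(annex_iii_point: int, standards_applied: bool) -> str:
    """Illustrative routing for Annex III systems, per the description above."""
    if annex_iii_point == 1:  # remote biometric systems
        if standards_applied:
            return "provider's choice: Annex VI (internal) or Annex VII (notified body)"
        return "Annex VII: notified body assessment required"
    return "Annex VI: internal control"  # points 2-8

print(conformity_pathway(1, standards_applied=False))
print(conformity_pathway(4, standards_applied=True))
```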
Engagement with national competent authorities and market surveillance bodies is typically led by designated accountable owners within your organization—usually a Chief Compliance Officer, Legal Head, or designated AI Governance Lead. We support this engagement by preparing evidence packages, coordinating responses, and liaising with authorities across Member States. Post-market monitoring obligations under Article 72 include specific requirements for authority notification and cooperation.
Start your EU AI Act compliance journey today. High-risk conformity assessment takes 4-6 months—the closer you get to August 2027, the more expensive and rushed implementation becomes.
Request Compliance Roadmap
Integrate with ISO 42001 certification and our complete AI Governance services for comprehensive regulatory readiness.