EU AI Act Compliance for UK Businesses

The practical guide to what applies, when, and what you need to do before 2 August 2026

The EU AI Act is not a future concern. Prohibitions have been enforceable since February 2025. The full weight of high-risk AI obligations applies from 2 August 2026. Fines reach up to €35 million or 7% of global turnover. This guide covers what you need to know, written for CEOs and heads of legal who want answers, not a regulatory summary.


The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It entered into force on 1 August 2024 and applies on a phased schedule, with the majority of obligations, including full compliance requirements for high-risk AI systems, taking effect on 2 August 2026. The Act applies to any business that develops, deploys or uses AI systems affecting individuals in the European Union, regardless of where that business is based. UK businesses are not exempt by virtue of Brexit. Fines for non-compliance reach up to €35 million or 7% of worldwide annual turnover. Guidance by Rory O'Keeffe, RMOK Legal - SCL-accredited IT Lawyer, AI Committee member, and author of AI Advantage (2025).

The world's first comprehensive AI law - and what it actually does

WHAT IS THE EU AI ACT

The EU Artificial Intelligence Act was formally adopted by the European Parliament in March 2024 after three years of negotiation. It entered into force on 1 August 2024 and is the first legislative framework of its kind anywhere in the world. Unlike sector-specific guidance or voluntary codes, it is a binding regulation with direct legal force across all 27 EU member states.

Its architects drew heavily from the GDPR playbook. Like GDPR, it is extraterritorial in scope — meaning it reaches beyond EU borders to capture any business whose AI systems affect people in the EU, regardless of where the provider or deployer is established. Like GDPR, it applies penalties severe enough to make compliance a board-level issue rather than a compliance team checkbox. And like GDPR, it is already shaping commercial contracts, procurement decisions and due diligence processes across every sector.

Unlike GDPR, which applies a broadly uniform set of rules to personal data processing, the AI Act takes a risk-based approach. The obligations you face under the Act depend entirely on how your AI system is classified - and that classification is the first question every business needs to answer.

“The AI Act is not a regulation on the horizon. Prohibited practices have been enforceable since February 2025. General-purpose AI obligations have applied since August 2025. August 2026 is not the beginning - it is the culmination.”

— Rory O'Keeffe, RMOK Legal, April 2026

Why Brexit does not exempt UK businesses from the EU AI Act

DOES IT APPLY TO YOU

The instinctive reaction from many UK businesses has been to treat the EU AI Act as someone else's problem. It is not.

The Act's extraterritorial scope is explicit. It applies to any provider placing AI systems on the EU market or putting them into service in the EU - regardless of where that provider is established. It applies to any deployer using AI systems within the EU under their authority. And it applies to providers and deployers outside the EU if the outputs of their AI systems are used within the EU. That final provision covers a significant proportion of UK businesses whose customers, employees or counterparties are located in European Union member states.

The practical test is straightforward: if your business develops or sells AI systems to EU customers, deploys AI that affects individuals in the EU, or uses AI tools whose outputs influence decisions affecting EU residents, the Act is likely to apply to some or all of your operations. Post-Brexit status provides no carve-out.

 

UK vs EU

The UK has taken a deliberately different approach to AI regulation. Rather than a single horizontal framework, the UK relies on five cross-sectoral principles applied by existing regulators within their domains: the ICO for data protection, the FCA for financial services, the CMA for competition, Ofcom for communications, and the MHRA for healthcare and medical devices. A dedicated UK AI bill was anticipated in 2025 but did not materialise. As of April 2026, no single UK AI law exists.

 

The practical implication: for UK-domestic operations, you are navigating sector-specific regulatory guidance rather than a unified compliance framework. For EU-facing operations — including any AI that affects EU individuals — you must comply with the EU AI Act in full. This divergence creates a dual-track compliance obligation for most UK businesses with European exposure, and it is not going away.

How the AI Act classifies AI systems, and what each tier means for your business

THE FOUR RISK TIERS

Everything under the AI Act flows from risk classification. The Act divides AI systems into four categories. Your obligations - from prohibition to comprehensive compliance to lighter-touch transparency - are determined entirely by which category your AI system falls into. Getting the classification wrong has consequences: misclassifying a high-risk system as minimal risk is itself a compliance failure, carrying penalties of up to €7.5 million or 1% of global turnover.

UNACCEPTABLE - prohibited outright

These systems cannot be placed on the EU market or used in the EU under any circumstances. The prohibition has been in force since 2 February 2025.

Common examples: social scoring systems; AI exploiting vulnerabilities of specific groups; real-time remote biometric identification in public spaces (with limited exceptions); AI inferring emotions in workplaces and educational settings.

Maximum penalty: up to €35m or 7% of global turnover.

HIGH RISK - the most demanding compliance obligations

Subject to the full set of obligations for systems listed in Annex III, with compliance required by 2 August 2026 for standalone systems. Covers a broad range of use cases across eight domains.

Common examples: CV screening and recruitment AI; credit scoring; fraud detection; worker performance monitoring; AI used in education, healthcare, critical infrastructure, law enforcement, migration and border management.

Maximum penalty: up to €15m or 3% of global turnover.

LIMITED RISK - transparency obligations

Lighter transparency obligations apply: businesses must ensure individuals know they are interacting with, or are affected by, an AI system. Applicable from 2 August 2026.

Common examples: chatbots; AI-generated content systems; deepfake tools; recommendation engines that interact directly with users.

Maximum penalty: up to €7.5m or 1% of global turnover for providing incorrect or misleading information to regulators.

MINIMAL RISK - no specific obligations

No specific AI Act obligations apply. The vast majority of AI applications currently in use fall into this category. Voluntary codes of conduct are encouraged.

Common examples: AI in video games; spam filters; basic process automation; non-personalised content recommendations.

Maximum penalty: none; voluntary compliance encouraged.
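The "fixed amount or percentage of turnover, whichever is higher" structure of the penalty tiers is easy to misread. As a purely illustrative sketch of the arithmetic (the tier figures are those quoted in this guide; the `max_fine` function and its simplified SME rule are my assumptions, not legal advice):

```python
# Illustrative penalty-tier arithmetic, based on the figures quoted in this
# guide. For large undertakings the HIGHER of the two figures is the cap;
# for SMEs and startups the LOWER figure applies. Simplified, not legal advice.

TIERS = {
    "prohibited_practice": (35_000_000, 0.07),  # up to EUR 35m or 7%
    "high_risk_breach":    (15_000_000, 0.03),  # up to EUR 15m or 3%
    "misleading_info":     (7_500_000,  0.01),  # up to EUR 7.5m or 1%
}

def max_fine(violation: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the theoretical maximum fine for a given violation tier."""
    fixed, pct = TIERS[violation]
    turnover_based = worldwide_turnover_eur * pct
    # Large undertakings: whichever is higher; SMEs/startups: whichever is lower.
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

# A business with EUR 2bn worldwide turnover deploying a prohibited system:
print(max_fine("prohibited_practice", 2_000_000_000))  # 140000000.0 (7% > EUR 35m)
```

The point of the worked example: for any sizeable business, the percentage cap, not the fixed amount, is the operative ceiling.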

The eight high-risk domains under Annex III

If your AI system falls into one of the following eight domains, it is classified as high-risk by default under Annex III of the Act. The classification is objective - it does not depend on your assessment of the risk, and a provider who believes their system is not high-risk must document that assessment before placing it on the market.

1. Biometrics

Remote biometric identification systems; biometric categorisation; emotion recognition

2. Critical infrastructure

AI as safety components in road traffic, water, gas, heating, electricity and critical digital infrastructure

3. Education and vocational training

AI determining access to educational institutions; AI monitoring students during examinations

4. Employment and workforce management

CV screening; targeted job advertising; candidate evaluation; worker performance monitoring; task allocation based on behaviour or personal traits

5. Essential private and public services

Credit scoring; life and health insurance risk assessment; eligibility assessment for benefits and healthcare services

6. Law enforcement

AI tools used to assess risk of individuals becoming crime victims; polygraphs; forensic analysis; crime prediction

7. Migration, asylum and border control

AI assessing risks posed by individuals crossing EU borders; verifying documents; assessing asylum applications

8. Administration of justice and democracy

AI used by courts to research facts or laws; AI influencing elections; AI used to interpret and apply law to specific facts

PRACTICAL NOTE

These categories affect a far wider range of businesses than most CEOs and heads of legal initially assume. If your business uses AI for any aspect of hiring, performance management, customer credit assessment, fraud detection or content moderation affecting EU individuals, you are almost certainly deploying a high-risk system. The fact that the AI is embedded in third-party software you have procured does not automatically transfer your compliance obligations to the vendor. Deployers have independent obligations under the Act.

THE TIMELINE

What is already live, what arrives in August 2026, and what comes after

One of the most persistent misconceptions about the EU AI Act is that it is a future obligation. It is not. Several provisions are already in force. The question is not whether the Act applies to your business, but which obligations are currently active and which arrive in August 2026.

1 Aug 2024 - Act enters into force

The clock starts. All subsequent deadlines are calculated from this date. The Act has legal force from this point.

2 Feb 2025 - Prohibitions live; AI literacy required

Eight categories of AI practice are banned outright in the EU. Any business using prohibited AI systems in EU-facing operations is already in violation. AI literacy requirements under Article 4 are also in effect: organisations must ensure staff working with AI understand its capabilities, limitations and risks.

2 Aug 2025 - GPAI obligations; governance structures

Providers of general-purpose AI models (LLMs such as GPT-class systems) must comply with transparency, documentation, copyright compliance and safety obligations. The EU AI Office is fully operational, and national market surveillance authorities are designated across member states.

2 Aug 2026 - High-risk obligations; full enforcement

The major deadline. Full compliance obligations apply for high-risk AI systems under Annex III. This includes conformity assessments, technical documentation, risk management systems, data governance, human oversight mechanisms, incident reporting, and registration in the EU high-risk AI database. Transparency obligations under Article 50 also take effect. National enforcement authorities have full investigatory and penalty powers from this date.

2 Aug 2027 - Embedded high-risk AI

High-risk AI systems embedded in regulated products (medical devices, machinery, vehicles) benefit from an extended transition period. Full compliance is required by this date.

ON THE DIGITAL OMNIBUS

In November 2025, the European Commission proposed targeted amendments to the AI Act under the ‘Digital Omnibus’ package. The key proposal for businesses: a conditional deferral mechanism for high-risk Annex III obligations, linked to the availability of harmonised compliance standards. Under the proposal, Annex III obligations would not apply until six months after the Commission confirms that adequate compliance tools are available, with a backstop deadline of 2 December 2027. As of April 2026, this proposal has not been adopted into law. The European Parliament and Council are still negotiating. Compliance planning must treat 2 August 2026 as the operative deadline. Reliance on a potential extension that has not been legislated is not a defensible compliance position.

WHAT COMPLIANCE REQUIRES

The specific obligations for providers and deployers of high-risk AI systems

The obligations under the Act differ based on your role in the AI value chain. The Act identifies four roles: providers (those who develop AI systems), deployers (those who use AI systems in a professional capacity), importers, and distributors. Providers bear the most extensive obligations. Deployers have fewer but still material obligations, including obligations that cannot be contracted away to your vendor.

Provider obligations

  • Quality management system covering design, development, testing and monitoring
  • Comprehensive technical documentation covering system design, data governance, risk assessment and performance benchmarks
  • Risk management system, maintained throughout the lifecycle of the AI system
  • Data governance - training, validation and testing datasets must be relevant, representative and free of errors to the extent possible
  • Automatic record-keeping (logging) throughout the operational life of the system
  • Transparency obligations - instructions for deployers explaining capabilities, limitations and conditions of use
  • Human oversight mechanisms - systems must be designed so deployers can effectively monitor and override outputs
  • Conformity assessment before placing the system on the market
  • CE marking and EU declaration of conformity
  • Registration in the EU high-risk AI database
  • Post-market monitoring system and incident reporting
  • Notification of serious incidents to national authorities without undue delay, within the deadlines set by the Act

Deployer obligations

  • Use AI systems in accordance with the provider’s instructions for use
  • Assign human oversight to named individuals with the authority, competence and resources to intervene
  • Monitor the AI system for performance in the context of its intended use
  • Inform affected individuals that they are subject to a high-risk AI system
  • Conduct a fundamental rights impact assessment before deploying AI in certain public sector contexts
  • Maintain logs automatically generated by the system
  • Report serious incidents to the provider and, in certain cases, to national authorities
  • Carry out the same obligations as providers if deploying under own name/trademark, substantially modifying the system, or changing its intended purpose
  • Ensure vendor contracts include AI Act compliance obligations, audit rights, and notification provisions for system modifications
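Because the Act phases in over four years, a recurring governance question is simply "which obligations are live today?". A minimal sketch of that date check, using the milestone dates quoted in this guide (the labels are my simplified summaries, not the Act's wording):

```python
# A date-check sketch of the Act's phased timeline, using the dates quoted
# in this guide. Labels are simplified summaries for illustration only.
from datetime import date

MILESTONES = [
    (date(2025, 2, 2), "prohibitions + AI literacy (Art. 4)"),
    (date(2025, 8, 2), "GPAI model obligations"),
    (date(2026, 8, 2), "high-risk (Annex III) full compliance"),
    (date(2027, 8, 2), "high-risk AI embedded in regulated products"),
]

def active_obligations(on: date) -> list[str]:
    """Return the milestone obligations already in force on a given date."""
    return [label for deadline, label in MILESTONES if on >= deadline]

print(active_obligations(date(2026, 4, 1)))
# ['prohibitions + AI literacy (Art. 4)', 'GPAI model obligations']
```

As of April 2026 the first two milestones are live, which is exactly the point this guide makes: the Act is already partially in force.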

WHAT TO DO NOW

A practical checklist for CEOs and Heads of Legal - ordered by priority

With four months to the August 2026 deadline, most businesses are in one of three positions: they have not started; they have completed an inventory but taken no further action; or they are in the process of building compliance frameworks and have not yet finished. Whichever position you are in, the sequence below applies.

Step 1 - Inventory your AI systems

List every AI tool your business uses or provides, including third-party software with AI features. Include tools used in HR, recruitment, customer service, fraud detection, compliance, risk assessment, content moderation and operational decision-making. Do not limit the inventory to tools you have procured specifically as AI - many AI systems are embedded in SaaS platforms and ERP systems where AI features have been added to existing tools. The inventory must cover EU-facing operations specifically.

Step 2 - Classify each system against the four risk tiers

Map each AI system in your inventory against the Annex III use cases set out above in this guide. Where a system falls into a listed domain, it is high-risk by default unless you can document that it does not pose a significant risk of harm to health, safety or fundamental rights. That documentation must be completed before the system continues to be used or placed on the EU market. Where classification is unclear, get legal advice — misclassification is a compliance failure in itself.
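To illustrate the shape of the Step 2 exercise, here is a minimal triage sketch. The domain keywords and the keyword-matching logic are simplified assumptions of mine, not the Act's wording; real classification requires reading Annex III in full and taking legal advice.

```python
# Illustrative Annex III triage for Step 2. Keywords and domain names are
# simplified assumptions for demonstration; they are NOT the Act's wording
# and this is not a substitute for legal classification.

ANNEX_III_DOMAINS = {
    "biometrics": ["biometric identification", "emotion recognition"],
    "employment": ["cv screening", "candidate evaluation", "worker monitoring"],
    "essential services": ["credit scoring", "insurance risk assessment"],
    "education": ["exam monitoring", "admissions"],
}

def triage(use_case: str) -> str:
    """Flag a described use case as presumptively high-risk or needing manual review."""
    text = use_case.lower()
    for domain, keywords in ANNEX_III_DOMAINS.items():
        if any(k in text for k in keywords):
            return f"high-risk by default (Annex III: {domain}) - document or comply"
    return "not matched - assess manually against all eight Annex III domains"

print(triage("Third-party CV screening module in our ATS"))
```

Note the default in the unmatched branch: the safe outcome of an inconclusive triage is manual assessment, never an assumption of minimal risk.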

Step 3 - Check your vendor contracts

For every AI system you deploy from a third-party vendor, your vendor agreement must include provisions that enable your compliance as a deployer. Specifically: the vendor must be able to confirm the system’s risk classification under the Act; you need access to the technical documentation required to satisfy your own oversight obligations; the agreement must include notification provisions requiring the vendor to tell you about material changes to the system; and you need audit rights sufficient to demonstrate compliance to regulators. Most standard AI vendor agreements do not contain these provisions. Every AI procurement agreement should be reviewed against the Act before August 2026.

Step 4 - Implement high-risk AI obligations where applicable

For systems you have classified as high-risk, the compliance programme covers: a documented risk management system; data governance records meeting the Act’s standards; human oversight mechanisms with named individuals and defined authority; automatic logging and record-keeping; post-market monitoring; incident reporting protocols; and - for providers - conformity assessments and EU database registration. The documentation work alone takes months. If you have not started, start this week.

Step 5 - Appoint a governance lead

Compliance with the AI Act requires someone with the authority, competence and organisational access to oversee it. In a business with a general counsel or head of legal, that function typically sits with or is led by the legal team, in partnership with compliance, IT and the relevant business units. The oversight function needs to be named, resourced, and operational before August 2026. It also needs to remain operational after August 2026 - AI Act compliance is an ongoing obligation, not a one-time project.

Step 6 - Train your people on AI literacy

AI literacy requirements under Article 4 have been in force since February 2025. Organisations must ensure that staff who work with AI systems understand those systems to the degree necessary to discharge their professional responsibilities. This is not a general digital skills requirement. It is a targeted obligation tied to the specific AI systems in use and the roles of the people using them. The training must be proportionate to the role and documented.

AI CONTRACTS

What the EU AI Act means for your commercial agreements

The EU AI Act does not exist in isolation from your commercial contracts. It creates new obligations that flow through procurement agreements, technology services contracts, SaaS subscription agreements, and AI-specific commercial relationships. Getting your contractual framework right is as important as getting your internal governance framework right.

What must be in an AI vendor agreement

A compliant AI vendor agreement in the EU AI Act era needs to address provisions that standard commercial technology contracts do not routinely contain. At minimum:

•     Risk classification confirmation - the vendor must confirm the AI system’s classification under the Act and warrant that the classification is accurate

•     Technical documentation access - as a deployer you need access to sufficient technical documentation to satisfy your oversight obligations

•     Compliance warranties - the vendor must warrant compliance with all applicable AI Act obligations as a provider

•     Change notification - the vendor must notify you of any material change to the AI system that could affect its risk classification or your compliance position

•     Incident notification - the vendor must notify you of serious incidents within timelines that allow you to discharge your own reporting obligations

•     Audit rights - you need the right to audit the vendor’s compliance with the Act as it relates to systems deployed in your business

•     Liability allocation for AI errors and hallucinations - the Act does not assign liability directly, but the allocation of risk between provider and deployer must be clear

•     AI Act compliance as a termination trigger - if the vendor falls out of compliance with the Act, you need a right to terminate

What must be in your customer-facing AI terms

If your business provides services with AI components to customers, your customer agreements need to address the Act’s transparency obligations, the scope of your role as provider or deployer, the limitations and conditions of use of the AI system, the data governance arrangements, and the mechanism for customers to exercise rights under the Act including the right to human oversight and the right to explanation of AI-driven decisions.

INTERSECTIONS

How the EU AI Act sits alongside GDPR, DORA and UK regulation

EU AI Act and GDPR

Most high-risk AI systems process personal data. The AI Act and GDPR are complementary rather than duplicative, but they impose different requirements on different aspects of the same AI system. GDPR governs the processing of personal data used to train, validate and operate the AI. The AI Act governs the design, documentation, oversight and deployment of the AI system itself. Data governance frameworks built for GDPR provide a foundation for the AI Act’s data documentation requirements but they are not sufficient on their own. The AI Act requires additional records specifically covering dataset representativeness, bias mitigation, and the use of data in conformity assessments.

EU AI Act and DORA

For financial services businesses, DORA (the Digital Operational Resilience Act) and the EU AI Act create overlapping obligations around technology resilience, incident reporting, and third-party risk management. AI systems embedded in financial services operations may be subject to both frameworks simultaneously. DORA’s ICT risk management requirements and the AI Act’s risk management system obligations should be designed as integrated frameworks where possible, with shared documentation, governance structures and audit trails. Treating them as separate projects is inefficient and creates gaps.

EU AI Act and UK regulatory framework

The UK’s principles-based, sector-led approach to AI governance does not require the same structured compliance programme as the EU AI Act. However, building an AI governance framework calibrated to the EU AI Act substantially satisfies the UK principles of safety, transparency, fairness, accountability and contestability simultaneously. For most businesses with any EU exposure, the EU AI Act provides the operative compliance framework. UK-specific sector guidance from the ICO, FCA, CMA and others should be applied as an additional layer within that framework.

The EU AI Act does not operate in isolation. It intersects with data protection law, sector-specific regulation, and the emerging UK regulatory framework in ways that create both compliance complexity and, for businesses with mature existing programmes, a genuine head start.

Common questions about EU AI Act compliance for UK businesses

Does the EU AI Act apply to UK businesses after Brexit?

Yes. The EU AI Act applies to any organisation that places AI systems on the EU market or puts AI systems into service within the EU, regardless of where that organisation is established. Post-Brexit status does not create an exemption. If your AI systems are used by, or produce outputs that affect, individuals in EU member states, the Act is likely to apply to those operations. The Act’s extraterritorial scope is modelled closely on GDPR’s approach to non-EU data controllers, which UK businesses are already familiar with. The practical test is EU market impact, not corporate domicile.

When do the obligations take effect?

The key date is 2 August 2026, when full compliance obligations for high-risk AI systems under Annex III take effect. However, the Act is already partially in force. Eight categories of prohibited AI practices have been banned since 2 February 2025. AI literacy requirements under Article 4 have also applied since February 2025. General-purpose AI model obligations have applied since 2 August 2025. The 2 August 2026 deadline is not the beginning of compliance - it is the culmination of a phased implementation that started in 2024. For high-risk AI systems embedded in regulated products, an extended deadline of 2 August 2027 applies.

Is our recruitment or HR AI high-risk?

Almost certainly, if it is used in EU-facing recruitment operations. Employment and workforce management is one of the eight high-risk domains listed in Annex III of the Act. AI systems used to recruit or select individuals, place targeted job advertisements, analyse and filter job applications, or evaluate candidates are classified as high-risk by default. The same applies to AI used to monitor and evaluate the performance and behaviour of workers, or to make or assist decisions affecting terms and conditions of employment. If you are using AI tools for any of these purposes in your EU-facing operations, high-risk compliance obligations apply. This is one of the areas where many businesses significantly underestimate their exposure.

If we use a third-party AI tool, is the vendor responsible for compliance?

Partly. Providers bear the most extensive obligations under the Act — conformity assessments, technical documentation, CE marking, database registration. As a deployer using a third-party system, you have independent obligations that cannot be contracted away to your vendor. These include: using the system in accordance with the provider’s instructions; assigning human oversight; monitoring the system’s performance; informing individuals affected by the system; reporting serious incidents; and maintaining logs. You are also responsible for ensuring your vendor has met their provider obligations - deploying a non-compliant AI system exposes you as well as the provider.

What is a general-purpose AI model, and do the GPAI rules affect us?

A general-purpose AI model (GPAI model) is a model trained on large amounts of data that can perform a wide range of tasks, including but not limited to large language models such as GPT-class systems. GPAI obligations have applied since 2 August 2025. All GPAI providers must provide technical documentation and instructions for use, comply with the Copyright Directive, and publish a summary of training content. Providers of GPAI models with systemic risk - those with very high capability or wide usage - face additional obligations including adversarial testing, incident reporting, and cooperation with the EU AI Office. Most businesses using GPAI models as deployers rather than providers will primarily be concerned with the provider’s compliance and with ensuring their vendor agreements address this.

What are the penalties for non-compliance?

The EU AI Act’s penalty structure is tiered by severity of violation. Placing a prohibited AI system on the EU market: up to €35 million or 7% of worldwide annual turnover, whichever is higher. Non-compliance with high-risk AI system obligations: up to €15 million or 3% of worldwide annual turnover. Providing incorrect or misleading information to regulators: up to €7.5 million or 1% of worldwide annual turnover. For SMEs and startups, the lower of the two figures in each tier applies. The top tier exceeds the GDPR maximum fines. Penalties are applied by national market surveillance authorities, with the EU AI Office having supervisory and enforcement authority over GPAI model providers.

What is the difference between a provider and a deployer?

A provider is any organisation that develops an AI system, or has one developed on its behalf, and places it on the EU market or puts it into service in the EU under its own name or trademark. A deployer is any organisation that uses an AI system under its authority in a professional context - that is, any business using AI tools in its operations. Most businesses will be deployers in relation to third-party AI tools they procure, and may simultaneously be providers in relation to AI systems they develop or customise and offer to customers. Deployers become subject to provider obligations if they use the system under their own name or trademark, substantially modify the system, or change its intended purpose beyond the scope of the original conformity assessment.

HOW RMOK LEGAL CAN HELP

AI governance and EU AI Act compliance advice - practical, commercial, board-ready

RMOK Legal provides EU AI Act compliance advice to businesses across the UK and internationally. Every engagement is led personally by Rory O'Keeffe, a member of the AI Committee of the Society for Computers and Law, accredited as a Leading IT Lawyer under the SCL’s programme, and the author of AI Advantage: Thriving Within Civilisation’s Next Big Disruption (2025). This is not a service that was assembled because a client asked for it. It is where 20 years of commercial and technology law experience converges with a genuine specialism built before AI governance became fashionable.

Review and redline of AI vendor agreements, procurement contracts and customer-facing AI terms against EU AI Act requirements. Clear risk identification, specific redline provisions, and negotiation support.

Review of tech-enabled commercial contracts for AI-specific provisions and protections.

Design and implementation of an AI governance framework tailored to your regulatory environment, AI use case profile and organisation size. Policies, oversight mechanisms, accountability structures, and audit trail documentation.