Your AI Outsourcing Contract Is Obsolete. What CEOs and Heads of Legal Need to Fix Before August 2026
Here is a thought experiment for a Sunday evening. Imagine hiring a contractor to renovate your kitchen. You agree a price, a timeline, and a set of plans. The contractor then brings in a subcontractor you have never met, who uses your family photographs to train a facial recognition system, quietly rewires the electrics using a method nobody in the house understands, and refuses to explain why the oven now turns itself on at 3am.
You would call a lawyer. You might call the police.
Now imagine that contractor is your outsourcing provider, and the subcontractor is an AI system embedded in the service you are paying for. Welcome to the state of outsourcing contracting in 2026.
The contract you signed was built for humans
Most outsourcing contracts in circulation today were drafted for a world where services were delivered by people. Clear job descriptions, defined service levels, named key personnel, remediation plans that involved sitting a human being down and explaining what went wrong. That model worked because accountability was traceable. When something broke, you could follow the chain of decisions back to a person who made them.
AI upends that chain entirely. When an AI system makes an error in an outsourced business process, there may be no identifiable decision-maker, no audit trail in any human-readable sense, and no obvious way to "fix" the problem without retraining or replacing the model. The traditional remediation playbook of root cause analysis followed by a corrective action plan does not translate cleanly to a system whose internal reasoning is opaque by design.
Norton Rose Fulbright, writing for the Society for Computers and Law, put it well: the emergence of AI in outsourcing has brought entirely new complexities that require new contract provisions, not just a polish of existing ones. That assessment is generous. What many organisations actually need is a fundamental rethink.
Who owns what? The IP question nobody wants to answer
In a traditional outsourcing arrangement, intellectual property is relatively straightforward. Each party owns its background IP and licenses it for the purposes of the contract. Foreground IP created during the engagement is negotiated, usually assigned to the customer, and the data stays within agreed boundaries.
AI demolishes this neat structure. Training data, model weights, fine-tuned outputs, and the improvements that come from running your data through a supplier's model all create overlapping ownership claims that the standard IP clause was never designed to resolve.
Consider the scenario: your outsourcing provider deploys an AI model trained partly on your customer data and partly on data from their other clients. The model improves. Who owns that improvement? If the provider deploys the improved model for a competitor, have they just given your competitor an indirect window into your operations? If you insist on a fully segregated AI instance trained only on your data, you lose the benefits of broader training and may pay significantly more for a less capable system.
These are not hypothetical questions. They are live commercial negotiations happening across every major outsourcing deal in 2026, and most legacy contracts have no provisions to address them. Morgan Lewis has noted that IP allocation in AI agreements has become one of the most actively negotiated areas in technology transactions today, with standard SaaS templates proving inadequate for the nuances of AI-generated outputs and training data.
If your outsourcing contract does not distinguish between background training data, customer-contributed data, model improvements derived from your data, and general model enhancements, you have a gap that needs closing before it becomes a dispute.
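One way to picture that layered approach is as a simple taxonomy. Here is a minimal sketch assuming a four-way split along the lines described above; the ownership positions attached to each category are hypothetical negotiating starting points, not drafting advice.

```python
from enum import Enum

class AIAssetCategory(Enum):
    """Four categories a layered IP clause might distinguish.
    The categories follow the article; the ownership notes are
    hypothetical starting positions, not legal advice."""
    BACKGROUND_TRAINING_DATA = "supplier retains"
    CUSTOMER_CONTRIBUTED_DATA = "customer retains; supplier licensed for delivery only"
    IMPROVEMENTS_FROM_CUSTOMER_DATA = "negotiated: jointly owned, assigned, or use-restricted"
    GENERAL_MODEL_ENHANCEMENTS = "supplier retains, subject to confidentiality carve-outs"

# A contract schedule could enumerate every AI asset against this taxonomy.
for category in AIAssetCategory:
    print(f"{category.name}: {category.value}")
```

The value of the taxonomy is not the labels but the forcing function: every asset the AI touches has to land in exactly one box before the contract is signed.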
Liability: who pays when the machine gets it wrong?
Traditional outsourcing liability frameworks assume that errors are identifiable, attributable, and remediable. AI challenges all three assumptions simultaneously.
When an AI system produces an incorrect output that causes financial loss, establishing fault is genuinely difficult. Was the error caused by the model design, the training data, a change in input patterns, or something the customer did (or failed to do) in providing data to the system? The supplier will argue that the customer provided poor quality data. The customer will argue that the supplier warranted a system that works. Both may be right, and neither may be able to prove it.
Clifford Chance has flagged this as a growing liability gap: businesses are deploying AI systems under legacy contracts that were written for passive, predictable software under firm human control, while the AI itself is autonomous, adaptive, and often opaque. The customer bears accountability for regulatory and legal compliance, but may lack the contractual right to understand or control the AI agent's behaviour.
The market response is evolving. We are starting to see liability "supercaps" specific to AI-related breaches, outcome-based warranties that vary depending on whether AI output is used with human oversight or fully autonomously, and expanded indemnity provisions that address the autonomous performance of AI services. But these clauses are appearing in new deals. If your outsourcing contract pre-dates this shift, you are operating without a safety net.
The agentic AI problem: when software stops being a tool and starts being an employee
The conversation has moved on again since the early days of generative AI in outsourcing. Agentic AI systems do not simply generate content or analyse data. They take actions: approving refunds, reconciling invoices, placing orders, modifying records, triggering payments. They operate inside live business workflows, and when they make a mistake, the consequences are operational and financial.
Mayer Brown has argued that contracts for agentic AI increasingly resemble managed services or outsourcing agreements rather than traditional SaaS subscriptions. The firm recommends a hybrid contracting model that combines the scalability of SaaS frameworks with BPO-style governance, performance commitments, and liability structures. That means outcome-based SLAs instead of uptime guarantees, decision-logging obligations, broader audit rights, and indemnities tied to the agent's autonomous actions.
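To make "decision-logging obligations" concrete, here is a minimal sketch of the kind of record a contract might require an agent to emit for every autonomous action. The field names are illustrative assumptions, not a published standard or any firm's recommendation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentDecisionRecord:
    """One auditable entry per autonomous action by an AI agent.
    Illustrative only: fields are assumptions about what a
    decision-logging clause might require."""
    agent_id: str           # which deployed agent acted
    model_version: str      # exact model and version, for reproducibility
    action: str             # e.g. "refund_approved", "invoice_reconciled"
    inputs_summary: str     # what data the agent acted on
    confidence: float       # the agent's own confidence score, if exposed
    human_reviewed: bool    # was a human in the loop for this action?
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# The kind of entry a customer's expanded audit right might surface
# (all values hypothetical).
record = AgentDecisionRecord(
    agent_id="refunds-agent-eu-01",
    model_version="2026-03-finetune",
    action="refund_approved",
    inputs_summary="order 18231; customer complaint; refund policy v7",
    confidence=0.92,
    human_reviewed=False,
)
print(record)
```

The human_reviewed flag matters commercially: as noted above, warranties and liability caps increasingly vary depending on whether AI output is used with human oversight or fully autonomously.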
For organisations with existing outsourcing arrangements that are now incorporating agentic AI capabilities, this shift is urgent. A contract that was drafted to govern a team of 200 people delivering a business process is fundamentally inadequate to govern an AI agent doing the same work with no human oversight. The risk profile is entirely different, and the contract needs to reflect that.
Exit: how do you leave when the AI is baked in?
Exit provisions in traditional outsourcing contracts contemplate a defined transition period, knowledge transfer, data migration, and potentially employee transfers under TUPE. These mechanisms assume that the service being transitioned is understood, documented, and portable.
AI-powered outsourcing breaks several of these assumptions. The supplier may refuse to share proprietary AI models or license them to a replacement provider. Training data may be commingled in a shared data lake from which customer-specific data cannot be cleanly extracted. The institutional knowledge that traditionally sits with people now sits inside a model, and that model is not coming with you.
Morgan Lewis has described this as a critical contracting risk: AI agreements that lack clear exit mechanics can leave the customer stranded if costs increase, performance degrades, or a provider's roadmap changes. Their recommendation is to treat AI deployments more like critical outsourced services and less like software licences, with pre-agreed transition assistance, defined cooperation obligations with replacement vendors, and contractual data portability rights.
If your exit plan assumes that you can simply migrate data to a new provider and pick up where you left off, you need to pressure-test that assumption against the reality of how the AI is actually deployed.
The EU AI Act: the regulatory overlay that changes everything
All of the above sits against a regulatory backdrop that is about to get significantly more consequential. The EU AI Act becomes fully enforceable for high-risk AI systems on 2 August 2026. Fines under the Act run up to EUR 35 million or 7% of global annual turnover for the most serious infringements, whichever is higher.
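To put that penalty formula in perspective, here is a back-of-the-envelope illustration. The turnover figures are hypothetical; the only point is where the percentage overtakes the fixed floor.

```python
def max_aia_fine(annual_turnover_eur: float) -> float:
    """Top-tier EU AI Act penalty: the greater of EUR 35m
    or 7% of global annual turnover."""
    return max(35_000_000, 0.07 * annual_turnover_eur)

# Hypothetical turnovers: the fixed EUR 35m floor dominates until
# global turnover passes EUR 500m, after which the 7% figure takes over.
for turnover in (100e6, 500e6, 2e9):
    print(f"Turnover EUR {turnover:,.0f} -> maximum fine EUR {max_aia_fine(turnover):,.0f}")
```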
For outsourcing arrangements, the regulatory implications are considerable. If your outsourcing provider deploys an AI system that is classified as high-risk under Annex III of the Act, someone needs to comply with the obligations around risk management, data governance, technical documentation, transparency, human oversight, and conformity assessment. Whether that obligation falls on the provider, the deployer, or both depends on the specific roles and the contractual allocation of responsibilities.
Many existing outsourcing contracts are silent on EU AI Act classification and compliance. That silence creates risk. If the AI system deployed in your outsourcing arrangement turns out to be high-risk, and neither party has clearly accepted responsibility for compliance, you have a regulatory exposure that no amount of retrospective negotiation will cleanly resolve.
This is compounded by sector-specific overlaps. Financial services organisations face dual compliance obligations under the AI Act and DORA. Healthcare organisations face MHRA and patient safety considerations alongside GDPR. Critical infrastructure operators face NIS2. The outsourcing contract needs to address not only AI Act compliance but the specific regulatory intersection relevant to the customer's sector.
RMOK Legal has published a detailed EU AI Act Compliance Guide for UK Businesses at rmoklegal.com/guides/eu-ai-act-compliance-uk, which covers the full compliance timeline, risk classification framework, and practical steps for organisations preparing for August 2026.
What you should be doing now
The organisations that will navigate this well are the ones treating it as a commercial priority, not a legal housekeeping exercise. That means several things in practice.
First, review your existing outsourcing agreements for AI exposure. Identify every contract where AI is being used, or is likely to be introduced, in the delivery of outsourced services. Assess whether the existing provisions on IP, liability, exit, audit, and change management are adequate for AI-powered delivery. In most cases, they will not be.
Second, classify the AI systems in play. Determine whether any AI systems deployed in your outsourcing arrangements are, or could be, classified as high-risk under the EU AI Act. If they are, establish who has compliance responsibility and ensure that is reflected in the contract. A simple structured inventory, sketched after these four steps, can make that exercise tractable.
Third, negotiate AI-specific provisions. This includes layered IP provisions that distinguish between different categories of data and model improvements, outcome-based performance metrics, expanded audit and transparency rights, decision-logging obligations for agentic AI systems, clear liability allocation for autonomous AI actions, and robust exit provisions that account for AI portability.
Fourth, build governance into the relationship. The outsourcing contract is the foundation, but governance is the operating system. Regular AI-specific governance reviews, compliance monitoring, and a clear escalation path for AI-related incidents need to be embedded into the working relationship, not left to quarterly service review meetings.
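For the first two steps, a structured inventory beats a pile of annotated PDFs. Here is a minimal sketch, assuming hypothetical contracts and suppliers; the point is the filter at the end, which isolates the dangerous combination: AI in scope, potentially high-risk, and no contractual compliance owner.

```python
from dataclasses import dataclass

@dataclass
class OutsourcingContract:
    """One row in an AI-exposure inventory. Field choices are
    illustrative assumptions for a triage exercise."""
    name: str
    supplier: str
    ai_in_delivery: bool       # is AI used, or planned, in the service?
    annex_iii_candidate: bool  # could the system be high-risk under the AI Act?
    compliance_owner: str      # "supplier", "customer", or "unallocated"

# Hypothetical portfolio for illustration.
portfolio = [
    OutsourcingContract("HR screening BPO", "Vendor A", True, True, "unallocated"),
    OutsourcingContract("Invoice processing", "Vendor B", True, False, "supplier"),
    OutsourcingContract("Facilities management", "Vendor C", False, False, "n/a"),
]

# The triage question from steps one and two: where is AI in scope,
# potentially high-risk, and without a contractual compliance owner?
gaps = [
    c for c in portfolio
    if c.ai_in_delivery and c.annex_iii_candidate and c.compliance_owner == "unallocated"
]
for c in gaps:
    print(f"Regulatory gap: {c.name} ({c.supplier})")
```

Recruitment and worker-management systems are among the Annex III categories, which is why the first hypothetical row is the one that flags.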
The bottom line
AI is not the risk. AI without oversight is. That principle applies with particular force to outsourcing, where the customer is entrusting a critical business function to a third party and then, increasingly, to a machine operated by that third party.
The contracts governing these arrangements were built for a simpler world. The world has moved on, and the contracts need to catch up. The organisations that do this now, before the EU AI Act enforcement deadline, before the first major AI-outsourcing dispute lands, before the board asks why nobody flagged this, will be the ones that come through this transition in the strongest position.
If you are a CEO, General Counsel, or Head of Legal looking at your outsourcing portfolio and wondering whether your contracts are fit for purpose, the honest answer is almost certainly that they are not. The question is what you do about it, and how quickly.
About the Author
Rory O'Keeffe is the founder of RMOK Legal, a City of London practice specialising in AI governance, commercial technology law, and fractional general counsel services. He is an SRA-regulated solicitor with over 20 years of experience, a former Partner at Matheson and former Director of Legal Services at Accenture. Rory is an SCL-accredited Leading IT Lawyer, a member of the SCL AI Committee, and author of a chapter in the bestselling AI Advantage (2025). He hosts the Beyond The Fine Print podcast on Spotify, Apple Podcasts, and YouTube.
FAQ
Does my existing outsourcing contract adequately cover AI?
In most cases, no. Traditional outsourcing contracts were designed for human-delivered services with clear accountability chains. AI introduces novel risks across intellectual property, liability, performance monitoring, exit, and regulatory compliance that standard ITO and BPO agreements do not address. Organisations should audit existing outsourcing contracts for AI exposure and negotiate AI-specific provisions as a priority.
How does the EU AI Act affect outsourcing contracts?
The EU AI Act becomes fully enforceable for high-risk AI systems on 2 August 2026, with fines for the most serious infringements reaching EUR 35 million or 7% of global annual turnover, whichever is higher. If an outsourcing provider deploys a high-risk AI system, the contract must clearly allocate compliance responsibility between the provider and the deployer. Many existing outsourcing contracts are silent on AI Act classification, creating significant regulatory exposure.
Who owns the intellectual property in AI-powered outsourcing?
AI creates overlapping IP claims that traditional outsourcing contracts were not designed to resolve. Training data, model weights, fine-tuned outputs, and improvements derived from customer data all raise ownership questions. Contracts should use a layered approach that distinguishes between background training data, customer-contributed data, model improvements derived from customer data, and general model enhancements.
How is agentic AI changing outsourcing contracts?
Agentic AI systems take autonomous actions such as approving refunds, placing orders, triggering payments, and modifying records without human intervention at each step. Leading law firms including Mayer Brown and Clifford Chance have recommended that contracts for agentic AI should resemble managed services or BPO agreements rather than traditional SaaS subscriptions, with outcome-based SLAs, decision-logging obligations, and expanded indemnities.
How does AI complicate exit from an outsourcing arrangement?
Traditional exit provisions assume that outsourced services are understood, documented, and portable. AI complicates this because proprietary models may not be transferable, training data may be commingled, and institutional knowledge sits inside a model rather than with people. Contracts should include pre-agreed transition assistance, defined cooperation obligations with replacement vendors, and contractual data portability rights.