The Bank of England Just Admitted AI Could Break Finance. Most Businesses Are Not Ready.
Written by Rory O’Keeffe, solicitor, SCL-accredited Leading IT Lawyer, and founder of RMOK Legal. Rory advises UK and international businesses on AI governance, regulatory compliance, and commercial technology law.
There is a particular kind of institutional candour that deserves our attention. Not the loud kind. Not the press conference kind. The kind that arrives in a letter to a parliamentary committee on a Wednesday afternoon, written in the careful prose of a Deputy Governor who knows exactly what she is saying and has chosen every word accordingly.
Last week, the Bank of England’s Financial Policy Committee published its April 2026 record. Buried in the measured language was a sentence that should be pinned to the wall of every boardroom in the City: advanced AI is not yet creating systemic risk in UK financial services, but those risks could increase rapidly.
Read that again. Slowly. The building is not on fire. But the wiring is not up to code and someone has left the gas on.
What Did the Bank of England Actually Say About AI Risk?
The FPC’s April record confirmed that the Bank found little evidence of advanced AI, including generative and agentic models, being deployed in ways that currently threaten financial stability. So far, so reassuring. But the Committee went further. It flagged that the pace of adoption is accelerating and that risks could grow quickly as firms push into more autonomous AI applications, particularly in payments and financial markets.
Deputy Governor for Financial Stability Sarah Breeden confirmed in a letter to Parliament’s Treasury Committee that the Bank is now conducting scenario analysis and simulations specifically focused on AI risk. One area of particular concern: “herding” behaviour, where AI agents in financial markets could synchronise their trading decisions and amplify selloffs during periods of stress. Think of it as a flash crash, but with nobody at the wheel to pump the brakes.
The Bank is also collaborating with international counterparts to understand the cross-border dynamics. Because, of course, an AI agent trained in London does not particularly care about national boundaries when it starts selling equities at three in the morning.
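For the technically minded reader, the herding mechanism is easier to see with a toy example. The sketch below (in Python, with entirely invented numbers; it bears no relation to the Bank's actual scenario analysis) simulates a stop-loss cascade: when automated traders act on near-identical models, a modest shock that diverse models would absorb instead triggers everyone at once, and each wave of selling drags the next agent over its threshold.

```python
# Illustrative only: a toy "stop-loss cascade" showing why many automated
# traders acting on near-identical models can amplify a modest shock.
# All parameters are invented for demonstration purposes.
import random

def cascade(thresholds, shock=2.0, impact_per_seller=0.05):
    """Return the total price fall (in %) after a cascade of automated sales.

    thresholds: each agent sells once the fall exceeds its own threshold (%).
    shock: the initial price fall (%).
    impact_per_seller: extra price fall (%) caused by each agent that sells.
    """
    fall = shock
    sold = [False] * len(thresholds)
    while True:
        newly_selling = [i for i, t in enumerate(thresholds)
                         if not sold[i] and fall >= t]
        if not newly_selling:
            return fall
        for i in newly_selling:
            sold[i] = True
        fall += impact_per_seller * len(newly_selling)  # their sales deepen the fall

rng = random.Random(0)
n = 200

# Diverse models: sell thresholds spread widely, so a 2% shock triggers only a
# handful of agents and the cascade fizzles out.
diverse = [rng.uniform(1.0, 30.0) for _ in range(n)]

# Herded models: thresholds clustered tightly around the size of the shock, so
# once the first agents sell, the extra fall drags the rest over their thresholds.
herded = [rng.uniform(1.5, 2.5) for _ in range(n)]

print(f"diverse agents: total fall {cascade(diverse):.1f}%")
print(f"herded agents:  total fall {cascade(herded):.1f}%")
```

The point is not the numbers, which are made up, but the shape of the result: diversity of models dampens the feedback loop, while homogeneity amplifies a small shock into a much deeper fall.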
Why Is Parliament Losing Patience With AI Regulation in Finance?
This did not come out of nowhere. The Treasury Select Committee published its report on AI in financial services in January 2026 and used language that, by parliamentary standards, was borderline inflammatory. It accused the Bank of England, the FCA, and HM Treasury of taking a “wait-and-see” approach that risked serious harm to consumers and the wider financial system.
Committee Chair Dame Meg Hillier put it plainly: she did not feel confident that the UK’s financial system was prepared for a major AI-related incident. Given that more than three quarters of UK financial services firms are already using AI in some form, that is not an abstract observation. It is a policy alarm.
The Committee called for AI-specific stress tests, clearer FCA guidance on how existing rules apply to AI-driven processes, and the designation of major AI and cloud providers as Critical Third Parties under the Bank’s operational resilience regime. HM Treasury’s response? It declined to commit to a deadline. Dame Meg was, according to reports, “perplexed” by this. Which, in parliamentary English, is roughly the equivalent of throwing a shoe.
What Does the FCA Expect From Businesses Using AI?
The FCA has been consistent on one point: it is not building a bespoke AI rulebook. Its position, reiterated in its March 2026 perimeter report, is that existing frameworks, including the Consumer Duty and the Senior Managers and Certification Regime, are flexible enough to govern AI. The message to firms is: you do not get to wait for new rules. The old rules already apply.
That message carries teeth. Under the Senior Managers regime, delegating a decision to an algorithm does not delegate the accountability. If an AI model causes a data breach, produces biased outcomes, or disrupts a market, the senior manager responsible for that function is personally on the hook. Not the vendor. Not the data scientist. The person whose name is on the regulatory register.
The FCA has also committed to publishing practical examples of how firms should align AI deployment with existing conduct rules. This is a direct response to industry complaints that the current guidance is too abstract. Good. Because “you must be fair” is a principle, not a user manual.
How Does the EU AI Act Deadline Affect UK Businesses?
While the UK shapes its own regulatory path, the European Union’s AI Act is moving from theory to enforcement on a fixed timetable. The critical deadline for most organisations is 2 August 2026, when obligations for high-risk AI systems under Annex III become enforceable. That covers AI used in employment decisions, credit scoring, education, and law enforcement contexts.
Any UK business that operates in EU markets, serves EU customers, or deploys AI systems that affect individuals in the EU is within scope. This is not a theoretical concern. The fines for non-compliance reach €35 million or 7% of global turnover, whichever is higher. For context, that is a number designed to get the attention of a board, not a compliance team.
There is an interesting wrinkle in the timeline. The European Commission’s Digital Omnibus package, proposed in late 2025, has suggested delaying high-risk obligations for certain systems until December 2027. But the delay has not been confirmed, and any business treating it as a certainty is making a bet with its compliance posture that it may not be able to afford to lose.
Meanwhile, the EU’s enforcement infrastructure has its own problems. As of March 2026, only eight of twenty-seven member states had designated their national AI Act enforcement authorities, despite a deadline that passed in August 2025. So the regulation exists, the fines are real, but the machinery to enforce it is still being assembled. This is the regulatory equivalent of installing the speed cameras after the motorway has opened. The rules still apply. The fines still arrive. The fact that nobody is watching yet is a reason to prepare, not a reason to relax.
What Does This Mean for UK Businesses Right Now?
Here is what I see in practice, every week, across businesses of every size. The technology is running ahead of the governance. AI tools are being deployed in procurement, HR, customer service, financial reporting, contract review, and sales forecasting. In most cases, nobody in the legal or compliance function has been told. In many cases, nobody in the legal or compliance function has been asked.
That is the gap. Not between ambition and capability. Between deployment and oversight.
The Bank of England’s position tells you where the smart money is. If the institution responsible for the stability of the entire UK financial system has decided it needs to stress test AI risk, the question for every business leader is simple: have you?
That does not require a twelve-month programme. It requires an honest assessment of three things. First, what AI tools is your business actually using today, not theoretically, not in a roadmap, but right now? Second, who is accountable for the decisions those tools make? Third, would your current governance framework survive a regulator asking to see it?
If the answer to any of those is uncomfortable, that is not a problem. That is a starting point.
The Bottom Line
The Bank of England does not issue warnings for fun. When it commits to stress testing a risk, it is telling you it considers that risk plausible enough to model. Parliament is losing patience. The FCA is sharpening its expectations. The EU is counting down to enforcement.
And in the middle of all of this, most businesses are governing their AI usage with a policy document that someone wrote two years ago and nobody has read since.
AI is not the risk. AI without oversight is.
If you are not sure whether your business is ready for what is coming, that is exactly the conversation worth having today, not after the regulator’s letter arrives.
Frequently Asked Questions
What did the Bank of England say about AI and financial stability?
The Bank of England’s Financial Policy Committee confirmed in its April 2026 record that advanced AI is not yet creating systemic risk in UK financial services. However, it warned that risks could increase rapidly as firms expand their use of generative and agentic AI, particularly in payments and financial markets. The Bank is now conducting scenario analysis and stress testing focused on AI-related risks, including “herding” behaviour by AI agents that could amplify market volatility.
Who is accountable when an AI system makes a decision in a regulated firm?
Under the UK’s Senior Managers and Certification Regime (SMCR), the senior manager responsible for a function remains personally accountable for decisions made by AI systems within that function. Delegating a task to an algorithm does not delegate the regulatory responsibility. The FCA has signalled that it expects firms to ensure adequate human oversight, clear audit trails, and documented risk controls for any AI-driven process.
When does the EU AI Act start to apply to high-risk systems?
The main enforcement deadline for the EU AI Act is 2 August 2026, when obligations for high-risk AI systems under Annex III become enforceable. This covers AI used in employment, credit decisions, education, and law enforcement. UK businesses that operate in EU markets or deploy AI affecting EU individuals are within scope. Fines for non-compliance reach €35 million or 7% of global turnover.
Does the FCA require a separate AI risk assessment?
The FCA has not introduced a standalone AI risk assessment requirement. Its approach is that existing regulatory frameworks, including Consumer Duty, SMCR, and operational resilience rules, already apply to AI-driven processes. However, the Treasury Select Committee has called for AI-specific stress tests and clearer guidance, and the FCA has committed to publishing practical examples of how firms should apply existing rules to AI deployment.
What is a fractional general counsel and how can one help with AI governance?
A fractional general counsel is an experienced senior lawyer who works with a business on a part-time or project basis, providing the strategic legal oversight of a full-time general counsel without the overhead of a permanent hire. For AI governance, a fractional GC can help a business map its AI usage, assess regulatory exposure, build an AI governance framework, and prepare for compliance with the EU AI Act and UK regulatory expectations.
Author: Rory O’Keeffe is a solicitor regulated by the Solicitors Regulation Authority (No. 8008227) and the founder of RMOK Legal, based at 60 Cannon Street in the City of London. He has over 20 years of experience in commercial, technology, and AI law, including as a Partner at Matheson and Director of Legal Services at Accenture. Rory is an SCL-accredited Leading IT Lawyer, a member of the Society for Computers and Law AI Committee, and author of a chapter in the bestselling book The AI Advantage (2025). He hosts Beyond The Fine Print, a tech law podcast for in-house counsel, available on Spotify, Apple Podcasts, and YouTube.

