WHAT WILL LAWYERS DO IN 2036? AI, Agentic Systems, and the Future of Legal Practice

[Image: a tug of war between AI and lawyers - production against judgment]

Last week, Sullivan & Cromwell apologised to a US bankruptcy judge for filing a motion riddled with AI-generated hallucinations. Fabricated case citations. Non-existent legal sources. Inaccurate article titles. More than 40 errors on a three-page correction sheet.

This was not a scrappy startup with a ChatGPT subscription and an optimistic intern. This was one of the most prestigious law firms on the planet, with comprehensive AI policies, mandatory training requirements, and a partner who described the firm’s safeguards as “designed to prevent exactly this situation.”

The safeguards failed.

They failed quietly, politely, and exactly as all respectable governance failures do.

Now hold that thought, and consider this. Three months earlier, the Solicitors Regulation Authority authorised LawFairy, a technology‑only law firm delivering regulated legal services without traditional lawyers. It followed Garfield.Law, authorised in 2025, which offers AI‑powered litigation for small claims starting at £2.

Two pounds.

For a regulated legal service.

So, at precisely the same moment, the world’s most expensive lawyers are being caught out by AI they cannot supervise, while the world’s cheapest legal services are being delivered by systems that barely involve humans at all.

If that does not make you curious about what a lawyer actually does in ten years’ time, I am not sure what will.


First, the World: What 2036 Actually Looks Like

Before we talk about lawyers, we need to talk about the world they will operate in. The profession, for all its traditions and ceremonies, does not exist in a vacuum. It exists inside economic gravity, political turbulence, and technological acceleration, most of which are showing no signs of slowing down for our benefit.

The Atlantic Council's Global Foresight 2036 survey, published in February 2026, polled 447 experts from 72 countries. The headline finding is sobering:

63% expect the world to be worse off in ten years than it is today.

A clear majority, 58%, believe we will have achieved artificial general intelligence by 2036: systems that match or exceed human cognitive abilities across essentially any task.

Not everyone agrees, of course. But enough of the world’s geopolitical strategists are packing a conceptual toothbrush just in case.

The geopolitical backdrop complicates everything. Most respondents expect China to be the world’s leading economic power by 2036, with the United States retaining military dominance while ceding ground in trade, technology, and diplomacy. More than 40% see a meaningful risk of a major global conflict in the next decade. Regulatory fragmentation between the US, Europe, and Asia will deepen. There will not be one set of AI rules. There will be three or four, each insisting - quite sincerely - that theirs is the reasonable one.

Futurist Thomas Frey, whose 2016 predictions on AI and biometrics have held up uncomfortably well, argues that by 2036 AI will be so embedded in professional life that it becomes effectively invisible. Background infrastructure. Like electricity. Or oxygen. Every knowledge worker will have systems that understand their style, preferences, priorities, and history.

The question will not be whether to use AI.

It will be how to remain human while doing so.

For lawyers, that means operating in a regulatory environment that is more fragmented, more layered, and more technologically dense than anything we have previously navigated - alongside machines that can read, reason and produce legal work faster than any human being alive.

Comforting thought.

[Image: an abstract world map fragmented into regulatory blocs]

What Is Actually Changing About Legal AI Right Now?

The shift happening in 2026 is not about whether firms use AI. Reportedly, nearly 65% already do. The shift is about what the AI does when nobody is watching.

Until recently, legal AI was a sophisticated research assistant. You asked a question. It produced an answer. You checked the answer. Prompt in, output out. The lawyer stayed firmly in charge.

That era is ending.

We are now entering the age of agentic AI: systems that plan, reason, and execute multi‑step legal workflows autonomously. Thomson Reuters launched agentic workflows in CoCounsel Legal in early 2026. LexisNexis deployed multiple specialised agents that collaborate on complex tasks. Harvey and Legora raised a combined $750 million in March alone, achieving valuations that would once have seemed ambitious even for Silicon Valley.

What does agentic mean in practice? It means AI that does not wait patiently for instructions like a helpful associate. It conducts research across sources. Reviews contracts against your firm’s playbook. Cross‑references obligations across jurisdictions. Produces structured work product. Then presents it to a human lawyer for review at designed decision points.

The lawyer becomes the quality controller, not the producer.

That distinction changes everything.
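
For the technically curious, the pattern is simple enough to sketch in a few lines of Python. Everything below is hypothetical - no vendor's actual API - but it shows the shape: the system executes steps autonomously, and every deliverable passes through a lawyer's checkpoint before it goes anywhere.

    from dataclasses import dataclass, field

    @dataclass
    class WorkProduct:
        task: str
        findings: list[str] = field(default_factory=list)
        approved: bool = False

    def run_agent_step(task: str) -> WorkProduct:
        # Stand-in for the autonomous part: research, playbook review,
        # cross-referencing obligations. A real system would call a model here.
        return WorkProduct(task=task, findings=[f"Draft output for: {task}"])

    def human_review(product: WorkProduct) -> WorkProduct:
        # The designed decision point: nothing ships without a lawyer's sign-off.
        print(f"[REVIEW REQUIRED] {product.task}")
        for finding in product.findings:
            print(f"  - {finding}")
        product.approved = input("Approve? (y/n): ").strip().lower() == "y"
        return product

    workflow = [
        "Research limitation periods across target jurisdictions",
        "Review the supply agreement against the firm playbook",
        "Draft a structured summary of cross-border obligations",
    ]

    results = [human_review(run_agent_step(task)) for task in workflow]
    print(f"{sum(p.approved for p in results)}/{len(results)} outputs approved")

The detail worth noticing is where the checkpoint sits: inside the loop, not after it. Quality control bolted on at the end of an autonomous run is how correction sheets reach three pages.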

Harvard Law School’s Center on the Legal Profession quantified the impact. In high‑volume litigation, AI reduced complaint response times from sixteen hours to three or four minutes. Sixteen hours is 960 minutes, so that is a productivity gain of 200x or better. One chief operating officer described it as the “80/20 inversion”: historically, lawyers spent 80% of their time collecting information and 20% analysing it. AI flips those proportions.

And yet, the study revealed something quietly astonishing. Despite these gains, firms expect very little structural change. The billable hour persists. Headcounts remain broadly intact. The prevailing belief is that lawyers will simply do more and better work in the same number of hours.

The same argument was made about word processors in the 1980s.

A 200x gain, however, is not a 20% one.

At some point, the client notices.

Clients, inconveniently, also own calculators.

[Image: machines doing things “off‑screen”]

The Susskind Question: Are We Training Lawyers for 1990?

Richard Susskind has been asking awkward questions about the legal profession for three decades. As President of the Society for Computers and Law and former technology adviser to the Lord Chief Justice, he has developed an irritating habit of being right.

His argument is familiar but no less uncomfortable for repetition. Lawyers routinely perform work for which they are overqualified, billing specialist rates for tasks better done by technology, process specialists, or alternative providers. By 2030, he argues, AI systems will be reliable enough to hollow out the traditional pyramid from both ends. Law schools, meanwhile, remain remarkably faithful to curricula designed for the late twentieth century.

They produce excellent lawyers.

They are simply optimised for a market that no longer exists.

Extend that forward to 2036. If Susskind is right - and his record suggests he often is - then lawyers starting today will reach the peak of their careers in a world where AI performs most of the analytical and drafting work that once occupied the first decade of practice.

The apprenticeship model evaporates.

The real question is not whether this happens.

It is what replaces it.


Three Lawyers Walk Into 2036

(…none of them are billing by the hour.)

The Big Law Partner

Am Law 100 profits per lawyer are up 54% since 2019. Hourly rates for elite counsel now flirt with $3,000. The quiet part is that billable hours per lawyer are declining. Firms are earning more because pricing power has temporarily outpaced client scrutiny.

Temporarily.

By 2036, the leverage model inverts. Where firms once hired eight associates per partner, they deploy eight AI agents. The survivors are T‑shaped lawyers: deep specialism, broad technological and commercial fluency. The partnership fragments into two layers: a small advisory tier delivering strategy and judgment, and a technology‑driven production tier handling volume work.

Pricing follows, inevitably. Value‑based billing dominates everywhere except genuine novelty and existential risk.

SPICY TAKE: By 2036, the most profitable partners will not bill the most hours. They will design the best AI systems. Golf handicaps optional.

[Image: the Management Committee meeting is now in session!]


The In-House General Counsel

The GC of 2036 spends remarkably little time reviewing contracts. AI handles 70–80% of drafting, review, and monitoring. External counsel is reserved for novelty, complexity, or genuine jurisdictional peril.

The role becomes architectural. Less firefighting, more system design. Governance replaces heroics. Compliance monitoring becomes continuous. Problems are surfaced before they metastasise.

The catch is simple and uncomfortable: AI governance becomes a board‑level function. The GC owns legal risk generated by every algorithm in the organisation, from recruitment to procurement.

If this sounds abstract, consider banking. A recent Zango report shows major UK financial institutions deploying AI faster than compliance teams can track. Some cannot identify all the systems in use.

If that is the state of play in banking, imagine a mid‑market tech company with a legal team of three.

SPICY TAKE: The in‑house lawyer who says “AI is an IT issue” in 2036 will be as negligent as the one who said “GDPR is a European problem” in 2018.

[Image: a control room dashboard, slightly overloaded with signals. General Counsel - please take a seat!]


The Solo/Boutique Practitioner

Against expectation, they may be the biggest winners.

Boutiques adopt AI fastest because they have no choice. For them, AI is leverage. One senior lawyer with the right infrastructure delivers what a ten‑person team once did.

Niche expertise becomes the premium. AI commoditises general legal knowledge. What remains valuable is context, judgment, and lived industry experience.

The fractional GC model illustrates the shift. One experienced lawyer serves multiple clients, with AI handling production and the human handling judgment, trust and accountability.

SPICY TAKE: The boutique lawyer who publishes their AI governance framework will own a marketing asset worth more than any directory ranking. Transparency becomes the trust signal.

[Image: a sparse workspace - one desk, one screen, light coming in. Mission Control... all systems are a go!]


The Skills That Define the 2036 Lawyer

This is the part that should concern law firms, law schools, and regulators in roughly equal measure.

  • AI literacy is currently a nice-to-have. By 2036, it will be a core competence. Understanding how AI reasons, where it fails, and how to govern it will be as fundamental as understanding statutory interpretation is today. The lawyer who does not understand how their AI system reaches a conclusion will be as dangerous as a surgeon who does not understand anatomy.

  • Data fluency will be essential. Lawyers will need to understand data architectures, privacy engineering, and algorithmic decision-making. Not as technologists, but as professionals who can assess risk, advise boards, and challenge vendors.

  • Systems thinking will be central. Designing legal workflows, governance architectures, and risk frameworks will become core lawyer work. The adversarial mindset that traditional legal training breeds is excellent for litigation, but it does not teach you how to build a compliance system from scratch.

  • Multi-jurisdictional fluency will define commercial practice. The Atlantic Council survey anticipates a multipolar world with competing regulatory frameworks. Every significant transaction will involve navigating the EU AI Act, whatever the US federal framework becomes, China's AI regulations, and emerging frameworks in Singapore, India, and the Gulf states simultaneously. The lawyer of 2036 will not specialise in one jurisdiction's rules. They will specialise in the architecture of compliance across jurisdictions.

  • And then there are the skills AI cannot replicate: commercial judgment, emotional intelligence, ethical reasoning under genuine uncertainty, and the ability to sit across from a frightened founder and say the thing they need to hear rather than the thing they want to hear.

These are not soft skills. By 2036, they are the hard skills.


The Reversal Problem: What if AI Becomes the Bottleneck?

There is, however, a less fashionable question lurking underneath all this optimism, and it deserves to be asked before we declare the future settled.

What if the problem is not that lawyers fail to adopt AI fast enough, but that they adopt it too thoroughly, optimise for it too completely, and then discover that the assumptions underpinning that optimisation no longer hold?

History, alas, is not kind to professions that confuse technological acceleration with inevitability.

We have been here before. Entire industries have enthusiastically re‑engineered themselves around efficiency gains that later proved fragile. Manufacturing outsourced itself into dependency. Newsrooms gutted institutional memory in the name of speed and click‑through rates, only to discover belatedly that credibility does not scale on a spreadsheet. Banks automated risk assessment so successfully that nobody noticed the models had quietly forgotten how people behave when things go wrong.

It would be charmingly naive to assume the legal profession is immune.

Consider the less‑discussed constraints already gathering at the edges of legal AI adoption:

  • Large‑scale models are expensive to train and increasingly expensive to run.

  • Energy constraints and sustainability regulation are no longer abstract concerns but line items.

  • Data sovereignty rules are tightening, not loosening, particularly in precisely the jurisdictions most enthusiastic about AI regulation.

  • AI sovereignty is becoming a matter of geopolitical policy, not enterprise choice. The ability to move seamlessly between models, jurisdictions, or providers is far more constrained than vendor slide decks suggest.

And then there is concentration risk. As with cloud infrastructure, semiconductor fabrication, and financial plumbing, advanced AI capability is consolidating rapidly into a small number of global conglomerates and sovereign ecosystems. Legal services built on the assumption of permanent, cheap, cross‑border access to frontier models may discover that access becomes conditional, restricted, or materially more expensive at exactly the wrong moment.

None of this requires AI to “fail” in any dramatic sense. It merely requires it to become scarcer, more regulated, more politically encumbered, or more costly. A perfectly plausible outcome.

The harder problem, however, is not technological. It is human.

What happens if the profession successfully reorganises itself around AI‑driven production, trims training pipelines, compresses junior layers, and redefines competence as oversight rather than immersion - only to discover that it has quietly amputated its own learning substrate?

Much legal expertise is not stored in statutes, playbooks, or models. It lives in pattern recognition built over years. In a thousand half‑remembered matters. In the instinct that something feels wrong before the reason is articulate. In the accumulated scar tissue of decisions that went badly once and were never repeated.

Those things do not survive well in organisations optimised for minimum human involvement.

Other sectors have already learned this lesson the hard way. They cut deeply, celebrated productivity gains, then found themselves rehiring - more cautiously, more expensively, and with less depth - after realising they had thinned out precisely the corporate wisdom that made the system resilient. You can rebuild headcount. Rebuilding judgment takes much longer.

Law is especially exposed because our traditional apprenticeship model was never merely about producing documents cheaply. It was how judgment was formed. Automate the work without redesigning how that judgment is cultivated, and you risk creating a generation extremely well trained to supervise machines they do not truly understand, and underprepared to operate without them when needed.

This is the reversal problem.

It is not about distrusting AI. It is about avoiding a one‑way door. The most dangerous future is not one where AI replaces lawyers, but one where lawyers design a profession that functions brilliantly only under conditions that cannot be guaranteed.

The resilient legal organisations of 2036 will not be those that use the most AI. They will be the ones that can still operate competently when AI is constrained, unavailable, unaffordable, or jurisdictionally unusable.

They will treat human capability not as inefficiency, but as redundancy. They will preserve deep training pathways even while automating output. They will optimise for reversibility as much as for speed.

That may sound unfashionable. It is also, historically speaking, how professions endure.

Or, to put it more plainly: civilisation has a long track record of building extremely clever machines, and a less impressive one when it comes to remembering what we used to know once the machines took over. Lawyers would be wise not to make themselves the punchline of that story.
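
If “reversibility” sounds abstract, engineers already have a name for the pattern: a fallback path that is kept alive and exercised, not merely documented. A toy sketch, with every name in it hypothetical:

    import random

    class AIUnavailableError(Exception):
        # Raised when the model is constrained, offline, or unaffordable.
        pass

    def ai_draft(matter: str) -> str:
        # The fast path. The randomness simulates constrained or lost access.
        if random.random() < 0.2:
            raise AIUnavailableError
        return f"AI first draft: {matter}"

    def human_draft(matter: str) -> str:
        # The redundancy. Kept deliberately in use so the skill survives.
        return f"Human first draft: {matter}"

    def produce_first_draft(matter: str) -> str:
        try:
            return ai_draft(matter)
        except AIUnavailableError:
            return human_draft(matter)

    print(produce_first_draft("Shareholder agreement for a Series A round"))

The expensive part is not the try/except. It is keeping human_draft genuinely exercised - on real matters, by real people - so it still works on the day it is called.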


[Image: lawyers reviewing a broken modern machine in a law library]

What Should Heads of Legal Do Right Now?

If you have read this far and you are running a legal team, a law firm, or a department, here are five things you can do this quarter. Not next year. Not when the budget allows. This quarter.

  1. Build AI literacy across your team. Not a lunchtime seminar. An ongoing programme that covers how the tools work, where they fail, and what your firm's governance framework requires. If you do not have a governance framework, that is the first problem to solve.

  2. Audit your AI use today. Know which tools are being used, by whom, and for what. You cannot govern what you cannot see. After the Sullivan & Cromwell incident, every firm should be asking: could that happen here? And if the answer is 'we don't know,' that is the answer. A minimal sketch of what such a register might record follows this list.

  3. Redesign your training pathways. Junior lawyers need exposure to AI tools, legal engineering, and data fluency from day one. The work that taught previous generations - document review, research, first drafts - is being automated. If your trainees are still being trained as though it is 2015, you are building a workforce for a market that no longer exists.

  4. Test one alternative fee arrangement. Pick one matter type, one client. Flat fee. Build the muscle now, before the market forces it. The firms that figure out value-based pricing early will own the relationship when hourly billing becomes untenable.

  5. Invest in the irreplaceables. Judgment. Relationships. Ethics. Commercial context. The ability to advise under uncertainty. AI will keep getting better at producing legal work. It will not get better at understanding why a particular client, in a particular industry, at a particular moment, needs a particular answer delivered in a particular way.
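
On point 2: the deliverable of an audit is a register. Here is a minimal sketch, in Python, of what one might capture. Every field and entry is hypothetical, but an audit should end with exactly this information: what is in use, who is accountable for it, and whether a human checkpoint exists.

    from dataclasses import dataclass

    @dataclass
    class AIToolRecord:
        tool: str               # what is in use
        owner: str              # who is accountable for it
        use_case: str           # what it is used for
        human_checkpoint: bool  # is output reviewed before it is relied on?

    # Hypothetical entries - the point is the fields, not the values.
    register = [
        AIToolRecord("Contract review assistant", "Commercial team",
                     "First-pass NDA review", True),
        AIToolRecord("General-purpose chatbot", "Unknown",
                     "Ad hoc legal research", False),
    ]

    # The audit question in one pass: which uses have no accountable owner,
    # or no human checkpoint?
    for record in register:
        if record.owner == "Unknown" or not record.human_checkpoint:
            print(f"GOVERNANCE GAP: {record.tool} ({record.use_case})")

Entries with an 'Unknown' owner or unreviewed output are where governance failures start.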


The View From Here

By 2036, the profession will look different. Fewer traditional lawyers. More legal work. Different skills. Fractured business models. A more sceptical client base.

The billable hour will not die, but it will never again be safe from scrutiny. The partnership model will fragment. The sole practitioner will not disappear. Properly equipped, they may thrive.

The most valuable lawyers will not be the ones who know the most law.

They will be the ones who know when to trust the machine - and when not to.

That is not a technology question.

It is a judgment question.

And judgment, for now at least, remains stubbornly human.

AI is not the risk.

AI without oversight is.

And oversight, in the end, is what lawyers do.


About the Author

Rory O'Keeffe is the founder of RMOK Legal, a City of London commercial law firm specialising in AI governance, commercial technology law, and fractional general counsel services. An SCL-accredited Leading IT Lawyer and member of the Society for Computers and Law AI Committee, Rory has over 20 years of experience spanning big law (Matheson), Fortune 500 in-house (Accenture), and sole practice. He is the author of AI Advantage: Thriving Within Civilisation's Next Big Disruption (2025) and hosts the Beyond The Fine Print podcast. RMOK Legal won Commercial Law Firm of the Year at the 2026 Corporate LiveWire Innovation & Excellence Awards.


FAQ

  • Will AI replace lawyers? AI will not replace lawyers, but it will fundamentally change what lawyers do. By 2036, AI will handle the production work (drafting, research, compliance monitoring, contract review), while lawyers focus on judgment, governance, client relationships, and ethical reasoning. The SRA has already authorised two AI-native law firms in England and Wales, signalling that technology-led delivery is permitted for narrow, standardised areas of law.

  • What is agentic AI? Agentic AI refers to AI systems that plan, reason, and execute multi-step workflows autonomously, rather than simply responding to individual prompts. In legal practice, agentic AI can conduct research across multiple sources, review contracts against firm playbooks, cross-reference regulatory requirements, and generate structured work product with human oversight at key decision points. Thomson Reuters and LexisNexis both launched agentic legal AI products in early 2026.

  • What skills will lawyers need by 2036? Lawyers in 2036 will need AI literacy (understanding how AI systems reason and fail), data fluency (understanding data architectures and algorithmic decision-making), systems thinking (designing governance frameworks and legal workflows), and multi-jurisdictional regulatory fluency across competing global AI frameworks. These will sit alongside traditional skills such as commercial judgment, ethical reasoning, and emotional intelligence, which AI cannot replicate.

  • Why is the billable hour under pressure? AI compresses the time required for many legal tasks, making time-based billing increasingly difficult to justify ethically and commercially. Harvard Law School research found AI delivering 200x productivity gains in high-volume litigation. 72% of US firms already offer alternative fee arrangements. By 2036, value-based and fixed-fee pricing is expected to dominate for routine and mid-complexity work.

  • What has the SRA authorised? The SRA authorised Garfield.Law in May 2025 and LawFairy in February 2026, both of which deliver regulated legal services primarily through technology. The SRA requires that named regulated solicitors remain accountable. However, it has not yet issued substantive guidance on how the duty of competence applies specifically to AI tools in practice.

  • What does the EU AI Act mean for law firms? The EU AI Act, with enforcement beginning in August 2026, imposes fines of up to 35 million euros or 7% of global turnover for non-compliance. UK law firms advising clients who operate in EU markets must understand the Act's risk classification system, transparency requirements, and governance obligations. By 2036, similar frameworks are expected to exist across major jurisdictions, making AI governance a core legal competence.

  • What does the Global Foresight 2036 survey predict? The Atlantic Council's survey of 447 experts from 72 countries found that 63% expect the world to be worse off, 58% believe artificial general intelligence will have been achieved, and most expect China to be the world's leading economic power. Regulatory fragmentation between the US, EU, and Asia will deepen, creating a multipolar legal landscape where cross-jurisdictional compliance becomes a defining skill for commercial lawyers.
