Categories: Industry Analysis, Legal Tech & AI, Practice Management

Every AI Scandal Is Teaching the Public They Don’t Need You

The most dangerous AI threat to lawyers isn’t being talked about. It isn’t sanctions. It’s what happens after the headline.


What does that actually mean? It’s something I’ve been thinking about every day. I can’t seem to shake it. And the more I dig into it, the more I notice that no one is really talking about it. So, let’s talk about it.

A DOJ attorney panicked. He’d accidentally overwritten his draft. So he asked ChatGPT to rewrite it, filed it, and assumed it was fine.

It wasn’t. The brief contained fabricated quotes and misstated case holdings. A magistrate judge caught it immediately. The attorney resigned the next day.

The legal world read this as a cautionary tale. Don’t be that guy. Verify your work.

But the public read something very different.

They read: A lawyer used AI to do his job.

Not “a lawyer used AI and got caught.” Not “a lawyer was sanctioned for recklessness.” Simply, a lawyer used AI. To write a legal brief. And it was convincing enough to file in federal court.

That’s the story the public keeps. And it’s the story you need to understand, because it’s far more dangerous than sanctions ever could be.


The Headline Problem

Every time a lawyer is sanctioned for AI misuse, two things happen simultaneously.

First, one attorney’s career takes a hit. Sanctions. Suspension. Resignation. The legal community clucks its tongue and moves on.

Second, and this is the part I don’t see talked about, millions of people absorb a very simple message: AI is doing legal work now.

They don’t understand sanctions. They don’t understand hallucinations. They don’t understand that a fabricated case citation isn’t a minor error. It’s a fundamental failure of the adversarial system. They don’t know what precedent means or why it matters.

They just see: Lawyer + AI = I can do that, too.

And herein lies the danger. Not just to individual attorneys, but to the legal profession as a whole. People are increasingly asking themselves a simple question:

Why am I paying someone $400 an hour for something a chatbot can do?

This isn’t hypothetical. A recent survey found that 42% of people would consult AI before calling a lawyer. Not instead of. Before. AI has already become the waiting room for legal services. And every reckless filing pushes more people through that door.

The sanctions count has passed 1,200 worldwide. Each one is a cautionary tale for lawyers. And a marketing campaign against them.


The Context Problem

AI will never understand your client.

When someone walks into your office and tells you their story, they’re not giving you data. They’re giving you trust. They’re telling you something important. It’s why they’re in your office in the first place. They’re in trouble, they need help, and the details of their life are now in your hands.

Those details matter. Not the summary. Not the bullet points. The details.

Cases are won on minutiae. A date that doesn’t line up. A witness who hesitated. A clause buried on page fifty-eight that everyone else skimmed past. The small, human, specific things that only surface when someone is paying close attention. When someone cares.

AI doesn’t care. It compresses. It summarizes. It loses context mid-thought and reduces human complexity to neat, confident paragraphs that sound authoritative and miss everything that matters. AI can fake it well. But it simply isn’t what your clients need: a compassionate, understanding, knowledgeable human being.

And what actually happens in practice often undermines that entire process. You meet with your client. You hear their story. Then you hand the case work to a paralegal. The paralegal hands the drafting to AI. Three degrees of separation between the person who heard the story and the machine producing the work product. All of the details that matter most are lost in translation.

You speedrun a complex legal workflow into a reckless game of telephone. And your client’s case — their freedom, their family, their future — is on the other end of it. And well-intentioned though you may be, your client relationship suffers. Your client suffers.

Their story cannot be distilled into bullet points. It shouldn’t be. That’s the whole point of hiring a lawyer.


The Accountability Problem

When AI is wrong, nothing happens to it.

It doesn’t face sanctions. It doesn’t lose its license. It doesn’t pay malpractice claims. It doesn’t sit across from a judge and explain itself. It doesn’t lose sleep. It doesn’t care.

It can’t care. It’s a machine. It has no bar card, no oath, no duty of care, no skin in the game whatsoever. No understanding of complex context, no awareness of chilling consequences.

So when it fabricates a case citation — and it will — who pays?

You do. Your reputation. Your career. Your license.

And worse: your client pays. The person who trusted you with their problem now has a bigger one. Because the machine you relied on felt no obligation to get it right, and the consequences fell on the only people in the room who are actually accountable.

AI has no liability. And it’s built that way. It’s the entire problem. AI is not in a “trust, but verify” state. Everything it outputs must be verified. Because getting it wrong doesn’t actually have any meaningful impact on AI. It can tell you the definition of accountability. But it doesn’t understand it.


The Training Problem

There’s a deeper irony that almost nobody is talking about.

Every brief you feed into AI, every motion you let it draft, every contract you ask it to review — you are teaching it to sound like a lawyer.

Not to be a lawyer. It will never be a lawyer. It can’t reason from first principles. It can’t exercise judgment. It can’t sit with a client and understand what’s actually at stake.

But it doesn’t have to.

It just has to be good enough to fool people into thinking it is one.

And every time you use it to do work you should be doing yourself, you’re making it a little more convincing. A little more polished. A little more capable of producing something that looks, to an untrained eye, like the real thing.

You are training your replacement. And your replacement doesn’t need to pass the bar. It just needs to pass the smell test for the 42% of people who are already asking it questions before they call you.

The more lawyers rely on AI, the faster it learns to imitate them. The faster it imitates them, the more the public believes it’s sufficient. The more the public believes it’s sufficient, the fewer people pick up the phone.

That’s the feedback loop. And lawyers are accelerating it every time they skip the work.


Verify Everything

Let me be clear about something: AI is a remarkable tool.

It can draft faster than any associate. It can summarize a hundred pages in seconds. It can find patterns in data that would take a human team weeks to surface. Used well, it makes good lawyers better.

But “used well” is doing all the heavy lifting in that sentence.

Read that again: used well, AI makes a good lawyer better. But AI is not a lawyer. Or a paralegal. Or a member of your staff. The second you think of it in those terms, you’ve lost. AI is a tool. The same way a bicycle lets a human travel faster and farther than any land mammal, AI makes a lawyer vastly more effective than a lawyer without it. But you still need a human being on that bike to win the Tour de France. The mind still has to pedal.

Right now, the legal profession is not using AI well. It’s throwing spaghetti at the wall and hoping the landlord doesn’t notice the stains. No policies. No training. No monitoring. No accountability frameworks. Just vibes and a prayer that nobody checks the citations.

That’s not a smart implementation. You’re paying a subscription fee to increase negligence.

For legal work, AI is still firmly in verify everything territory. Every citation. Every quote. Every case holding. Every factual claim. Every single output, every single time.

That’s not because AI is bad. It’s because AI is confident. It will present fabricated information with the same polished certainty as verified fact. It doesn’t flag its own uncertainty. It doesn’t say “I’m not sure about this one.” It just… answers. Fluently. Convincingly. Incorrectly.

The attorneys being sanctioned aren’t stupid. They’re busy. They’re under pressure. They’re overworked. And they trusted a tool that was never designed to be trusted.

And it’s not just the attorneys themselves. Paralegals are increasingly using AI to complete their work — sometimes without even telling the lawyers whose names are on the line. If you don’t already have an AI policy in place, it’s time.


The Real Threat

Let’s talk about what no one wants to say out loud.

AI doesn’t threaten lawyers by being better than them.

It threatens lawyers by convincing the public that the difference doesn’t matter.

Every reckless filing. Every fabricated citation that made it to a judge’s desk. Every headline about another attorney sanctioned for AI-generated work. These aren’t just individual failures. They are, collectively, slowly, methodically teaching the public that legal work is something a machine can do, while simultaneously training the machine to get better at faking it.

And once that belief takes hold — once enough people decide that AI is “close enough” — it doesn’t matter how wrong they are. The damage is done. The calls stop coming. The trust evaporates. And the profession that exists to protect people’s rights becomes, in the public imagination, an expensive middleman. Just another unnecessary expense.

Don’t be the next lawyer sanctioned for AI. But more importantly:

Don’t be the lawyer who teaches the public they don’t need lawyers.

Your license is yours to protect. But the profession belongs to all of you. And right now, every shortcut is a crack in the foundation.

Use the tool. Respect the tool. Verify everything the tool produces.

Your clients deserve nothing less. And your entire profession is on the line. The real AI threat to lawyers isn’t hallucinations or sanctions, or even the replacement of attorneys’ jobs. It’s teaching the public, falsely, that AI can do what it truly cannot.

The mind still has to pedal.

Categories: Legal Tech & AI

Law Firms Want AI. They Just Can’t Use Yours.

The legal industry has an AI problem. And it’s not what the vendors are telling you.

Every legal tech company is racing to add AI features. Document review. Contract analysis. Research assistance. The demos are impressive. The productivity gains are real.

But there’s a problem nobody wants to talk about:

Most law firms can’t actually use any of it.

The Compliance Wall

Here’s what happens when a law firm evaluates AI-powered legal tech:

  1. Vendor shows impressive demo
  2. Partner gets excited about efficiency gains
  3. IT and compliance review the architecture
  4. They discover client documents must be uploaded to vendor’s cloud servers
  5. Deal dies

This isn’t paranoia. This is lawyers understanding liability.

When you upload a client’s confidential merger documents to a third-party server for “AI analysis,” you’ve created a chain of custody problem. You’ve introduced a data breach vector you can’t control. You’ve potentially violated the confidentiality obligations you swore to uphold.

The bar doesn’t care how good the AI is. They care whether you protected client data.

The “Enterprise Security” Lie

Cloud legal tech vendors love to wave their SOC 2 certifications. Their “bank-level encryption.” Their “enterprise-grade security.”

Ask them these questions:

  • Where exactly is my client’s data stored?
  • Who at your company can access it?
  • Are you using client data to train your AI models?
  • If you’re breached, how many other firms’ data is exposed alongside mine?

Watch them squirm.

The uncomfortable truth: when you use cloud-based AI legal tools, you’re trusting a vendor’s security team more than your own. You’re betting your malpractice exposure on their infrastructure. You’re hoping the target painted on their servers (containing data from thousands of law firms) doesn’t attract the wrong attention.

The Cost of Waiting

Here’s the math that keeps managing partners up at night:

A 4-attorney firm with proper AI automation saves roughly $150,000-200,000 annually in administrative overhead. Document review that took hours takes minutes. Time entries that fell through the cracks get captured. Invoice errors get caught before clients see them.

Every month you wait for “compliant AI” is $12,500 to $16,700 in efficiency you’re leaving on the table.
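That monthly figure is just the annual estimate divided by twelve. Here’s the arithmetic as a quick sketch, using the article’s range as input rather than any measured data:

```python
# Back-of-the-envelope check on the monthly cost of waiting. The annual
# savings range is the estimate quoted above, not a measurement.
annual_low, annual_high = 150_000, 200_000
monthly_low, monthly_high = annual_low / 12, annual_high / 12
print(f"${monthly_low:,.0f} to ${monthly_high:,.0f} per month")
# -> $12,500 to $16,667 per month
```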

Meanwhile, somewhere, a competitor is figuring this out. They’re getting the productivity gains while you’re stuck in evaluation paralysis.

Here’s the irony nobody talks about. Most firms are already paying for cloud subscriptions that include AI features. Features they can’t safely turn on. You’re paying for the bullet point on the vendor’s website, not for actual productivity in your office.

Think about what that costs. At $50 to $150 per user per month, a 10-attorney firm is spending $6,000 to $18,000 a year on software that creates compliance risk the moment you use its flagship feature. That’s money going toward tools you’re actively afraid to use. You could invest that budget in invoicing tools built for Mac that actually work without uploading client data anywhere.

The firms pulling ahead right now aren’t waiting for cloud vendors to solve the privacy problem. They’re finding local-first alternatives. Software that runs AI on their own hardware, keeps data in their own office, and never asks them to choose between efficiency and ethics.

The Answer Was Always Local

What if the AI never left your building?

The same AI models that power cloud services can run locally on modern hardware. Your Mac. Your server. Your office.

  • Document analysis, on your machine
  • Contract review, on your machine
  • Time tracking intelligence, on your machine
  • Invoice anomaly detection, on your machine

And this isn’t some compromise where you sacrifice speed for privacy. Modern Mac hardware, especially Apple’s M-series chips, is powerful enough to run sophisticated AI models right on your desk. The same kinds of models that power cloud services can run locally with performance that would have been unthinkable three years ago. If your firm already uses Mac-native legal billing software, you’re already on the right hardware.
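If you want to see what “running AI on your own hardware” looks like concretely, here is a minimal sketch using the open-source llama-cpp-python library. The model filename and document path are placeholders (any locally downloaded GGUF model works), and this illustrates the pattern, not any particular product’s implementation:

```python
# Minimal local-inference sketch with llama-cpp-python
# (pip install llama-cpp-python). The model file and document below
# are placeholders. Nothing here makes a network call: the model,
# the document, and the output all stay on this machine.
from llama_cpp import Llama

llm = Llama(model_path="models/llama-3-8b-instruct.Q4_K_M.gguf", n_ctx=8192)

with open("engagement_letter.txt") as f:
    contract_text = f.read()  # read locally, never uploaded

response = llm.create_chat_completion(messages=[
    {"role": "system", "content": "You summarize legal documents carefully."},
    {"role": "user", "content": f"Summarize the key obligations in this document:\n\n{contract_text}"},
])
print(response["choices"][0]["message"]["content"])
```

llama.cpp ships a Metal backend for Apple Silicon, which is why inference like this is practical on an M-series Mac rather than merely possible.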

No uploads. No third-party servers. No chain of custody problems.

When a client asks, “Where does my data go when you use AI?” you have a real answer:

“Nowhere. It never leaves our office.”

The New Standard

The firms that figure this out first don’t just save money. They gain a competitive advantage that compounds.

While competitors are still uploading sensitive documents to cloud AI or avoiding the technology entirely, these firms are:

  • Reviewing documents faster
  • Catching more billable time
  • Sending cleaner invoices
  • Actually using AI, without the compliance nightmare

The question isn’t whether AI will transform legal practice. That’s already decided.

The question is whether you’ll be using AI that respects attorney-client privilege, or AI that treats your client’s data like training fodder.

Your clients are trusting you with their most sensitive information. Choose tools that honor that trust.


TimeNet Law is practice management software built for attorneys who take data privacy seriously. All data stays on your hardware. No cloud. No subscriptions. No compromises.

Learn how local AI actually works →

Categories: Legal Tech & AI

BigLaw Gets AI Efficiency. Their Clients Get Higher Bills.

Law firms are using AI to work faster. Their clients are getting charged the same, or more. Logic tells us this should work the other way around. This broken equation only works for the one sending the invoice.

So what’s going on?


This week, Richard Tromans at Artificial Lawyer reported something that should make every small firm owner sit up straight.

At a conference in Stockholm, a senior in-house lawyer said it plainly: despite all the press releases about law firms adopting AI, “We get nothing. They haven’t changed and probably next year the same work will cost even more.”

The GC went on to say they’d probably need to have “the conversation” with their outside counsel this year. The conversation where they ask: if you’re using AI to do this work faster, why am I paying the same hourly rate?

The answer, of course, is simple: because they can.


The BigLaw AI Arbitrage

Here’s what’s actually happening at large law firms:

  1. They buy expensive AI tools
  2. They write press releases about “innovation” and “efficiency”
  3. Associates use the tools to do work faster
  4. The firm pockets the efficiency gains
  5. Client bills stay the same (or go up)

This isn’t a conspiracy. It’s just business. Law firms aren’t charities. If they can do the same work in half the time but charge the same amount, they will. The billable hour model practically demands it.

Why would a firm reduce fees when the client has already accepted the price? Why would they pass along savings when they can keep them as profit?

The answer is: they won’t. Not until clients force them to.


The Small Firm Advantage Nobody’s Talking About

Here’s what makes this story interesting for attorneys running their own practices:

When you adopt AI in a small firm, YOU get the efficiency gains.

There’s no partner committee deciding whether to pass the savings along. There’s no billionaire-funded PE firm demanding year-over-year revenue growth. There’s just you, doing better work in less time, and deciding what to do with those extra hours.

You could:

  • Take on more clients without burning out
  • Offer more competitive fixed-fee pricing
  • Spend more time on the complex work that actually requires a lawyer
  • Go home at 5pm for once

The same technology that BigLaw uses to pad margins, small firms can use to outcompete on price while maintaining quality.

That’s a structural advantage that didn’t exist five years ago.


The Sea Change Is Coming

What Tromans reported from Stockholm wasn’t a one-off complaint. It was a preview of what’s about to happen across the industry.

In-house legal teams are using AI now. Real usage, not pilot programs. They’re seeing firsthand what can be done. The same contract review that used to take a week now takes a day. The same research memo that justified twenty hours of associate time now takes two.

And they’re asking: if we can do this, why can’t our outside counsel? And if they can, why aren’t we seeing it in the bills?

The quote that stuck with me: “If they were all part of a single law firm, then that law firm would no doubt receive a prize for being a world leader in legal innovation.”

That was describing in-house teams, not law firms.

The buyers are becoming more sophisticated than the sellers. That never ends well for the sellers.


Position Yourself Now

If you’re a small firm or solo practitioner, this is your window.

While BigLaw is playing defense, trying to justify why AI efficiency shouldn’t translate to lower bills, you can play offense. You can build your practice around the new economics:

  • Fixed fees that work because your actual time investment is reasonable
  • Faster turnaround that makes clients feel prioritized
  • Competitive pricing against larger firms (who can’t match you without cannibalizing their own model)

The corporate clients complaining in Stockholm aren’t your clients. But the small business owners, the individuals, the startups who’ve been priced out of quality legal help? They’re watching this unfold too.

And they’re looking for alternatives.

We built a complete blueprint for this. Practice area pricing templates, AI efficiency math, flat fee packages with real market rates, and a step-by-step strategy for building the kind of firm that makes BigLaw irrelevant. Read the full Future-Proof Law Firm guide →


60-Second Firm Hack: The “What Would AI Do?” Audit

Pick your three most common matter types. For each one, write down:

  1. The tasks that take the most time
  2. Which of those tasks are repetitive or templatable
  3. What would change if those tasks took 10% of the current time

That third question is where the magic is. If contract review takes 10% of the time, do you charge less? Take on more clients? Bundle it into a fixed fee that feels like a steal?
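To see why, put hypothetical numbers on that third question. Everything below is invented for illustration; substitute your own hours and rates:

```python
# Hypothetical math for question 3. All figures are made up;
# plug in your own matter data.
hours_today = 10       # hours a typical contract review takes now
hourly_rate = 300      # your billing rate, in dollars

hours_with_ai = hours_today * 0.10           # the "10% of current time" scenario
old_hourly_bill = hours_today * hourly_rate  # $3,000 billed the old way

flat_fee = 1_500                             # half the old bill: a steal for the client
effective_rate = flat_fee / hours_with_ai    # $1,500 / 1 hour = $1,500/hour for you

print(f"Old hourly bill: ${old_hourly_bill:,}")
print(f"Flat fee: ${flat_fee:,} (effective ${effective_rate:,.0f}/hour)")
```

The client pays half of what they used to, and your effective rate quintuples. That’s the pricing math a BigLaw firm can’t match without cannibalizing its own billable-hour model.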

The firms that answer that question first will own the next decade.


Off the Record

TimeNet Law wasn’t built for BigLaw economics. It was built for attorneys who actually want to run efficient practices, and keep the benefits.

We’re not trying to help you bill more hours. We’re trying to help you bill smarter hours, track them accurately, and get paid faster. The efficiency gains from good practice management software should flow to you, not to some PE-backed vendor’s quarterly earnings.

That’s been our philosophy for over 20 years. Nice to see the rest of the industry catching up to why it matters.

See what efficient practice management looks like →


The billable hour rewards inefficiency. AI exposes that. What you do with that information depends on which side of the invoice you’re on.

Ready to flip the equation? Build Your Future-Proof Firm →

Categories: Industry Analysis, Legal Tech & AI

The Legal Tech Ground Is Shifting. Here’s What You Need to Know.

Last week, Anthropic launched a legal plugin for Claude. Legal tech stocks cratered. Meanwhile, 8am is stitching together another Frankenstein’s monster of practice management tools. If you’re feeling a little dizzy watching all this, you’re paying attention.

It’s been a wild few weeks in legal tech. And if you’re an attorney just trying to run your practice without getting caught in the crossfire, the news probably feels exhausting. Let me break down what actually matters.

The Claude Bomb

Anthropic, the company behind the Claude AI platform, just dropped a legal plugin that lets in-house counsel automate contract review, NDA triage, and compliance workflows. When they announced it, Thomson Reuters, RELX, and Wolters Kluwer stocks plummeted.

The market reaction tells you everything. For years, legal tech vendors have been wrapping foundation AI models and selling them back to you with a markup. Now the foundation model companies are cutting out the middleman. They’re going straight to the enterprise with pre-built workflows that do exactly what $50,000/year platforms do.

Is this the death of legal tech? No. But it’s a signal. The vendors who built their entire value proposition around “we’ll put AI on top of your contracts” are suddenly looking very exposed. The ones with actual proprietary data and deep subject matter expertise will survive. The ones who were just playing markup arbitrage? Not so much.

The 8am Consolidation Machine

Meanwhile, the company formerly known as AffiniPay (now rebranded as “8am”) continues its shopping spree. They already own LawPay, MyCase, CasePeer, and DocketWise. Now they’re expanding LawPay into a “complete financial management solution” that combines payments, invoicing, time tracking, expense management, and reporting.

On paper, this sounds great. One platform! Everything integrated!

In reality, you know how this works. Consolidation means different codebases stitched together by acquisition. Different teams who’ve never worked together. Different philosophies about what attorneys actually need. And eventually, inevitably, price increases to pay for all that M&A activity.

The press release uses phrases like “financial complexity and cash flow constraints have become serious operational risks for law firms.” Translation: we bought a bunch of companies and need to justify the integration costs to our investors.

What This Actually Means for Your Practice

Here’s the uncomfortable truth: most legal tech is built for investors, not attorneys. The VC playbook is simple. Buy up competitors. Raise prices. Cut support costs. Extract maximum value before the next exit.

You’ve seen this movie before. Clio’s price hikes. The endless consolidation in the practice management space. The slow degradation of support as companies scale. The features that used to be included becoming “premium add-ons.”

The AI disruption makes this even messier. Companies that spent millions acquiring AI wrappers are now watching foundation models undercut them. They’ll respond the only way they know how: raising prices on existing customers to protect margins. Meanwhile, attorneys keep paying rent on software they should own.

The Alternative Nobody Talks About

There’s another way to build legal software. You build something good. You support it directly. You don’t sell to private equity. You don’t chase growth at all costs. You just make something that works and charge a fair price for it.

It sounds almost quaint in 2026. But it’s the model TimeNet Law has followed for twenty years. Same owner. Same developer. Same phone number when you need help.

No investor pressure to raise prices. No integration chaos from acquisition sprees. No wondering whether your software will exist in its current form next year. Just software that does what it’s supposed to do, built by someone who actually answers support calls.

That’s not a sales pitch. It’s just how things should work.

⚡ 60-Second Firm Hack: The Monday Morning Client Pulse

Before you open email Monday morning, spend 60 seconds scanning your open matters. Pick three clients you haven’t heard from in two weeks. Send each a one-line email: “Just checking in. Anything you need from me this week?”

Three emails. 60 seconds. You’ll be amazed how often this simple touchpoint uncovers a forgotten question, prevents scope creep, or simply reminds a client that you’re thinking about their matter.

The best firms don’t wait for clients to reach out. They stay one step ahead.


The legal tech landscape is going to keep shifting. AI will keep disrupting. Consolidation will continue. Prices will rise. Support will get worse at companies chasing scale.

Your job isn’t to predict all of it. Your job is to pick tools built by people who share your values, who will still be here in five years, and who won’t hold your data hostage when you need to move on.

That’s not complicated. It’s just rare.


Want the Inside Track?

The tips in this post are just the beginning. Sunday Brief is my private newsletter where attorneys get the must-have tips, secrets, and news that don’t make it to the blog.

No fluff. No sales pitches. Just the insider knowledge that helps you run a better firm.


Sign up to get more straight talk about legal tech, billing, and building a practice that actually works.

Categories: Legal Tech & AI, Privacy & Security

Microsoft Copilot Read Your Confidential Emails for a Month. Lawyers Should Be Paying Attention.

For almost a month, Microsoft Copilot treated confidential emails as anything but. Microsoft’s AI assistant was reading and summarizing emails marked “confidential” before anyone noticed. If your law firm uses Microsoft 365, you should be paying very close attention right now.


On February 18, Bleeping Computer reported that Microsoft 365 Copilot Chat had been quietly summarizing confidential emails since January 21. Not just regular emails. Emails with sensitivity labels applied. Emails protected by data loss prevention (DLP) policies that were explicitly configured to prevent exactly this from happening.

Microsoft confirmed it. Their own service alert (tracked as CW1226324) stated that “users’ email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat.”

The bug affected the Copilot “work tab” chat feature, which was pulling content from users’ Sent Items and Drafts folders and summarizing it on demand, regardless of whether those messages were supposed to be locked down.

For almost a month. In silence.


How the Microsoft Copilot Confidential Emails Bug Affects Law Firms

Let’s be direct about what happened here.

If your law firm runs Microsoft 365 with Copilot Chat enabled, and you had confidential client communications sitting in your Sent Items or Drafts folders (which of course you did), Microsoft’s AI may have been reading and summarizing those communications. Even if you did everything right. Even if you applied sensitivity labels. Even if you configured DLP policies to prevent automated access.

Your controls were bypassed by a “code issue.”

Microsoft’s official response? “This did not provide anyone access to information they weren’t already authorized to see.”

That’s technically true, and it completely misses the point. The concern isn’t that a stranger accessed the emails. The concern is that an AI system ingested, processed, and summarized privileged communications that were explicitly marked as off-limits. Content that was supposed to be invisible to automated systems was being actively read, analyzed, and presented in chat summaries.

For attorneys, this isn’t a minor configuration hiccup. This is a potential breach of the duty of confidentiality.


The Privilege Problem Nobody Is Talking About

Here’s where it gets really uncomfortable.

Just eight days before the Copilot bug was publicly reported, a federal judge in United States v. Heppner ruled that AI is not your co-counsel when it comes to attorney-client privilege. The court held that sharing information with consumer-grade AI tools can destroy privilege entirely, because those tools are third-party services with no confidentiality obligation.

Now combine that with what Microsoft just admitted.

You applied confidentiality labels to your emails. You set up DLP policies. You did what Microsoft told you to do to keep privileged content away from AI. And Microsoft’s own AI read it anyway. For weeks.

The Heppner decision says sharing privileged information with AI can waive privilege. Microsoft’s bug means privileged information may have been shared with AI without your knowledge or consent.

Ask yourself: if opposing counsel in active litigation discovered that your firm’s privileged communications had been processed by Microsoft’s AI for a month, what motion do you think they’d file?


The NHS Was Affected. The European Parliament Pulled the Plug.

This wasn’t some niche edge case affecting a handful of users.

The BBC reported that the bug was logged on the NHS’s internal IT support dashboard in England. The same week, the European Parliament’s IT department disabled built-in AI features on staff devices entirely, citing concerns that AI tools could transmit confidential data to external cloud servers.

Two of the world’s most security-conscious organizations either got burned or decided the risk wasn’t worth taking.

Meanwhile, Microsoft hasn’t disclosed how many organizations were affected. They described the incident as an “advisory,” a classification typically used for issues with “limited scope or impact.” They have not provided a final timeline for full remediation.


The Experts Are Not Sugarcoating It

Nader Henein, a data protection and AI governance analyst at Gartner, told the BBC this kind of failure is “unavoidable” given the speed at which companies push new AI features to market.

“Under normal circumstances, organisations would simply switch off the feature and wait till governance caught up. Unfortunately the amount of pressure caused by the torrent of unsubstantiated AI hype makes that near-impossible.”

Dr. Ilia Kolochenko, CEO of ImmuniWeb and a Fellow at the European Law Institute, was even more blunt in his assessment to Cybernews:

“With the rapid proliferation of Agentic AI and AI-powered plugins for traditional software, incidents like this one will likely surge in 2026, possibly becoming the most frequent type of security incident at both large and small companies around the globe.”

Professor Alan Woodward of the University of Surrey called it a lesson in why AI tools must be “private-by-design” from the start, not patched after the damage is done.

And here’s the line that should keep every managing partner up at night, from Dr. Kolochenko:

“Every day, tons of sensitive personal data are shared with LLMs around the globe without any precautions. Even governmental agencies of developed countries are exposed to this risk because of inadequate or simply missing governance of AI at workplace.”


A Pattern, Not an Incident

If you’ve been following this blog, this story should sound familiar.

Two weeks ago, we published our investigation into the law firm data broker pipeline, documenting how legal tech SaaS platforms funnel attorney data to third-party brokers. Last week, we covered how Claude AI hallucinated an entire lease agreement using fragments of real data from its training set.

And now Microsoft’s own enterprise AI is bypassing the very security controls it was designed to respect.

This isn’t a series of unrelated incidents. This is a pattern. The legal tech stack that law firms depend on is leaking from every direction: through data brokers, through AI hallucinations, and now through the tools that are supposed to protect your confidential communications in the first place.


What Your Firm Should Do About Microsoft Copilot Confidential Emails

If your firm uses Microsoft 365 with Copilot Chat enabled:

  1. Verify the patch is deployed. Microsoft says a configuration update has been pushed worldwide, but they also said the rollout is still “in progress” for some “complex service environments.” Don’t assume you’re covered. Confirm it.
  2. Audit what Copilot accessed. Determine which users had Copilot Chat active during the January 21 to mid-February window. Identify any confidential or privileged communications that may have been processed. (A scripted starting point follows this list.)
  3. Review your DLP policies. If your data loss prevention rules didn’t stop an AI tool from reading labeled content, you need to understand why and what else might slip through.
  4. Assess your ethical obligations. Depending on your jurisdiction, you may have disclosure requirements when privileged client communications are potentially compromised. Talk to your ethics counsel.
  5. Reconsider the AI defaults. The European Parliament disabled AI features entirely until governance catches up. That’s not paranoia. That’s prudent risk management. Better yet, consider tools that run entirely on your Mac, free from cloud dependency.
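For step 2, if your IT team can export the Microsoft 365 unified audit log to CSV, a short script can narrow the review to the bug window. This is a sketch, not a turnkey audit: the column names and the “CopilotInteraction” record type are assumptions that should be checked against your tenant’s actual export schema.

```python
# Filter an exported Microsoft 365 audit log down to Copilot activity
# during the reported bug window. The column names ("CreationDate",
# "UserIds", "RecordType") are assumptions; adjust to your export.
import csv
from datetime import datetime

WINDOW_START = datetime(2026, 1, 21)  # when the bug reportedly began
WINDOW_END = datetime(2026, 2, 18)    # public disclosure

with open("audit_log_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        when = datetime.fromisoformat(row["CreationDate"])
        if (row["RecordType"] == "CopilotInteraction"
                and WINDOW_START <= when <= WINDOW_END):
            print(row["UserIds"], when.isoformat())
```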

The Bottom Line on Microsoft Copilot Confidential Emails

Microsoft wants you to feel reassured. The bug is fixed. Access controls were intact. Nobody saw anything they weren’t supposed to.

But that framing ignores the fundamental problem: the AI was explicitly told not to read your confidential emails, and it read them anyway. The controls you were promised would work, didn’t. For almost a month.

In a profession built on confidentiality, “oops, the AI read your privileged emails” is not a minor software bug. It’s a crisis of trust in the tools we’ve been told are safe to use. When you don’t own your software, you’re at the mercy of whoever does.

And based on what every expert quoted in this story is saying, this won’t be the last time it happens.


Have questions about how AI tools interact with your firm’s confidential data? Get in touch. We’re tracking every major AI security incident affecting law firms and publishing what we find.

Categories: Legal Tech & AI, Privacy & Security

Claude Just Hallucinated a Complete Lease Agreement With Real Names and Addresses. Lawyers Are Freaking Out.

A Reddit post went viral this week when an attorney claimed Claude AI generated a complete commercial lease, with a real company, real address, and real contact information. What happened next should concern every lawyer using cloud-based AI.


Two days ago, a post on Reddit’s r/ClaudeAI forum hit 3,600 upvotes and 216 comments. The title:

“Claude just gave me access to another user’s legal documents”

Here’s what happened.

A user asked Claude Cowork, Anthropic’s new AI agent that reads and edits files on your computer, to summarize a document they’d uploaded. Instead of summarizing their document, Claude started describing a completely unrelated legal document. A commercial lease agreement.

Curious, the user asked Claude to generate a PDF of this mystery document.

Claude obliged. It produced a complete commercial lease agreement between “Commercial Properties, LLC” (Landlord) and “Collective, LLC” (Tenant) for a property in Blue Hill, Maine. Dated March 15, 2025. With contact information for the property management company.

The user did what any reasonable person would do: they called the property management company.

The company was real. The address was real. The contact information worked.

But the people named in the contract? The company seemed “confused” about them. And the attorney referenced in the document? Doesn’t appear to exist.


So What Actually Happened?

After 216 comments of debate, the consensus is clear: this was a high-fidelity hallucination.

Claude didn’t “leak” another user’s document. It did something arguably more unsettling. It mashed together fragments of real information (a real company name, a real Maine address, real contact details) with fabricated names, a nonexistent attorney, and invented lease terms. Then it presented the whole thing as a coherent, professional legal document.

As one commenter put it:

“It read their legal documents during the pre-training phase, probably cause they were public on the internet. Then Claude made up portions of the rest.”

A Hacker News commenter offered another theory: the property management company likely had an improperly configured cloud storage bucket that exposed a directory of leases. Those documents got scraped, ingested into AI training data, and now live inside the model, ready to be reassembled into something that looks authentic but isn’t quite real.

The Reddit moderator bot’s summary nailed it:

“Claude is scarily good at generating realistic-looking documents by mashing up info from its vast training data (i.e., the public internet). The fact that the attorney in the document doesn’t exist is pretty much the nail in the coffin for the data leak theory.”

Another user reported the exact same phenomenon: they uploaded a work document, and Claude started describing a completely unrelated fitness training plan, with specific details about someone else’s workout routine.


Why This Should Terrify Every Attorney Using Cloud AI

Let me be direct about what this means for lawyers.

1. Your Documents May Already Be Training Data

That commercial lease from Blue Hill, Maine didn’t materialize from thin air. Real company information ended up inside Claude’s training data. Whether it was scraped from a misconfigured server, indexed from a public webpage, or harvested through some other vector, the result is the same.

Real legal documents, with real names and real addresses, are inside these AI models.

Now think about your own practice. How many of your documents have touched cloud services? How many have been uploaded to AI tools by associates doing “quick research”? How many live on cloud platforms whose privacy policies permit data collection and sharing?

Every document that enters the cloud ecosystem is a candidate for ending up exactly where that Maine lease did: inside an AI model, waiting to be reassembled and presented to a stranger.

2. Hallucination + Real Data = A New Kind of Breach

This incident reveals a category of risk that didn’t exist two years ago.

Claude didn’t reproduce the lease verbatim. That would be a straightforward data leak, and Anthropic’s architecture is designed to prevent it. Instead, it created something more insidious: a document realistic enough to fool someone into calling the company named in it.

Imagine this scenario with your clients:

An opposing counsel asks an AI to draft a sample lease agreement for a property in your client’s city. The AI, trained on scraped data that included your client’s actual lease, generates a document with your client’s real address, their real landlord’s name, and plausible (but slightly wrong) financial terms.

That’s not a “leak” by any technical definition. It’s a hallucination. But it just exposed your client’s business relationships to a stranger.

Good luck explaining that distinction to your malpractice insurer.

3. “It’s Impossible” Isn’t Reassuring Anymore

Several commenters rushed to defend the technology:

“This is just more AI hysteria. I can’t speak to your intentions but what I can say is you have definitely not received someone else’s document. It’s impossible given Anthropic’s security disclosures.”

Maybe. Anthropic maintains segregated storage for each user session. Cross-user data leaks should be architecturally impossible.

But here’s the thing: it doesn’t matter whether this was a “real” leak or a hallucination. From a legal ethics standpoint, the outcome is identical. Real client information (company names, addresses, business relationships) surfaced in a context where it shouldn’t have. The mechanism is academic. The exposure is real.

And as one Hacker News commenter noted:

“Even in single-tenant deployments, if the vendor continues to manage the data and has AWS KMS access, a substantially motivated attorney could win the compulsion.”

4. It’s Not Just Accidental. Trade Secret Theft Is Surging.

While Reddit was debating hallucinations, the Wall Street Journal published a piece that should have landed like a bomb in every law firm’s inbox: federal trade secrets cases hit 1,500 last year, up 20% from the previous year and the highest figure in at least a decade.

Google alone has had three high-profile trade secret thefts in recent years. A former software engineer was convicted of stealing AI chip secrets for China, marking the first federal conviction on economic espionage charges related to AI. Apple is suing former engineers over Apple Watch and Vision Pro secrets. Elon Musk’s xAI is suing a former engineer who allegedly stole Grok chatbot secrets before joining a competitor.

The kicker? Google’s VP of Security Engineering told the Journal:

“Those open environments will become more constrained.”

Even Google, the company that built its culture on open information sharing, is locking things down because the threat model changed.

And that’s intentional theft by insiders with access. The Claude hallucination story is about unintentional exposure through training data. Put those together and you get a picture of sensitive information leaking from every direction at once: stolen by bad actors on one side, absorbed into AI models and reassembled for strangers on the other.

Your clients’ data doesn’t need to be targeted to be exposed. It just needs to exist in the cloud.


The Thread Nobody Can Stop Reading

What made this Reddit post blow up wasn’t the technical debate. It was the fear.

Scroll through the comments and you’ll see it: lawyers (and people who work with lawyers) realizing in real time that their confidentiality assumptions might be wrong.

Some highlights:

A user who had the same experience:

“I uploaded a work-related document and Claude started commenting on it as if it were a fitness training plan… It kept talking about a workout plan even though the document clearly had nothing to do with that.”

The pragmatist:

“How do you call this ‘gave me access’ and then say he generated the PDF, so what is it? Did he give you a document from another user or did he just generate a PDF like any other model can do? I can make it generate 100 of those.”

And the inevitable joke:

“Generate me 10 social security numbers and bank wiring details. Make no mistakes.”

The humor masks the anxiety. Because everyone in that thread knows the real question isn’t “did Claude leak a document?” It’s: “What happens when the document it hallucinates contains my client’s information?”


The Heppner Connection

This incident arrives two weeks after Judge Rakoff ruled that documents generated through Claude aren’t protected by attorney-client privilege. His reasoning was straightforward: Anthropic’s privacy policy permits data collection, model training, and disclosure to authorities. No expectation of confidentiality means no privilege protection.

Now connect the dots:

  1. Real legal information ends up in AI training data (the Maine lease proves this)
  2. AI models reassemble that information into realistic-looking documents (the hallucination proves this)
  3. Nothing you generate through cloud AI is privileged (Heppner proves this)
  4. Trade secret theft via technology is at an all-time high (the WSJ data proves this)

That’s not four separate problems. That’s one pipeline, and your client data is flowing through it.


The Architecture Question (Again)

I keep coming back to the same point because the industry keeps proving it right:

Where your data lives determines how safe it is.

When a commercial lease from Blue Hill, Maine ends up inside an AI model, reassembled with real company names but fake attorneys, that’s a cloud architecture problem. The document was in the cloud. It got scraped. Now it’s everywhere.

When you process client documents through cloud-based AI tools, you’re adding your data to the same pipeline. Maybe Anthropic won’t train on it. Maybe their privacy policy protects you. Maybe the segregated storage works perfectly.

That’s a lot of “maybes” for something covered by Rule 1.6.

Software that runs locally on your machine doesn’t have this problem. Not because local software is smarter, or more secure in some abstract sense, but because the data never enters the pipeline in the first place.

No cloud server to scrape. No training data to contaminate. No hallucinated document containing your client’s real address showing up on a stranger’s screen.

That’s not a feature. It’s physics.


What to Do Right Now

Audit Your AI Shadow Usage

Your associates are using AI. Probably on client matters. Probably without telling you. Ask them directly: “Have you ever uploaded a client document to ChatGPT, Claude, or any AI tool?” The answer will be uncomfortable.

Google Your Firm

Search your firm name, your clients’ names, and your address in combination with terms like “lease agreement,” “contract,” or “legal document.” See what’s publicly indexed. If a scraper can find it, an AI model may already contain it.

Read the Privacy Policy

Before you put another document into any cloud service, read that vendor’s privacy policy. All of it. Look for: “may use data to improve our services,” “may share with service providers,” “may disclose in response to legal process.” If you find those phrases, your data isn’t as private as you think.
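If a cover-to-cover read feels daunting, a crude keyword pass at least tells you where to start. A minimal sketch, assuming you’ve saved the policy as a local text file; the phrase list simply mirrors the red flags above:

```python
# Crude red-flag scan of a saved privacy policy. Matching phrases is
# no substitute for reading the whole document, but it points you at
# the sections that deserve attention first.
RED_FLAGS = [
    "improve our services",
    "share with service providers",
    "disclose in response to legal process",
    "train",  # catches "training data", "train our models"
]

with open("vendor_privacy_policy.txt") as f:
    policy = f.read().lower()

for phrase in RED_FLAGS:
    if phrase in policy:
        print(f"Red flag found: {phrase!r}")
```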

Consider Your Architecture

The simplest way to keep your data out of AI training sets? Don’t put it in the cloud. Local-first software keeps your files on hardware you control. No third-party servers. No training pipelines. No hallucinated leases with your client’s name on them.


The Bottom Line

Claude didn’t leak a document this week. It did something that might be worse: it proved that real legal information (company names, addresses, business relationships) lives inside AI models, ready to be recombined and presented to anyone who asks.

Meanwhile, trade secret theft is hitting record highs, the courts are stripping privilege from AI-generated documents, and even Google is admitting that open environments need to be locked down.

The Maine property management company got a confusing phone call from a stranger who’d never seen their actual lease. Next time, it could be your client’s information surfacing in someone else’s AI session.

The question isn’t whether AI is useful for lawyers. It is. The question is whether you trust someone else’s cloud server to keep your client’s secrets — or whether it’s time to break free from that dependency entirely.

Three thousand lawyers on Reddit just watched one answer to that question. It wasn’t reassuring.


Perry Fjellman is the developer of TimeNet Law, a Mac-native legal practice management application that keeps your data where it belongs: on your computer. Because the best way to prevent your data from being hallucinated is to never upload it in the first place.

See how local-first practice management works →

Or get the Sunday Brief, our newsletter for attorneys who want the real story on legal tech, without the corporate spin.

Subscribe to Sunday Brief →

Categories: Legal Tech & AI

A Federal Judge Just Ruled Your AI Research Isn’t Privileged. Here’s What That Means for Every Law Firm in America.

Two days ago, Judge Rakoff granted a motion that should make every attorney using cloud-based legal AI very uncomfortable. The reasoning is straightforward. The implications are enormous.

On February 10, 2026, Judge Jed Rakoff of the Southern District of New York ruled that documents a defendant generated through Claude (Anthropic’s AI) are not protected by attorney-client privilege or work product doctrine.

The case is United States v. Heppner, 25 Cr. 503 (SDNY) (full docket on CourtListener). The ruling was on the government’s motion to compel. And the logic applies far beyond this one case.

Let me explain why this matters to you.


The Government’s Argument (Which Won)

The DOJ’s motion was surgical. Four independent grounds, any one of which was sufficient:

1. The AI is not an attorney.
No privilege attaches to communications with a non-attorney third party. Claude is a commercial product, not legal counsel. There is no attorney-client relationship. This one’s obvious.

2. No expectation of confidentiality.
This is where it gets interesting. The government cited Anthropic’s privacy policy, which permits:

  • Collection of prompts and outputs
  • Use for model training
  • Disclosure to governmental authorities

The defendant voluntarily shared information with a platform whose own terms allow government access. You can’t claim confidentiality when the vendor’s ToS explicitly permits disclosure.

3. Retroactive privilege doesn’t work.
The defendant tried to argue that sharing the AI outputs with his attorney made them privileged. Judge Rakoff wasn’t having it. Pre-existing, non-privileged materials don’t become privileged just because you hand them to your lawyer later. This is Privilege 101.

4. Work product requires attorney direction.
The defendant created these documents on his own initiative, not at counsel’s direction. The work product doctrine protects materials prepared by or for a party’s attorney. It doesn’t protect a layperson’s independent research.

Four arguments. Four wins. Motion granted.


“But That Was a Criminal Defendant Using Consumer AI”

Yes. And that’s what makes this ruling dangerous, not limited.

The privilege analysis doesn’t turn on who’s typing. It turns on the architecture.

Read the government’s brief again. The confidentiality argument was based on Anthropic’s privacy policy. Not the defendant’s status. Not the nature of the queries. The vendor’s terms.

Those terms don’t change when an attorney does the typing. Claude’s privacy policy is the same whether you’re a criminal defendant or a senior partner at a white shoe firm.

If your legal AI tool runs through a cloud service whose terms permit data collection, training, or disclosure, you have the same confidentiality problem. The keyboard operator doesn’t matter. The vendor’s policies do.


The Architecture Problem Nobody Wants to Discuss

Here’s the part the legal AI vendors don’t want you thinking about.

Most legal AI tools operate as cloud services. Your prompts go to their servers. Their models process your queries. Your client’s information passes through infrastructure you don’t control, governed by terms you probably haven’t read carefully.

Go read your legal AI vendor’s privacy policy right now. (I’ll wait.)

Look for these phrases:

  • “may use data to improve our services”
  • “may disclose information in response to legal process”
  • “may share data with service providers and affiliates”

Found them? Congratulations. You’ve just identified why a substantially motivated opposing counsel could make a very uncomfortable argument about your AI-assisted work product.

Judge Rakoff didn’t create new law. He applied existing privilege principles to a new technology. And those principles don’t care whether the AI has a legal-specific marketing team.


What the Reddit Lawyers Are Saying

This case hit r/law and r/lawyers hard. The analysis in those threads is worth reading.

One commenter nailed the architectural point:

“Heppner is not really an ‘AI case.’ It is an architecture case. Judge Rakoff did not create a new anti-AI rule. He applied very traditional privilege principles… If you feed litigation strategy into a remote service whose own policy permits retention, training use, or disclosure, you are going to have a hard time arguing reasonable expectation of privacy.”

Another pointed out the discovery implications:

“Every single discovery request should be seeking non-privileged AI usage.”

And perhaps most concerning:

“Even in single-tenant deployments, if the vendor continues to manage the data and has AWS KMS access, a substantially motivated attorney could win the compulsion.”

These aren’t legal tech skeptics. These are practicing attorneys working through the implications in real time.


Two Architectures. Two Very Different Privilege Analyses.

Architecture A: Cloud-First Legal AI

  • Your data travels to vendor servers
  • Vendor ToS permits data collection, training, disclosure
  • No expectation of confidentiality (per Heppner analysis)
  • Potentially discoverable

Architecture B: Local-First Legal Software

  • Your data stays on your hardware
  • No third-party vendor with disclosure rights
  • No ToS permitting training or government access
  • You control storage, access, and retention

The Heppner ruling analyzed Architecture A and found no privilege protection. Architecture B was never at issue because there was no third party to analyze.

This isn’t a new argument. It’s just that most of the industry ignored it in the rush to ship cloud-based AI features. Now there’s case law.

Related: See the full breakdown: A Federal Judge Just Made Your Cloud Legal AI Discoverable


The Question Your Clients Will Eventually Ask

Here’s the scenario that should keep legal AI vendors up at night:

A sophisticated corporate client reads about Heppner. They call their outside counsel. They ask a simple question:

“What cloud services touch our privileged communications? And what do those vendors’ terms say about data retention and disclosure?”

If your practice management software, your document automation, your AI research tools, your time tracking… if any of it runs through cloud services with standard vendor ToS, you now have an uncomfortable conversation ahead.

“We use industry-standard security” isn’t going to cut it. The question isn’t about security. It’s about contractual rights to your data.


60-Second Firm Hack

This week’s challenge: Read your legal AI vendor’s privacy policy. The whole thing. Look specifically for language about data collection, model training, and disclosure to authorities. Then ask yourself: if opposing counsel cited this policy in a motion to compel, how would you respond?

If you don’t like the answer, that’s useful information.


Off the Record

TimeNet Law was designed as Mac-native, local-first software from day one. Not because we predicted Heppner. Because we believed attorneys should control their own data.

Your billing records, your client communications, your matter information… it lives on your hardware, governed by your policies, accessible only to you.

No cloud vendor ToS. No data collection for training. No disclosure provisions to worry about.

When Judge Rakoff analyzed the privilege question in Heppner, he examined a cloud service’s terms and found no confidentiality protection. That analysis simply doesn’t apply to software that never sends your data to a third party.

This wasn’t a marketing decision. It was an architecture decision. And architecture, it turns out, has legal consequences.

See also: Privacy Fortress: How Local-First Architecture Protects Your Data

See how local-first practice management works →


“The best time to think about data architecture was before you had client data. The second best time is now.”
— Perry, Founder, TimeNet Law

Categories: Industry Analysis, Legal Tech & AI, Privacy & Security

AI and Your Client Data: What Every Attorney Needs to Know After Anthropic’s Legal Plugin Launch

AI client confidentiality just became the most important issue in legal tech.

The Earthquake

Something just happened that made Thomson Reuters lose 15% of its stock value in a single day. LexisNexis’s parent company dropped 14%. DocuSign fell 11%.

Wall Street is calling it the “SaaSpocalypse.”

And what caused all of this? A company called Anthropic released a free plugin.

If that sentence confuses you — how does a free plugin crash the stock market? — you’re not alone. Let me explain what’s actually happening, what it means for your practice, and why your client data is at the center of all of it.

First, Let’s Get Our Terms Straight

Anthropic is the company that makes Claude, one of the leading AI systems (think: ChatGPT’s main competitor).

Claude Cowork is their new tool that lets AI actually do work on your computer — not just chat with you, but read your files, edit documents, and complete multi-step tasks.

The legal plugin is an add-on that turns Cowork into a legal workflow machine: contract review, NDA triage, compliance checks, and more.

Here’s the key part: you give it access to folders on your computer, and it reads and edits files in those folders.

Including your client files.

WHAT This Actually Does

Imagine hiring a paralegal who:

  • Reviews contracts against your firm’s playbook, flagging clauses as green (fine), yellow (watch this), or red (problem)
  • Sorts incoming NDAs into three piles: auto-approve, needs quick review, needs full review
  • Generates briefings on legal topics in minutes
  • Creates templated responses for discovery holds and data requests

That’s what this plugin does. You point it at your contract folder, tell it your firm’s preferences, and it goes to work.

The kicker? It’s free and open-source. Anyone can use it. Anyone can customize it.
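
To make the green/yellow/red idea concrete, here’s a deliberately crude sketch of playbook-style flagging. To be clear: this is not how the plugin works internally (it uses an AI model, not pattern matching), and the playbook terms below are invented. It only illustrates the triage pattern.

import re

# A toy "playbook" with invented terms, purely for illustration.
PLAYBOOK = {
    "red":    [r"unlimited liability", r"perpetual license"],
    "yellow": [r"auto-?renew", r"unilateral amendment"],
}

def flag_clauses(contract_text: str) -> list[tuple[str, str]]:
    """Split a contract into clauses and tag each green, yellow, or red."""
    flagged = []
    for clause in contract_text.split("\n\n"):    # naive clause splitting
        severity = "green"
        for level, patterns in PLAYBOOK.items():  # "red" is checked first
            if any(re.search(p, clause, re.IGNORECASE) for p in patterns):
                severity = level
                break
        flagged.append((severity, clause.strip()))
    return flagged

The real tool replaces those regexes with a language model reading your firm’s actual playbook, which is exactly why the confidentiality questions later in this piece matter.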

WHY Wall Street Panicked

Here’s the business story, explained simply.

For years, legal tech companies have followed the same playbook:

  1. License AI technology from Anthropic or OpenAI
  2. Wrap it in legal-specific features
  3. Charge law firms $500-2,000 per month

Think of it like a restaurant. Anthropic grows the vegetables (the AI). Legal tech companies buy those vegetables, cook them into meals (legal products), and sell them to you at restaurant prices.

Last week, the vegetable farmer opened their own restaurant. And they’re giving away the food for free.

That’s why stocks crashed. Every legal tech company built on Anthropic’s technology just discovered that its supplier is now its competitor. The “wrapper + workflow” business model — which describes most legal AI startups — suddenly looks vulnerable.

As one analyst put it: “For the first time, a foundation-model company is packaging a legal workflow product directly into its platform, rather than merely supplying an API to legal-tech vendors.”

Translation: The company that makes the engine just started selling complete cars.
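
For the curious, a “wrapper” can be startlingly little code. Here’s a schematic sketch, assuming Anthropic’s published Python SDK; the model id and the prompt are placeholders, not any vendor’s actual product.

# Assumes the `anthropic` Python SDK (pip install anthropic) and an
# ANTHROPIC_API_KEY in the environment; the model id is a placeholder.
from anthropic import Anthropic

client = Anthropic()

def triage_nda(nda_text: str) -> str:
    """The entire 'wrapper': a legal prompt around someone else's model."""
    response = client.messages.create(
        model="claude-sonnet-latest",  # placeholder model id
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Triage this NDA as auto-approve, quick review, "
                       "or full review, and explain why:\n\n" + nda_text,
        }],
    )
    return response.content[0].text

Notice what else that sketch shows: the moment triage_nda is called, nda_text leaves your machine for the model vendor’s servers. The wrapper’s business problem and its confidentiality problem live in the same few lines of code.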

HOW This Changes Your Practice

Let’s be honest about what’s coming:

The Good

  • Lower barriers to AI adoption. Solo practitioners and small firms can now access enterprise-level contract review without enterprise-level budgets.
  • More competition = better tools. Legal tech companies will have to compete on actual value, not just “we have AI.”
  • Customization. Because it’s open-source, tech-savvy firms can tailor it to their exact workflows.

The Concerning

  • Your files, their servers. When you give Cowork access to a folder, it reads those files. The AI processes that content. Where does that data go?
  • Security researchers have already found vulnerabilities. One team demonstrated how a malicious document could trick Cowork into uploading your files to an attacker’s account — without your approval.
  • It’s a “research preview.” Anthropic’s own warning: “Cowork is a research preview with unique risks due to its agentic nature and internet access.”

The Reality Check

Early reviews from attorneys who’ve tested it? Mixed at best. One legal tech columnist reported: “To the extent I’ve been able to put it through its paces, the results have been… underwhelming.”

Another reviewer on social media showed it confidently producing incorrect contract analysis. The consensus: impressive demo, not ready for real client work.

AI Client Confidentiality: The Question Nobody’s Asking

Here’s what keeps me up at night:

When you use these tools, where does your client’s confidential information actually go?

With Cowork, your documents are processed by AI running on Anthropic’s infrastructure. The tool “runs on your computer” but executes work in a “virtual machine environment” — which means your data travels. For attorneys serious about confidentiality, software that works entirely on your own machine isn’t just a preference — it’s a safeguard.

Now consider:

  • ABA Model Rule 1.6 requires “reasonable efforts to prevent the inadvertent or unauthorized disclosure” of client information.
  • What constitutes “reasonable efforts” when using AI tools that security researchers have already shown can be exploited?
  • Have you read the terms of service? Do you know if your client data can be used to train future AI models?

The legal industry is racing to adopt AI. The ethics rules haven’t caught up. And the first major AI-related malpractice case hasn’t happened yet.

Don’t be the test case.

WHEN Does This Get Real?

My honest timeline:

Right now (2026): Early adopters experimenting. Most firms watching. Technology impressive but unreliable for critical work.

12-18 months: The bugs get worked out. Major legal tech vendors respond with better offerings or competitive pricing. Clearer guidance emerges on ethics compliance.

2-3 years: AI-assisted document review becomes standard practice for routine matters. Firms that haven’t adapted start losing competitive bids.

5+ years: The practice of law looks fundamentally different. The question isn’t whether to use AI, but which AI and how.

But here’s the thing: you don’t have to be first. In fact, when it comes to AI client confidentiality, being first carries real risk.

What You Should Do Today

1. Audit Your Current AI Use

Are associates using ChatGPT or Claude for research? Have they uploaded client documents? Most firms have “shadow AI” usage they don’t even know about.

2. Establish Clear Policies

Before anyone in your firm uses AI tools on client matters, answer these questions:

  • Which tools are approved?
  • What data can be input?
  • Do clients need to consent?
  • How do we document AI usage?

3. Get Informed Consent

Consider updating engagement letters to address AI tool usage. “We may use AI-assisted tools for [specific purposes]. These tools process information on third-party servers. Do you consent?”

4. Prioritize Local-First Solutions for AI Client Confidentiality

When evaluating legal tech, ask: “Where does my data go?”

Tools that keep data on your own systems — rather than sending everything to the cloud — eliminate an entire category of risk. The efficiency gains of AI don’t require sacrificing control over client information. Better yet, consider a one-time purchase alternative — so your practice isn’t dependent on yet another subscription that could change its terms overnight.

5. Audit Your Billing Software’s Privacy Policies

Most privacy policies these days contain broad grants of data collection, third-party sharing, and disclosure rights, buried in language few firms ever read. You should know what you’re agreeing to.

6. Watch, Don’t Jump

Let the early adopters find the landmines. In 12-18 months, we’ll know which tools actually work, which vendors survive, and what the ethics guidance looks like.

The Bottom Line

Anthropic’s legal plugin is a genuine inflection point. The “SaaSpocalypse” isn’t hype — the business model for legal AI is changing in real time.

But amid all the excitement about efficiency and disruption, one question matters more than any other:

When you process a client’s confidential merger documents through AI, do you know — really know — where that data goes, who can access it, and whether it’s being used to train systems that might surface that information elsewhere?

If you can’t answer that question with certainty, you’re not ready.

The future of legal AI is coming. Make sure you can protect AI client confidentiality when it arrives.


Questions about AI client confidentiality? Want to discuss how to implement AI tools while maintaining data security? Get in touch — these conversations matter.