The most dangerous AI threat to lawyers I’ve ever seen isn’t being talked about. The real threat isn’t sanctions. It’s what happens after the headline.
What does that actually mean? It’s something I’ve been thinking about every day. I can’t seem to shake it. And the more I dig into it, the more I notice that no one is really talking about it. So, let’s talk about it.
A DOJ attorney panicked. He’d accidentally overwritten his draft. So he asked ChatGPT to rewrite it, filed it, and assumed it was fine.
It wasn’t. The brief contained fabricated quotes and misstated case holdings. A magistrate judge caught it immediately. The attorney resigned the next day.
The legal world read this as a cautionary tale. Don’t be that guy. Verify your work.
But the public read something very different.
They read: A lawyer used AI to do his job.
Not “a lawyer used AI and got caught.” Not “a lawyer was sanctioned for recklessness.” Simply, a lawyer used AI. To write a legal brief. And it was convincing enough to file in federal court.
That’s the story the public keeps. And it’s the one you need to understand, because it’s far more dangerous than sanctions ever could be.
The Headline Problem
Every time a lawyer is sanctioned for AI misuse, two things happen simultaneously.
First, one attorney’s career takes a hit. Sanctions. Suspension. Resignation. The legal community clucks its tongue and moves on.
Second, and this is the part I don’t see talked about, millions of people absorb a very simple message: AI is doing legal work now.
They don’t understand sanctions. They don’t understand hallucinations. They don’t understand that a fabricated case citation isn’t a minor error. It’s a fundamental failure of the adversarial system. They don’t know what precedent means or why it matters.
They just see: Lawyer + AI = I can do that, too.
And herein lies the danger. Not just to individual attorneys, but to the legal profession as a whole. More and more people are asking themselves a simple question:
Why am I paying someone $400 an hour for something a chatbot can do?
This isn’t hypothetical. A recent survey found that 42% of people would consult AI before calling a lawyer. Not instead of. Before. AI has already become the waiting room for legal services. And every reckless filing pushes more people through that door.
The sanctions count has passed 1,200 worldwide. Each one is a cautionary tale for lawyers. And a marketing campaign against them.
The Context Problem
AI will never understand your client.
When someone walks into your office and tells you their story, they’re not giving you data. They’re giving you trust. They’re telling you something important. It’s why they’re in your office in the first place. They’re in trouble, they need help, and the details of their life are now in your hands.
Those details matter. Not the summary. Not the bullet points. The details.
Cases are won on minutiae. A date that doesn’t line up. A witness who hesitated. A clause buried on page fifty-eight that everyone else skimmed past. The small, human, specific things that only surface when someone is paying close attention. When someone cares.
AI doesn’t care. It compresses. It summarizes. It loses context mid-thought and reduces human complexity to neat, confident paragraphs that sound authoritative and miss everything that matters. AI can fake it well. But it simply isn’t what your clients need: a compassionate, understanding, knowledgeable human being.
And what actually happens in practice often undermines that entire process. You meet with your client. You hear their story. Then you hand the case work to a paralegal. The paralegal hands the drafting to AI. Three degrees of separation between the person who heard the story and the machine producing the work product. All of the details that matter most are lost in translation.
You speedrun a complex legal workflow into a reckless game of telephone. And your client’s case — their freedom, their family, their future — is on the other end of it. And well-intentioned though you may be, your client relationship suffers. Your client suffers.
Their story cannot be distilled into bullet points. It shouldn’t be. That’s the whole point of hiring a lawyer.
The Accountability Problem
When AI is wrong, nothing happens to it.
It doesn’t face sanctions. It doesn’t lose its license. It doesn’t pay malpractice claims. It doesn’t sit across from a judge and explain itself. It doesn’t lose sleep. It doesn’t care.
It can’t care. It’s a machine. It has no bar card, no oath, no duty of care, no skin in the game whatsoever. No understanding of complex context, no awareness of chilling consequences.
So when it fabricates a case citation — and it will — who pays?
You do. Your reputation. Your career. Your license.
And worse: your client pays. The person who trusted you with their problem now has a bigger one. Because the machine you relied on felt no obligation to get it right, and the consequences fell on the only people in the room who are actually accountable.
AI has no liability. It’s built that way, and that’s the entire problem. AI is not in a “trust, but verify” state. Everything it outputs must be verified, because getting it wrong has no meaningful consequence for the machine. It can recite the definition of accountability. But it doesn’t understand it.
The Training Problem
There’s a deeper irony that almost nobody is talking about.
Every brief you feed into AI, every motion you let it draft, every contract you ask it to review — you are teaching it to sound like a lawyer.
Not to be a lawyer. It will never be a lawyer. It can’t reason from first principles. It can’t exercise judgment. It can’t sit with a client and understand what’s actually at stake.
But it doesn’t have to.
It just has to be good enough to fool people into thinking it is one.
And every time you use it to do work you should be doing yourself, you’re making it a little more convincing. A little more polished. A little more capable of producing something that looks, to an untrained eye, like the real thing.
You are training your replacement. And your replacement doesn’t need to pass the bar. It just needs to pass the smell test for the 42% of people who are already asking it questions before they call you.
The more lawyers rely on AI, the faster it learns to imitate them. The faster it imitates them, the more the public believes it’s sufficient. The more the public believes it’s sufficient, the fewer people pick up the phone.
That’s the feedback loop. And lawyers are accelerating it every time they skip the work.
Verify Everything
Let me be clear about something: AI is a remarkable tool.
It can draft faster than any associate. It can summarize a hundred pages in seconds. It can find patterns in data that would take a human team weeks to surface. Used well, it makes good lawyers better.
But “used well” is doing all the heavy lifting in that sentence.
Read that again: Used well, AI makes a good lawyer better. But AI is not a lawyer. Or a paralegal. Or a member of your staff. The second you think of it in those terms, you’ve lost. AI is a tool. The same way a bicycle lets a human travel faster and farther than any land mammal, AI makes a lawyer vastly more effective than one without it. But you still need a human being on that bike to win the Tour de France. The mind still has to pedal.
Right now, the legal profession is not using AI well. It’s throwing spaghetti at the wall and hoping the landlord doesn’t notice the stains. No policies. No training. No monitoring. No accountability frameworks. Just vibes and a prayer that nobody checks the citations.
That’s not a smart implementation. It’s paying a subscription fee to increase negligence.
For legal work, AI is still firmly in “verify everything” territory. Every citation. Every quote. Every case holding. Every factual claim. Every single output, every single time.
That’s not because AI is bad. It’s because AI is confident. It will present fabricated information with the same polished certainty as verified fact. It doesn’t flag its own uncertainty. It doesn’t say “I’m not sure about this one.” It just… answers. Fluently. Convincingly. Incorrectly.
The attorneys being sanctioned aren’t stupid. They’re busy. They’re under pressure. They’re overworked. And they trusted a tool that was never designed to be trusted.
And it’s not just the attorneys themselves. Paralegals are increasingly using AI to complete their work — sometimes without even telling the lawyers whose names are on the line. If you don’t already have an AI policy in place, it’s time.
The Real Threat
Let’s talk about what no one wants to say out loud.
AI doesn’t threaten lawyers by being better than them.
It threatens lawyers by convincing the public that the difference doesn’t matter.
Every reckless filing. Every fabricated citation that made it to a judge’s desk. Every headline about another attorney sanctioned for AI-generated work. These aren’t just individual failures. They are, collectively, slowly, methodically teaching the public that legal work is something a machine can do, while simultaneously training the machine to get better at faking it.
And once that belief takes hold — once enough people decide that AI is “close enough” — it doesn’t matter how wrong they are. The damage is done. The calls stop coming. The trust evaporates. And the profession that exists to protect people’s rights becomes, in the public imagination, an expensive middleman. Just another unnecessary expense.
Don’t be the next lawyer sanctioned for AI. But more importantly:
Don’t be the lawyer who teaches the public they don’t need lawyers.
Your license is yours to protect. But the profession belongs to all of you. And right now, every shortcut is a crack in the foundation.
Use the tool. Respect the tool. Verify everything the tool produces.
Your clients deserve nothing less. And your entire profession is on the line. The real AI threat to lawyers isn’t hallucinations or sanctions, or even replacing attorneys’ jobs. It’s falsely teaching the public that AI can do what it truly cannot.