AI client confidentiality just became the most important issue in legal tech.
The Earthquake
Something just happened that made Thomson Reuters lose 15% of its stock value in a single day. LexisNexis’s parent company dropped 14%. DocuSign fell 11%.
Wall Street is calling it the “SaaSpocalypse.”
And what caused all of this? A company called Anthropic released a free plugin.
If that sentence confuses you — how does a free plugin crash the stock market? — you’re not alone. Let me explain what’s actually happening, what it means for your practice, and why your client data is at the center of all of it.
First, Let’s Get Our Terms Straight
Anthropic is the company that makes Claude, one of the leading AI systems (think: ChatGPT’s main competitor).
Claude Cowork is their new tool that lets AI actually do work on your computer — not just chat with you, but read your files, edit documents, and complete multi-step tasks.
The legal plugin is an add-on that turns Cowork into a legal workflow machine: contract review, NDA triage, compliance checks, and more.
Here’s the key part: you give it access to folders on your computer, and it reads and edits files in those folders.
Including your client files.
WHAT This Actually Does
Imagine hiring a paralegal who:
- Reviews contracts against your firm’s playbook, flagging clauses as green (fine), yellow (watch this), or red (problem)
- Sorts incoming NDAs into three piles: auto-approve, needs quick review, needs full review
- Generates briefings on legal topics in minutes
- Creates templated responses for discovery holds and data requests
That’s what this plugin does. You point it at your contract folder, tell it your firm’s preferences, and it goes to work.
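To make the “playbook” idea concrete, here is a minimal sketch of how rule-based clause triage might work. This is not Anthropic’s actual code; the `PLAYBOOK` patterns and flag names are invented for illustration, and a real tool would use an AI model rather than simple pattern matching:

```python
import re

# Hypothetical firm playbook: each rule pairs a clause pattern with a flag.
# Patterns here are invented examples, not a real firm's standards.
PLAYBOOK = [
    (r"unlimited liability|uncapped indemnif", "red"),     # problem
    (r"auto-?renew|unilateral amendment", "yellow"),       # watch this
]

def triage_clause(clause: str) -> str:
    """Return 'red', 'yellow', or 'green' for a clause per the playbook."""
    text = clause.lower()
    for pattern, flag in PLAYBOOK:
        if re.search(pattern, text):
            return flag
    return "green"  # no playbook rule matched: fine

clauses = [
    "Vendor accepts unlimited liability for data breaches.",
    "This agreement shall auto-renew annually.",
    "Notices must be sent in writing to the addresses below.",
]
for clause in clauses:
    print(f"[{triage_clause(clause)}] {clause}")
```

The same pattern extends to the three-pile NDA sort: run each incoming document through the rules, and route “green” to auto-approve, “yellow” to quick review, and “red” to full review.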
The kicker? It’s free and open-source. Anyone can use it. Anyone can customize it.
WHY Wall Street Panicked
Here’s the business story, explained simply.
For years, legal tech companies have followed the same playbook:
- License AI technology from Anthropic or OpenAI
- Wrap it in legal-specific features
- Charge law firms $500-2,000 per month
Think of it like a restaurant. Anthropic grows the vegetables (the AI). Legal tech companies buy those vegetables, cook them into meals (legal products), and sell them to you at restaurant prices.
Last week, the vegetable farmer opened their own restaurant. And they’re giving away the food for free.
That’s why stocks crashed. Every legal tech company built on Anthropic’s technology just discovered that their supplier is now their competitor. The “wrapper + workflow” business model — which describes most legal AI startups — suddenly looks vulnerable.
As one analyst put it: “For the first time, a foundation-model company is packaging a legal workflow product directly into its platform, rather than merely supplying an API to legal-tech vendors.”
Translation: The company that makes the engine just started selling complete cars.
HOW This Changes Your Practice
Let’s be honest about what’s coming:
The Good
- Lower barriers to AI adoption. Solo practitioners and small firms can now access enterprise-level contract review without enterprise-level budgets.
- More competition = better tools. Legal tech companies will have to compete on actual value, not just “we have AI.”
- Customization. Because it’s open-source, tech-savvy firms can tailor it to their exact workflows.
The Concerning
- Your files, their servers. When you give Cowork access to a folder, it reads those files. The AI processes that content. Where does that data go?
- Security researchers have already found vulnerabilities. One team demonstrated how a malicious document could trick Cowork into uploading your files to an attacker’s account — without your approval.
- It’s a “research preview.” Anthropic’s own warning: “Cowork is a research preview with unique risks due to its agentic nature and internet access.”
The Reality Check
Early reviews from attorneys who’ve tested it? Mixed at best. One legal tech columnist reported: “To the extent I’ve been able to put it through its paces, the results have been… underwhelming.”
Another reviewer on social media showed it confidently producing incorrect contract analysis. The consensus: impressive demo, not ready for real client work.
AI Client Confidentiality: The Question Nobody’s Asking
Here’s what keeps me up at night:
When you use these tools, where does your client’s confidential information actually go?
With Cowork, your documents are processed by AI running on Anthropic’s infrastructure. The tool “runs on your computer” but executes work in a “virtual machine environment” — which means your data travels. For attorneys serious about confidentiality, software that works entirely on your own machine isn’t just a preference — it’s a safeguard.
Now consider:
- ABA Model Rule 1.6 requires “reasonable efforts to prevent the inadvertent or unauthorized disclosure” of client information.
- What constitutes “reasonable efforts” when using AI tools that security researchers have already shown can be exploited?
- Have you read the terms of service? Do you know if your client data can be used to train future AI models?
The legal industry is racing to adopt AI. The ethics rules haven’t caught up. And the first major AI-related malpractice case hasn’t happened yet.
Don’t be the test case.
WHEN Does This Get Real?
My honest timeline:
Right now (2026): Early adopters experimenting. Most firms watching. Technology impressive but unreliable for critical work.
12-18 months: The bugs get worked out. Major legal tech vendors respond with better offerings or competitive pricing. Clearer guidance emerges on ethics compliance.
2-3 years: AI-assisted document review becomes standard practice for routine matters. Firms that haven’t adapted start losing competitive bids.
5+ years: The practice of law looks fundamentally different. The question isn’t whether to use AI, but which AI and how.
But here’s the thing: you don’t have to be first. In fact, when it comes to AI client confidentiality, being first carries real risk.
What You Should Do Today
1. Audit Your Current AI Use
Are associates using ChatGPT or Claude for research? Have they uploaded client documents? Most firms have “shadow AI” usage they don’t even know about.
2. Establish Clear Policies
Before anyone in your firm uses AI tools on client matters, answer these questions:
- Which tools are approved?
- What data can be input?
- Do clients need to consent?
- How do we document AI usage?
3. Get Informed Consent
Consider updating engagement letters to address AI tool usage. “We may use AI-assisted tools for [specific purposes]. These tools process information on third-party servers. Do you consent?”
4. Prioritize Local-First Solutions for AI Client Confidentiality
When evaluating legal tech, ask: “Where does my data go?”
Tools that keep data on your own systems — rather than sending everything to the cloud — eliminate an entire category of risk. The efficiency gains of AI don’t require sacrificing control over client information. Better yet, consider a one-time purchase alternative — so your practice isn’t dependent on yet another subscription that could change its terms overnight.
5. Audit Your Billing Software’s Privacy Policies
Many billing and practice-management privacy policies grant the vendor broad rights to collect, share, or analyze your data. Read yours so you know exactly what you’re agreeing to on your clients’ behalf.
6. Watch, Don’t Jump
Let the early adopters find the landmines. In 12-18 months, we’ll know which tools actually work, which vendors survive, and what the ethics guidance looks like.
The Bottom Line
Anthropic’s legal plugin is a genuine inflection point. The “SaaSpocalypse” isn’t hype — the business model for legal AI is changing in real time.
But amid all the excitement about efficiency and disruption, one question matters more than any other:
When you process a client’s confidential merger documents through AI, do you know — really know — where that data goes, who can access it, and whether it’s being used to train systems that might surface that information elsewhere?
If you can’t answer that question with certainty, you’re not ready.
The future of legal AI is coming. Make sure you can protect AI client confidentiality when it arrives.
Questions about AI client confidentiality? Want to discuss how to implement AI tools while maintaining data security? Get in touch — these conversations matter.