Two days ago, Judge Rakoff granted a motion that should make every attorney using cloud-based legal AI very uncomfortable. The reasoning is straightforward. The implications are enormous.
On February 10, 2026, Judge Jed Rakoff of the Southern District of New York ruled that documents a defendant generated through Claude (Anthropic’s AI) are not protected by attorney-client privilege or work product doctrine.
The case is United States v. Heppner, No. 25 Cr. 503 (S.D.N.Y.); the full docket is on CourtListener. The ruling came on the government’s motion to compel. And the logic applies far beyond this one case.
Let me explain why this matters to you.
The Government’s Argument (Which Won)
The DOJ’s motion was surgical. Four independent grounds, any one of which was sufficient:
1. The AI is not an attorney.
No privilege attaches to communications with a non-attorney third party. Claude is a commercial product, not legal counsel. There is no attorney-client relationship. This one’s obvious.
2. No expectation of confidentiality.
This is where it gets interesting. The government cited Anthropic’s privacy policy, which permits:
- Collection of prompts and outputs
- Use for model training
- Disclosure to governmental authorities
The defendant voluntarily shared information with a platform whose own terms allow government access. You can’t claim confidentiality when the vendor’s ToS explicitly permits disclosure.
3. Retroactive privilege doesn’t work.
The defendant tried to argue that sharing the AI outputs with his attorney made them privileged. Judge Rakoff wasn’t having it. Pre-existing, non-privileged materials don’t become privileged just because you hand them to your lawyer later. This is Privilege 101.
4. Work product requires attorney direction.
The defendant created these documents on his own initiative, not at counsel’s direction. The work product doctrine protects materials prepared by or for a party’s attorney. It doesn’t protect a layperson’s independent research.
Four arguments. Four wins. Motion granted.
“But That Was a Criminal Defendant Using Consumer AI”
Yes. And that’s what makes this ruling dangerous, not limited.
The privilege analysis doesn’t turn on who’s typing. It turns on the architecture.
Read the government’s brief again. The confidentiality argument was based on Anthropic’s privacy policy. Not the defendant’s status. Not the nature of the queries. The vendor’s terms.
Those terms don’t change when an attorney does the typing. Claude’s privacy policy is the same whether you’re a criminal defendant or a senior partner at a white-shoe firm.
If your legal AI tool runs through a cloud service whose terms permit data collection, training, or disclosure, you have the same confidentiality problem. The keyboard operator doesn’t matter. The vendor’s policies do.
The Architecture Problem Nobody Wants to Discuss
Here’s the part the legal AI vendors don’t want you thinking about.
Most legal AI tools operate as cloud services. Your prompts go to their servers. Their models process your queries. Your client’s information passes through infrastructure you don’t control, governed by terms you probably haven’t read carefully.
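To make that data flow concrete, here is a minimal sketch of what essentially every cloud AI integration does under the hood. The endpoint, payload shape, and API key are hypothetical stand-ins, but the one-way trip your prompt takes is not:

```python
# Minimal sketch: what a cloud legal AI integration does under the hood.
# The endpoint and payload shape are hypothetical; the data flow is not.
import requests

PRIVILEGED_QUERY = (
    "Summarize the weaknesses in our client's breach-of-contract defense."
)

# The moment this call executes, the query leaves your machine and lands on
# infrastructure governed by the vendor's ToS, not your firm's policies.
response = requests.post(
    "https://api.legal-ai-vendor.example/v1/chat",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # hypothetical key
    json={"prompt": PRIVILEGED_QUERY},
    timeout=30,
)
print(response.json())
```

Everything after that `requests.post` line happens on hardware you don’t own, under terms you didn’t write.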
Go read your legal AI vendor’s privacy policy right now. (I’ll wait.)
Look for these phrases:
- “may use data to improve our services”
- “may disclose information in response to legal process”
- “may share data with service providers and affiliates”
Found them? Congratulations. You’ve just identified why a sufficiently motivated opposing counsel could make a very uncomfortable argument about your AI-assisted work product.
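If skimming a long policy by hand feels unreliable, a few lines of Python can do the flagging for you. The filename and phrase list below are illustrative; substitute your own vendor’s policy text:

```python
# Quick-and-dirty privacy policy scan: flag the phrases that matter under
# the Heppner analysis. Save the vendor's policy as a text file first.
# The filename and phrase list are illustrative, not exhaustive.
RED_FLAGS = [
    "improve our services",              # training / product-improvement use
    "legal process",                     # disclosure to authorities
    "service providers and affiliates",  # third-party sharing
    "train",                             # catches "training" too
    "retain",                            # data retention language
]

with open("vendor_privacy_policy.txt", encoding="utf-8") as f:
    policy = f.read().lower()

for phrase in RED_FLAGS:
    if phrase in policy:
        print(f"FOUND: '{phrase}'")
```

Every line that script prints is a sentence opposing counsel can quote back at you.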
Judge Rakoff didn’t create new law. He applied existing privilege principles to a new technology. And those principles don’t care whether the AI has a legal-specific marketing team.
What the Reddit Lawyers Are Saying
This case hit r/law and r/lawyers hard. The analysis in those threads is worth reading.
One commenter nailed the architectural point:
“Heppner is not really an ‘AI case.’ It is an architecture case. Judge Rakoff did not create a new anti-AI rule. He applied very traditional privilege principles… If you feed litigation strategy into a remote service whose own policy permits retention, training use, or disclosure, you are going to have a hard time arguing reasonable expectation of privacy.”
Another pointed out the discovery implications:
“Every single discovery request should be seeking non-privileged AI usage.”
And perhaps most concerning:
“Even in single-tenant deployments, if the vendor continues to manage the data and has AWS KMS access, a sufficiently motivated attorney could win a motion to compel.”
These aren’t legal tech skeptics. These are practicing attorneys working through the implications in real time.
Two Architectures. Two Very Different Privilege Analyses.
Architecture A: Cloud-First Legal AI
- Your data travels to vendor servers
- Vendor ToS permits data collection, training, disclosure
- No expectation of confidentiality (per Heppner analysis)
- Potentially discoverable
Architecture B: Local-First Legal Software
- Your data stays on your hardware
- No third-party vendor with disclosure rights
- No ToS permitting training or government access
- You control storage, access, and retention
The Heppner ruling analyzed Architecture A and found no privilege protection. Architecture B was never at issue because there was no third party to analyze.
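For contrast with the request sketch above, here is Architecture B in miniature: the same sensitive note, written to a local SQLite file instead of POSTed to a vendor. The table and fields are illustrative, not any particular product’s actual schema:

```python
# Architecture B in miniature: matter data written to a local SQLite file.
# No network import, no vendor endpoint, no third-party ToS to analyze.
# The schema is illustrative, not any particular product's.
import sqlite3
from pathlib import Path

DB_PATH = Path.home() / "matters.db"  # lives on your hardware

conn = sqlite3.connect(DB_PATH)
conn.execute(
    "CREATE TABLE IF NOT EXISTS matters "
    "(id INTEGER PRIMARY KEY, client TEXT, notes TEXT)"
)
conn.execute(
    "INSERT INTO matters (client, notes) VALUES (?, ?)",
    ("Acme Corp", "Strategy notes that never leave this disk."),
)
conn.commit()
conn.close()
```

Note what’s missing: there is no `requests` import anywhere. A court applying the Heppner analysis would have no vendor policy to cite, because there is no vendor in the data path.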
This isn’t a new argument. It’s just that most of the industry ignored it in the rush to ship cloud-based AI features. Now there’s case law.
See the full breakdown: A Federal Judge Just Made Your Cloud Legal AI Discoverable
The Question Your Clients Will Eventually Ask
Here’s the scenario that should keep legal AI vendors up at night:
A sophisticated corporate client reads about Heppner. They call their outside counsel. They ask a simple question:
“What cloud services touch our privileged communications? And what do those vendors’ terms say about data retention and disclosure?”
If your practice management software, your document automation, your AI research tools, your time tracking… if any of it runs through cloud services with standard vendor ToS, you now have an uncomfortable conversation ahead.
“We use industry-standard security” isn’t going to cut it. The question isn’t about security. It’s about contractual rights to your data.
60-Second Firm Hack
This week’s challenge: Read your legal AI vendor’s privacy policy. The whole thing. Look specifically for language about data collection, model training, and disclosure to authorities. Then ask yourself: if opposing counsel cited this policy in a motion to compel, how would you respond?
If you don’t like the answer, that’s useful information.
Off the Record
TimeNet Law was designed as Mac-native, local-first software from day one. Not because we predicted Heppner. Because we believed attorneys should control their own data.
Your billing records, your client communications, your matter information… it lives on your hardware, governed by your policies, accessible only to you.
No cloud vendor ToS. No data collection for training. No disclosure provisions to worry about.
When Judge Rakoff analyzed the privilege question in Heppner, he examined a cloud service’s terms and found no confidentiality protection. That analysis simply doesn’t apply to software that never sends your data to a third party.
This wasn’t a marketing decision. It was an architecture decision. And architecture, it turns out, has legal consequences.
See also: Privacy Fortress: How Local-First Architecture Protects Your Data
See how local-first practice management works →
“The best time to think about data architecture was before you had client data. The second best time is now.”
— Perry, Founder, TimeNet Law