Breaking: Federal Court Ruling

A Federal Judge Just Made Your Cloud Legal AI Discoverable

The privilege analysis doesn't change based on who's typing. It changes based on where the data goes.

US v. Heppner, 25 Cr. 503 (S.D.N.Y. Feb. 10, 2026)

Protect Your Practice

What Just Happened

Two days ago, Judge Rakoff granted a motion that should make every attorney using cloud-based legal AI sit up and pay attention.

The government argued that documents a defendant generated through Claude weren't protected by attorney-client privilege or work product doctrine. Judge Rakoff agreed.

February 6, 2026

Government files motion arguing AI-generated documents aren't privileged

February 10, 2026

Judge Rakoff grants motion

February 12, 2026

Legal tech community realizes the implications

The Government's Argument

The reasoning was straightforward, and that's what makes it dangerous:

"The AI tool is not an attorney. No privilege attaches to communications with a non-attorney third party."

No confidentiality. Anthropic's privacy policy permits collection of prompts and outputs, use for training, and disclosure to governmental authorities. The defendant voluntarily shared information with a platform whose own terms allow government access.

Retroactive privilege fails. Sending pre-existing non-privileged documents to counsel after the fact doesn't make them privileged.

Work product doesn't apply. The defendant's attorney didn't direct him to use Claude. Self-directed AI research isn't protected.

The keyboard operator doesn't matter. The vendor's policies do.

"But That Was a Criminal Defendant Using a Consumer App"

Yes. And the government's argument had nothing to do with that.

Read the motion again. The analysis turned on two things:

  1. Whether the vendor's ToS permits data collection, training use, or government disclosure
  2. Whether that undermines the reasonable expectation of confidentiality

That analysis doesn't change if it's an attorney doing the prompting. The platform's terms are the same regardless of who's sitting at the keyboard.

What the Reddit lawyers are saying:

"Every single discovery request should now be seeking non-privileged AI usage."

"If privilege analysis turns on vendor data retention and disclosure rights, does this implicate every legal AI platform operating as a Remote Computing Service under the SCA? Potentially, yes."

Two Architectures. Two Outcomes.

The key variable isn't "AI." It's whether the system functions as a third-party repository with independent rights over the data.

Architecture A: Cloud-Based

  • Data stored on vendor servers
  • Vendor ToS permits training use
  • Third-party employees have access
  • Subpoenas go to vendor, not you
  • Unknown jurisdictions
  • Silent disclosure possible

Architecture B: Local-First

  • Data stays on your hardware
  • No vendor training rights
  • You control access
  • Subpoenas come to you
  • Your jurisdiction
  • You know what's disclosed

"Privilege-safe architecture is not about branding. It is about control. If the AI runs inside the firm's walls, under its contracts, with no vendor reuse rights, the analysis looks very different." - Legal AI architect on Reddit, two hours after the ruling

The Question Your Clients Will Ask

"Are you using any cloud-based AI tools that could make our communications discoverable?"

After Heppner, sophisticated clients and opposing counsel will start asking. Corporate legal departments will add it to their outside counsel guidelines. Malpractice carriers will want to know.

What's your answer going to be?

The 60-Second Firm Hack

Pull up every legal AI tool you use. Find the privacy policy. Search for "training," "government," and "disclosure." If any of those words appear with permissive language, you have a Heppner problem.
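If you want to make that check repeatable, a small script can do the keyword pass for you. This is a minimal, hypothetical sketch: it assumes you've pasted each vendor's policy text into a string or local file, and the term list and sentence-splitting heuristic are illustrative, not legal advice.

```python
# Hypothetical keyword scan for vendor privacy policies.
# Assumes the policy text has already been copied into a string;
# RISK_TERMS is illustrative, not an exhaustive legal checklist.
import re

RISK_TERMS = ["training", "government", "disclosure"]

def flag_policy(text: str) -> dict[str, list[str]]:
    """Map each risk term to the policy sentences that mention it."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    hits: dict[str, list[str]] = {}
    for term in RISK_TERMS:
        matches = [s.strip() for s in sentences if term in s.lower()]
        if matches:
            hits[term] = matches
    return hits

# Example policy language (invented for illustration).
policy = (
    "We may use your prompts and outputs for model training. "
    "We may disclose information to government authorities when required by law."
)
for term, mentions in flag_policy(policy).items():
    print(f"[{term}] {len(mentions)} mention(s)")
```

Any term that surfaces next to permissive language ("we may", "including but not limited to") is the cue to read that clause closely.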

There's a Reason We Built TimeNet Law for Your Mac

Your data. Your hardware. Your control. No vendor ToS. No third-party training. No silent disclosures.

Start Your Free Trial

Mac-native. Local-first. Actually private.