Federal Judge Rules AI Chatbot Conversations Are Admissible as Court Evidence
By Hector Herrera | April 22, 2026 | Legal
A Manhattan federal judge ruled this week that conversations held with commercial AI platforms carry no attorney-client privilege — and ordered a defendant to turn over 31 Claude-generated documents in discovery. The ruling, delivered April 15, directly contradicts a same-day decision from a Michigan magistrate judge who reached the opposite conclusion, setting up the kind of conflict that will ripen into a circuit split as appellate courts take up the question in the years ahead.
What Happened
The Manhattan case involved a defendant who had used Anthropic's Claude to generate legal documents and strategy notes while working with counsel. When opposing attorneys requested those documents in discovery, the defendant asserted attorney-client privilege — arguing that conversations with an AI acting in a quasi-legal capacity deserved the same protection as communications with a licensed attorney.
The federal judge disagreed. In her ruling, she held that no attorney-client relationship exists between a user and a commercial AI platform. Claude, she wrote, is a product — not an attorney, not an agent of an attorney, and not a participant in a confidential legal relationship. The 31 documents were ordered produced.
The same day, a Michigan magistrate judge took the opposite position in an unrelated case, finding that AI-generated documents prepared at the direction of counsel and in anticipation of litigation could fall under work-product protection — a related but distinct doctrine.
Why This Matters
Attorney-client privilege is one of the oldest and most jealously protected rights in American law. It protects confidential communications between a client and their lawyer from being disclosed without consent. Work-product doctrine extends similar protection to documents prepared by or for attorneys in anticipation of litigation. Neither doctrine was designed with AI in mind.
The Manhattan ruling puts anyone who uses commercial AI platforms for legal work in a precarious position. If you typed your case strategy into Claude, ChatGPT, or any other commercial AI — even while working with a lawyer — those conversations may now be discoverable. The AI platform itself is a third party to the attorney-client relationship, and sharing privileged information with a third party generally destroys the privilege.
That's not a theoretical risk. It's the rule.
The Immediate Industry Response
The ruling prompted rapid action from the legal sector. According to Technology.org, over a dozen major law firms issued urgent client warnings about "AI chat hygiene" within 24 hours — advising clients to:
- Stop using commercial AI platforms for case-sensitive communications
- Treat AI-generated documents as non-privileged unless created within a firm's secured, internal AI system
- Review existing AI-generated materials for anything that could be harmful if produced in discovery
- Document the legal context for any AI use that might later be characterized as work product
Several BigLaw firms are now accelerating deployment of private, on-premise AI systems specifically to avoid the third-party disclosure problem.
The Circuit Split Problem
The Michigan magistrate's decision on the same day creates an immediate conflict. Under the work-product doctrine, documents prepared in anticipation of litigation — even by third parties — can be protected if they reflect the "mental impressions, conclusions, opinions, or legal theories" of the attorney. The Michigan ruling found that AI-generated content directed by counsel could qualify.
Strictly speaking, the two decisions are not in direct conflict — they involve different doctrines (attorney-client privilege versus work product) and different facts. But they will be read as conflicting, because the practical question they answer is the same: Can I use AI in legal work without those outputs becoming evidence against me?
The answer is now: it depends on which court you're in, which doctrine applies, and whether your AI use was tightly controlled by counsel or done independently.
The Supreme Court will eventually need to clarify this. Until then, every jurisdiction is making its own rules.
What This Means for You
If you're a business executive, in-house counsel, or anyone who has used commercial AI tools to think through legal exposure, draft agreements, or analyze litigation risk — those conversations are likely not protected. Treat them accordingly.
If you're a practicing attorney, the safest path is to keep AI use within your firm's own secured infrastructure and document clearly when AI outputs are prepared under attorney direction for litigation purposes.
The "AI chat hygiene" warnings firms issued this week are not overcautious. They're responding to a real ruling with real consequences.
What to Watch
The Second Circuit (which covers Manhattan's federal courts) will likely see this question escalate on appeal. Expect the circuit courts to formally address AI privilege questions within the next 12 to 18 months as more cases produce conflicting district court rulings. Legislators in several states are already drafting bills to extend privilege protections to attorney-directed AI use; none have passed yet.
Hector Herrera covers AI in law and policy for NexChron. This article is informational and does not constitute legal advice.