Legal & Compliance

Courts and Law Firms Warn: Using AI Chatbots With Attorney Communications May Waive Legal Privilege

Major law firms are now writing AI privilege warnings into client contracts as courts sanction attorneys for AI-hallucinated citations — two converging risks the legal profession is only beginning to reckon with.



By Hector Herrera | April 23, 2026 | Legal

A growing number of major U.S. law firms and courts have begun warning clients explicitly: feeding attorney communications into AI chatbots may waive attorney-client privilege — one of the most fundamental protections in the U.S. legal system. At the same time, courts are levying sanctions against lawyers who submitted AI-generated legal citations that turned out to be fabricated, putting the profession on notice that AI misuse carries direct professional consequences.

These two risks are arriving simultaneously, and many legal and business users are not aware of either.

The Privilege Problem

Attorney-client privilege — the legal protection that keeps communications between a client and their attorney confidential and protected from discovery in litigation — has a fundamental requirement: the communication must be kept confidential. Sharing privileged communications with a third party generally destroys the privilege, because the communication is no longer confidential.

According to Claims Journal, major law firms including Sher Tremonte have begun writing AI-related privilege risk explicitly into client contracts, warning clients that inputting attorney communications into AI chatbots could constitute disclosure to a third party — and thereby waive the privilege over those communications.

The legal analysis turns on how AI chatbots handle data. Most commercial AI chat interfaces — including default configurations of tools used for productivity — transmit user inputs to servers operated by the AI provider, where they may be used to train future models, reviewed by human contractors for safety evaluation, or retained in logs. Under a strict application of the third-party disclosure doctrine, submitting a privileged communication to those services is arguably disclosing it to the AI company.

Why this matters in practice: If privilege is waived over a communication, opposing counsel in litigation can demand to see it. An email exchange between a company and its attorneys about litigation strategy, a draft legal memo analyzing regulatory exposure, or internal advice about a contract dispute — if those communications were fed into an AI chatbot, the waiver argument becomes available to the opposing party in any subsequent litigation.

Courts have not issued definitive rulings on AI-specific privilege waiver, but the direction of law firm guidance is consistent: the risk is real enough to warn clients explicitly and, in some cases, contract around it.

The Hallucination Liability Problem

Separate from the privilege issue, courts are levying sanctions against attorneys who submitted AI-generated legal citations that did not exist.

The MyPillow case is the most recent prominent example. According to reporting cited by Claims Journal, attorneys for MyPillow CEO Mike Lindell were fined $3,000 per lawyer after submitting briefs containing fabricated case citations — citations that appeared to be real cases but were generated by an AI system that hallucinated the details. The court's sanction was based on the attorneys' professional obligation to verify citations before submitting them, regardless of how the citations were generated.

AI hallucination (the term for when a language model generates confident-sounding but factually incorrect or invented information) is a known, documented limitation of current large language models. These systems produce fluent, authoritative-sounding text — including plausible-looking legal citations with realistic case names, court names, and citation formats — that is entirely fabricated. A model does not "know" that a case it cites doesn't exist; it generates the citation because it fits the pattern of what a legal brief looks like.

The legal profession is the highest-profile environment where this failure mode carries immediate professional consequences, but the underlying risk extends to any domain where AI-generated factual claims are presented as authoritative.

What Legal and Business Users Should Do

For attorneys and legal professionals:

  • Never submit AI-generated citations without independent verification. Check every case cite in Westlaw, LexisNexis, or the primary court record before including it in any filing. This is not optional — it is an existing professional obligation that now requires explicit workflow enforcement in the AI era.
  • Review your firm's data retention policies for AI tools before use. Many enterprise AI contracts include data residency and training data opt-out provisions. Using a tool under an enterprise agreement with appropriate provisions is materially different from using the consumer version of the same tool.
  • Do not input privileged communications into AI tools without evaluating the data handling. If you cannot confirm that inputs are not retained or used for training, treat the tool as a third party for privilege purposes.

For businesses and executives:

  • Assume your AI chatbot usage is discoverable. Until you have confirmed the data handling policies of every AI tool your team uses, assume that anything entered could surface in litigation. This is not a reason to avoid AI tools — it is a reason to use them deliberately with an understanding of where the data goes.
  • Establish internal AI use policies for sensitive matters. Mergers and acquisitions, regulatory investigations, litigation strategy, and employment disputes are the highest-risk categories. Clear guidance to employees about what should and should not be entered into AI tools — before a lawsuit, not during — is basic risk management.
  • Ask your outside counsel about their AI policies. Firms have varying approaches to AI use in client matters; understanding your outside counsel's policies is relevant to your own privilege and confidentiality posture.

The Broader Pattern

The legal profession is encountering AI risks in concentrated form because it combines several features that make AI failure expensive: strict professional obligations, adversarial proceedings where opposing counsel is actively looking for errors, and foundational protections — privilege — that depend on careful information hygiene.

But the underlying risks extend well beyond law. Any professional context where factual accuracy matters (medicine, finance, engineering) faces the hallucination problem. Any context where confidentiality matters faces the disclosure problem. The legal profession's experience is a leading indicator, not a unique situation.

What to Watch

Two developments will shape how these risks evolve. First, whether any court issues a definitive ruling on AI chatbot use as a privilege waiver — a ruling that would clarify the legal standard and likely trigger significant changes to enterprise AI policies industry-wide. Second, whether AI providers revise their consumer terms of service in response to the legal profession's pressure, offering clearer confidentiality guarantees that would reduce (though not eliminate) the privilege risk.

The risks are clear and present. The safeguards are lagging.


Hector Herrera is the founder of Hex AI Systems and editor of NexChron.

Key Takeaways

  • Feeding attorney communications into AI chatbots may waive attorney-client privilege; major firms are now writing this risk explicitly into client contracts.
  • Courts are sanctioning attorneys who submit AI-hallucinated citations — in the MyPillow case, $3,000 per lawyer.
  • Never submit AI-generated citations without independent verification in a primary source.
  • Review an AI tool's data retention and training policies before entering privileged or sensitive material.