
Oregon's Top Court Warns AI-Generated Erroneous Filings Are 'Rapidly Escalating'

Oregon's Court of Appeals chief judge issued a public warning that AI-fabricated court filings are "rapidly escalating," with one attorney fined $10,000 and another $8,000 for AI-hallucinated citations.


By Hector Herrera | May 16, 2026 | Legal

Oregon's Court of Appeals chief judge issued a public warning this week that AI-generated court filings containing fabricated case citations are "rapidly escalating" — adding a state judicial voice to a national problem that is now generating documented penalties at a rate of four to five new incidents per day.

The warning matters because it comes from the judiciary, not from a bar association or a law school. Courts are the last stop: once a fabricated citation gets past a judge, it enters the record as if it were real.

What Oregon's Judge Said

According to the Oregon Capital Chronicle, the Court of Appeals chief judge described a pattern in which attorneys are submitting briefs that cite cases that do not exist, quote passages that were never written, and attribute legal propositions to real judges who said no such thing — all generated by AI systems that produce confident, citation-formatted text regardless of accuracy.

Two specific cases from Oregon illustrate the scale of the problem:

  • One attorney received a $10,000 fine for submitting briefs with fabricated case citations
  • A second attorney was fined approximately $8,000 for briefs containing invented quotations falsely attributed to real court decisions

Both attorneys used AI tools to assist with legal research or drafting; neither, it appears, verified the outputs before filing.

The National Picture

Oregon's warning arrives as the national documented count of AI hallucination incidents in legal filings has exceeded 660 cases, growing at four to five new incidents per day. NexChron has tracked this pattern since early 2026, when a Texas federal judge began requiring attorneys to certify that AI-generated content had been human-verified before submission.

What makes this pattern persistent is that AI language models generate case citations in the correct format — proper citation structure, plausible case names, accurate-looking volume and page numbers — but the underlying cases either don't exist or the quoted passages are invented. An attorney scanning for obvious errors would not catch the fabrication without independently verifying each citation against a legal database like Westlaw or LexisNexis.

The verification step takes time. Attorneys under billing pressure or deadline constraints skip it. Courts pay the cost.
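The format-versus-existence gap described above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical local set of known citations standing in for a real database such as Westlaw or LexisNexis (the `KNOWN_CITATIONS` entries and the sample brief text are illustrative, not real filings):

```python
import re

# Hypothetical stand-in for a real legal database lookup (Westlaw/LexisNexis).
# These entries are illustrative only.
KNOWN_CITATIONS = {
    "410 U.S. 113",   # Roe v. Wade
    "347 U.S. 483",   # Brown v. Board of Education
}

# Matches the standard "<volume> <reporter> <page>" citation shape,
# e.g. "410 U.S. 113" or "123 F.3d 456".
CITATION_RE = re.compile(r"\b(\d{1,4})\s+(U\.S\.|F\.\d?d|S\. Ct\.)\s+(\d{1,4})\b")

def check_citations(text: str) -> list[tuple[str, bool]]:
    """Return (citation, exists) pairs for every citation-shaped string found.

    A fabricated citation passes the format check just as easily as a real
    one, which is why scanning for obvious errors cannot catch it: only the
    existence lookup distinguishes the two.
    """
    results = []
    for m in CITATION_RE.finditer(text):
        cite = " ".join(m.groups())
        results.append((cite, cite in KNOWN_CITATIONS))
    return results

brief = "As held in 410 U.S. 113 and reaffirmed in 999 U.S. 321, ..."
for cite, exists in check_citations(brief):
    print(f"{cite}: {'found' if exists else 'NOT FOUND in database'}")
```

Both citations in the sample brief are perfectly formatted; only the database lookup reveals that the second one does not exist. That lookup is the step attorneys under deadline pressure are skipping.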

Why Judges Are Losing Patience

Early AI hallucination incidents in courts were treated as isolated errors. Judges issued warnings. Bar associations published guidance. Law firms promised internal protocols.

None of it worked at scale. The incident count kept rising because the structural incentive — AI saves time on legal drafting — remains intact, while the professional penalty for getting caught has, until recently, been modest. Fines in the $8,000-$10,000 range are painful for solo practitioners but manageable for attorneys at large firms.

Some judges have begun escalating. Sanctions now include:

  • Referral to state bar disciplinary boards
  • Adverse findings against the submitting party's claims
  • Required certification protocols that apply to all future filings by the attorney in that court
  • In at least two federal cases, suspension of the attorney's right to practice before that court

The Bar Association Response

Oregon's state bar, like bars in most states, has issued guidance recommending (but not requiring) that attorneys disclose AI use and verify AI-generated content. The guidance is voluntary. Mandatory disclosure rules are under consideration in approximately a dozen states but have not been widely enacted.

The American Bar Association issued guidance in 2023 that attorneys have an ethical duty of competence when using AI tools — meaning ignorance of AI limitations is not a defense. But "duty of competence" guidance without enforcement mechanisms has limited effect on practitioners who are not already following it.

What to Watch

Oregon's public warning signals that courts are moving from quiet frustration to active deterrence. The practical question is whether escalating financial penalties change attorney behavior at scale, or whether the economics of AI-assisted drafting still make the risk worth taking.

The 660-case figure will not stop rising until either AI tools become reliable enough to self-verify legal citations (they are not close to that), or verification becomes mandatory and machine-auditable, or penalties become severe enough to make the risk calculation change.

None of those conditions exist yet.


Sources: Oregon Capital Chronicle

Key Takeaways

  • Oregon's Court of Appeals chief judge publicly warned that AI-fabricated court filings are "rapidly escalating"; two Oregon attorneys were fined $10,000 and roughly $8,000
  • Nationally, documented AI hallucination incidents in legal filings now exceed 660, growing at four to five per day
  • Fines alone have not changed behavior at scale; some judges are escalating to bar referrals, mandatory certification protocols, and suspensions



Written by

Hector Herrera

Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.
