The U.S. Has 1,561 AI Bills and No Agreed-Upon Test for Any of Them
By Hector Herrera | May 16, 2026 | Government
As of mid-May 2026, U.S. lawmakers have introduced 1,561 AI-related bills across 45 states — and there is no shared federal standard to evaluate whether any of them make sense. That gap is the actual crisis in American AI governance, and it is getting worse.
The problem isn't that states are legislating AI. It's that they're doing it with different definitions, different thresholds, and different enforcement mechanisms, with no coordinating framework underneath any of it.
The Numbers
According to a Fortune analysis published May 15, the scale of state-level AI legislation is unlike anything Congress has seen from a technology sector:
- 1,561 AI-related bills introduced across 45 states as of mid-May 2026
- 145 bills enacted in 2025 alone, before the current session's activity
- 45 states have active AI legislation — the five that don't are the outliers
- No federal preemption legislation has passed Congress, despite multiple attempts
The White House has issued executive orders on AI — both the Biden-era safety framework and the Trump administration's rollback — but executive orders don't preempt state law. Only an act of Congress can do that, and Congress has not acted.
What the Patchwork Actually Looks Like
Here's what companies are navigating right now:
Colorado passed the Colorado AI Act in 2024, targeting high-risk AI systems. It has been amended, stayed, and challenged multiple times. Its status as of May 2026 remains uncertain.
Connecticut passed SB 5 in May 2026, creating disclosure and impact assessment requirements for AI systems. It applies to developers and deployers operating in-state.
Texas took the opposite approach — limiting local government AI restrictions in an attempt to attract AI investment.
California has the most active AI legislative calendar, with bills ranging from deepfake disclosure to mandatory safety testing for frontier models.
A company deploying an AI hiring tool nationwide is now effectively subject to dozens of distinct regulatory regimes, each with its own requirements, its own definition of what counts as "consequential" AI use, and its own enforcement agency.
Why There's No Federal Test
The lack of a federal standard isn't an accident — it's a political failure with several causes.
Congress moves slowly on technology. It has passed no comprehensive internet legislation since the Communications Decency Act in 1996, while AI is moving on a timeline measured in quarters, not the multi-year cycles of federal rulemaking.
The White House position keeps shifting. The Biden administration's October 2023 executive order mandated safety testing for frontier models and created reporting requirements. The Trump administration rescinded large portions of it in early 2025 and directed agencies to prioritize AI "innovation" over precautionary rules.
Industry prefers state law — selectively. Large AI companies often support federal preemption when state laws are stricter, and oppose it when federal law might impose requirements they prefer to avoid. This creates an incentive structure that delays resolution.
There is no agreed definition of "AI." Different bills define artificial intelligence differently. A bill regulating "AI-generated content" may or may not cover a system that uses machine learning to flag fraud. Without a common taxonomy, even well-intentioned legislation creates gaps and overlaps.
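The scoping problem can be made concrete with a minimal sketch. The two "bill" predicates below are hypothetical paraphrases, not actual statutory text: one covers "AI-generated content," the other "automated decision systems." The same machine-learning fraud flagger falls outside the first and inside the second.

```python
from dataclasses import dataclass

@dataclass
class System:
    """Minimal description of a deployed software system."""
    uses_machine_learning: bool
    generates_content: bool
    makes_consequential_decisions: bool

def covered_by_content_bill(s: System) -> bool:
    """A bill regulating 'AI-generated content' reaches only generators."""
    return s.uses_machine_learning and s.generates_content

def covered_by_decision_bill(s: System) -> bool:
    """A bill regulating 'automated decision systems' reaches any ML
    system that makes consequential decisions, generative or not."""
    return s.uses_machine_learning and s.makes_consequential_decisions

# An ML fraud-flagging system: generates no content, but decides.
fraud_flagger = System(uses_machine_learning=True,
                       generates_content=False,
                       makes_consequential_decisions=True)

print(covered_by_content_bill(fraud_flagger))   # False: out of scope
print(covered_by_decision_bill(fraud_flagger))  # True: in scope
```

Multiply this by 1,561 bills with 1,561 slightly different definitions, and the gaps and overlaps compound.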
The Compliance Crisis Is Already Here
This isn't a theoretical future problem. Companies are already making decisions based on the patchwork:
- Some firms are choosing the most restrictive applicable state law as their de facto national standard — expensive but legally defensible
- Others are geofencing features by state, meaning users in different states get different product experiences based on local law
- Legal and compliance teams at AI companies have expanded significantly in 2025-2026, with AI-specific regulatory counsel now a standard role at any company with meaningful AI deployment
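The two strategies above, per-state geofencing versus adopting the strictest applicable rule nationally, amount to different lookups over the same rules table. A minimal sketch, using invented state codes and obligations rather than actual statutory requirements:

```python
# Hypothetical per-state compliance rules; the obligations listed are
# illustrative, not a summary of any real statute.
STATE_RULES = {
    "CO": {"impact_assessment": True,  "ai_disclosure": True},
    "CT": {"impact_assessment": True,  "ai_disclosure": True},
    "TX": {"impact_assessment": False, "ai_disclosure": False},
}
DEFAULT_RULES = {"impact_assessment": False, "ai_disclosure": False}

def requirements_for(state: str) -> dict:
    """Geofencing strategy: apply only the obligations of the user's state."""
    return STATE_RULES.get(state, DEFAULT_RULES)

def most_restrictive() -> dict:
    """'Strictest state wins' strategy: turn on every obligation that
    any state requires, and ship one national configuration."""
    merged = dict(DEFAULT_RULES)
    for rules in STATE_RULES.values():
        for key, required in rules.items():
            merged[key] = merged[key] or required
    return merged

print(requirements_for("TX"))  # per-state product experience
print(most_restrictive())      # single, more expensive national standard
```

The trade-off is visible even at this scale: geofencing keeps each market cheap but fragments the product, while the merged configuration keeps one product but imposes the costliest rules everywhere.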
Small and mid-sized companies are hardest hit. A startup that lacks in-house legal capacity cannot realistically track 1,561 bills across 45 states. The compliance burden disproportionately benefits large incumbents who can afford it — the opposite of what innovation policy should do.
What Would a Federal Standard Actually Require?
Researchers and policy advocates who have studied this closely identify several minimum elements:
- A common definition of AI that doesn't sweep in every software system or exclude consequential uses
- A risk-tiered framework — not all AI is equally consequential; a spam filter and a medical diagnostic system should not have identical requirements
- Preemption with a floor — federal law that sets minimum standards while allowing states to add protections in areas of specific local concern (employment, housing)
- Safe harbor provisions for companies that follow the federal framework, reducing liability under conflicting state laws
None of these are technically difficult concepts. The difficulty is political — who controls the framework, who enforces it, and which industry interests get carved out.
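A risk-tiered framework of the kind described above is simple to express; the tiers, example systems, and obligations below are assumptions for illustration, since real thresholds would be set by statute.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers; actual cutoffs would be defined in law."""
    MINIMAL = 1   # e.g. a spam filter
    LIMITED = 2   # e.g. a chatbot with a disclosure duty
    HIGH = 3      # e.g. hiring, credit, or medical diagnostics

# Obligations scale with tier instead of applying uniformly.
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["user_disclosure"],
    RiskTier.HIGH:    ["user_disclosure", "impact_assessment", "audit"],
}

def obligations(tier: RiskTier) -> list[str]:
    """Return the compliance obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations(RiskTier.MINIMAL))  # spam filter: nothing extra
print(obligations(RiskTier.HIGH))     # diagnostic system: full stack
```

The hard part is not the table; it is deciding, politically, which systems land in which row.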
What to Watch
The current legislative session in most states ends between May and July 2026. The volume of state AI bills actually enacted by mid-summer will tell us whether this is a temporary surge or a permanent structural reality.
If Congress still hasn't moved on preemption by the end of 2026, the 1,561-bill patchwork will look simple compared to what's coming in 2027. Companies are already planning for that scenario.
Sources: Fortune, May 15, 2026