
Congress Stalls While States Sprint: The Federal AI Regulation Gap Is Getting Dangerous

More than 38 states have enacted AI laws, Colorado's high-stakes AI Act takes effect June 30, and Congress still has not passed a single comprehensive AI statute — leaving courts and state AGs to fill the vacuum.


By Hector Herrera | May 7, 2026 | Vertical: Government | Type: Government Policy

More than 38 states have enacted AI statutes, Colorado's high-stakes AI Act takes effect June 30, and Congress has not passed a single comprehensive AI law. A May 6 analysis documents the accelerating mismatch between the pace of state-level AI lawmaking and the absence of federal action — a vacuum that state attorneys general and courts are now filling by default. Legal experts warn that the resulting patchwork will raise compliance costs and create conflicting obligations for every company operating across state lines.

This is not a technology policy debate anymore. It is a governance failure with direct operational consequences arriving in weeks, not years.

Where Federal Legislation Stands

Two competing federal frameworks exist on paper. Neither has the votes to advance.

The White House preemption framework proposes that federal guidelines supersede state AI laws — a position with broad industry support but significant bipartisan opposition from state governments that have already invested in their own regulatory infrastructure.

The GUARDRAILS Act, backed by Democratic lawmakers, takes the opposite position: establish federal AI standards that set a floor, not a ceiling, allowing states to layer additional requirements on top. Industry opposes this because it preserves the patchwork rather than eliminating it.

The result is paralysis. Neither approach has built the coalition needed to advance, and neither side is close to compromise. Meanwhile, the calendar keeps moving. Colorado's AI Act — which imposes obligations on developers and deployers of high-risk AI systems — goes into effect June 30 regardless of what happens in Washington.

The 38-State Patchwork in Practice

The compliance problem created by divergent state laws is not theoretical. Consider a company that operates a customer-facing AI system in all 50 states:

  • Colorado requires risk assessments and transparency disclosures for high-risk AI starting June 30
  • California has its own AI disclosure and bias audit requirements
  • Texas has enacted AI-in-employment rules with different definitions of what constitutes high-risk
  • Illinois, New York, and Maryland have sector-specific AI regulations in hiring, finance, and health that overlap but don't align

Each law uses different definitions of "AI system," "high-risk," "developer," and "deployer." A risk assessment that satisfies Colorado may not satisfy California. A transparency disclosure compliant in Illinois may be insufficient in New York.

For large enterprises, this means building and maintaining 38-plus compliance frameworks instead of one. For small and mid-size companies, it often means choosing between geographic restriction and regulatory exposure — neither of which is a good business outcome.

Who Is Filling the Vacuum

In the absence of federal law, three actors are making de facto AI policy:

State attorneys general. California and New York AGs have announced AI enforcement priorities. Texas has flagged AI-in-employment investigations. AG actions are increasingly the practical enforcement mechanism for AI harms, especially in consumer-facing products.

Courts. Judges are ruling on AI-related liability cases without statutory guidance, establishing common law precedents that vary by circuit. A hallucination-caused harm that creates liability in the Ninth Circuit may not create the same liability in the Fifth. This circuit-level divergence compounds the state statutory patchwork.

Industry consortia and standards bodies. NIST's AI Risk Management Framework is being adopted voluntarily by companies seeking a defensible compliance posture. IEEE and ISO working groups are developing interoperability standards. Voluntary frameworks are better than nothing, but they lack enforcement teeth.

What the Divergence Costs

The compliance cost argument is real, but it is not the only cost. A more serious concern is the governance gap for AI systems that cause genuine harm.

When a high-risk AI system produces a discriminatory credit decision, makes a consequential medical recommendation, or generates a false output that causes legal or reputational damage, the question of who is accountable and under what standard is unsettled in most jurisdictions. State laws are beginning to answer that question, but inconsistently. Federal law would create a single, predictable standard.

The flip side — and the reason federal preemption is contested — is that weak federal standards would displace stronger state protections. Colorado's law, for example, is more demanding than what the White House preemption framework would likely require. If federal law preempts Colorado, companies operating in Colorado get a compliance discount at the cost of consumer protections.

That is the genuine tension at the center of this debate, and it is not resolved by asserting that federal law is always better.

What to Watch

The June 30 Colorado AI Act effective date is the first hard deadline. Watch for whether Colorado enforcement actions materialize in Q3, and whether they create political momentum for federal action or fuel the preemption argument. Also watch for the White House's next move: if executive action on AI preemption advances before a congressional vote, it will face immediate legal challenge from states — and that litigation could define the federal-state AI governance question more concretely than any legislation.

Key Takeaways

  • Two competing federal frameworks — the White House preemption proposal and the GUARDRAILS Act — are both stalled, and neither side is close to compromise.
  • Colorado's AI Act takes effect June 30 regardless of what happens in Washington.
  • Divergent state definitions of "AI system," "high-risk," "developer," and "deployer" mean compliance in one state does not guarantee compliance in another.
  • State attorneys general, courts, and voluntary standards bodies are making de facto AI policy in the federal vacuum.
  • Executive action on AI preemption before a congressional vote would face immediate legal challenge from states.


Written by

Hector Herrera

Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.
