
25 U.S. State AI Laws Now Enacted as Colorado's Landmark Act Nears June Compliance Deadline

Twenty-five U.S. states have enacted AI laws in 2026, with Colorado's Act taking effect in June — creating the most fragmented AI compliance landscape businesses have ever faced.



By Hector Herrera | April 28, 2026 | Government

Twenty-five U.S. states have enacted AI laws in 2026, with 27 more bills having cleared both legislative chambers — and Colorado's AI Act takes effect in June, giving companies weeks to comply or face enforcement. The result is the most fragmented AI compliance landscape American businesses have ever had to navigate.

That assessment comes from a comprehensive tracker published April 24 by Cooley LLP, one of the most detailed inventories of active state AI legislation published to date. The picture it paints is one of regulatory acceleration: states aren't waiting for Congress, and companies that assumed a federal standard would arrive before state laws matured are now running out of time.

What the Cooley Tracker Shows

The Cooley data, current as of April 24, counts:

  • 25 AI laws enacted across U.S. states in 2026
  • 27 additional bills that have passed both legislative chambers and are awaiting governor signatures or final procedural steps
  • Colorado's AI Act takes effect in June 2026, making it among the first state-level AI governance laws in the U.S. to carry real enforcement weight
  • New York's amended RAISE Act has shifted from a broad high-risk AI prohibition framework toward a transparency-and-reporting model — a significant softening that reflects industry pushback but still imposes new disclosure obligations

The Colorado law requires developers and deployers of high-risk AI systems — defined as systems making consequential decisions in employment, housing, credit, education, and healthcare — to conduct impact assessments, disclose AI use to affected individuals, and establish mechanisms for consumers to appeal automated decisions.

The New York Pivot

New York's amended RAISE Act represents the other major story in the tracker. The original version proposed strict pre-deployment requirements for high-risk AI, drawing sharp opposition from the tech industry. The amended version moves toward a transparency and reporting framework — companies must document and disclose how AI systems work rather than seek pre-approval — but the obligations are still substantial enough to require meaningful compliance work.

That pivot matters because New York's regulatory choices tend to influence other states. If the RAISE Act becomes a template, other legislatures considering strict pre-deployment review may move toward disclosure-plus-accountability models instead.

A Compliance Map That Won't Hold Still

What makes this genuinely difficult for corporate legal and compliance teams is the variation across states. Illinois has its own AI requirements layered on top of its existing biometric privacy law. Texas and California each have active requirements of their own. Colorado's is the most comprehensive. And none of them align cleanly.

A company operating nationally now faces a compliance matrix that includes:

  • Colorado: Impact assessments, consumer disclosure, appeal rights — effective June 2026
  • New York: Transparency reporting, documentation requirements
  • Illinois: AI provisions intersecting with BIPA (Biometric Information Privacy Act)
  • California: Multiple AI-related bills active across employment, consumer protection, and government use
  • Texas: Provisions targeting automated decision systems in specific sectors

Companies that built a single AI governance program assuming regulatory uniformity are discovering it doesn't cover everything. Those that built flexible, modular programs are better positioned — but they're still spending significant legal hours mapping each state's requirements to their specific AI deployments.

What This Means for Businesses

The compliance deadline is real. Colorado's June effective date isn't theoretical. Companies deploying AI in employment, lending, housing, or similar high-stakes contexts need to have completed impact assessments, updated their disclosure language, and tested their appeal workflows before summer.

Legal teams are taking operational ownership. Eversheds Sutherland's April 2026 global AI regulatory bulletin documented a structural shift: legal departments aren't just advising on AI risk anymore, they're running AI governance programs themselves. That's a meaningful change in how corporations organize accountability.

The EU timeline may be shifting. European institutions are reportedly weighing a delay of key AI Act obligations to 2027–2028, according to the Eversheds bulletin. For multinationals, that could provide some breathing room on Brussels — but it doesn't ease the U.S. state pressure at all.

Small and mid-size companies are the most exposed. Large enterprises have legal teams and dedicated compliance budgets. Startups and mid-market companies deploying AI in consequential contexts often don't, and few have the resources to build state-by-state compliance programs from scratch.

What to Watch

The next 60 days are critical. If Colorado's Act takes effect without major enforcement actions, other states will be watching to see whether the compliance machinery actually functions. And if Congress doesn't move on federal AI legislation before the end of the year, the state-by-state patchwork will continue to expand — potentially reaching 40 or more enacted laws by the end of 2026.

The more interesting long-term question is whether one state's framework eventually becomes the de facto national standard — the way California's privacy law effectively shaped U.S. data practice before federal law caught up. Right now, Colorado is the most likely candidate.

Key Takeaways

  • Cooley's April 24 tracker counts 25 state AI laws enacted in 2026, with 27 more bills through both legislative chambers
  • Colorado's AI Act takes effect in June, requiring impact assessments, consumer disclosure, and appeal rights for high-risk systems
  • New York's amended RAISE Act has pivoted from pre-deployment restrictions to a transparency-and-reporting framework
  • No two state regimes align cleanly, so nationally operating companies need modular, state-by-state compliance programs


Written by

Hector Herrera

Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.
