Government & Policy | 4 min read

State AI Laws in Flux: Colorado Eyes Full Repeal, New York Softens RAISE Act

Colorado is considering fully repealing and reenacting its AI Act, resetting the clock to January 2027. New York's Governor has stripped the RAISE Act's most restrictive provisions in favor of transparency requirements.


By Hector Herrera | April 26, 2026 | Government

The state-level AI regulation landscape is being rewritten faster than most businesses can track. Colorado is considering repealing and fully reenacting its AI Act. New York's Governor just removed the most contentious provisions of the RAISE Act. A Cooley LLP analysis published April 24 maps the current state of play — and the picture is one of significant turbulence as states scramble to balance consumer protection with economic competitiveness.

The volatility is not random. It reflects genuine disagreement among policymakers about what AI governance should accomplish, combined with intense lobbying by technology companies and mounting pressure from the White House to defer to federal preemption rather than build a 50-state patchwork.


Colorado: From Trailblazer to Reconsideration

Colorado was the first U.S. state to pass a comprehensive AI Act, doing so in May 2024. The law targeted high-risk AI systems and imposed developer and deployer obligations to protect consumers from algorithmic discrimination. It was seen at the time as a model other states might follow.

By March 2026, Colorado's own working group was circulating a draft that would fully repeal and reenact the Act, resetting the effective date from February 2026 to January 2027 and significantly restructuring the underlying approach. The working group's revised framework narrows the definition of "high-risk" AI systems and focuses more tightly on automated decision-making that directly affects consumers in high-stakes contexts — lending, employment, housing, healthcare.

The repeal-and-reenact approach is significant. Colorado isn't simply amending its original law. It's acknowledging that the original framework had structural problems that couldn't be fixed at the margins.

What changed: after passage, a coalition of Colorado-based businesses and national technology companies raised concerns about compliance costs, definitional ambiguity, and the practical difficulty of auditing AI systems in the way the original law required. The working group's draft reflects those concerns — but also the state's genuine interest in remaining a meaningful consumer protection jurisdiction rather than simply deferring to whatever federal rules eventually emerge.


New York: Walking Back the RAISE Act's Sharp Edges

New York's RAISE Act (Responsible AI Safety and Education Act) was introduced as one of the toughest state AI governance proposals in the country. The original version included prohibitions on AI models that pose an "unreasonable risk of critical harm" — language broad enough to potentially capture a wide range of general-purpose AI systems.

Governor Hochul's March 2026 amendments removed those prohibitions entirely. The amended RAISE Act shifts the regulatory focus to transparency and reporting requirements rather than prohibition and pre-deployment risk evaluation. Developers and deployers would need to document AI system capabilities and limitations and make certain disclosures to consumers and regulators — a far less burdensome regime than the original proposal.

The directional change is clear: New York is moving toward disclosure-based governance and away from structural restrictions on AI development or deployment. For AI companies operating at scale, disclosure requirements are manageable compliance exercises. Prohibitions based on risk classifications are existential threats to product roadmaps.


The Federal Preemption Pressure

Neither state is moving in isolation. Both Colorado's reconsideration and New York's softening are happening in the context of the White House's aggressive push for federal preemption — the position that federal AI rules should supersede conflicting state laws, rather than allowing a state-by-state patchwork to develop.

The administration's argument is as much economic as it is about governance: companies building AI products can navigate one federal compliance framework, but cannot practically comply with 50 state-level regimes that use different definitions, different audit requirements, and different enforcement mechanisms.

States that want to maintain meaningful AI governance authority have a narrow window. Once federal preemption is codified — if it is — states may be limited to areas explicitly preserved by federal law. That creates urgency for states that want their frameworks to remain in effect, and political pressure on those that are still drafting.


What Businesses Need to Know

The turbulence creates real compliance uncertainty for companies that are either building AI systems or deploying them in consumer-facing contexts:

  • Colorado's reset to January 2027 gives businesses a planning window, but the scope of the new framework is still being finalized. Don't assume the delay means no obligations.
  • New York's disclosure-focused RAISE Act is more navigable than its predecessor, but disclosure requirements still require documentation infrastructure and legal review.
  • Multi-state compliance remains complicated. Even with Colorado and New York moderating, California's AI governance proposals, Texas's consumer protection framework, and a growing list of other state-level activity create overlapping obligations.
  • Federal preemption is not yet enacted. Until it is, assume state laws apply. Monitor both NIST AI RMF compliance as a baseline and state-specific requirements for your key operating jurisdictions.

What to Watch

Watch Colorado's legislature for a vote on the working group's repeal-and-reenact draft. If Colorado passes a revised framework before mid-2026, it will either serve as a new model for other states or effectively mark the high-water point of state-level AI governance ambition.

In New York, watch whether the softened RAISE Act advances out of committee in its amended form. If it passes with disclosure-only requirements, New York joins a growing group of states that have chosen transparency over prohibition as their governing approach.

The central question remains: will Congress act on federal preemption before states finalize their frameworks, or will another year of state-level turbulence pass without federal resolution?

Source: Cooley LLP

Key Takeaways

  • Colorado is weighing a full repeal and reenactment of its AI Act, pushing the effective date from February 2026 to January 2027 and narrowing the law's scope to high-stakes automated decision-making.
  • Governor Hochul's amendments strip the RAISE Act's prohibitions, shifting New York to a transparency-and-reporting regime.
  • Federal preemption is not yet law; until it is, businesses should assume state requirements apply in their key operating jurisdictions.


Written by

Hector Herrera

Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.

