States Are Enforcing AI Law While Congress Stalls — and a Constitutional Fight Is Coming

With federal AI legislation stalled, state attorneys general are deploying existing consumer protection, employment, and privacy statutes against AI companies. Colorado and Texas laws are live. A constitutional showdown over federal preemption is forming.

By Hector Herrera | April 20, 2026 | Government

With Congress unable to pass comprehensive federal AI legislation, state attorneys general and regulators are filling the vacuum — deploying existing consumer protection, employment, and privacy statutes against AI companies that the federal government has not yet touched. The result is a rapidly fragmenting legal landscape that is forcing every company using AI in customer-facing decisions to navigate up to 50 different compliance regimes simultaneously, according to analysis from Morgan Lewis. A constitutional showdown between state AI authority and potential federal preemption is now directly in view.

This is the predictable consequence of federal inaction: states do not wait. The question of whether state or federal law governs AI is not academic — it will determine which companies face which obligations, and whether a national AI standard is even legally achievable.

What Is Already Live

Two significant state AI laws are now in effect or imminent:

Texas TRAIGA — the Texas Responsible AI Governance Act — is already live, requiring companies to disclose when AI systems are used in consequential decisions affecting Texas residents in healthcare, employment, and government services. Texas is not traditionally known for consumer-friendly regulation, which makes TRAIGA's passage notable: the business community and the state legislature agreed that disclosure requirements are a minimum floor. AI systems that make — or materially influence — decisions about whether someone gets a job, a loan, or a medical procedure must now be disclosed and documented.

Colorado AI Act takes effect in June 2026 and goes further. Colorado requires companies deploying AI in "high-risk" contexts to conduct impact assessments, implement risk management programs, and provide consumers with meaningful disclosure and recourse when AI influences a consequential decision. The Colorado law is the most comprehensive state AI statute in the country. It was modeled in part on the EU AI Act's risk-tiered framework — the same framework now under pressure from Brussels as a potential economic competitiveness drag.

Beyond those two statutes, state attorneys general are not waiting for new laws. They are applying:

  • Consumer protection statutes against companies whose AI systems produce deceptive or unfair outcomes
  • Employment discrimination law against automated hiring screening tools that exhibit disparate impact on protected classes
  • Privacy law (particularly California CCPA/CPRA) against companies collecting data to train AI without adequate disclosure
  • Existing financial regulation against AI-driven lending or insurance decisions that violate fair lending rules

The Federal Preemption Fight

The White House is pushing in the opposite direction. The administration has been advancing a federal framework that would preempt state AI laws — establishing a single national standard that would supersede state-level requirements. The argument for preemption is that a patchwork of 50 state laws creates impossible compliance complexity for companies operating nationally.

That push has triggered the GUARDRAILS Act in Congress — legislation that would block federal preemption and preserve state authority to regulate AI. The GUARDRAILS Act reflects a coalition of state attorneys general, consumer advocacy groups, and some tech companies that prefer navigating state law to the uncertainty of what a federal standard might ultimately require.

Morgan Lewis frames this as a significant constitutional question: under what circumstances can the federal government preempt state consumer protection and employment law to create uniform AI standards? The answer is not obvious. Federal preemption of state consumer protection regimes is constitutionally possible but politically contentious, and courts have been skeptical of broad preemption claims in health and safety contexts.

What This Means for Companies

Any company using AI in a decision that affects a real person — a hiring algorithm, a loan approval model, a health insurance prior authorization system, a customer service chatbot that handles account access — now faces enforcement risk in multiple jurisdictions simultaneously.

The practical compliance burden is significant:

Documentation requirements are already real in Texas and Colorado. If your AI system makes or influences a consequential decision, you need to be able to explain how it works, what data it uses, and what your process is for contesting AI-driven outcomes. "The model decided" is not a legally sufficient explanation in either state.

Impact assessments will be required in Colorado by June 2026. A risk assessment for AI in a high-risk deployment context requires identifying potential harms, documenting mitigation measures, and putting a human review process in place for adverse outcomes.
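The Colorado-style assessment reduces to a checklist that either has every required element documented or does not. The sketch below is a minimal, illustrative version loosely following the elements described above; the keys and pass criteria are assumptions for illustration, not the statute's text.

```python
def assess_deployment(assessment: dict) -> list[str]:
    """Return the required elements still missing; an empty list means
    the assessment is complete under this (illustrative) checklist."""
    required = [
        "intended_purpose",        # what decision the system supports
        "known_harms",             # identified risks of algorithmic harm
        "mitigations",             # measures taken against each harm
        "data_description",        # categories of data used and their sources
        "performance_evaluation",  # accuracy and disparate-impact testing
        "human_review_process",    # recourse path for adverse outcomes
    ]
    return [key for key in required if not assessment.get(key)]

# Hypothetical draft assessment for a healthcare deployment.
draft = {
    "intended_purpose": "prior authorization triage",
    "known_harms": ["wrongful denial of coverage"],
    "mitigations": ["physician review of all denials"],
    "data_description": "claims history, diagnosis codes",
}
missing = assess_deployment(draft)
# `missing` flags the elements still to be documented before the deadline.
```

A gap-list like this is also what a regulator will effectively run against your paperwork, which is why compliance teams are building it into deployment gates rather than annual reviews.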

Employment screening tools face the highest immediate scrutiny. New York City's Local Law 144 on automated employment decision tools has been in effect since 2023. Several other cities and states are moving similar legislation. Any AI tool used in resume screening, interview assessment, or workforce scheduling is now a regulatory target.

Healthcare AI is the most politically sensitive category. States are separately passing legislation that specifically bans AI-only denial of health insurance claims without human physician review. The prior authorization AI being deployed by UnitedHealth and others is being watched closely by state insurance commissioners.

The EU Parallel

For companies operating internationally, the EU AI Act is adding pressure on top of U.S. state requirements. The EU law, which is phased in through 2027, categorizes AI applications by risk level and imposes corresponding obligations. High-risk EU categories overlap significantly with the Texas and Colorado definitions — hiring, healthcare, credit, and government services.

Companies facing simultaneous EU AI Act compliance and state-level compliance in the U.S. are now building AI governance programs that dwarf what was required even 18 months ago. Legal teams that had three people working on AI policy in 2024 now have fifteen.

What to Watch

The GUARDRAILS Act vote timeline is the most consequential near-term indicator. If it passes, federal preemption of state AI law is blocked, and the fragmented enforcement landscape continues and intensifies as more states pass laws modeled on Colorado and Texas. If it fails, the White House's preemption framework moves forward, and every state law in this category faces potential override.

Watch also for the first major enforcement action by a state attorney general under existing consumer protection statutes: not under a new AI law, but under fraud, unfair business practices, or discrimination statutes applied to AI outputs. That enforcement action, wherever it comes first, will establish the template that other AGs follow.


Hector Herrera is the founder of Hex AI Systems and editor of NexChron.

Key Takeaways

  • Texas's TRAIGA is already in effect; Colorado's AI Act follows in June 2026, requiring impact assessments and consumer recourse for high-risk AI.
  • State attorneys general are already enforcing against AI under existing consumer protection, employment discrimination, privacy, and financial regulation.
  • The White House is advancing a framework that would preempt state AI laws; the GUARDRAILS Act would block that preemption and preserve state authority.
  • Any company using AI in consequential decisions faces documentation and disclosure obligations in multiple jurisdictions today.