Government & Policy | 4 min read

Federal AI Policy Is Stalled. States Are Moving — and the Compliance Map Is Getting Complicated.

With federal AI legislation stalled, states are using antitrust, consumer protection, and false claims law to enforce AI conduct now — creating a fragmented compliance landscape for multi-state businesses.


By Hector Herrera | April 23, 2026 | Government

The White House's National Policy Framework for AI has stalled in Congress, and the GUARDRAILS Act — which would have established federal preemption over state AI laws — has failed to advance. The result is what legal experts are describing as a regulatory vacuum that states are filling aggressively, using tools that have been on the books for decades and were never designed with AI in mind.

State AI enforcement is not a prediction. It is happening now.

How States Are Acting Without New AI Laws

According to Morgan Lewis, states are applying three existing legal frameworks to AI conduct:

Antitrust law: State attorneys general are examining whether AI-driven pricing algorithms facilitate coordination between competitors — essentially, whether competing companies using similar AI pricing tools are achieving the economic effects of price-fixing without the explicit agreement that antitrust law traditionally requires. This is active in housing, insurance, and consumer goods markets.

Consumer protection law: State consumer protection statutes prohibit unfair or deceptive practices. AI systems that produce statistically different outcomes for protected groups in lending, insurance, or employment — even without discriminatory intent — are being examined under these statutes. Several state AGs have issued civil investigative demands (the pre-litigation investigative tool available to state attorneys general) to companies deploying AI in consumer-facing decisions.

False claims statutes: In states where government contracts are involved, false claims laws — originally designed for defense contractor fraud — are being used to examine whether AI systems used in government billing or compliance reporting produce inaccurate outputs that vendors knew or should have known were unreliable.

The Federal Agency Approach

At the federal level, agencies that lack new AI-specific authority are doing the same thing: applying existing law to AI conduct rather than writing new rules.

The FTC has brought enforcement actions under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, and has issued guidance on AI in advertising and product claims. The CFPB (Consumer Financial Protection Bureau) has used its existing fair lending authority under ECOA (Equal Credit Opportunity Act) to examine AI-driven credit decisions. The EEOC has issued guidance on AI in employment decisions under Title VII and the ADA.

The result is AI governance without AI law: a patchwork of enforcement actions under statutes written for a different era, applied by agencies working at the edge of their existing authority. Legal experts say this raises compliance risk because standards set case by case through enforcement are far less predictable than standards set through formal rulemaking.

The Compliance Problem for Multi-State Businesses

For a company operating across multiple states, this environment creates a genuinely difficult compliance challenge.

Different state standards. A consumer protection analysis of an AI hiring tool in California will apply different standards than the same analysis in Texas. There is no uniform federal floor. Companies building compliance programs must either engineer to the most restrictive state standards (expensive, but clean) or maintain state-specific compliance programs (very expensive and difficult to scale, especially when the "standards" are emerging through enforcement actions rather than published rules).

Discovery risk. Enforcement actions using existing law proceed through standard litigation discovery. That means company documents — including internal audits, algorithmic performance data, bias analyses, and internal communications about known problems — become discoverable. Companies that have documented problems with their AI systems without fixing them face significant exposure when enforcement arrives.

Vendor liability. States are not limiting their attention to companies that deploy AI. They are also examining vendors who provide the AI tools that produce problematic outputs. If a third-party AI vendor's tool produces discriminatory lending decisions, both the deploying financial institution and the vendor face potential liability under existing law. Vendor contracts and indemnification provisions written before the current enforcement environment may not reflect that risk allocation accurately.

The speed of action. State AGs operate faster than federal rulemakers. An attorney general can issue a civil investigative demand without notice and can initiate litigation within months; federal rulemaking typically takes years. In many organizations, the enforcement environment is moving faster than compliance counsel can track.

What Businesses Should Do Now

This is not planning for a future risk. Enforcement is active.

Three immediate priorities for organizations using AI in consumer-facing, employment, or government contract contexts:

1. Conduct an AI inventory. Know which AI systems are making or influencing decisions that affect consumers, employees, or government counterparties. Include third-party tools embedded in your workflows — if a vendor's model makes a credit decision that affects a consumer, you own the compliance obligation even if you did not build the model. Most organizations that have done this exercise are surprised by how many AI-influenced decisions exist in their operations.

2. Run disparate impact analyses. AI systems producing statistically different outcomes for legally protected groups — by race, sex, national origin, age, disability, or other protected characteristics — are the primary enforcement target. Running these analyses proactively, documenting the results, and addressing identified problems before an enforcement action is materially better for both compliance and litigation position.
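One common screening heuristic for the statistical disparities described above is the "four-fifths rule" from the EEOC's Uniform Guidelines: if one group's rate of favorable outcomes is less than 80% of the highest group's rate, the disparity is typically flagged for closer review. A minimal sketch in Python, using illustrative outcome data (not real figures):

```python
# Hypothetical sketch of an adverse impact screen ("four-fifths rule")
# for a binary AI-influenced decision (e.g., hired / not hired).
# All group data below is illustrative.

def selection_rate(outcomes):
    """Fraction of favorable (1) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are a common screening flag under the
    EEOC four-fifths rule (a heuristic, not a legal conclusion)."""
    lower, higher = sorted([selection_rate(group_a), selection_rate(group_b)])
    return lower / higher

# Illustrative outcomes: 1 = favorable decision, 0 = unfavorable
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]   # selection rate 0.8
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # selection rate 0.4

ratio = adverse_impact_ratio(group_a, group_b)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50 -> flagged
```

A real analysis would use production decision data, control for legitimate business factors, and involve counsel; this sketch only shows the basic arithmetic behind the screening threshold.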

3. Review vendor contracts for AI liability allocation. Most enterprise AI vendor contracts were written before the current enforcement environment. Audit your indemnification provisions, representations about model performance, audit rights, and data breach obligations. The risk has shifted; the contracts should reflect that.

What to Watch

Two legislative developments matter for the trajectory of this landscape. First, whether any version of the GUARDRAILS Act or a successor bill advances — federal preemption would simplify the compliance environment significantly by establishing a national floor and limiting the patchwork. Second, whether state AG enforcement actions under existing-law frameworks result in consent decrees that effectively create industry-specific AI standards through litigation. That process — standards emerging from settlements rather than rulemaking — is slower and less predictable, but it is the current trajectory.

The regulatory vacuum is filling. The organizations treating it as a current compliance problem rather than a future planning exercise will be better positioned for what comes next.


Hector Herrera is the founder of Hex AI Systems and editor of NexChron.

