The White House Wants One National AI Law — and No New AI Agency
The Trump administration's National Policy Framework for AI calls on Congress to preempt state AI laws that impose undue burdens, replace 600-plus state bills with a national standard, and rely on existing agencies rather than a new federal AI regulator.
By Hector Herrera | April 16, 2026 | Government
The Trump administration released its National Policy Framework for Artificial Intelligence this month, laying out the federal government's position on AI regulation heading into the second half of 2026. The core ask to Congress: preempt state AI laws that "impose undue burdens," create a single national standard, and do not build a new federal AI regulatory agency. The framework, paired with a legislative draft from Sen. Marsha Blackburn, defines what AI governance may look like under a Republican-controlled Congress — and sets up a direct confrontation with California and a growing number of states moving in the opposite direction.
This is not a set of regulations. It is a policy statement that tells industry and legislators what the administration wants — and that the administration is willing to push for federal preemption to get it.
What the Framework Calls For
Federal preemption of state AI laws. The framework calls on Congress to preempt state AI regulations that "impose undue burdens" on AI development and deployment. More than 600 state-level AI bills have been introduced across the country in recent legislative sessions. The administration's position is that this patchwork creates compliance complexity that disadvantages American AI companies and fragments what should be a national market.
No new federal AI regulatory agency. The framework explicitly opposes creating a dedicated AI regulatory body — the equivalent of an FDA or FCC for artificial intelligence. Instead, existing agencies — the FTC, FDA, CFPB, SEC, and sector-specific regulators — would retain authority over AI applications within their current jurisdictions, and industry-led standards would fill the gaps.
Pro-development posture. The framework frames AI development as a national competitiveness priority and positions regulatory clarity as a tool to accelerate deployment, not constrain it. The comparison is explicitly to China's state-backed AI investment program — a framing designed to make opposition to the framework seem like unilateral disarmament.
Industry-led standards. Rather than government-mandated requirements, the framework looks to bodies like NIST (National Institute of Standards and Technology) and industry consortia to develop voluntary standards for AI safety and governance. NIST's AI Risk Management Framework, already widely adopted by enterprise AI teams, would likely serve as the foundation.
The Blackburn Bill
Paired with the White House framework is a discussion draft of the TRUMP AMERICA AI Act, sponsored by Sen. Marsha Blackburn (R-TN). A "discussion draft" means it has not been formally introduced — it is a proposal circulated for stakeholder feedback before going through committee. Its provisions are expected to closely track the White House framework.
Key provisions expected in the draft:
Statutory authority for federal preemption of conflicting state AI laws
Designation of NIST as the lead coordinating body for AI standards
Liability carve-outs for AI developers who comply with federal standards
Restrictions on state enforcement actions against AI systems that comply with federal standards
The liability carve-out is the provision most likely to generate significant lobbying interest. If developers who meet federal standards are shielded from state-level enforcement and private lawsuits, the commercial value of achieving federal compliance is substantial.
The California Collision
The framework creates a direct collision with California. Governor Newsom signed Executive Order N-5-26 on March 30, which explicitly reserves California's right to override the federal supply chain bans the Trump administration has imposed on AI companies, and establishes the state's own vendor certification requirements covering bias, civil rights, and illegal content.
California's position is that as the nation's largest state economy and the home of most major AI companies, it has both the authority and the obligation to set its own standards. The White House framework's position is that this authority should be preempted by Congress.
The legal question: Federal preemption of state law is constitutionally permissible when Congress explicitly legislates it. The question is whether Congress will actually pass legislation with a preemption clause that is broad enough to displace California's requirements. State attorneys general — including California's — would challenge any such preemption in court.
The political calculation: Preemption fights are a standard feature of federal-state regulatory conflict (see: financial services, environmental law, telecommunications). They take years to resolve even when Congress acts. The AI preemption fight, if it reaches federal legislation, is likely to be litigated for years regardless of the initial outcome.
What the Framework Does Not Address
The framework's emphasis on competitiveness and deregulation leaves several significant policy questions unanswered:
AI in hiring and lending. The FTC and CFPB have ongoing investigations into AI-driven decisions in employment and credit. The framework does not clarify how those agencies should handle AI-specific concerns within their current statutory authority.
National security and export controls. The framework is silent on chip export controls, the ongoing coalition among US labs to block adversarial model distillation, and the question of whether military AI applications require different standards than commercial ones.
Frontier model safety. The framework's industry-led standards approach does not create any mandatory evaluation requirement for frontier models before deployment — a gap that safety researchers have argued is the most important regulatory question of the current moment.
What to Watch
The Blackburn discussion draft's transformation into an introduced bill is the immediate milestone. Committee markups in Senate Commerce (where AI legislation typically originates) are the next legislative step. Whether a preemption provision survives committee is the central question — it will attract intense opposition from state governments and civil rights organizations no matter how much industry backing it draws.
The parallel track: California and the 20+ states with active AI legislation will continue advancing their own frameworks regardless of what Congress does. If federal preemption fails to pass — which is not uncommon for complex regulatory legislation — the patchwork of 600-plus state bills becomes the permanent operating environment.
Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries, and the editor of NexChron. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.