U.S. Intelligence Agencies Push for Expanded AI Oversight Authority
By Hector Herrera | May 13, 2026 | Government
U.S. intelligence agencies are lobbying for expanded authority to regulate the most advanced AI models, and emergency powers are already in play: under the Defense Production Act, developers training frontier models are compelled to share safety testing results with the federal government. That requirement puts the intelligence community on a collision course with the tech industry and with the Trump administration's own deregulatory agenda.
Two Contradictory Forces in One Administration
The Trump administration came to office with a clearly stated position on AI: remove Biden-era safety requirements, reduce regulatory friction for American AI companies, and let industry lead. That posture is now being pulled in the opposite direction from inside the administration itself.
According to reporting by The Washington Post, national security officials centered in the intelligence community have concluded that the cybersecurity risks posed by the most advanced AI models require federal oversight that a voluntary framework cannot provide. The internal dispute has escalated to the White House level, where different factions are pressing for control over how AI policy is shaped.
What the Defense Production Act Invocation Means
The Defense Production Act (DPA) is a Korean War-era statute that gives the federal government authority to direct private industry in the interest of national security. The Biden administration invoked it in its October 2023 executive order on AI to require developers of the most compute-intensive models to share safety testing results with the government. The Trump administration did not reverse that requirement.
The significance is in what it signals: even under an administration rhetorically committed to AI deregulation, national security concerns are producing binding oversight mechanisms on the sector's most advanced models. The DPA invocation effectively creates a mandatory reporting relationship between frontier AI developers and the federal government on safety-relevant findings — without going through the legislative process that would normally create such a requirement.
What Intelligence Agencies Believe the Risk Is
The intelligence community's concern is specific: adversarial use of frontier AI models, particularly models capable of providing meaningful assistance to actors attempting to develop biological, chemical, or radiological weapons, or to conduct cyberattacks on critical infrastructure.
The argument inside the government is that models crossing certain capability thresholds require government visibility before they are deployed, because a miscalibrated deployment is not a product liability problem but a potential national security incident. That framing puts intelligence agencies in direct tension with the tech industry's position that safety is a matter for companies to determine.
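To make the threshold idea concrete, here is a minimal sketch of what a capability-gated reporting rule looks like in practice. It is illustrative only: the record type, function names, and the deployment gate itself are hypothetical, and the 10^26-operation figure simply mirrors the training-compute trigger used in the 2023 executive order's reporting requirement.

```python
# Illustrative sketch only. Names are hypothetical; the threshold value
# mirrors the 10^26-operation training-compute trigger from the 2023
# executive order's DPA-based reporting requirement.

from dataclasses import dataclass

# Hypothetical reporting trigger, in total training operations.
TRAINING_COMPUTE_THRESHOLD = 1e26


@dataclass
class ModelTrainingRun:
    name: str
    training_ops: float        # total integer/floating-point operations used
    safety_report_filed: bool  # whether safety-test results were shared


def cleared_for_deployment(run: ModelTrainingRun) -> bool:
    """A run below the threshold faces no reporting obligation; above it,
    deployment is gated on having filed safety testing results."""
    if run.training_ops < TRAINING_COMPUTE_THRESHOLD:
        return True
    return run.safety_report_filed


# Example: a frontier-scale run that has not yet reported is blocked.
frontier = ModelTrainingRun("frontier-v1", 3e26, safety_report_filed=False)
assert not cleared_for_deployment(frontier)
```

The unresolved policy question is everything the sketch leaves out: which agency receives the report, what counts as a complete safety disclosure, and who sets the threshold. That is precisely what the internal dispute described below is contesting.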
The Power Struggle Inside the White House
The Washington Post's reporting describes a genuine internal conflict. Commerce Department officials, who have traditionally handled AI policy coordination, are now competing with intelligence community officials for authority over how advanced AI is governed. Tech industry advocates who expected a fully deregulatory posture are discovering that national security equities create a floor of oversight that no administration can easily waive.
The outcome of this internal dispute will have significant implications for how the largest AI developers — OpenAI, Anthropic, Google DeepMind, Meta, and others — interact with the federal government on safety testing and disclosure. If the intelligence community's position prevails, mandatory safety data sharing becomes a standard feature of operating at the frontier of AI capability.
What This Means for AI Developers
For companies training frontier-scale models, the practical implication is that some form of mandatory engagement with the federal government on safety testing is likely regardless of which faction wins the internal policy debate. The question is not whether — it is on whose terms, through which agency, and with what disclosure requirements.
Developers already in conversations with NIST, the intelligence community, or the Defense Department should treat those relationships as durable institutional features of the landscape, not temporary Biden-era artifacts. The national security community's interest in AI oversight is structural, not political.
What to Watch
The internal White House dispute will likely produce a formal policy document, whether an executive order, a National Security Memorandum, or a Commerce Department rulemaking, that establishes which agency has primary authority over frontier AI safety oversight. The timing and content of that document will define the operating environment for the largest AI developers for the remainder of the decade.
Watch also for how companies respond publicly. Developers who have voluntarily shared safety testing data with the government are positioned differently from those who have resisted, and that distinction will matter when formal authority is established.