Trump Administration Reverses Course, Considers Mandatory Safety Testing for Frontier AI Models
By Hector Herrera | May 9, 2026 | Government
The Trump administration is considering requiring companies that train frontier AI models to submit safety testing results to the federal government — a significant policy reversal for an administration that dismantled Biden-era AI oversight in its first weeks in office. The shift is being driven by national security concerns after Anthropic's Mythos model demonstrated the ability to autonomously identify and exploit cybersecurity vulnerabilities.
What Happened
When President Trump took office in January 2025, one of his first AI-related moves was revoking President Biden's October 2023 executive order on AI safety, which had required developers of powerful AI models to share safety test results with the government before public release. The Trump White House framed the requirements as regulatory overreach that would slow American AI development in the race with China. A replacement executive order emphasized innovation and removed the pre-release safety reporting mandates.
That position is now shifting.
According to Fortune, officials are actively considering a mandate that would require frontier AI labs — those training the largest and most capable models — to share safety test results with federal agencies. The proposal is not yet formal policy, but sources told Fortune the internal discussions are substantive.
The Mythos Catalyst
The reported catalyst is Anthropic's Mythos model, which alarmed intelligence and national security officials by demonstrating that it could find and exploit cybersecurity vulnerabilities on its own. The distinction matters: earlier AI systems could assist human researchers in finding security flaws; Mythos reportedly operates with enough autonomy to raise concerns about commercial deployment without government visibility.
There is a notable irony here. Anthropic disclosed Mythos's capabilities through its own responsible scaling policy — the company's internal framework for evaluating whether new models cross capability thresholds requiring additional safeguards. Anthropic's transparency may have accelerated the regulatory response rather than forestalled it.
Cybersecurity capability is the specific concern. An AI system that can independently find and exploit software vulnerabilities is useful for offensive and defensive cyber operations alike — which makes it exactly the kind of technology that national security agencies want to know about before it reaches commercial customers.
The Defense Production Act
The legal mechanism under consideration is the Defense Production Act (DPA) — a 1950 statute that authorizes the executive branch to direct industrial production for national security purposes. The DPA has been used for semiconductor supply chains, COVID-19 medical equipment, and critical energy infrastructure. Applying it to AI model training would represent a novel expansion of the law's scope.
Using the DPA would let the administration impose requirements without going through the standard federal rulemaking process, which requires public comment periods and can take years. It also bypasses the need for congressional legislation — a practical advantage when AI-specific bills have repeatedly stalled on Capitol Hill.
Any DPA-based AI mandate would face immediate legal challenge. Technology trade groups would almost certainly argue that training an AI model doesn't constitute "industrial production" in the statutory sense. Courts have grown increasingly skeptical of broad claims of executive authority in recent years, and the resulting litigation could take years to resolve.
What This Means for Labs
For the handful of companies training frontier AI models — Anthropic, OpenAI, Google DeepMind, Meta, and xAI — the policy shift signals that national security concerns create a regulatory floor regardless of which party controls the White House. The assumption that the Trump administration's deregulatory posture was stable may need revision.
The specific focus on cybersecurity capability creates a new compliance consideration. Models capable of autonomous vulnerability discovery may eventually face requirements similar to those applied to other dual-use technologies: export controls, disclosure requirements, or deployment restrictions on certain customer categories.
For Anthropic specifically, being cited as the catalyst for new federal oversight places the company in an unusual position. The responsible scaling policy was designed in part to demonstrate that safety-focused labs could self-regulate effectively enough that government mandates weren't necessary. It now appears to have served as a roadmap for what regulators want from everyone.
The Broader Policy Landscape
The United States is not acting in isolation. The EU AI Act, which took effect in 2024, already requires providers of high-risk AI systems to conduct conformity assessments before deployment. The UK has been developing its own AI safety testing regime through the AI Safety Institute. China has implemented registration requirements for large AI models.
If the Trump administration moves forward with safety testing requirements — even using the DPA as a temporary mechanism — it would signal alignment between the US and major trading partners on the principle that frontier AI capabilities require some form of government visibility, even if the specific mechanisms differ.
Enterprise Implications
The immediate impact on enterprise AI buyers is indirect. A safety testing mandate would regulate the labs training foundation models, not the companies building applications on top of them. But disclosure requirements could eventually influence which models are approved for use in regulated industries — defense contracting, financial services, healthcare — where government-approved vendor status is a prerequisite.
For companies making long-term AI infrastructure decisions, policy risk belongs in the analysis. The regulatory environment for the most powerful AI systems is moving in a direction that is difficult to predict, and procurement choices made today may be affected by rules that don't exist yet.
What to Watch
Watch whether the administration formally pursues a Defense Production Act invocation, pivots to voluntary commitments from labs, or pushes for congressional legislation. No formal proposal has been announced; the conversations are reportedly preliminary.
Also watch Anthropic's public response. The company that triggered this policy conversation will need to navigate the position of being simultaneously the case study for why oversight is necessary and the industry's most prominent advocate for responsible AI development. How Anthropic responds will shape the industry's posture toward whatever requirement emerges.