
Google, Microsoft, and xAI Agree to Give US Government Early Access to Frontier AI Models

Google DeepMind, Microsoft, and xAI signed agreements with NIST to provide pre-release frontier model access to US government evaluators. All five major US AI labs are now part of the program.


By Hector Herrera | May 5, 2026 | Government

Google DeepMind, Microsoft, and xAI have signed voluntary agreements with the US government to hand over pre-release access to their most powerful AI models before public launch — including versions with reduced safety guardrails so federal evaluators can probe what these systems can actually do. The move consolidates a federal evaluation framework that now covers the five most capable AI labs operating in the United States.

The agreements are with NIST's Center for AI Standards and Innovation (CAISI) — the National Institute of Standards and Technology division responsible for developing technical AI standards. Google DeepMind, Microsoft, and xAI join OpenAI and Anthropic, which had already signed similar agreements. The program gives CAISI evaluators access to frontier models — the most capable AI systems each company produces — with enough lead time to assess risks before those models reach the public.

What the Agreements Cover

The core mechanism is pre-deployment access. Under the agreements, participating companies provide CAISI with model access before public release. Critically, that access includes models with reduced or disabled safety guardrails — the filters and refusal behaviors companies layer on top of base models for consumer deployment. Stripping those guardrails lets government evaluators test raw model capabilities: what can the underlying system do when it isn't being constrained?

The evaluations focus on national security risk: whether a model could meaningfully accelerate development of biological, chemical, nuclear, or radiological weapons; whether it exhibits deceptive behavior; whether it can be jailbroken in ways that create downstream harm. CAISI doesn't approve or block model releases — the agreements are voluntary, and publication of evaluation results is not guaranteed. The framework is about generating an independent technical assessment before launch, not regulatory gatekeeping.

With all five of the largest US frontier AI labs now signed, the informal network covers:

  • Anthropic — Claude model family
  • OpenAI — GPT and o-series models
  • Google DeepMind — Gemini model family
  • Microsoft — frontier models developed in-house or through its deep OpenAI partnership
  • xAI — Grok model family

Why This Is Happening Now

Two forces pushed this expansion. First, the Mythos security incident — a significant AI safety and security event earlier this year that demonstrated the potential for frontier models to be exploited in ways their developers hadn't anticipated — accelerated government interest in independent pre-deployment review. Second, the White House has been circulating a potential executive order that would formalize pre-deployment evaluations for the most capable AI systems. Voluntary agreements ahead of a mandate are a standard industry move: shape the framework before it's imposed.

NIST's CAISI was established as part of the Biden-era AI Executive Order infrastructure and has survived into the current administration as the technical standards body for AI. Its evaluations draw on red-teaming — adversarial testing — and capability assessments developed in collaboration with the AI Safety Institute network, which includes UK and other allied-nation counterparts.

What This Means for the Industry

For AI companies, signing these agreements cuts both ways. It signals cooperation with federal oversight — useful for government contracting relationships and regulatory goodwill — while keeping the process voluntary and standards-based rather than binding. Companies retain control over whether and when to release models; the government gets visibility, not veto power.

For businesses deploying AI, the existence of an independent pre-deployment evaluation layer adds one more signal to the trust picture. A model that has cleared CAISI assessment carries a different risk profile than one that hasn't, even if CAISI doesn't publish full reports. Expect procurement teams at large enterprises and government contractors to start asking whether vendors' models have gone through this process.

For the competitive landscape, the agreements create a subtle but real barrier. Running a pre-deployment government evaluation takes time and coordination. Smaller labs and open-source projects operating outside this framework face no such friction — but they also won't carry the implicit government-reviewed signal.

What to Watch

Whether the White House executive order on pre-deployment evaluations actually materializes will determine whether this voluntary framework becomes mandatory — and whether its scope expands beyond the current five labs to cover international developers, open-weight models, or fine-tuned variants of base models. Watch for CAISI to publish any methodology documentation that signals how rigorous these evaluations actually are.


Sources: Engadget — Google, Microsoft, and xAI agree to provide US government with early AI model access

Key Takeaways

  • Google DeepMind, Microsoft, and xAI have joined OpenAI and Anthropic in NIST's CAISI pre-deployment evaluation program, bringing all five major US frontier AI labs into the framework.
  • Evaluators receive pre-release model access, including versions with reduced safety guardrails, to assess national security risks such as weapons-development uplift and deceptive behavior.
  • The agreements are voluntary: CAISI provides independent assessment, not approval or veto power over releases.
  • A potential White House executive order could turn this voluntary framework into a mandate and expand its scope.


Written by Hector Herrera

Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.
