Google Negotiates Classified AI Deal With Pentagon as Anthropic Remains Blacklisted

Google is negotiating to deploy Gemini AI inside classified Pentagon networks after the DoD blacklisted Anthropic for refusing to loosen its safety guardrails.


Google is in active talks with the U.S. Department of Defense to deploy its Gemini AI models inside classified government networks — negotiations that opened after the Pentagon blacklisted Anthropic as a supply-chain security risk. The deal, if finalized, would give U.S. military analysts access to Google's most capable AI in environments where public cloud connections aren't permitted.

The Pentagon's move against Anthropic came after the AI safety company declined to loosen the guardrails (built-in restrictions on what the model will do) in its Claude models. According to The Information, as covered by Newsweek, the DoD designated those refusals a supply-chain risk — a formal classification that can block an entire vendor from federal procurement.

The Details

Background

The Defense Department has been pushing AI into intelligence analysis, logistics, and decision-support systems for several years. It already runs sensitive workloads on Microsoft Azure Government and Amazon GovCloud. Foundation models — large AI systems like GPT or Gemini that can handle a wide range of tasks — are the next frontier, and the Pentagon wants them running inside classified infrastructure, not calling out to public APIs.

Anthropic's position is principled but costly. The company has publicly argued that weakening safety measures — even for government clients — sets a harmful precedent across the industry. That stance cost it access to one of the largest institutional AI buyers in the world.

Google's situation is different. In 2018, the company announced it would not renew its Project Maven contract with the DoD after employees protested its involvement in AI weapons development. This new negotiation marks a notable shift in Google's posture toward defense work.


According to Newsweek's reporting on The Information's investigation:

  • Google is proposing to deploy Gemini inside DoD classified environments through dedicated infrastructure, not public API access.
  • Google's proposed contract includes explicit use restrictions: contractual language would prohibit Gemini from being used for domestic mass surveillance or for lethal weapons decisions made without human oversight.
  • Anthropic has been formally blacklisted — a supply-chain designation that typically cascades to other agencies and prime contractors who might otherwise have purchased that vendor's products.
  • Talks are active as of mid-April 2026. No deal has been announced or signed.

The proposed use restrictions are significant. They suggest Google is trying to win the contract while limiting legal and reputational exposure if military AI use becomes politically contested — a real possibility given the ongoing public debate over autonomous weapons.

What This Means

For the defense AI market: A Google win here would validate Gemini's readiness for the most demanding, security-sensitive environments and accelerate the consolidation of military AI around a small club of major vendors — Google, Microsoft (through its partnership with OpenAI), and Amazon. Smaller AI companies unwilling to meet military specifications will find these contracts effectively out of reach.

For Anthropic: Being blacklisted as a supply-chain risk is a significant setback. The U.S. federal government is among the world's largest software buyers, and a DoD designation can influence purchasing decisions at other agencies and among defense contractors that work with classified data. Claude will not appear in classified environments as long as the designation stands.

For AI safety policy: This episode draws a sharp line between public AI safety commitments and government procurement demands. If the DoD will blacklist vendors that won't modify their guardrails, it creates a structural incentive for AI labs to build "safety-off" product tiers for government customers — or to avoid defense contracts entirely, as some AI researchers have long advocated.

For Google: Winning this contract would be a major credibility boost for Gemini in enterprise and government markets. It would also signal that Google has made a strategic decision to re-engage with defense clients after years of internal tension over that question.

What to Watch

Whether Google's proposed use restrictions — no domestic mass surveillance, no fully autonomous lethal decisions — survive intact in the final contract language. If those guardrails are negotiated away before signing, it will reveal how much leverage AI vendors actually have when the customer is the Pentagon.


By Hector Herrera | April 17, 2026

Source: Newsweek / The Information

Key Takeaways

  • Google is proposing to deploy Gemini inside DoD classified environments.
  • Google's proposed contract includes explicit use restrictions.
  • Anthropic has been formally blacklisted as a supply-chain risk.
  • Talks are active as of mid-April 2026; no deal has been announced or signed.


Written by

Hector Herrera

Hector Herrera is the founder of Hex AI Systems, where he builds AI-powered operations for mid-market businesses across 16 industries. He writes daily about how AI is reshaping business, government, and everyday life. 20+ years in technology. Houston, TX.
