Anthropic Secures Multi-Gigawatt Compute Deal With Google and Broadcom

Anthropic has locked in multiple gigawatts of next-generation compute capacity through an expanded partnership with Google and Broadcom — infrastructure at a scale that signals preparation for model generations far beyond current Claude deployments.

By Hector Herrera | April 16, 2026 | Business

Anthropic has announced an expanded partnership with Google and Broadcom to lock in multiple gigawatts of next-generation compute capacity — a scale of infrastructure commitment that signals the company is building toward model generations far larger than anything currently deployed. The deal arrives as Anthropic's enterprise revenue surges past OpenAI's, reinforcing Google's position as the dominant strategic partner in the Anthropic ecosystem.

Gigawatts of compute, not megawatts. To put that in context: frontier AI training runs today consume anywhere from tens to hundreds of megawatts. A multi-gigawatt commitment means Anthropic is planning for training workloads at least an order of magnitude larger than anything that currently exists.

What the Deal Involves

According to Anthropic's announcement, the partnership expands infrastructure commitments across two distinct fronts:

Google's role: Cloud infrastructure and AI accelerator access. Google Cloud provides the training and inference compute that Anthropic's models run on, through its custom TPU (Tensor Processing Unit) hardware and its GPU partnerships. Anthropic has been a major Google Cloud customer for training its Claude models, and this deal extends and scales that commitment significantly.

Broadcom's role: Broadcom is one of the primary suppliers of custom AI chips — specifically ASIC (Application-Specific Integrated Circuit) designs — that major hyperscalers use to build more efficient compute than commodity GPUs. Broadcom's Tomahawk and Jericho network chip lines, along with its work with Google on custom TPU interconnects, make it a critical supply chain node for the infrastructure Anthropic needs. Its inclusion in this announcement signals that Anthropic is not just buying compute time — it is securing access to the physical hardware stack.

The announcement does not disclose the specific number of gigawatts committed; "multiple gigawatts" is Anthropic's own characterization.

What Multi-Gigawatt Scale Actually Means

To understand what Anthropic is planning, it helps to understand what a gigawatt of AI compute buys you.

A modern AI data center runs at roughly 20-100 megawatts of power for its compute capacity. A gigawatt of dedicated AI compute represents 10 to 50 such facilities operating simultaneously. Multiple gigawatts means something like the total current AI compute capacity of several major cloud providers, dedicated exclusively to Anthropic's workloads.
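
For a rough sense of scale, here is a back-of-envelope sketch in Python. The 20-100 megawatt per-facility range comes from the figures above; the 2 GW total is purely an illustrative assumption, since Anthropic has not disclosed the actual commitment.

    # Back-of-envelope: how many of today's AI data centers fit inside
    # a multi-gigawatt commitment? The per-facility range is from the
    # article; the 2 GW total is an assumed, illustrative figure.
    FACILITY_MW_SMALL, FACILITY_MW_LARGE = 20, 100   # per-facility power draw
    ASSUMED_COMMITMENT_GW = 2                        # not the disclosed number

    commitment_mw = ASSUMED_COMMITMENT_GW * 1_000
    fewest = commitment_mw // FACILITY_MW_LARGE      # if every site is large
    most = commitment_mw // FACILITY_MW_SMALL        # if every site is small

    print(f"{ASSUMED_COMMITMENT_GW} GW is roughly {fewest} to {most} dedicated facilities")
    # Output: 2 GW is roughly 20 to 100 dedicated facilities

Even at the conservative end of that assumption, the footprint is far beyond any single training cluster operating today.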

What requires that much compute?

  • Training runs for models substantially larger than GPT-5/Claude 4-class systems — the kinds of models that would represent a genuine generational leap in capability
  • The inference infrastructure required to serve those models to hundreds of thousands of concurrent enterprise users
  • Safety evaluation and red-teaming at the scale required for more capable models

The most significant implication is the third point. Anthropic's core argument for its safety-focused approach is that making AI safe requires understanding what capable AI systems actually do — which requires building and evaluating them at scale. A multi-gigawatt compute commitment is the infrastructure prerequisite for doing that research seriously.

Why Google Over Amazon

Anthropic has received substantial investment from both Google and Amazon. Amazon's AWS partnership has been significant — Anthropic runs Claude inference on AWS Bedrock and Trainium chips. But the Broadcom dimension of this deal, combined with Google's TPU access, indicates that Google has emerged as the primary partner for frontier model training at this scale.

The strategic logic is straightforward: Google has designed its own AI chips (TPUs) specifically for the workloads that frontier model training requires. Amazon's Trainium is competitive but newer. For a company planning to run training runs at multi-gigawatt scale, the maturity and performance-per-watt of Google's TPU infrastructure is a genuine differentiator.

What this means for Amazon: AWS remains important for inference and distribution — many enterprise customers prefer to access Claude through AWS Bedrock because of existing AWS commitments and data residency requirements. But the training relationship appears to be consolidating around Google, which has significant implications for where the capability development happens.

The Timing

This deal comes at a specific inflection point for Anthropic:

  • Revenue is accelerating. Anthropic's annualized revenue has crossed $30 billion, up from roughly $9 billion at the end of 2025, and its enterprise customer base is doubling.
  • Competitive pressure is rising. OpenAI's Spud model is weeks from release. Google's next Gemini generation is in development. Anthropic needs to sustain capability parity or superiority to protect its enterprise position.
  • Compute is the constraint. In the current AI development environment, the primary bottleneck is not algorithmic insight — it is access to training compute at the scale required to develop the next model generation. Securing multi-gigawatt commitments now locks in the infrastructure before demand intensifies further.

What to Watch

The compute deal is a capital commitment that will define what Anthropic builds over the next 18-36 months. Watch for model announcements — likely Claude 5-class or beyond — that are only possible with this level of infrastructure. The timeline implied by multi-gigawatt training runs is not months; it is measured in quarters. Expect 2026 to close with significant capability announcements from Anthropic that the current deal is enabling.

The geopolitical dimension also matters: this is American compute infrastructure secured by an American AI safety company for training on Google's American data center network. That framing will appear in government briefings on AI supply chain security as export control and industrial policy debates intensify through mid-2026.


Hector Herrera is the founder of Hex AI Systems and editor of NexChron. He writes daily about how AI is reshaping business, government, and everyday life.

Key Takeaways

  • Anthropic has secured multiple gigawatts of next-generation compute through an expanded partnership with Google and Broadcom; the exact figure is undisclosed.
  • Google supplies the cloud and TPU infrastructure for training and inference; Broadcom's inclusion signals Anthropic is securing the physical hardware stack, not just compute time.
  • A gigawatt of dedicated AI compute equates to roughly 10 to 50 of today's 20-100 megawatt AI data centers.
  • Frontier training appears to be consolidating around Google's TPUs, while AWS remains important for inference and distribution through Bedrock.
  • With annualized revenue past $30 billion and rival model launches approaching, access to training compute, not algorithmic insight, is the constraint this deal addresses.
