Anthropic Secures Multi‑Gigawatt TPU Power with Google and Broadcom to Fuel Claude’s Explosive Growth

Anthropic has signed a multi‑year agreement with Google and Broadcom to lock in multiple gigawatts of next‑generation TPU capacity starting in 2027, a move driven by Claude’s soaring demand, revenue surpassing $30 billion, and a rapid doubling of high‑spending enterprise customers.


Multi‑Gigawatt Compute Commitment

Anthropic signed an agreement with Google and Broadcom that secures multiple gigawatts of next‑generation TPU compute capacity, scheduled to become available in 2027. The capacity is intended to support the Claude model and will be deployed primarily in the United States, extending the company’s earlier pledge (November 2025) to invest $50 billion in U.S. computing infrastructure.

“This partnership continues our infrastructure‑expansion strategy. We are building the capacity needed to serve exponential customer growth and keep Claude at the forefront of AI development. It is the most important compute commitment we have made to date.” – Krishna Rao, CFO

Demand Explosion and Revenue Surge

In 2026, demand for Claude accelerated sharply, driving steep growth in both revenue and enterprise‑customer count.

Annual revenue: surpassed $30 billion, up from roughly $9 billion at the end of 2025.

Enterprise customers spending >$1 million annually: more than doubled from just over 500 to over 1,000 within two months.

This growth curve is forcing Anthropic to lock in future compute at unprecedented scale to keep serving capacity aligned with demand.
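As a back‑of‑envelope illustration of how steep that curve is, the figures cited above imply the following compound monthly growth rates. The constant‑rate compounding model here is an illustrative assumption of this article, not Anthropic‑reported methodology, and `compound_rate` is a hypothetical helper.

```python
# Back-of-envelope growth rates implied by the figures cited above.
# Inputs come from the article; constant-rate compounding is an
# illustrative assumption, not Anthropic-reported methodology.

def compound_rate(start: float, end: float, periods: int) -> float:
    """Per-period growth rate that takes `start` to `end` in `periods` steps."""
    return (end / start) ** (1 / periods) - 1

# Revenue: roughly $9B at the end of 2025 -> $30B+ in 2026 (12 months).
monthly_rev_growth = compound_rate(9e9, 30e9, 12)

# Enterprise customers spending >$1M/yr: just over 500 -> over 1,000 in two months.
monthly_cust_growth = compound_rate(500, 1000, 2)

print(f"Implied monthly revenue growth:  {monthly_rev_growth:.1%}")   # ~10.6%
print(f"Implied monthly customer growth: {monthly_cust_growth:.1%}")  # ~41.4%
```

Even under this simplified model, the customer base implied a roughly 41% month‑over‑month growth rate during that two‑month window, which is the kind of trajectory that makes pre‑committed compute capacity a necessity rather than a hedge.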

Diversified Hardware Strategy

Claude’s training and inference run across three major chip platforms: AWS Trainium, Google TPUs, and NVIDIA GPUs. AWS remains Anthropic’s primary cloud provider and training partner, with ongoing collaboration on Project Rainier.

Claude is currently the only frontier AI model available on all three major cloud platforms—Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Azure Foundry—providing maximum deployment flexibility for customers.

Core Insights

Compute is power: Pre‑locking and scaling next‑generation compute resources has become a defining factor in AI competition.

Diversification is a moat: Maintaining a multi‑chip, multi‑cloud approach gives Anthropic flexibility in supply‑chain security, cost optimization, and technology iteration.

Growth demands investment: AI development’s exponential growth requires matching infrastructure investment.
Tags: Claude, Cloud providers, AI compute, Anthropic, TPU, Hardware diversification
Written by AI Explorer