Anthropic just doubled down hard on compute. On April 6, the company officially announced a major expansion of its partnership with Google and Broadcom — totaling 3.5 gigawatts of TPU capacity that will come online starting in 2027. That is a massive jump from the deal the three companies struck back in October 2025, which sat at roughly one gigawatt.
What those numbers actually mean
3.5 gigawatts sounds abstract until you put it in context. It is roughly the electricity demand of a mid-sized city. Anthropic gets access to a huge chunk of Google’s Tensor Processing Units (TPUs), which already power large parts of Claude today. Broadcom handles the chip manufacturing, while Google delivers the surrounding infrastructure. The majority of this capacity will sit on US soil and is part of Anthropic’s $50 billion commitment to invest in domestic compute.
A $30 billion run rate
The compute story comes with a second number that is arguably even more impressive: Anthropic now reports an annualized revenue run rate of $30 billion. At the end of 2025, that figure sat at $9 billion. More than 1,000 business customers are now spending over a million dollars a year with Anthropic. A growth curve like that is what makes compute deals of this size rational, and it explains why Anthropic is moving so aggressively on capacity.
My take
The deal tells us two things. First, Anthropic is clearly betting on diversification. Instead of leaning on a single cloud provider, it is mixing Google’s TPUs with Amazon’s Trainium and Nvidia GPUs. Second, the scale of these investments is starting to feel surreal. Three hyperscalers are racing to build data centers at a scale that would have been unthinkable two years ago. And Claude sits right in the middle of it.