Mira Murati Lands the Biggest Compute Deal of the Year - Nvidia Gives Thinking Machines a Gigawatt

Mira Murati's startup Thinking Machines Lab secures a multi-year deal with Nvidia including investment and access to Vera Rubin chips. The former OpenAI CTO means business.

Mira Murati hasn’t wasted any time. Barely a year after founding her AI startup Thinking Machines Lab, the former OpenAI CTO has closed a deal with Nvidia that’s turning heads across the industry: a multi-year agreement for at least one gigawatt of Nvidia’s next-generation Vera Rubin chips, plus a ‘significant investment’ from Nvidia itself.

What This Means

A gigawatt of compute isn’t a side project — that’s frontier training territory. For context: most of today’s largest AI training clusters operate in the range of a few hundred megawatts. Thinking Machines is positioning itself directly in the league of OpenAI, Google, and Anthropic.

The deal isn’t just about hardware access. It also includes technical collaboration: Thinking Machines’ products will be optimized for Nvidia’s chips. That’s a strong signal — Nvidia is committing not just money but engineering resources.

From OpenAI to Her Own Thing

Murati left OpenAI in September 2024 and founded Thinking Machines Lab in February 2025. Since then, the startup has raised over $2 billion from Andreessen Horowitz, Accel, and now Nvidia. The goal: building AI systems that are ‘more widely understood, customizable and generally capable.’

What’s notable is that Murati doesn’t talk about chatbots or assistants. She talks about understandability and customizability — which sounds like a different approach from the one she pursued at OpenAI.

My Take

This deal illustrates how fast the AI landscape is shifting. Two years ago, the compute market was basically a duopoly between OpenAI/Microsoft and Google. Today, Anthropic, xAI, Thinking Machines, and others all have access to frontier-scale compute.

For Nvidia, this is obviously ideal — the more customers fighting over their chips, the better. But for the industry as a whole, it could mean more diversity in frontier models. And that would be a good thing.

The question remains: what exactly is Thinking Machines building? There’s no public model yet. But with this compute budget, it won’t stay that way for long.
