GTC is Nvidia’s big stage — and Jensen Huang didn’t disappoint this year either. On Monday in San Jose, he unveiled the Vera Rubin platform: a complete AI compute system made up of seven chip types, five rack-scale systems, and one supercomputer. The whole thing operates as a single unit, purpose-built for agentic AI.
Seven Chips, One System
Vera Rubin consists of Vera CPUs, Rubin GPUs, NVLink 6 switches, ConnectX-9 NICs, BlueField-4 DPUs, Spectrum-X switches with co-packaged optics, and the brand-new Groq 3 LPUs, Nvidia's first chip from the $20 billion Groq acquisition. The platform is already in production.
The first DGX Station system with GB300 superchips was shipped to Andrej Karpathy in early March. When your first customer is the former head of AI at Tesla, you know you’re doing something right.
A Trillion Dollars by 2027
The truly staggering number: Huang expects one trillion dollars in purchase orders for Blackwell and Vera Rubin systems through 2027. Last year, the projection was $500 billion. That’s a doubling in twelve months — and it shows just how explosive demand for AI infrastructure continues to be.
Inference Is the New Training
A central theme at GTC: we’ve hit an inflection point for inference. Training large models was the first wave. Now it’s about running those models efficiently — especially for agents that autonomously execute tasks. That’s exactly what Vera Rubin is built for.
What Comes After Vera Rubin
Already announced: the next architecture is called Feynman, featuring a new CPU named Rosa (after Rosalind Franklin). Nvidia doesn't think in quarters; it thinks in generations.
My Take
GTC 2026 makes one thing clear: AI infrastructure isn’t a niche anymore — it’s a trillion-dollar market. For those of us using Claude, ChatGPT, and the rest, this means faster models, more capable agents, and costs that should come down over time. Whether Nvidia’s trillion-dollar bet pays off depends on whether demand for AI inference keeps growing at this pace. So far, all signs point to yes.