Google went big at Cloud Next ‘26 on April 23. The two headline announcements: an Enterprise Agent Platform that’s supposed to make AI agents actually enterprise-ready. And a new chip that puts previous hardware to shame.
The Gemini Enterprise Agent Platform
Google wants companies to build AI agents that handle tasks autonomously — plan, execute, adapt. Without a human approving every step. The key differentiator: every agent gets a unique cryptographic identity. That means every action is traceable and auditable. Not an anonymous bot running loose in the system, but an agent with clear attribution and defined permissions.
This is exactly what enterprise customers have been missing. Autonomous agents are exciting — but without audit trails and access controls, they’re a non-starter in regulated industries.
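Google hasn't published how the agent identities work under the hood, but the core idea of attribution is easy to sketch. The following is a minimal illustration, not Google's actual scheme: each agent holds its own key, every action is signed before it enters an audit log, and any log entry can later be verified against the agent's key. The agent name, key registry, and action format are all made up for the example, and HMAC stands in for what would realistically be asymmetric signatures (e.g. Ed25519).

```python
import hashlib
import hmac
import json

# Hypothetical per-agent key registry. In a real platform, keys would be
# asymmetric and managed by the cloud provider, not shared secrets.
AGENT_KEYS = {"invoice-bot": b"secret-key-invoice-bot"}

def sign_action(agent_id: str, action: dict) -> dict:
    """Sign an action with the agent's key, producing an audit-log entry."""
    payload = json.dumps(action, sort_keys=True).encode()
    sig = hmac.new(AGENT_KEYS[agent_id], payload, hashlib.sha256).hexdigest()
    return {"agent": agent_id, "action": action, "sig": sig}

def verify_entry(entry: dict) -> bool:
    """Check that a log entry was really produced by the named agent."""
    payload = json.dumps(entry["action"], sort_keys=True).encode()
    expected = hmac.new(AGENT_KEYS[entry["agent"]], payload,
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["sig"])

entry = sign_action("invoice-bot", {"op": "approve", "invoice": 4711})
print(verify_entry(entry))  # True: the action is attributable to this agent
```

The point of the sketch: because every entry carries a verifiable signature, an auditor can prove which agent performed which action, and a tampered entry fails verification. That is what makes autonomous agents tolerable in regulated environments.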
The TPU 8t
Google’s new eighth-generation training chip uses Inter-Chip Interconnect technology to connect up to 9,600 TPUs. The numbers: triple the processing power compared to the Ironwood TPU, double the performance per watt. This isn’t an incremental update — it’s a generational leap in training infrastructure.
There’s also the TPU 8i for inference — serving model responses to users. Google continues building out its own chip pipeline, reducing its dependence on Nvidia.
My take
Google is showing a clear vision here: AI agents as first-class citizens in the cloud, with their own identity and audit capability. This isn’t a toy — it’s aimed at banks, insurance companies, and healthcare. The chip announcements are the foundation. Whoever controls their own hardware controls costs. And whoever controls costs can scale more aggressively.