The story of AI and the US military just got a new chapter. Pentagon AI chief Cameron Stanley has confirmed that the Department of Defense is significantly expanding its use of Google’s Gemini. The timing is no coincidence: roughly two months earlier, the Pentagon had blacklisted Anthropic.
What happened
Stanley told CNBC that the DoD doesn't want to rely on a single model — "never a good thing," as he put it. Gemini is now being deployed more broadly after Anthropic was shut out of the Pentagon ecosystem. Details about specific use cases remain vague, but the direction is clear: Google is filling the gap Anthropic left behind.
The internal pushback
Here’s where it gets interesting. More than 700 Google employees have signed an open letter calling on the company to reject classified workloads. It’s reminiscent of 2018, when Google pulled out of Project Maven — the first major AI military project — after internal protests.
But 2026 isn’t 2018. Google has significantly shifted its stance on government contracts since then. Whether internal pressure will make a difference this time is questionable.
The bigger picture
The situation highlights a dilemma facing the AI industry. Anthropic refused to give the Pentagon unrestricted access to Claude — and was punished for it. Google steps in — and faces internal backlash. The question of where the line falls between responsible AI and national security remains unresolved.
For Anthropic, this is a short-term business loss. In the long run, it could prove to be the right call — or expensive idealism. That depends on how the market for government AI contracts evolves.