
Pentagon CTO: Claude Would 'Pollute' the Supply Chain


Emil Michael, the Pentagon's new tech chief, rules out negotiations with Anthropic. The tone keeps getting sharper.


The Pentagon saga enters its next chapter, and the tone is getting sharper. Emil Michael, the Pentagon’s new technology chief, made it clear in a CNBC interview on Wednesday: negotiations with Anthropic are off the table.

What did Michael say?

Michael’s core argument: Claude’s built-in safety principles — what Anthropic calls the model’s ‘constitution’ — pose a danger to the military supply chain. Soldiers could end up with ‘ineffective weapons’ or ‘ineffective protection’ if an AI model with its own policy preferences sits in the chain.

The Pentagon has set a 180-day deadline for all defense contractors and suppliers to certify that they don’t use Claude in their work for the Pentagon.

‘No chance’ of a deal

When asked whether there was still room for negotiation, Michael was direct: no. He accused Anthropic of leaking negotiation details and of negotiating in ‘bad faith.’

Anthropic’s position

Anthropic has now filed two lawsuits against the Trump administration, calling the supply chain risk designation ‘unprecedented and unlawful.’ The company says hundreds of millions of dollars in contracts are at stake.

My take

What’s happening here is unprecedented in tech history. A U.S. company is being treated by its own Defense Department like a hostile actor — not because of security flaws, not because of data breaches, but because its AI model is built too safely.

The Pentagon is essentially arguing: Claude’s safety features are a bug, not a feature. That’s a remarkable position with consequences far beyond Anthropic. If safety measures count as ‘pollution,’ what does that mean for AI safety research as a whole?
