This is one of those stories you have to read to believe. The US Department of Defense gave Anthropic an ultimatum: open up Claude for unrestricted military use, or lose its Pentagon contracts and potentially be classified as a “supply chain risk.”
What Happened
According to Axios, Defense Secretary Pete Hegseth gave Anthropic until Friday to comply. The demand: loosen its usage policies and release Claude for military purposes without restrictions, including potential applications in weapons development, surveillance, and the use of force.
Anthropic’s answer? No.
The company publicly pointed out that its usage policies explicitly prohibit using Claude to facilitate violence, develop weapons, or conduct surveillance.
Why This Matters
This is about more than a contract dispute. It’s a fundamental conflict: Can an AI company set ethical boundaries – even when the most powerful government in the world pushes back?
For Anthropic, this is particularly sensitive. The company has positioned itself as safety-first from the start, and Dario Amodei has repeatedly emphasized that AI safety isn’t marketing but company DNA. Now exactly that commitment is being put to the test.
My Take
I find the decision courageous. Not because I believe AI should never be used in a military context; that’s a more complex debate. But because Anthropic is showing that its principles aren’t just words on paper, to be discarded the moment the wind turns against it.
At the same time, the risk is real: Pentagon contracts are lucrative, and political pressure in Washington is no joke. How this plays out remains to be seen.
Sources: Axios · Washington Times · CNN