
Trump Declares Anthropic a Security Risk – and the AI Industry Watches in Disbelief


The Trump administration classifies Anthropic as a 'Supply Chain Risk' – a designation previously reserved for companies like Huawei. Anthropic announces legal action.


What has happened in the last 48 hours between Anthropic and the US government reads like a political thriller. And the crazy part: it’s real.

From Negotiation to Escalation

You may remember the story from earlier this week: Anthropic had refused to give the Pentagon unrestricted access to Claude for military purposes. CEO Dario Amodei had drawn two red lines – no use for autonomous weapons and no mass surveillance of American citizens.

Then came the bombshell on Friday evening: President Trump ordered via Truth Social post that all US agencies must cease using Anthropic products within six months. Defense Secretary Pete Hegseth went further and declared Anthropic a “Supply Chain Risk” – a classification normally reserved for hostile foreign companies like Huawei.

What This Means

The consequences are enormous. Not only must the Pentagon dissolve its $200 million contract with Anthropic; the "Supply Chain Risk" classification also means that no company doing business with the US military may continue working with Anthropic. This potentially affects the entire supply chain.

Anthropic promptly responded and announced it would challenge the classification in court. In a statement, the company said such a measure was “unprecedented – historically, it has never been publicly applied to an American company.”

Dario Amodei Holds Firm

Amodei’s response was unequivocal: “These threats do not change our position. We cannot in good conscience comply with their demand.” This is remarkable: an AI company standing up to the most powerful government in the world – and not blinking.

What I Think About This

Regardless of how you feel about Anthropic: a company risking a $200 million contract and suing its own government over its safety principles is unprecedented in the tech industry. Whether this proves courageous or ruinous for the business, history will tell. But one thing is clear – the question of who decides how AI is used by the military will occupy us for a long time.
