Anthropic vs Pentagon: The Hearing That Could Change Everything

Judge Rita Lin hears Anthropic's case against the Pentagon in San Francisco today. New sworn declarations challenge the government's national security claims.

Today’s the day the AI industry has been watching for weeks. In a San Francisco federal court, Judge Rita Lin takes on what might be the most consequential tech lawsuit of the year: Anthropic versus the Pentagon.

Quick recap

For those who’ve lost track: In late February, the Trump administration designated Anthropic a ‘supply chain risk.’ The reason? Anthropic refused to grant the military unrestricted access to Claude - particularly for mass surveillance and fully autonomous weapons systems.

On March 9, Anthropic filed suit. Since then, legal briefs have been flying back and forth.

The latest twist

Last Friday, Anthropic submitted two sworn declarations to the court - from Sarah Heck, Head of Policy, and Thiyagu Ramasamy, Head of Public Sector.

Their core argument: The Pentagon is claiming things that were never raised during months of negotiations. Anthropic never said it wanted veto power over military operations. The government’s case, they argue, rests on technical misunderstandings and claims that were simply never made.

The Pentagon contradicts itself

Here’s the explosive part: According to court documents from March 20, the Pentagon emailed Anthropic on March 4 saying the two sides were ‘very close’ on both disputed points - autonomous weapons and mass surveillance. That was one week after Trump publicly declared the relationship over.

The Pentagon fires back with security concerns of its own: Anthropic employs a large number of foreign nationals, including many from China, which it says poses a national security risk.

What could happen today

Judge Lin is ruling on Anthropic’s request for a preliminary injunction. If she grants it, the supply chain risk designation would be temporarily suspended - a win for Anthropic that would restore its viability for government-adjacent business.

Why this matters beyond Anthropic

This case goes far beyond one company. It’s defining the rules for AI companies that want to work with government but aren’t willing to accept every condition. If a company can be labeled a security risk for drawing safety lines - what does that mean for the entire industry?
