If you thought the Anthropic-Pentagon story couldn’t get more absurd – here’s the twist.
Same Rules, Different Outcome
Late Friday evening, Sam Altman announced that OpenAI had closed a deal with the Pentagon. The conditions? No mass surveillance of American citizens. No autonomous weapons systems. Humans must always remain in the loop for lethal decisions.
Sound familiar? Exactly – these are essentially the same red lines that got Anthropic classified as a security risk.
Altman put it diplomatically: the restrictions reflect “existing US law and Department of Defense policy.” In other words: what Anthropic demanded was actually nothing new. It’s already in the law.
The AI Industry Shows Solidarity
In parallel, more than 300 Google employees and over 60 OpenAI employees signed an open letter. The title: “We Will Not Be Divided.” The demand: their employers should stand behind Anthropic and also draw clear boundaries for the military use of AI.
Sam Altman himself said he considers it wrong for the Pentagon to threaten companies with the Defense Production Act. A remarkable statement from someone who just closed a Pentagon deal himself.
What’s Really Going On
The situation raises uncomfortable questions. If the Pentagon is willing to accept OpenAI’s red lines – why not Anthropic’s? Was it never about the content, but about who pushes back first? Or did Altman simply strike a different tone in negotiations, one that produced a contract where the same terms got Anthropic blacklisted?
Whatever the reason: the fact that the same principles lead to blacklisting for one company and a contract for another is hard to explain.
What Concerns Me
The good news: the AI industry is showing unusual unity when it comes to autonomous weapons and mass surveillance. The less good news: the US government’s approach here looks arbitrary. And with a technology where clear rules matter more than ever, that is troubling.