Sam Altman did something rare for a tech CEO on Monday: he admitted a mistake. And not a small one.
What happened
Last week, OpenAI signed a deal with the Pentagon — mere hours after Anthropic was blacklisted by the U.S. government. The timing was, to put it mildly, unfortunate. Altman had publicly supported Anthropic’s position just days earlier. And then: the deal.
The backlash was fierce. #CancelChatGPT was trending, users switched to Claude in droves, and even OpenAI employees signed an open letter supporting Anthropic’s stance. Altman himself now calls the deal “opportunistic and sloppy.”
The amendments
OpenAI has reworked the Pentagon agreement. The key changes:
- No domestic surveillance: The AI system shall not be used for targeted surveillance, tracking, or monitoring of U.S. citizens or nationals.
- NSA excluded: Intelligence agencies like the NSA cannot use OpenAI’s services.
- Human control over weapons: Autonomous weapon systems without human decision-making remain off-limits.
In essence, Altman has now drawn the same red lines that Anthropic demanded from the start. The difference: Anthropic got blacklisted for insisting on them, while OpenAI signed first and added them later.
What it means
The irony is hard to miss. Anthropic refused to sign a deal without these protections and was punished for it. OpenAI signed quickly, took the heat, and is now retroactively building in the same safeguards.
The question that lingers: why did the Pentagon accept these conditions from OpenAI but not from Anthropic? Altman himself said he should have taken more time. At least that part sounds honest.
Whether the #CancelChatGPT movement calms down remains to be seen. Trust, once lost, isn’t rebuilt with a press release.