Sometimes reality writes the best stories. On Friday, the Trump administration declares Anthropic a national security risk and bans all military contractors from doing business with the company. One day later? Claude’s iOS app sits at number one among free apps in Apple’s US App Store – ahead of ChatGPT and Gemini.
The Streisand Effect, Textbook Edition
What happened here is a masterclass in unintentional PR. Anthropic refuses to let Claude be used for mass surveillance and autonomous weapons. The Pentagon escalates, Trump orders the ban – and suddenly millions of people are curious about an app many of them had never heard of before.
According to Sensor Tower, the Claude app was languishing at rank 131 in the US store back in late January. Through February it hovered around the top 20, and once the Pentagon headlines broke, it shot straight up. By Saturday evening, Claude was sitting at the very top – briefly overtaking even ChatGPT.
More Than a Curiosity
You could dismiss this as a fun anecdote. But there’s something deeper going on. The app charts show that AI safety has entered mainstream consciousness. People aren’t downloading Claude because they suddenly need coding help. They’re doing it because Anthropic’s stance – “We draw the line at mass surveillance” – strikes a chord.
That’s remarkable. In an industry that normally fights for attention with benchmark scores and token limits, a company is gaining visibility through an ethical decision.
The Other Side
Of course, an App Store ranking doesn’t solve business problems. The Pentagon contracts were worth up to $200 million, and the “supply chain risk” designation could have far-reaching consequences for Anthropic’s enterprise business. Military contractors working with the Pentagon are now officially barred from any commercial relationship with Anthropic.
But in the short term, Anthropic has achieved something most AI companies dream about: brand recognition beyond the tech bubble. Whether that translates into paying customers is a different question.
My Take
I find the whole situation telling. While OpenAI announces its Pentagon deal the same day and Google and xAI keep their contracts, the company that draws a red line gets punished. And the public responds with downloads instead of indifference. That says a lot about the mood around AI and surveillance in 2026.