We’ve already covered Trump’s decision to put Anthropic on what amounts to a government blacklist. But one consequence isn’t getting enough attention: What happens to the companies that have been using Claude in their daily work – without even knowing it?
Claude is inside Microsoft 365
Since January 2026, Anthropic’s Claude models have been enabled by default in Microsoft 365 Copilot. The Researcher agent, Copilot Studio, Agent Mode for Excel – Claude can be working in the background across all of them. Microsoft implemented this as a subprocessor arrangement, automatically active for most commercial tenants.
That was good news for users: more model diversity, better results. Until the ban hit.
The IT admin dilemma
Now thousands of IT departments are facing a decision. If you work directly with the Pentagon, you need to disable the Anthropic subprocessor in the M365 Admin Center. That’s straightforward.
But what about everyone else? Companies that might want a government contract someday? Suppliers of suppliers? Law firms playing it safe?
In practice, IT consultants report that risk-averse legal departments are preemptively disabling Claude across their entire Microsoft environment – regardless of whether there’s any current Pentagon exposure. That undermines Microsoft’s multi-model strategy and reduces AI capabilities for end users.
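The decision tree these companies face can be sketched as toy logic. To be clear, this is an illustration only: the function name and parameters are hypothetical and do not correspond to any real Microsoft API; the actual toggle lives in the M365 Admin Center.

```python
# Illustrative only: a toy model of the compliance decision described above.
# These names are hypothetical, not a real admin API.

def would_disable_claude(direct_dod_contract: bool,
                         supplies_dod_contractor: bool,
                         risk_averse_legal: bool) -> bool:
    """Model whether a tenant plausibly disables the Anthropic subprocessor."""
    if direct_dod_contract:
        # The only case clearly in scope of the ban.
        return True
    if supplies_dod_contractor and risk_averse_legal:
        # Preemptive caution further down the supply chain.
        return True
    # Some legal departments reportedly disable it with no DoD exposure at all.
    return risk_averse_legal

print(would_disable_claude(True, False, False))   # True: legally required
print(would_disable_claude(False, False, False))  # False: no reason to act
```

The asymmetry is the point: only the first branch is arguably required, yet the other two branches are where most of the real-world disabling is happening.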
What Anthropic says
Anthropic argues that the ban is legally narrow in scope: it applies only to Claude’s use within DoD contract work, not general commercial access. Legal experts have also questioned whether Hegseth even has the statutory authority to extend the ban beyond direct defense contract work.
For the regular M365 user with no Pentagon connection, nothing changes for now. Claude keeps running inside Copilot.
My take
This is a textbook example of how political decisions hit technical supply chains that nobody had on their radar. Microsoft integrated Claude deeply into its ecosystem – and suddenly that’s a compliance problem. Not for everyone, but for enough companies that it stings.
The real loser here isn’t Anthropic. It’s the users who are now working with fewer AI capabilities because a legal department decided to play it safe.