If Claude felt weaker over the past few weeks, you weren’t wrong. On April 23, Anthropic published an official postmortem naming three specific root causes. No vague hints, no PR speak. Three measurable mistakes that affected Claude Code, the Agent SDK, and Cowork.
What went wrong
Mistake 1: Reasoning dialed down. On March 4, Anthropic lowered Claude Code’s default reasoning effort from ‘high’ to ‘medium’. It sounds like a minor setting, but it directly affects how much thinking the model puts into a task. Less reasoning means shallower answers.
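To make the effect concrete, here is a minimal sketch of what a default-effort change looks like at the request level. This is purely illustrative: the field names `reasoning_effort` and `thinking_budget_tokens`, and the budget numbers, are assumptions for this example, not Anthropic’s documented API.

```python
# Hypothetical sketch of a reasoning-effort setting. Field names and budget
# numbers are illustrative assumptions, not Anthropic's actual API.

def build_request(prompt: str, reasoning_effort: str = "medium") -> dict:
    """Build a minimal request dict; effort maps to a thinking-token budget."""
    # Rough, made-up mapping from effort level to thinking budget.
    budgets = {"low": 1024, "medium": 4096, "high": 16384}
    return {
        "prompt": prompt,
        "reasoning_effort": reasoning_effort,
        "thinking_budget_tokens": budgets[reasoning_effort],
    }

# Before the change, the default was effectively 'high'; afterward, 'medium'.
before = build_request("refactor this module", reasoning_effort="high")
after = build_request("refactor this module")  # new default
# The same prompt now gets a much smaller thinking budget.
```

The point of the sketch: nothing about the user’s prompt changes, yet every request silently gets less deliberation, which is why the regression was hard for users to pin down.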
Mistake 2: Caching bug. On March 26, a cache optimization change ended up clearing cached session data with every prompt-response cycle. The model kept losing context mid-conversation.
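A toy simulation shows why clearing the session cache every cycle is so damaging. The class and method names here are made up for illustration; they do not reflect Anthropic’s internals.

```python
# Toy model of the described caching bug: a session cache that is wiped after
# every prompt-response cycle loses all accumulated conversation context.
# Names are illustrative, not Anthropic's actual implementation.

class SessionCache:
    def __init__(self, clear_each_turn: bool):
        self.clear_each_turn = clear_each_turn
        self.turns: list[str] = []

    def run_turn(self, prompt: str) -> int:
        """Record the turn and return how many turns of context were visible."""
        self.turns.append(prompt)
        visible = len(self.turns)      # context available when answering
        if self.clear_each_turn:       # the buggy "optimization"
            self.turns.clear()
        return visible

healthy, buggy = SessionCache(clear_each_turn=False), SessionCache(clear_each_turn=True)
prompts = ["set up the repo", "now add tests", "why did you forget step 1?"]
for p in prompts:
    h, b = healthy.run_turn(p), buggy.run_turn(p)
# healthy sees growing context (1, 2, 3 turns); buggy sees only the
# current prompt each time (1, 1, 1) — i.e., it loses context mid-conversation.
```

This matches the symptom users reported: each individual answer looks plausible, but the model behaves as if earlier turns never happened.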
Mistake 3: System prompt trimmed too aggressively. On April 16, Anthropic revised the system prompt to make Claude less verbose. Reasonable goal. But combined with the Opus 4.7 release, it caused a measurable three-percent performance drop for both Opus 4.6 and 4.7. The change was reverted on April 20.
Why this matters
Anthropic says the API wasn’t affected. But anyone using Claude Code, Cowork, or the Agent SDK — which is a lot of people — was working with a degraded version for weeks without knowing it.
The postmortem is a step in the right direction. Mistakes happen. But what stings is that users were reporting issues on Reddit and X for weeks. The response was silence or denial. The transparent breakdown only came when the pressure became too much to ignore.
My take
I appreciate that Anthropic named all three causes clearly instead of hiding behind ‘we’re always improving.’ At the same time, this case reveals a structural problem: when a company tells users ‘the model hasn’t gotten worse’ while it demonstrably has, trust erodes. The question isn’t whether mistakes happen — it’s how quickly and honestly you deal with them.
Sources: