
Did Anthropic secretly nerf Claude? Users demand transparency


Anthropic quietly lowered Claude's default effort level to 'medium' — and power users are not happy about it.


What happens when a company that built its brand on transparency quietly changes how its core product works? That’s exactly what’s playing out at Anthropic right now.

What happened

In early March, Anthropic lowered Claude Opus 4.6’s default effort level from ‘high’ to ‘medium’ (level 85). This might sound like a minor technical tweak, but it’s not. The effort level determines how much compute Claude puts into reasoning through a response. Less effort means faster answers and fewer tokens — but also less thorough thinking.
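
To see what this kind of compute knob looks like in practice: Claude’s public Messages API exposes a documented, related control, the extended-thinking token budget, which caps how much reasoning the model does before answering. Here’s a minimal sketch using the official Python SDK; the model ID and token numbers are placeholders, and note this is a separate API-level setting, not the Claude Code default the article describes:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The thinking budget caps how many tokens Claude may spend reasoning
# before it answers -- the same compute-vs-thoroughness trade-off the
# effort level controls. Model ID and numbers here are placeholders.
response = client.messages.create(
    model="claude-opus-4-5",
    max_tokens=16000,  # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "Plan a refactor of this module."}],
)

# The response interleaves "thinking" blocks with the final "text" answer.
for block in response.content:
    if block.type == "text":
        print(block.text)
```

A smaller budget behaves like lower effort: answers come back faster and cheaper, but with less deliberation behind them.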

The problem: Anthropic buried this change in a changelog instead of telling users directly.

The backlash

It started with Stella Laurenzo, Senior Director in AMD’s AI group. She filed a detailed GitHub issue analyzing 6,852 Claude Code sessions with 17,871 thinking blocks. Her conclusion: Claude Code had regressed to the point where it couldn’t be trusted for complex engineering work.

That opened the floodgates. Developers reported sloppy code generation, ignored instructions, and repeated mistakes. The consensus on Reddit and X: Claude feels ‘lazy.’

Fortune, VentureBeat, and The Register all picked up the story. The headlines were blunt: ‘Is Anthropic nerfing Claude?’

Anthropic’s response

Boris Cherny, who leads Claude Code at Anthropic, addressed the complaints publicly. He said the switch to medium effort was deliberate — a response to user feedback that Claude was burning too many tokens. The change was documented in the changelog, he said.

As a fix, Cherny announced that Teams and Enterprise users will be defaulted back to ‘high effort.’ Anyone who wants to switch right now can manually adjust the effort level in settings.

My take

I get both sides. Tokens cost money, and not every prompt needs maximum compute. But a change that fundamentally affects model behavior doesn’t belong in a changelog — it belongs in a direct notification to every user.

The real issue isn’t the technical decision. It’s the communication. Anthropic has positioned itself as the more transparent AI company. When they, of all companies, quietly slip a change like this past users, it erodes trust, and trust is their most valuable asset.
