AI-generated

Is Claude Getting 'Nerfed'? Power Users Push Back Against Anthropic's Effort Level Change


Developers and heavy users are reporting noticeable quality drops in Claude. The reason: Anthropic quietly lowered the default effort level to 'medium' — and the community is not happy about it.


There’s a storm brewing in the Claude community. Since mid-April, complaints have been piling up: Claude is following instructions worse, taking inappropriate shortcuts, and making more mistakes on complex workflows. What’s going on?

What Happened

Anthropic quietly lowered Claude’s default effort level to ‘medium.’ This means the model uses fewer tokens per request by default — essentially thinking less thoroughly. Boris Cherny, who leads Anthropic’s Claude Code product, confirmed the change in an online discussion, explaining that many users had previously complained about excessive token consumption.

The problem: the change wasn’t communicated proactively. Users noticed the quality difference before Anthropic addressed it.

The Numbers

One post in particular went viral: a developer analyzed 6,852 Claude Code sessions, 17,871 thinking blocks, and 234,760 tool calls, documenting a clear performance decline. His analysis concluded that Claude Code was no longer reliable enough for complex engineering tasks.

Fortune, Axios, VentureBeat, and The Register all picked up the story. Headlines ranged from ‘nerfing’ accusations to speculation about compute shortages at Anthropic.

Anthropic’s Response

Anthropic is course-correcting: for Teams and Enterprise users, the default effort level will be set to ‘high,’ keeping Extended Thinking active even at the cost of higher token usage. This shows Anthropic is taking the issue seriously — though the late communication has eroded trust.

Why This Matters

This is a pivotal moment for me. Anthropic has positioned itself as the company that’s more transparent and user-friendly than the competition. That’s precisely why this controversy hits so hard: expectations were higher. Quietly changing model quality without involving the community contradicts that promise.

At the same time, it highlights a real dilemma: cut token costs or deliver maximum quality? One solution is letting users choose — and that seems to be exactly what Anthropic is now doing with configurable effort levels in the new Opus 4.7.
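For developers who would rather pin the effort level than rely on a shifting default, the request could look roughly like the sketch below. Note that this is an assumption-laden illustration: the `effort` field name, its placement as a top-level parameter, and the model identifier are inferred from the reporting, not taken from confirmed API documentation.

```python
import json

# Hypothetical sketch of a Messages API payload that explicitly
# requests a higher effort level instead of relying on the default.
# The `effort` parameter name, its accepted values, and the model
# name are assumptions for illustration only.
def build_request(prompt: str, effort: str = "high") -> dict:
    """Build a request payload with an explicit effort level."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {
        "model": "claude-opus-4-7",  # placeholder model name
        "max_tokens": 1024,
        "effort": effort,            # assumed top-level parameter
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Refactor this module.", effort="high")
print(json.dumps(payload, indent=2))
```

The point of pinning the value client-side is exactly the one the controversy raises: defaults can change quietly, while an explicit setting keeps behavior predictable.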

Sources: