If you work with Claude Code a lot, you know the drill: pull requests pile up faster than your team can review them. Anthropic now has a solution, and it's a smart one.
Multiple Agents, One Review
Claude Code Review launched on March 9 as a research preview for Teams and Enterprise customers. What makes it interesting: it’s not a single agent reading through your code. Instead, multiple AI agents work in parallel, each with a different focus. One checks security, another looks for logic errors, a third examines performance. A final agent then ranks the most important findings and summarizes them.
The results show up as comments directly in your GitHub PR.
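The fan-out-then-rank pattern described above can be sketched in a few lines. This is purely illustrative: the agent focuses, prompts, and ranking logic are hypothetical stand-ins, not Anthropic's actual implementation, and the model calls are replaced with placeholder functions.

```python
# Illustrative sketch of a parallel multi-agent review pipeline.
# Agent behavior and names are assumptions, not Anthropic's implementation.
from concurrent.futures import ThreadPoolExecutor

def review(focus: str, diff: str) -> list[str]:
    # Placeholder for a model call with a focus-specific prompt
    # (e.g. "review this diff for security issues").
    return [f"[{focus}] finding: {line.strip()}"
            for line in diff.splitlines() if "TODO" in line]

def rank(findings: list[str]) -> list[str]:
    # Stand-in for the final "ranking agent" that would prioritize
    # and summarize the most important findings.
    return sorted(findings)

def run_review(diff: str) -> list[str]:
    focuses = ["security", "logic", "performance"]
    # Each focus agent reviews the same diff in parallel.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda f: review(f, diff), focuses)
    all_findings = [item for sub in results for item in sub]
    return rank(all_findings)

diff = "def f():\n    # TODO: validate input\n    return eval(user_input)"
for comment in run_review(diff):
    print(comment)
```

In the real product, the ranked output is posted as comments on the GitHub PR; here it just prints to stdout.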
Logic Errors, Not Style Nitpicking
What I like most: Code Review focuses on what actually matters. No comments about missing spaces or naming conventions. The focus is on logic errors, security issues, and real bugs — the kind of things that slip through human reviews, especially when the queue is long.
Anthropic says that internally, review coverage jumped from 16% to 54% of PRs since they started using the tool themselves. The error rate is below 1%.
What It Costs
Code Review uses token-based pricing. Anthropic estimates $15 to $25 per review, depending on code complexity. That’s not cheap — but when you consider how much developer time a thorough review costs, it puts things in perspective.
Why This Matters
The context is obvious: the more code AI writes, the more code needs reviewing. And human reviewers simply can’t keep up anymore. Claude Code Review is Anthropic’s answer to a problem they helped create — and honestly, I find that pretty pragmatic.
Available as a research preview for Claude for Teams and Claude for Enterprise. Install the GitHub App, select your repos, and you’re set.
Sources:
- Anthropic Launches AI-powered Code Review For Claude Code (Dataconomy)
- Anthropic launches a multi-agent code review tool for Claude Code (The New Stack)
- New Claude tool uses AI agents to find bugs in pull requests (Help Net Security)