Code Review Got Faster, Not Easier
Over fifty percent.
That is the share of code checked into production at Google each week that is generated by AI: code that gets through code review, is accepted, and isn't rolled back.
The bottleneck did not disappear. It moved.
The shift
Review cycles that used to take days or weeks can now move much faster. That sounds like progress until you realize what it means for the reviewer: more code to review, at a faster cadence.
"There's certainly a lot more code to review now than there was historically. And so I think that we're writing more code than we have ever before. But also the rate at which code reviews are happening is happening at a much more rapid clip."
Paige Bailey
More code. Faster turnaround. Higher review load. The math gets tight unless something changes in how review happens.
The intermediate check
The pattern emerging at scale: use AI as an intermediate check before human review, not as a replacement for it.
The model flags priority issues. It recommends fixes. It catches the structural problems that waste reviewer attention. Then the engineer gives final approval.
This is not "AI does the review." This is "AI does the triage so the human review can focus on what matters."
The distinction matters because the failure mode is different. If you treat AI review as a replacement, you end up with confident suggestions that miss context. If you treat it as triage, you end up with a smaller set of issues for a human to validate.
"I remember when I first started at Google, if you were trying to shepherd a CL through into production, it would sometimes be like a days- or weeks-long process. But now it's much much faster."
Paige Bailey
In practice, part of the speedup comes from shifting the bottleneck from "waiting for a reviewer to have time" to "reviewer receives pre-triaged issues."
The constraint for smaller teams
For larger organizations, the intermediate check is already infrastructure. For Series A through C teams, the question is different: how do you get the same leverage without the same headcount?
The answer is the same pattern, scaled down. Use an AI agent that can check out the branch, run your linters, and flag issues before the PR lands in someone's queue. The human still owns the stamp. The human still makes the call on anything ambiguous. But the human is not spending their first pass catching formatting issues and obvious bugs.
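As a simplified illustration, the triage step can start as nothing more than a script wired into CI that checks out the branch, runs the linter and the test suite, and prints a prioritized list of findings before anyone is asked to review. The sketch below assumes `ruff` and `pytest`; both commands are placeholders for whatever checks your project already runs.

```python
#!/usr/bin/env python3
"""Sketch of a pre-review triage step: run automated checks on a branch
before it lands in a human reviewer's queue. The branch name, lint command,
and test command are placeholders for whatever your project uses."""

import subprocess
import sys


def run(cmd: list[str]) -> tuple[int, str]:
    """Run a command and return (exit code, combined output)."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode, proc.stdout + proc.stderr


def triage(branch: str) -> list[dict]:
    """Check out the branch and collect findings, tagged by priority."""
    findings: list[dict] = []

    code, out = run(["git", "checkout", branch])
    if code != 0:
        sys.exit(f"could not check out {branch}:\n{out}")

    # Low-priority, mechanical issues: formatting and lint violations.
    code, out = run(["ruff", "check", "."])  # assumed linter
    if code != 0:
        findings.append({"priority": "low", "source": "lint", "detail": out})

    # High-priority issues: failing tests on the changed branch.
    code, out = run(["pytest", "-q"])  # assumed test runner
    if code != 0:
        findings.append({"priority": "high", "source": "tests", "detail": out})

    return findings


if __name__ == "__main__":
    branch = sys.argv[1] if len(sys.argv) > 1 else "HEAD"
    for finding in triage(branch):
        print(f"[{finding['priority']}] {finding['source']}:\n{finding['detail']}")
```

Even this much pre-triage means the first thing a human sees is a list of flagged issues rather than a raw diff.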
Code review load scales with AI-generated volume. Headcount doesn't have to scale linearly.
How Roo Code closes the loop on review
Roo Code can act as the intermediate check before human review. When you use Roo Code's PR Reviewer, it checks out the branch, runs your test suite, analyzes the changes, and flags issues with priority levels, all before the code lands in a human reviewer's queue.
The key difference from a standalone linter: Roo Code closes the loop. It doesn't just flag problems; it proposes fixes and can iterate on them until the tests pass. The human reviewer sees a cleaner diff with fewer obvious issues to catch.
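The general shape of that loop, independent of any particular tool, is small: run the checks, hand failures to a fixer, and repeat within a bounded number of attempts before escalating to a human. The sketch below is an illustration of the pattern, not Roo Code's implementation; the test command and the `propose_fix` callback are placeholders.

```python
"""Illustration of a closed review loop: flag a failure, propose a fix,
re-run the checks, repeat. Not Roo Code's implementation; the fixer
callback stands in for whatever agent generates and applies a patch."""

import subprocess
from typing import Callable


def close_the_loop(
    propose_fix: Callable[[str], None],
    test_cmd: tuple[str, ...] = ("pytest", "-q"),  # assumed test runner
    max_iterations: int = 3,
) -> bool:
    """Iterate fix-and-verify; return True only if the checks go green."""
    for _ in range(max_iterations):
        result = subprocess.run(test_cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return True  # clean diff reaches the human reviewer
        propose_fix(result.stdout + result.stderr)  # hand the failure to the agent
    return False  # still failing after the budget: escalate to a human
```

The bound is the point: an unbounded loop hides stubborn failures, while a bounded one surfaces them to the reviewer with full context.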
Manual review vs. AI-assisted review
| | Manual Review Only | AI-Assisted Review (Intermediate Check) |
|---|---|---|
| First pass | Human catches all issues | AI flags priority issues + recommends fixes |
| Time per PR | Proportional to code volume | Reduced by pre-triage |
| Reviewer focus | Everything, including formatting | Ambiguous decisions and context-sensitive issues |
| Scale | Linear with headcount | Sublinear; review capacity grows faster than the team |
| Ownership | Human owns final approval | Human still owns final approval |
The tradeoff
The tradeoff is ownership. Someone still needs to own the stamp.
If you automate triage but no one owns final approval, you end up with a review process that feels fast but catches less. The speed is real. The confidence is borrowed.
The pattern that works: clear ownership of final approval, with AI handling the first pass. The reviewer's job shifts from "find all the problems" to "validate the flagged problems and catch what the model missed."
Why this matters for your team
For a small engineering team shipping regularly, the intermediate check pattern changes the math on review load. Instead of spending reviewer time on low-signal issues, you spend it validating flagged issues and catching what the model missed.
The hours reclaimed go somewhere. They go into shipping, not into waiting for review cycles to complete.
If your team is generating more code with AI assistance, the review queue is already growing. The question is whether you scale review with headcount or with an intermediate check that preserves human ownership.
The first step: audit how much time your reviewers spend on issues that an automated check could have caught. That number is your leverage.
Stop being the human glue between PRs
Cloud Agents review code, catch issues, and suggest fixes before you open the diff. You review the results, not the process.