Hacker News
Show HN: adamsreview – better multi-agent PR reviews for Claude Code
On my own PRs, it has been catching dramatically more real bugs than Claude’s built-in /review, /ultrareview, CodeRabbit, Greptile, and Codex’s built-in review, while producing fewer false positives.
adamsreview is six Claude Code slash commands packaged as a plugin: review, codex-review, add, promote, walkthrough, and fix. I modeled it after the built-in /review command and extended it meaningfully.
You can clear context between review stages because state is stored in JSON artifacts on disk, with built-in scripts for keeping it updated.
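To make the "state in JSON artifacts" idea concrete, here is a minimal sketch of what stage-to-stage state on disk might look like. This is my own illustration, not the plugin's actual schema: the file name, field names, and structure are all assumptions.

```python
import json
from pathlib import Path

# Hypothetical review-state artifact. Field names here are
# illustrative assumptions, not adamsreview's real schema.
artifact = {
    "pr": 123,
    "stage": "review",
    "findings": [
        {"id": "F1", "file": "src/app.py", "severity": "high",
         "status": "open", "summary": "possible null deref"},
    ],
}

path = Path("review-state.json")
path.write_text(json.dumps(artifact, indent=2))

# A later stage, started with a fresh context window, reloads the
# artifact instead of depending on conversation history.
state = json.loads(path.read_text())
open_findings = [f for f in state["findings"] if f["status"] == "open"]
print(len(open_findings))  # → 1
```

Because the state lives in a file rather than in the model's context, each stage can start clean and still pick up exactly where the last one left off.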
The walkthrough command uses Claude’s AskUserQuestion feature to walk you through uncertain findings or items needing human review one by one. Then, the fix command dispatches per-fix-group agents and re-reviews the work with Opus, reverting any regressions before committing survivors.
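The fix flow described above can be sketched as a loop: one agent per fix group, an Opus re-review pass, and a revert-or-commit decision. This is my own reading of the description, not the plugin's code; `dispatch_agent` and `re_review` are hypothetical stand-ins for the real agent calls.

```python
# Sketch of the described fix flow. All functions are hypothetical
# stand-ins, not adamsreview's actual API.

def dispatch_agent(fix_group):
    # Stand-in: each agent applies its group's fixes, yielding a patch.
    return f"patch-{fix_group['id']}"

def re_review(patch):
    # Stand-in: Opus re-reviews each patch; here we pretend it
    # flags patch-g2 as a regression.
    return patch != "patch-g2"

def apply_fixes(fix_groups):
    survivors, reverted = [], []
    for group in fix_groups:
        patch = dispatch_agent(group)   # one agent per fix group
        if re_review(patch):            # re-review the resulting work
            survivors.append(patch)     # survivors get committed
        else:
            reverted.append(patch)      # regressions get reverted
    return survivors, reverted

survivors, reverted = apply_fixes([{"id": "g1"}, {"id": "g2"}])
```

The design point is that fixes are verified independently of the agent that wrote them, so a bad patch is reverted rather than committed alongside the good ones.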
It runs against your regular Claude Code subscription (Max plan recommended), unlike /ultrareview, which charges against your Extra Usage pool.
I would love feedback from Claude Code users, pro devs, and anyone with strong opinions about AI code reviews.
Repo: https://github.com/adamjgmiller/adamsreview
Install:
  /plugin marketplace add adamjgmiller/adamsreview
  /plugin install adamsreview@adamsreview
yuppiepuppie
It runs locally, YOU review all the code locally, and you feed that back to Claude.
Agents reviewing AI code always felt dirty to me, especially when working on production (non-disposable) code.
thesimon
How expensive is it to run in your experience? In $ or tokens?
bilekas
Have we all just given up?
stingraycharles
Seems like it would create a lot of friction and burn a lot of tokens.