# Review Pipeline
PR Review Kit provides two ways to run a review: quick mode for a fully automated pipeline, and manual mode for step-by-step control.
## Quick mode
Quick mode runs the full pipeline in one command. It pauses only twice: first to ask which PR or branch to review, then to ask for optional additional context. Everything else runs automatically.
Start it from your AI IDE:
```
/prr-quick
```

The pipeline runs in order: select PR, describe changes, collect context, run all reviewers, generate report. Inline comments are posted if your platform is configured.
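The fixed ordering of quick mode can be sketched as a simple pipeline runner. This is illustrative only: the step names mirror the list above, but the function names are hypothetical and not part of the tool's real API.

```python
# Illustrative sketch of quick mode's fixed step order.
# Function and step names are hypothetical, not the tool's actual API.

def run_quick_pipeline(run_step):
    """Run every step in order; quick mode never skips or reorders steps."""
    steps = [
        "select_pr",
        "describe_changes",
        "collect_context",
        "run_reviewers",
        "generate_report",
    ]
    for step in steps:
        run_step(step)
    return steps
```

The point of quick mode is exactly this rigidity: no menu, no per-step decisions, just the same sequence every time.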
## Manual mode
Manual mode gives you full control. Start the master agent and pick each step from the menu:
```
/prr-master
```

From the menu, you can run any step individually, re-run a specific reviewer, or skip steps you do not need.
## The pipeline steps
### 1. Select PR
Lists open PRs from your platform (GitHub, GitLab, Azure DevOps, or Bitbucket) or lets you select branches manually. Enter a PR number and the diff is loaded directly into the AI context.
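On GitHub, for example, the equivalent CLI calls look like this (assuming the `gh` CLI is installed and authenticated; the PR number is a placeholder):

```shell
# List open PRs in the current repository
gh pr list --state open

# Print the diff for a chosen PR (42 is a placeholder number)
gh pr diff 42
```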
### 2. Describe changes
Classifies the PR type (feature, fix, refactor, etc.), generates a concise summary, and produces a file-by-file walkthrough of all changes.
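PR-type classification of this kind is often keyed off a conventional-commit style title prefix. Here is a minimal sketch of one such heuristic; it is not the tool's actual classifier, and the mapping is an assumption.

```python
# Hypothetical heuristic: map a conventional-commit style PR title
# prefix to a change type. The real classifier is more involved.

PREFIX_TO_TYPE = {
    "feat": "feature",
    "fix": "fix",
    "refactor": "refactor",
    "docs": "docs",
    "chore": "chore",
}

def classify_pr(title: str) -> str:
    # Strip an optional scope like "feat(auth):" down to "feat"
    prefix = title.split(":", 1)[0].split("(", 1)[0].strip().lower()
    return PREFIX_TO_TYPE.get(prefix, "unknown")
```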
### 3. Collect context
Runs automatically after step 2. Analyzes the changed files to detect relevant domains, reads project config files and standards documents, extracts inline annotations from the diff, and optionally queries MCP tools or RAG systems.
After auto-collection, the agent pauses and asks you for any additional context — such as business
rationale, known trade-offs, or specific areas to focus on. If you provide input, it is marked
⚠️ IMPORTANT and all reviewer agents treat it as the highest-priority context.
You can skip this prompt by pressing Enter, or disable it permanently by setting
`skip_manual_input_context: true` in your config.
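In the config file, that might look like the following (the `skip_manual_input_context` key comes from the tool; the surrounding file shape is an assumption):

```yaml
# Disable the manual-context prompt after auto-collection
skip_manual_input_context: true
```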
The result is saved as a context file loaded by all reviewer agents.
### 4. Deep code review
Runs all specialist reviewer agents (General, Security, Performance, Architecture, and Business) either in parallel or in sequence. Each agent uses the fresh context collected in step 3. See Reviewer Agents for details on what each one checks.
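Running independent reviewers concurrently can be sketched with a thread pool. This is illustrative only: each reviewer is modeled as a function that takes the shared context from step 3 and returns a list of findings.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch: reviewers are independent, so they can run in
# parallel; results come back in a stable order either way.

def run_reviewers(reviewers, context, parallel=True):
    if parallel:
        with ThreadPoolExecutor(max_workers=len(reviewers)) as pool:
            results = list(pool.map(lambda r: r(context), reviewers))
    else:
        results = [r(context) for r in reviewers]
    # Flatten per-reviewer findings into one list for the report step
    return [finding for result in results for finding in result]
```

Because `ThreadPoolExecutor.map` preserves input order, the flattened findings are deterministic regardless of which mode you choose.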
### 5. Generate report
Compiles all findings from every reviewer into a single Markdown report. Findings are sorted by severity from blockers to suggestions. The report is saved to your configured output path.
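Sorting by severity only needs a fixed rank per level. A minimal sketch using the tool's four severity levels; the finding structure (dicts with a `severity` key) is an assumption.

```python
# Minimal sketch: order findings from blockers down to questions.
# The severity names match the tool's four-level scale.

SEVERITY_RANK = {"Blocker": 0, "Warning": 1, "Suggestion": 2, "Question": 3}

def sort_findings(findings):
    return sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]])
```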
### 6. Post inline comments
Optionally posts findings as inline code comments on the exact file and line in your platform PR.
Requires `platform_repo` to be configured and the relevant platform CLI to be authenticated.
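For example (the `platform_repo` key comes from the tool's config; the `owner/repo` value format shown is an assumption based on the usual convention):

```yaml
# Target repository for posting inline comments
platform_repo: my-org/my-repo
```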
## Severity levels
All findings use a consistent four-level severity scale:
| Level | Meaning |
|---|---|
| Blocker | Must be fixed before the PR can be merged |
| Warning | Should be fixed, with an explanation of the risk |
| Suggestion | Nice-to-have improvement, not blocking |
| Question | Needs clarification from the PR author |