Review Pipeline

PR Review Kit provides two ways to run a review: quick mode for a fully automated pipeline, and manual mode for step-by-step control.

Quick mode

Quick mode runs the full pipeline in one command. It pauses twice: once to select the PR or branch, and once to ask for review instructions (scope, focus, requirements, or context). Everything else runs automatically.

Start it from your AI IDE:

/prr-quick

The pipeline runs in order: select PR, describe changes, collect context, run all reviewers, generate report. Inline comments are posted if your platform is configured.

Manual mode

Manual mode gives you full control. Start the master agent and pick each step from the menu:

/prr-master

From the menu, you can run any step individually, re-run a specific reviewer, or skip steps you do not need.

The six steps

1. Select PR

Lists open PRs from your platform (GitHub, GitLab, Azure DevOps, or Bitbucket) or lets you select branches manually. Enter the PR number and the diff is loaded directly into the AI context.

2. Describe changes

Classifies the PR type (feature, fix, refactor, etc.), generates a concise summary, and produces a file-by-file walkthrough of all changes.

3. Collect context

In quick mode, runs automatically after step 2. In manual mode, run [CC] Collect Context from the master menu after describing the PR. Analyzes the changed files to detect relevant domains, reads project config files and standards documents, extracts inline annotations from the diff, and optionally queries MCP tools or RAG systems.

After auto-collection, the agent pauses and asks for review instructions. You can specify which reviews to run ("only security", "skip performance"), what to focus on, mandatory requirements, or background context. Press Enter for a full standard review. If you provide input, it is stored as an ⚠️ IMPORTANT instruction and controls which reviewers run and what they prioritize.

The result is saved as pr-context.yaml inside the session folder and is loaded by every reviewer agent.
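The exact schema of pr-context.yaml depends on the kit's configuration; as an illustration only, a collected context file might look like the sketch below. Every field name here is a hypothetical stand-in, not the kit's documented schema.

```yaml
# Hypothetical pr-context.yaml sketch; actual field names depend on the kit's schema
pr:
  number: 128
  title: "Add retry logic to payment client"
  type: fix                     # from the classification in step 2
domains:                        # detected from the changed files
  - payments
  - networking
standards:                      # project config and standards documents read in step 3
  - docs/coding-standards.md
instructions: "only security"   # the ⚠️ IMPORTANT user input, if any
annotations:                    # inline annotations extracted from the diff
  - file: src/payments/client.py
    line: 42
    note: "TODO: handle timeout"
```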

4. Deep code review

Runs all specialist reviewer agents — General, Security, Performance, Architecture, and Business Review — either in parallel or in sequence. Each agent uses the fresh context collected in step 3. See Reviewer Agents for details on what each one checks.
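The parallel-versus-sequential choice in step 4 can be sketched as below. The reviewer functions and their findings are invented stand-ins for the real agents, not the kit's API; the point is only that every reviewer receives the same context from step 3 and the findings are merged.

```python
# Sketch: run reviewer agents in parallel or in sequence over shared context.
# The reviewer callables and findings are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor

def general_review(ctx):     return [("Suggestion", "rename helper")]
def security_review(ctx):    return [("Blocker", "unsanitized SQL input")]
def performance_review(ctx): return [("Warning", "N+1 query in loop")]

REVIEWERS = [general_review, security_review, performance_review]

def run_reviewers(context, parallel=True):
    """Run every reviewer on the shared context and merge their findings."""
    if parallel:
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(lambda r: r(context), REVIEWERS))
    else:
        results = [r(context) for r in REVIEWERS]
    return [finding for findings in results for finding in findings]

findings = run_reviewers({"pr": 128})
```

Parallel runs finish faster; sequential runs make it easier to follow one reviewer's output at a time.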

5. Generate report

Compiles all findings from every reviewer into a single Markdown report. Findings are sorted by severity from blockers to suggestions. The report is saved to your configured output path.

6. Post inline comments

Optionally posts findings as inline code comments on the exact file and line in your platform PR. Requires the relevant platform CLI to be authenticated. Platform and repo are auto-detected.

Severity levels

All findings use a consistent four-level severity scale:

Level       Meaning
Blocker     Must be fixed before the PR can be merged
Warning     Should be fixed, with an explanation of the risk
Suggestion  Nice-to-have improvement, not blocking
Question    Needs clarification from the PR author
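Step 5's ordering, blockers first down to suggestions, can be expressed as a simple sort over this scale. A minimal sketch, with invented example findings:

```python
# Order findings by the four-level severity scale, blockers first.
# The finding tuples are illustrative, not the report's real data model.
SEVERITY_RANK = {"Blocker": 0, "Warning": 1, "Suggestion": 2, "Question": 3}

findings = [
    ("Suggestion", "extract a helper for duplicated parsing code"),
    ("Question", "is the retry count of 5 intentional?"),
    ("Blocker", "secret key committed in config file"),
    ("Warning", "unbounded cache may grow without eviction"),
]

ordered = sorted(findings, key=lambda f: SEVERITY_RANK[f[0]])
```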