Pay for what
you actually use.
No flat PR limits. On Standard and Pro, every model in the catalog can be your lead or a sub-agent. Ultra unlocks GPT-5.2 Pro, GPT-5.4 Pro, and Claude Opus 4.7 on top of that same flexibility. Every tier is a monthly credit pool — expect roughly 1–250+ credits per review slice depending on models and PR size.
Frontier models
for everyday repo Q&A.
This product surface is separate from paid PR review: Critique Chat does not draw from the monthly review credit pool you see on model tables and plan cards below. Every signed-in user gets conversational search over connected GitHub repos — pick a model, ask in plain language, and iterate without spending those review credits. (Automated review, Remedy execution, and optional indexing still follow their own pricing.)
- Explain an error from CI with live repo search
- Map an unfamiliar module before you merge
- Sanity-check a design before you open a PR
Paid plans below are for automated PR review depth, specialist agents, and credit pools — not for chatting with your codebase.
One system.
Simple rules.
| PR Size | Changed Files | Multiplier | Example: GLM-5 + 2× GLM-4.7 Flash (~4.5 base cr) |
|---|---|---|---|
| Tiny | 1–5 | 0.75× | ~3 credits |
| Normal | 6–15 | 1.0× | ~5 credits |
| Large | 16–30 | 1.5× | ~7 credits |
| Very Large | 31+ | 2.0× | ~9 credits |
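The table's math is just base credits times a size multiplier. A minimal sketch, assuming the ~4.5-credit base from the example column (GLM-5 lead plus 2× GLM-4.7 Flash) and treating the per-model rates as illustrative rather than official pricing:

```python
# Sketch of the PR-size multiplier math from the table above.
# The 4.5-credit base matches the example column; actual per-model
# credit costs come from the pricing tables, so treat these as estimates.

SIZE_MULTIPLIER = {
    "tiny": 0.75,       # 1-5 changed files
    "normal": 1.0,      # 6-15 changed files
    "large": 1.5,       # 16-30 changed files
    "very_large": 2.0,  # 31+ changed files
}

def review_credits(base_credits: float, pr_size: str) -> float:
    """Estimated credits for one review: (lead + sub-agent base) x size multiplier."""
    return base_credits * SIZE_MULTIPLIER[pr_size]

base = 4.5  # GLM-5 lead + 2x GLM-4.7 Flash sub-agents (example stack)
for size in SIZE_MULTIPLIER:
    print(f"{size}: ~{review_credits(base, size):g} credits")
```

This reproduces the example column: 3.375, 4.5, 6.75, and 9 credits before rounding.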
Three tiers.
One philosophy.
Full-repo embeddings
priced as slots first, credits second.
Enable a repo with Perplexity's pplx-embed-v1-4b and keep the warm snapshot current for semantic retrieval. Chat reads the newest warm snapshot, so you get hybrid search over indexed code without rebuilding context on every question.
Cleaner stack.
Student-pocket pricing.
Get the same 500 credits as the entry pool, but in a tightly curated low-cost lane: GLM-5, Kimi K2.6, and MiniMax M2.7 up top, with cheap specialist sub-agents underneath — not the full public catalog. Approved student and OSS accounts also get unlimited free repository indexing.
- First month free
- Friend gets 2 months off
- You get month 2 refunded or month 3 free
- Best for open-source repos, clubs, capstones, and hackathon teams, with free full-repo indexing once approved.
Scale with custom volume, SLAs, and deployment options. On‑prem or cloud, with dedicated support and full compliance.
Every model.
Every cost.
These credit floors apply to automated PR review only — lead and specialist model runs that roll up into your monthly review pool on Standard, Pro, or Ultra. Critique Chat is separate: repo Q&A and exploration in chat use their own free lane for signed-in users and do not spend review credits. Remedy runs and repository indexing are separate line items, described above and in the FAQ.
Credits are calculated per review unit — approximately 100k input + 15k output tokens for the lead model on a normal PR. This table does not describe Critique Chat pricing; chat stays outside the review credit meter.
Pick your stack.
See the math.
Estimates below are for PR review credit burn, not Critique Chat. Pick a Lead and Sub-agents from the same list (teal vs. green), adjust sub-agent count and review depth, and see how far each plan's pool stretches.
Tap a row to set the lead reviewer, then switch to Sub-agent for the specialist model.
Questions we hear
before you upgrade.
Credits apply to automated review and to Remedy runs inside Critique — not to everyday Critique Chat. Third-party coding assistants bill their vendors directly.
One credit is our normalized unit for a standard review slice — roughly 100k input tokens and 15k output tokens on the lead model for a normal-sized PR. Specialist sub-agents add their own credit cost on top, and the depth multiplier scales everything for tiny vs very large changes. A single full review is often about 3–50+ credits depending on models and PR size. Your monthly pool is meant to cover many reviews, not to map one-to-one with a raw “job” counter.
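The arithmetic above can be sketched end to end. This is a rough model, not official pricing: the 500-credit figure is the entry pool mentioned earlier, and the per-model credit rates below are hypothetical:

```python
# One credit ~= a standard review slice: ~100k input + 15k output tokens
# on the lead model for a normal-sized PR. Specialist sub-agents add their
# own cost, and the size multiplier (0.75x-2.0x) scales the total.

def estimated_review_cost(lead_credits: float, sub_credits: list[float],
                          multiplier: float = 1.0) -> float:
    """Total estimated credits for one automated PR review."""
    return (lead_credits + sum(sub_credits)) * multiplier

def reviews_per_pool(pool: float, per_review: float) -> int:
    """Rough count of reviews a monthly pool covers at a fixed per-review cost."""
    return int(pool // per_review)

# Hypothetical stack: a 3-credit lead plus two 0.75-credit specialists, normal PR.
cost = estimated_review_cost(3.0, [0.75, 0.75], multiplier=1.0)  # 4.5 credits
print(reviews_per_pool(500, cost))  # a 500-credit pool covers ~111 such reviews
```

The point of the pool framing is visible here: one review costs single-digit credits, so a monthly pool maps to many reviews rather than to a one-to-one job counter.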
Standard and Pro share the same full model lineup and the same flexibility: any listed model can be your lead reviewer or a specialist sub-agent, interchangeably. Ultra adds GPT-5.2 Pro, GPT-5.4 Pro, and Claude Opus 4.7 (frontier credits) and a much larger monthly pool. The only catalog split is those Ultra-only models; lead vs sub is never a separate “tier” of models.
Upgrades apply right away so you are not stuck on a smaller pool when volume spikes. Downgrades take effect at the start of your next billing cycle so you keep the features you already paid for through the end of the current period.
No. Monthly credits reset at each billing boundary. We surface costs transparently so teams can right-size plans instead of banking opaque rollover balances.
Pro is advertised with a 7-day trial where that offer is active — see the plan cards above for the exact label on your signup path. Standard and Ultra follow the CTAs shown there as well.
No. Critique Chat is conversational Q&A against connected GitHub repos with frontier models, and it does not draw down the same monthly pool that meters automated PR review. You can explore branches, compare approaches, and ask follow-ups in chat without spending review credits. Repository indexing has its own rules (slots and refresh costs) where applicable — chat itself stays outside the review credit meter.
Remedy is not the chat message — it is the coding agent that edits files, runs managed execution steps, and can trigger follow-up verification passes. That work bills through Critique credits because our stack is carrying the model runs and apply loop end-to-end. The conversation that suggested the fix may be in free chat; turning it into real commits, patches, and checks is what crosses into metered automation.
Those tools bill you through their vendors, not through Critique. GitHub Copilot is billed by Microsoft / GitHub, Codex usage goes through OpenAI, and Claude Code sits with Anthropic under their terms. We do not resell or bundle those subscriptions. If you export a blueprint or plan from Critique and run it entirely inside your own agent or IDE, any token or seat charges from those products stay between you and that provider.
Yes — many teams use Critique for review and hand execution to Copilot, Codex, Claude Code, or an internal runner. In that split, Critique credits cover automated review (and optional Remedy only if you choose it inside Critique), while execution spend stays with whichever agent vendor you already pay. You are never forced to run fixes through our hosted agent to get value from review.
Indexing is priced separately from review credits: included repo slots, one-time overage enablement, and refresh rules are summarized in the embeddings section above. Student and OSS approvals can include unlimited indexing — check that block for the latest copy. Chat reads warm snapshots once a repo is embedded, but the embedding lifecycle is not the same line item as per-PR review credits.
Start with the clearest path.
Create an account to use Critique Chat and connect the GitHub App when your team is ready. Upgrade or change plans from settings anytime.