5 Questions Before Installing an AI Code Review GitHub App
Permissions, tokens, data flow, checks, and deletion paths that separate a real pilot from a trust problem.
Why the hardest part is not the install button
The install flow is the easy part. Trust is the hard part.
Buyers rarely reject an AI review app because the idea is abstractly unsafe. They reject it because one of four things feels wrong: the permission set is broader than the use case, the token model is unclear, the check becomes noisy and starts blocking merges, or nobody can explain what happens to data when the app is removed.
That is why the first question is not "Can it review code?" It is "Can it review code without taking over the org?" GitHub's own documentation pushes in the same direction: choose the minimum permissions required, subscribe to the minimum webhooks, use a webhook secret, and keep credentials secure. That is the baseline. Anything beyond it needs a reason.
GitHub App or OAuth app?
If you are buying automation that runs inside a repository, a GitHub App is usually the cleaner fit. GitHub Apps are built for integrations. OAuth apps are built for user authorization. That distinction matters because it changes what the product can do, how it is scoped, and which token type should be used.
| Surface | GitHub App pattern | Why it matters |
|---|---|---|
| Permissions | Chosen at registration, scoped to the minimum needed | Reduces blast radius on compromise or misconfiguration |
| Automation token | Installation access token (IAT) — expires in 1 hour | Best fit for headless, app-level workflows |
| Human action | User access token, only when acting on behalf of a user | Avoids over-privileging background automation |
| Webhooks | Minimum event set with a secret and signature validation | Lowers noise and spoofing risk |
| Exit path | Delete app registration → vendor-side cleanup | Makes offboarding real and auditable |
Question 1 — Permissions
What exact permissions do you need?
This is the first filter. If the vendor cannot map each permission to a single product feature, the scope is probably not disciplined enough. Ask for a plain table: permission, why it is needed, read or write, and whether the product ships without it. GitHub's permissions documentation is explicit that apps have no permissions by default and that the minimum set is the recommended starting point.
Green flags:
- Vendor starts read-only where possible
- Write access reserved for a clearly named action (comments, check runs)
- Any administration scope has a written explanation
- Permission table is part of the sales or security packet — not hidden in a demo

Red flags:
- "We need broad access to make the magic work"
- Administration permissions without a crisp reason
- No distinction between commenting, reviews, and checks
- The answer changes depending on who you ask
Question 2 — Token model
Which token type powers the product?
This is where a lot of vendors blur the line between "the app" and "the user." For normal background automation, the default should be an installation access token. It is scoped to the app installation, not to a human, and GitHub says it expires after one hour. When you need the product to act on behalf of a person, then a user access token may be appropriate. That should be the exception, not the center of the architecture.
Ask the vendor:
- Does the core automation use installation access tokens?
- Are any long-lived user tokens or PATs stored?
- Can the token be narrowed to selected repositories?
- How is the private key stored?
- What happens if the token is compromised?

GitHub's best-practices documentation is direct: private keys, tokens, and client secrets should be stored securely, private keys should never be hardcoded, and installation access tokens are for app-installation activity.
Green flags:
- Short-lived installation tokens for background automation
- Secure storage for private keys and secrets (not hardcoded)
- Explicit token revocation and rotation story
- Clear explanation of when user-level authorization is needed

Red flags:
- "We store your users' PATs for convenience"
- Public-client flow used where a server-side installation token should exist
- No answer for how private keys are stored
- No plan for revocation or rotation on compromise
Question 3 — Data egress
What leaves GitHub?
This is the part most vendor pages compress into "we only process your code securely." That phrase is too vague to be useful.
Ask for a plain-English data map:
- What payload is sent to the vendor: diff-only, full-file, or repository-level context?
- Does the product create embeddings or indexes?
- Where does model inference run?
- What is logged, and how long is it retained?
- Can support staff see customer code?
- Which subprocessors touch the data, and in which region is it processed?
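One way to make "diff-only" concrete is to strip the webhook event to an allowlist before anything leaves your infrastructure. A sketch — the field names follow the shape of a `pull_request` webhook payload, but the allowlist itself is an assumption about what a diff-only vendor actually needs:

```python
# Fields a diff-only reviewer plausibly needs; everything else is dropped.
PR_ALLOWLIST = {"number", "title", "diff_url", "base", "head"}

def diff_only_payload(event: dict) -> dict:
    """Reduce a pull_request webhook event to a minimal, auditable egress payload."""
    pr = event.get("pull_request", {})
    return {
        "action": event.get("action"),
        "repository": event.get("repository", {}).get("full_name"),
        "pull_request": {k: v for k, v in pr.items() if k in PR_ALLOWLIST},
    }
```

A vendor with a real data map can show you the equivalent of this function in their pipeline; a vendor without one usually cannot.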
Green flags:
- Retention schedule that is easy to understand and verify
- Deletion path for prompts, logs, embeddings, and backups
- Documented list of subprocessors with roles
- Clear attestation that customer code is not used for training

Red flags:
- "We encrypt everything" as a substitute for a real data-flow answer
- No region story for international or regulated customers
- No policy for prompt logging or retention
- No attestation when the vendor claims data is deleted
Question 4 — Check behavior
What happens when the check is wrong?
AI review tools fail socially before they fail technically. Technically, the review may be "good enough." Socially, it can still become intolerable if it blocks merges too often or makes developers distrust the entire branch protection surface.
GitHub's checks model is visible in the pull request Checks tab, and check runs can include annotations on specific lines. That is useful when the signal is good. It is painful when the signal is noisy. If the product is serious, it should make the right thing easy: start optional, measure false positives, then graduate to required only when the signal is stable.
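Findings map naturally onto the check-run annotation schema, and the "start optional" rule can be encoded directly in how the conclusion is computed. A sketch — the internal finding record is hypothetical, while the annotation keys and levels follow GitHub's check-runs API:

```python
# Annotation levels accepted by GitHub's check-runs API.
LEVELS = {"notice", "warning", "failure"}

def to_annotation(finding: dict) -> dict:
    """Convert a hypothetical internal finding into a check-run annotation."""
    level = finding.get("severity", "notice")
    if level not in LEVELS:
        level = "notice"  # degrade gracefully rather than failing the run
    return {
        "path": finding["path"],
        "start_line": finding["line"],
        "end_line": finding.get("end_line", finding["line"]),
        "annotation_level": level,
        "message": finding["message"],
    }

def conclusion(annotations: list[dict], required: bool = False) -> str:
    """In pilot mode (required=False), never block the merge: conclude neutral."""
    has_failure = any(a["annotation_level"] == "failure" for a in annotations)
    if not required:
        return "neutral"
    return "failure" if has_failure else "success"
```

The `required` flag is the graduation lever: it stays false during the pilot, and flips only once the false-positive rate has been measured and judged stable.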
Green flags:
- Optional pilot mode available on install
- Clear severity labels with actionable specifics
- Rerun behavior that is predictable and stable
- Rollback plan documented if the check gets noisy
- Handles path filters and monorepo configurations

Red flags:
- Required check configured by default on install
- Vague conclusions like "review completed" with no severity
- Check names that don't explain what failed
- Shame-heavy microcopy that trains developers to ignore the tool
Question 5 — Exit path
What happens when you uninstall?
This is the trust test that most sales conversations skip.
GitHub says deleting a GitHub App registration uninstalls it from all accounts where it is installed. That is the beginning of the exit story, not the end. You still need to know what happens to vendor-side data: prompts, logs, embeddings, cached artifacts, and backups. The right vendor does not make leaving feel like a punishment.
Green flags:
- Documented deletion SLA for all data categories
- Clean uninstall rehearsal available in a staging org
- Explicit statement about training exclusion
- No ambiguity about partial deletion across multiple orgs

Red flags:
- "We do not keep your data" with no retention detail to back it up
- No answer for backups or derived artifacts (embeddings)
- No process for proving deletion happened (attestation)
- Deletion story only appears after contract negotiations begin
Webhook hygiene that should not be optional
If the app uses webhooks, the basics are non-negotiable:
- Enable only the events you actually need
- Create a webhook secret with high entropy
- Verify the X-Hub-Signature-256 header before processing the payload
- Store the secret securely
- Make the handler idempotent
- Avoid leaking secrets into logs

GitHub says to validate signatures in constant time and never to modify the payload or headers before verification.
One-page vendor scorecard
Use this as a quick internal filter before anyone approves a pilot. If you cannot get a green flag in every row, you do not have a pilot yet.

| Question | What a green flag looks like |
|---|---|
| Permissions | Every permission maps to one named feature; read-only wherever possible |
| Token model | Short-lived installation tokens; no stored PATs; keys in secure storage |
| Data egress | Plain-English data map with retention, region, and subprocessors |
| Check behavior | Optional pilot mode, clear severities, documented rollback plan |
| Exit path | Deletion SLA covering prompts, logs, embeddings, and backups |
Closing
Installing an AI code review GitHub App should feel boring in the best possible way: minimal permissions, short-lived tokens, explicit data handling, predictable check behavior, and a clean exit path documented before you sign.
If the vendor cannot answer the five questions cleanly, the safest choice is to wait.
Critique is in early access.
If you're an engineering leader evaluating AI PR review and want to see how Critique handles permissions, data, and checks — we should talk. We work with early design partners who want the infrastructure done right.
Get started →