AI-powered code review for Pull Requests using GPT-5.2, GPT-5.2-Pro, GPT-5.1, and GPT-4o
Translation Versions: ENGLISH | 简体中文 | 繁體中文 | 한국어 | 日本語
The simplest way to get started - just add your OpenAI API key and a workflow file.
Step 1: Add your API key
Go to your repository Settings → Secrets and variables → Actions → Secrets tab:
- Click New repository secret
- Name: `OPENAI_API_KEY`
- Value: Your OpenAI API key from platform.openai.com
Step 2: Add the workflow
Create .github/workflows/gpt-code-review.yml in your repository:
```yaml
name: Code Review

permissions:
  contents: read
  pull-requests: write

on:
  pull_request:
    types: [opened, reopened, synchronize]
  issue_comment:
    types: [created]

jobs:
  review:
    runs-on: ubuntu-latest
    # Run on PR events OR when a /gpt-review comment is posted on a PR
    if: |
      github.event_name == 'pull_request' ||
      (github.event_name == 'issue_comment' &&
      github.event.issue.pull_request &&
      contains(github.event.comment.body, '/gpt-review'))
    steps:
      - uses: micahstubbs/gpt-code-review@v3
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          MODEL: gpt-5.2-2025-12-11
```

Done! Reviews will appear as `github-actions[bot]` comments, and your API key stays encrypted.
Custom Branding Setup (Advanced)
For reviews with a custom bot name and avatar, create your own GitHub App:
Step 1: Create a GitHub App
- Go to Settings → Developer settings → GitHub Apps → New GitHub App
- Configure:
- Name: Your custom bot name (e.g., "My Code Reviewer")
- Homepage URL: Your repository URL
- Webhook: Uncheck "Active" (not needed for this setup)
- Permissions:
- Repository → Pull requests: Read & write
- Repository → Contents: Read-only
- Click Create GitHub App
- Note the App ID shown on the app's settings page
- Scroll down and click Generate a private key - save the downloaded `.pem` file
Step 2: Install the App
- On your app's settings page, click Install App in the left sidebar
- Select the repositories where you want code reviews
Step 3: Configure Secrets and Variables
Go to your repository Settings → Secrets and variables → Actions:
| Type | Name | Value |
|---|---|---|
| Secret | `OPENAI_API_KEY` | Your OpenAI API key |
| Secret | `CODE_REVIEW_APP_PRIVATE_KEY` | Contents of the `.pem` file you downloaded |
| Variable | `CODE_REVIEW_APP_ID` | App ID from Step 1 |
Step 4: Add the workflow
Create .github/workflows/gpt-code-review.yml:
```yaml
name: Code Review

permissions:
  contents: read
  pull-requests: write

on:
  pull_request:
    types: [opened, reopened, synchronize]
  issue_comment:
    types: [created]

jobs:
  code-review:
    runs-on: ubuntu-latest
    if: |
      github.event_name == 'pull_request' ||
      (github.event_name == 'issue_comment' &&
      github.event.issue.pull_request &&
      contains(github.event.comment.body, '/gpt-review'))
    steps:
      - name: Generate App Token
        id: app-token
        uses: actions/create-github-app-token@v2
        with:
          app-id: ${{ vars.CODE_REVIEW_APP_ID }}
          private-key: ${{ secrets.CODE_REVIEW_APP_PRIVATE_KEY }}
      - uses: micahstubbs/gpt-code-review@v3
        env:
          GITHUB_TOKEN: ${{ steps.app-token.outputs.token }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          MODEL: gpt-5.2-2025-12-11
```

Reviews will now appear with your custom app's name and avatar.
Trigger a review on demand by commenting `/gpt-review` on any open Pull Request. This is useful for:
- Re-reviewing after making changes
- Reviewing PRs that were opened before the workflow was added
- Getting a fresh review on an existing PR
The bot will add an 👀 reaction to acknowledge the command, then post a review.
You can specify which model to use for a specific review by passing the model ID as an argument:
/gpt-review gpt-5.2-pro-2025-12-11
This overrides the MODEL environment variable for that review only. To see available models with their characteristics:
/gpt-review:get-models
This posts a comment with a table of supported models showing:
- Model ID (for use with `/gpt-review <model-id>`)
- API type (Responses or Chat Completions)
- Relative speed and cost
- Recommended use cases
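As a rough illustration, the comment handling boils down to extracting an optional model ID after the command. The sketch below is hypothetical TypeScript, not the action's actual implementation:

```typescript
type ReviewCommand =
  | { kind: "review"; model?: string } // plain review, optionally with a model override
  | { kind: "get-models" };            // request the table of supported models

// Parse a PR comment body into a command, or return null if it isn't one.
function parseReviewCommand(body: string): ReviewCommand | null {
  const trimmed = body.trim();
  if (trimmed.startsWith("/gpt-review:get-models")) {
    return { kind: "get-models" };
  }
  const match = trimmed.match(/^\/gpt-review(?:\s+(\S+))?\s*$/);
  if (!match) return null;
  return match[1] ? { kind: "review", model: match[1] } : { kind: "review" };
}
```

For example, `parseReviewCommand("/gpt-review gpt-4o-mini")` would yield a review command carrying the override `gpt-4o-mini`, while an unrelated comment yields `null`.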
Examples:

```
/gpt-review                          # Use default model from workflow config
/gpt-review gpt-4o-mini              # Quick, cost-effective review
/gpt-review gpt-5.2-pro-2025-12-11   # Most thorough review (slower, more expensive)
```

Why use different models?
- gpt-4o-mini / gpt-3.5-turbo: Fast, cheap reviews for simple changes
- gpt-5.2-2025-12-11: Balanced default for most PRs
- gpt-5.2-pro-2025-12-11: Deep analysis for complex or critical changes
- gpt-5.1-codex: Specialized for code-heavy reviews
- gpt-5.1-codex-max: Most intelligent coding model, optimized for long-horizon agentic coding tasks
| Variable | Description | Default |
|---|---|---|
| `MODEL` | OpenAI model to use | `gpt-5.2-2025-12-11` |
| `LANGUAGE` | Response language | English |
| `PROMPT` | Custom review prompt | (built-in) |
| `MAX_PATCH_LENGTH` | Skip files with larger diffs | unlimited |
| `IGNORE_PATTERNS` | Glob patterns to ignore | none |
| `INCLUDE_PATTERNS` | Glob patterns to include | all |
| `REASONING_EFFORT` | GPT-5.x reasoning level | medium |
| `VERBOSITY` | Response detail level | medium |
| `AUTO_REVIEW` | Enable automatic reviews on PR open/sync | true |
| `REQUIRE_MAINTAINER_REVIEW` | Restrict `/gpt-review` to maintainers | true (public repos) |
To conserve OpenAI API tokens, you can disable automatic reviews and only trigger reviews when someone comments /gpt-review on a PR:
```yaml
env:
  AUTO_REVIEW: false
  OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

With this configuration, the bot will not review PRs automatically when they are opened or updated. Instead, reviews happen only when a user explicitly requests one by commenting `/gpt-review`.
By default, the /gpt-review command is restricted to repository maintainers (users with write access or higher) on public repositories. This protects repository owners from community members inadvertently consuming their OpenAI API tokens.
Default behavior:
- Public repos: only maintainers can use `/gpt-review`
- Private repos: anyone with access can use `/gpt-review`
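The gating logic amounts to: restrict by default only on public repos, unless `REQUIRE_MAINTAINER_REVIEW` overrides it. A hypothetical TypeScript sketch, where `permission` would come from GitHub's collaborator-permission API:

```typescript
// Decide whether a commenter may trigger /gpt-review.
// `permission` is the commenter's repo permission level: "admin", "write", "read", or "none".
// `requireMaintainer` mirrors REQUIRE_MAINTAINER_REVIEW; undefined means "use the default".
function mayTriggerReview(
  permission: string,
  isPrivateRepo: boolean,
  requireMaintainer?: boolean,
): boolean {
  // Default: restrict on public repos, allow anyone with access on private repos.
  const restrict = requireMaintainer ?? !isPrivateRepo;
  if (!restrict) return true;
  // Maintainers = users with write access or higher.
  return permission === "admin" || permission === "write";
}
```

Under these assumed defaults, a user with read access on a public repo is blocked, while the same user on a private repo is allowed.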
To override the default behavior:
```yaml
env:
  # Allow anyone to trigger reviews (not recommended for public repos)
  REQUIRE_MAINTAINER_REVIEW: false
  OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

Or to enforce maintainer-only mode even on private repos:
```yaml
env:
  REQUIRE_MAINTAINER_REVIEW: true
  OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

| Model | Description |
|---|---|
| `gpt-5.2-2025-12-11` | Recommended - excellent balance of quality and cost |
| `gpt-5.2-pro-2025-12-11` | Premium tier for complex/critical reviews |
| `gpt-5.1-codex` | Optimized for code review |
| `gpt-5.1-codex-mini` | Cost-effective option |
| `gpt-5.1` | General purpose |
| `gpt-4o`, `gpt-4o-mini` | Previous generation |
Alternative Providers
GitHub Models:
```yaml
env:
  USE_GITHUB_MODELS: true
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  MODEL: openai/gpt-4o
```

Azure OpenAI:

```yaml
env:
  AZURE_API_VERSION: 2024-02-15-preview
  AZURE_DEPLOYMENT: your-deployment-name
  OPENAI_API_ENDPOINT: https://your-resource.openai.azure.com
  OPENAI_API_KEY: ${{ secrets.AZURE_OPENAI_KEY }}
```

Self-hosting
For webhook-based deployment (instead of GitHub Actions):
- Clone the repository
- Copy `.env.example` to `.env` and configure
- Install and run:

```shell
yarn install
yarn build
pm2 start pm2.config.cjs
```

See Probot documentation for details.

```shell
docker build -t gpt-code-review .
docker run -e APP_ID=<app-id> -e PRIVATE_KEY=<pem-value> gpt-code-review
```

Development
```shell
# Install dependencies
yarn install

# Build
yarn build

# Run tests
yarn test

# Start locally
yarn start
```

All API keys are stored as repository secrets, which are:
- Encrypted at rest
- Never exposed in logs
- Only accessible to GitHub Actions workflows
This bot includes several security measures:
- API keys are automatically redacted from error logs (pattern:
sk-*) - Error messages never include sensitive data
- The
REQUIRE_MAINTAINER_REVIEWoption (default for public repos) prevents unauthorized users from triggering reviews
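The `sk-*` redaction could look roughly like this. This is an illustrative sketch; the exact pattern the bot applies may differ:

```typescript
// Replace anything resembling an OpenAI secret key with a placeholder
// before the message can reach logs or error output.
function redactApiKeys(text: string): string {
  return text.replace(/sk-[A-Za-z0-9_-]{8,}/g, "sk-***");
}
```

For instance, an error string containing `sk-abc123def456ghi789` would be logged with the key replaced by `sk-***`.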
- Rotate API keys regularly - Create a new key every 30-90 days
- Set usage limits - Configure spending limits on your OpenAI account
- Monitor usage - Check OpenAI dashboard for unexpected activity
- Use separate keys - Don't reuse API keys across projects
The default prompt works well for general code review, but you can dramatically improve review quality by customizing the PROMPT variable for your specific project. A tailored prompt helps the model understand your tech stack, architecture patterns, and the types of issues most relevant to your codebase.
Template for project-specific prompts:
```yaml
env:
  PROMPT: |
    Review this code patch for [brief project description].

    TECH STACK:
    - Frontend: [e.g., React with Redux, Vue 3, Angular, Svelte]
    - Backend: [e.g., Node.js/Express, Django, Rails, Go]
    - Database: [e.g., PostgreSQL, MongoDB, Redis]
    - Other: [e.g., GraphQL, WebSockets, message queues, cloud services]

    FOCUS AREAS:
    1. SECURITY: [e.g., auth bypasses, injection attacks, XSS, CSRF, secrets exposure]
    2. [FRAMEWORK] PATTERNS: [e.g., proper state management, hook usage, middleware patterns]
    3. DATA INTEGRITY: [e.g., race conditions, transactions, validation]
    4. ERROR HANDLING: [e.g., proper error boundaries, logging, user feedback]
    5. PERFORMANCE: [e.g., N+1 queries, memory leaks, unnecessary re-renders]

    FLAG ISSUES BY SEVERITY:
    - CRITICAL: Security vulnerabilities, data loss risks
    - HIGH: Logic errors, broken functionality
    - MEDIUM: Code quality, maintainability
    - LOW: Style, minor optimizations

    Be concise. Skip obvious changes. Focus on non-trivial issues.
```

Example for a full-stack web application:
```yaml
env:
  PROMPT: |
    Review this code patch for a task management web application.

    TECH STACK:
    - Frontend: React 18 with TypeScript, Redux Toolkit, RTK Query
    - Backend: Node.js with Express, Prisma ORM
    - Database: PostgreSQL with Redis caching
    - Auth: JWT with refresh tokens, OAuth2 (Google, GitHub)

    FOCUS AREAS:
    1. SECURITY: SQL injection, XSS, CSRF, JWT handling, OAuth state validation, secrets in code
    2. REACT PATTERNS: Hook dependencies, memoization, component composition, TypeScript types
    3. API DESIGN: REST conventions, error responses, input validation, rate limiting
    4. DATA INTEGRITY: Transaction handling, optimistic updates, cache invalidation
    5. PERFORMANCE: Bundle size, lazy loading, query optimization, connection pooling

    FLAG ISSUES BY SEVERITY:
    - CRITICAL: Security vulnerabilities, data loss risks
    - HIGH: Logic errors, broken functionality
    - MEDIUM: Code quality, maintainability
    - LOW: Style, minor optimizations

    Be concise. Skip obvious changes. Focus on non-trivial issues that could cause problems in production.
  IGNORE_PATTERNS: node_modules/**/*,*.md,*.lock,dist/**/*,coverage/**/*
  INCLUDE_PATTERNS: src/**/*,api/**/*,lib/**/*,.github/**/*
```

Tips for effective prompts:
- Be specific about your stack - Mention exact frameworks and versions so the model understands idioms and best practices
- Prioritize security concerns - List the vulnerability types most relevant to your architecture
- Include domain context - Briefly describe what the application does to help identify business logic issues
- Set severity expectations - Help the model distinguish between critical bugs and minor style issues
- Tune file patterns - Use `INCLUDE_PATTERNS` and `IGNORE_PATTERNS` to focus on source code and skip generated files
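To see how the two pattern lists interact, here is an approximate sketch of include/ignore filtering. The assumed semantics are: a file is reviewed if it matches some include pattern (or the include list is empty) and no ignore pattern; the action's real glob engine may differ in edge cases:

```typescript
// Convert a simple glob (supporting `*` and `**`) to a RegExp.
function globToRegExp(glob: string): RegExp {
  // Escape regex metacharacters, then expand glob wildcards in one pass
  // so replacements never rewrite each other's output.
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, "\\$&");
  const pattern = escaped.replace(/\*\*\/|\*\*|\*/g, (m) =>
    m === "**/" ? "(?:.*/)?" // `**/` spans zero or more directories
    : m === "**" ? ".*"      // bare `**` matches anything
    : "[^/]*",               // `*` stays within one path segment
  );
  return new RegExp(`^${pattern}$`);
}

// A file is reviewed when it matches an include pattern and no ignore pattern.
function shouldReview(path: string, includes: string[], ignores: string[]): boolean {
  const included = includes.length === 0 || includes.some((g) => globToRegExp(g).test(path));
  const ignored = ignores.some((g) => globToRegExp(g).test(path));
  return included && !ignored;
}
```

With the example patterns above, `src/app.ts` would be reviewed while `node_modules/lodash/index.js` and `README.md` would be skipped.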
When using GPT-5.1+ models (gpt-5.1, gpt-5.2, gpt-5.2-pro), the code reviewer automatically streams responses from the OpenAI API. This provides real-time progress updates in GitHub Actions logs, so you can see what the model is doing while processing reviews.
Progress indicators:
- 🤖 Starting code review with model details
- ⏳ Review in progress (logged every 2 seconds)
- 📝 Generating review with character count
- ✅ Review completed
This feature is especially useful for large files or complex reviews that may take longer to process. Note that reasoning tokens are not shown in the stream (they are encrypted by OpenAI), but you'll see the model actively generating the review text.
To prevent token expiration on large PRs, the code reviewer automatically posts review comments in batches rather than waiting until all files are processed. This ensures your review work is never lost, even if processing takes longer than expected.
Batch posting triggers:
- Every 20 files reviewed
- Every 30 minutes elapsed
- When all files are processed (final batch)
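The triggers above reduce to a simple flush check. This is an illustrative sketch, with the constants taken from the documented thresholds:

```typescript
const BATCH_FILE_LIMIT = 20;                // flush every 20 files reviewed
const BATCH_TIME_LIMIT_MS = 30 * 60 * 1000; // or every 30 minutes elapsed

// Decide whether the pending review comments should be posted now.
function shouldFlushBatch(
  filesSinceLastPost: number,
  msSinceLastPost: number,
  allFilesProcessed: boolean,
): boolean {
  return (
    allFilesProcessed || // final batch
    filesSinceLastPost >= BATCH_FILE_LIMIT ||
    msSinceLastPost >= BATCH_TIME_LIMIT_MS
  );
}
```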
Example log output:
```
✓ Posted review batch: batch 1 with 18 comments (25m elapsed)
✓ Posted review batch: batch 2 with 15 comments (45m elapsed)
✓ Posted review batch: batch 3 (final) with 8 comments (52m elapsed)
```
GitHub App Token Limitations:
When using actions/create-github-app-token for custom app identity, be aware that installation access tokens expire after 1 hour. For very large PRs with slow models (GPT 5.2 Pro + high reasoning effort), you may see a warning:
```
⚠️ Review has been running for 40 minutes. GitHub App tokens expire after 1 hour.
Consider using fewer files per review or a faster model.
```
Mitigation strategies:
- Use the built-in `GITHUB_TOKEN` - Doesn't expire during the job (up to 24 hours), but comments show as "github-actions[bot]"
- Filter files more aggressively - Use `INCLUDE_PATTERNS` to review only critical files
- Use a faster model/settings - GPT 5.2 (non-Pro) or a lower `REASONING_EFFORT`
- Split large PRs - Review in smaller, focused PRs instead of one massive change
If you have suggestions or want to report a bug, open an issue.
For more, check out the Contributing Guide.
This project is inspired by codereview.gpt.
ISC © 2025 anc95, micahstubbs, and contributors