- &lt;1% variance in measurements using CPU simulation - no more flaky benchmarks.
- Differential flamegraphs to pinpoint exactly what got slower, commit by commit.
- PR comments & status checks showing performance impact directly in your workflow.
- Merge protection to block PRs that degrade performance beyond your threshold.
- Multi-language support for Python, Rust, Node.js, Go, C/C++ and more.
- Run locally or in CI - works on your machine and integrates with GitHub Actions, GitLab CI, and more.
- Plug in your existing benchmarks in less than 5 minutes - works with pytest, vitest, criterion, and more.
Install the CodSpeed CLI:

```bash
curl -fsSL https://codspeed.io/install.sh | bash
```

Note
The CodSpeed CLI officially supports Ubuntu 20.04, 22.04, 24.04 and Debian 11, 12. Other Linux distributions may work, but are not officially supported.
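If the script completes, the codspeed binary should be on your PATH. As a quick sanity check, print its version (assuming the CLI follows the usual --version convention):

```bash
# Verify the installation (assumes the standard --version flag)
codspeed --version
```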
First, authenticate to keep your benchmark results linked to your CodSpeed account:
```bash
codspeed auth login
```

The simplest way to get started is to benchmark any executable program directly:

```bash
# Benchmark a single command
codspeed exec -- ./my-binary --arg1 value

# Benchmark a script
codspeed exec -- python my_script.py

# Benchmark with a specific instrument
codspeed exec --mode walltime -- node app.js
```

This approach requires no code changes and works with any executable. CodSpeed will measure performance and report the results for the selected instrument.
For more control and integration with your existing benchmark suite, you can use language-specific harnesses. This allows you to:
- Define multiple benchmarks and keep them versioned in your codebase
- Scope benchmarks to specific functions or modules
- Integrate with existing benchmark suites (pytest, criterion, vitest, etc.)
```bash
# Using the Rust harness with criterion
codspeed run cargo codspeed run

# Using the Python harness with pytest
codspeed run pytest ./tests --codspeed

# Using the Node.js harness with vitest
codspeed run pnpm vitest bench
```

These harnesses provide deeper instrumentation and allow you to write benchmarks using familiar testing frameworks.
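For instance, with the pytest-codspeed harness, a benchmark is just a pytest test marked for measurement. Here is a minimal sketch (the file name and the fibonacci function are illustrative, not part of CodSpeed):

```python
# tests/test_bench.py -- minimal sketch; fibonacci() stands in for the
# code path you actually want to measure
import pytest


def fibonacci(n: int) -> int:
    # Naive recursion, so the benchmark has measurable work to do
    return n if n < 2 else fibonacci(n - 1) + fibonacci(n - 2)


@pytest.mark.benchmark
def test_fibonacci() -> None:
    # pytest-codspeed measures the body of tests marked with
    # @pytest.mark.benchmark when pytest is invoked with --codspeed
    fibonacci(20)
```

Running codspeed run pytest ./tests --codspeed as above then picks this test up as a benchmark.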
CodSpeed provides first-class integrations for multiple languages and frameworks:
| Language | Repository | Supported Frameworks |
|---|---|---|
| Rust | codspeed-rust | divan, criterion.rs, bencher |
| C/C++ | codspeed-cpp | google-benchmark |
| Python | pytest-codspeed | pytest plugin |
| Node.js | codspeed-node | vitest, tinybench, benchmark.js |
| Go | codspeed-go | built-in testing package |
| Zig (community) | codspeed-zig | custom |
Need to bench another language or framework? Open an issue or let us know on Discord!
The CLI also offers a built-in harness that allows you to define benchmarks directly.
You can define multiple codspeed exec benchmark targets and configure options in a codspeed.yml file.
This is useful when you want to benchmark several commands with different configurations.
Create a codspeed.yml file in your project root:
```yaml
# Global options applied to all benchmarks
options:
  warmup-time: "0.2s"
  max-time: 1s

# List of benchmarks to run
benchmarks:
  - name: "Fast operation"
    exec: ./my_binary --mode fast
    options:
      max-rounds: 20
  - name: "Slow operation"
    exec: ./my_binary --mode slow
    options:
      max-time: 200ms
  - name: "Script benchmark"
    exec: python scripts/benchmark.py
```

Then run all benchmarks with:
```bash
codspeed run --mode walltime
```

Tip
For more details on configuration options, see the CLI documentation.
CodSpeed provides multiple instruments to measure different aspects of your code's performance. Choose the one that best fits your use case:
Simulates CPU behavior for <1% variance regardless of system load. Hardware-agnostic measurements with automatic flame graphs.
Best for: CPU-intensive code, CI regression detection, cross-platform comparison
```bash
codspeed exec --mode simulation -- ./my-binary
```

Tracks heap allocations (peak usage, count, allocation size) with eBPF profiling.
Best for: Memory optimization, leak detection, constrained environments
Supported: Rust, C/C++ with libc, jemalloc, mimalloc
```bash
codspeed exec --mode memory -- ./my-binary
```

Measures real-world execution time including I/O, system calls, and multi-threading effects.
Best for: API tests, I/O-heavy workloads, multi-threaded applications
```bash
codspeed exec --mode walltime -- ./my-api-test
```

Warning
Using the walltime mode on traditional VMs/Hosted Runners will lead to inconsistent data. For the best results, we recommend using CodSpeed Hosted Macro Runners, which are fine-tuned for performance measurement consistency.
Check out the Walltime Instrument Documentation for more details.
Tip
For detailed information on each instrument, see the Instruments documentation.
Running CodSpeed in CI allows you to automatically detect performance regressions on every pull request and track performance evolution over time.
We recommend using our official GitHub Action: @CodSpeedHQ/action.
Here is a sample .github/workflows/codspeed.yml workflow for Python:
```yaml
name: CodSpeed Benchmarks

on:
  push:
    branches:
      - "main" # or "master"
  pull_request:
  workflow_dispatch:

permissions:
  contents: read
  id-token: write

jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Set up your language/environment here
      # For Python:
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt

      # Run benchmarks with CodSpeed
      - uses: CodSpeedHQ/action@v4
        with:
          mode: instrumentation
          run: pytest tests/ --codspeed
```

Here is a sample .gitlab-ci.yml configuration for Python:
```yaml
workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

codspeed:
  stage: test
  image: python:3.12
  id_tokens:
    CODSPEED_TOKEN:
      aud: codspeed.io
  before_script:
    - pip install -r requirements.txt
    - curl -fsSL https://codspeed.io/install.sh | bash -s -- --quiet
  script:
    - codspeed run --mode simulation -- pytest tests/ --codspeed
```

Tip
For more CI integration examples and advanced configurations, check out the CI Integration Documentation.
