Barbacane

Your spec is your gateway.



Barbacane is a spec-driven API gateway built in Rust. Point it at an OpenAPI or AsyncAPI spec and it becomes your gateway — routing, validation, authentication, AI traffic, MCP, and all. No proprietary config language, no drift between your spec and your infrastructure.

  • Spec as config — Your OpenAPI 3.x or AsyncAPI 3.x specification is the single source of truth. The compiler turns it into a sealed .bca artifact; no separate gateway DSL to maintain.
  • Fast and predictable — Built on Rust, Tokio, and Hyper. No garbage collector, no latency surprises. Route lookup in ~83 ns, full request validation in ~1.2 µs.
  • Secure by default — Memory-safe runtime, TLS via Rustls (FIPS-ready via aws-lc-rs), sandboxed WASM plugins, secrets resolved at runtime via env://, file://, and similar references — never baked into artifacts.
  • AI gateway built-in — ai-proxy unifies OpenAI / Anthropic / Ollama behind one OpenAI-compatible surface: Chat Completions, the stateless Responses API (POST /v1/responses), and an aggregated model catalog (GET /v1/models). Glob-based routes pick the upstream from the client's model, per-target allow/deny lists gate the catalog, and provider fallback handles 5xx/timeout. Four dedicated middlewares add prompt guarding, response redaction, token-based rate limiting, and per-call cost tracking (ADR-0024, ADR-0030).
  • MCP from your spec — Every operation in your OpenAPI spec is automatically exposed as a Model Context Protocol tool at POST /__barbacane/mcp, behind the same auth/rate-limit/validation chain (ADR-0025).
  • Edge-ready — Stateless data plane instances designed to run close to your users, with a separate control plane handling compilation, artifact distribution, and hot-reload.
  • Extensible — 33 official plugins; write your own in any language that compiles to WebAssembly. Plugins run in a sandbox, so a buggy plugin can't take down the gateway.
  • Observable — Prometheus metrics, structured JSON logging, and distributed tracing with W3C Trace Context and OTLP export. Per-middleware timing comes for free.
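The MCP endpoint speaks standard JSON-RPC 2.0, so tool discovery is a single POST. As a sketch (tools/list is the standard MCP discovery method; a gateway listening on localhost:8080 is assumed):

```shell
# Build a standard MCP JSON-RPC 2.0 discovery request.
# `tools/list` is defined by the MCP spec; the endpoint path is Barbacane's.
payload='{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
echo "$payload"

# With a gateway running locally, POST it (assumes the default listen address):
#   curl -s -X POST http://localhost:8080/__barbacane/mcp \
#     -H 'content-type: application/json' -d "$payload"
```

Each operation in the spec shows up as one tool, already behind the auth, rate-limit, and validation chain described above.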

Quick Start

# Clone and build
git clone https://github.com/barbacane-dev/barbacane.git
cd barbacane
cargo build --release

# Initialize a project (scaffolds barbacane.yaml + specs/api.yaml)
./target/release/barbacane init my-api --fetch-plugins
cd my-api

# Start the dev server (compiles, serves, and hot-reloads on save)
../target/release/barbacane dev

For production, use the explicit compile-and-serve workflow:

barbacane compile -m barbacane.yaml -o api.bca
barbacane serve --artifact api.bca --listen 0.0.0.0:8080

What configuration looks like

Routing, auth, rate limits, AI policy — all declared inline on the operation:

paths:
  /v1/chat/completions:
    post:
      operationId: chatCompletions
      x-barbacane-middlewares:
        - name: jwt-auth
          config:
            issuer: "https://auth.example/"
            audience: ai-gateway
        - name: ai-prompt-guard
          config:
            default_profile: standard
            profiles:
              standard:
                max_messages: 50
                blocked_patterns: ["(?i)ignore previous instructions"]
        - name: ai-token-limit
          config:
            default_profile: standard
            partition_key: "header:x-auth-sub"
            profiles:
              standard: { quota: 100000, window: 60 }
        - name: ai-response-guard
          config:
            default_profile: default
            profiles:
              default:
                redact:
                  - pattern: '\b\d{3}-\d{2}-\d{4}\b'
                    replacement: '[SSN]'
        - name: ai-cost-tracker
          config:
            prices:
              openai/gpt-4o:             { prompt: 0.0025, completion: 0.01 }
              anthropic/claude-opus-4-6: { prompt: 0.015,  completion: 0.075 }
      x-barbacane-dispatch:
        name: ai-proxy
        config:
          # Caller-owned model: the gateway never declares one — clients pick.
          # Glob routes match the client's `model` field; first match wins.
          routes:
            - { pattern: "claude-*", provider: anthropic, api_key: "env://ANTHROPIC_API_KEY" }
            - { pattern: "gpt-*",    provider: openai,    api_key: "env://OPENAI_API_KEY" }
          fallback:
            - { provider: anthropic, api_key: "env://ANTHROPIC_API_KEY" }

The compiler validates the spec against each plugin's JSON schema (vacuum:barbacane) and seals everything into a single .bca artifact — including pinned plugin WASM. The data plane runs the artifact; nothing is fetched at request time.
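The route selection above is deliberately simple to reason about: patterns are tried top to bottom, the first glob that matches the client's model field wins, and fallback catches everything else. A minimal shell sketch of that logic (route_model is a hypothetical helper for illustration, not part of Barbacane):

```shell
# First-match-wins glob routing, mirroring the `routes` list above.
# `route_model` is a hypothetical illustration, not a Barbacane API.
route_model() {
  case "$1" in
    claude-*) echo "anthropic" ;;   # matches the first route
    gpt-*)    echo "openai" ;;      # matches the second route
    *)        echo "fallback" ;;    # no route matched: provider fallback
  esac
}

route_model "claude-sonnet-4"   # -> anthropic
route_model "gpt-4o"            # -> openai
route_model "llama3"            # -> fallback
```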

Documentation

Full documentation is available at docs.barbacane.dev.

Playground

Try Barbacane locally with the full-featured playground — now in its own repo:

git clone https://github.com/barbacane-dev/playground
cd playground
docker-compose up -d

# Gateway: http://localhost:8080
# Grafana: http://localhost:3000 (admin/admin)
# Control Plane: http://localhost:3001

The playground includes a Train Travel API demo with WireMock backend, full observability stack (Prometheus, Loki, Tempo, Grafana), and the control plane UI. See barbacane-dev/playground for details.

Official Plugins

33 production-ready plugins ship with Barbacane. They're built as WASM modules and run in a sandbox.

Dispatchers — where the request goes

| Plugin | Description |
| --- | --- |
| http-upstream | Reverse proxy to HTTP/HTTPS backends |
| mock | Return static responses with {{placeholder}} interpolation |
| lambda | Invoke AWS Lambda functions |
| kafka | Publish messages to Kafka |
| nats | Publish messages to NATS |
| s3 | Proxy requests to AWS S3 / S3-compatible storage with SigV4 signing |
| ai-proxy | OpenAI-compatible LLM gateway — Chat Completions, stateless Responses API, aggregated /v1/models, glob-based routing, per-target allow/deny, fallback |
| ws-upstream | WebSocket transparent proxy with full middleware chain on upgrade |
| fire-and-forget | Forward request to upstream and return immediate static response |
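For the mock dispatcher, {{placeholder}} interpolation is plain template substitution. As a rough illustration of the effect (the placeholder name request_id is hypothetical; this README doesn't document the actual placeholder set):

```shell
# Substitute a {{placeholder}} in a static response template.
# `request_id` is a hypothetical placeholder name for illustration.
template='{"id": "{{request_id}}", "status": "ok"}'
echo "$template" | sed 's/{{request_id}}/req-42/'
# -> {"id": "req-42", "status": "ok"}
```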

Middlewares — what happens on the way

| Concern | Plugins |
| --- | --- |
| Authentication | jwt-auth, apikey-auth, basic-auth, oauth2-auth, oidc-auth |
| Authorization | acl, opa-authz, cel (CEL policy + policy-driven routing) |
| Traffic control | rate-limit (sliding window), request-size-limit, ip-restriction, bot-detection, redirect |
| Caching | cache (response caching) |
| Transformation | request-transformer, response-transformer, cors, correlation-id |
| Observability | observability (SLO + detailed logging), http-log |
| AI gateway | ai-prompt-guard, ai-token-limit, ai-cost-tracker, ai-response-guard |
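ai-cost-tracker turns token counts into spend using the per-model prices from the configuration example earlier. Assuming the prices are per 1K tokens (an assumption; the unit isn't stated in this README), the arithmetic for one gpt-4o call looks like:

```shell
# Hypothetical per-call cost using the openai/gpt-4o prices shown earlier,
# assuming they are quoted per 1K tokens (an assumption, not documented here).
awk 'BEGIN {
  prompt_tokens     = 2000
  completion_tokens = 500
  cost = prompt_tokens / 1000 * 0.0025 + completion_tokens / 1000 * 0.01
  printf "%.4f\n", cost   # -> 0.0100
}'
```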

Performance

Benchmark results on an Apple M4 (MacBook Air, 16 GB):

Routing & Validation

| Operation | Latency |
| --- | --- |
| Route lookup (1000 routes) | ~83 ns |
| Request validation (full) | ~1.2 µs |
| Body validation (JSON) | ~458 ns |
| Router build (500 routes) | ~130 µs |

WASM Plugin Runtime

| Operation | Latency |
| --- | --- |
| Module compilation | ~210 µs |
| Instance creation | ~17 µs |
| Middleware chain (1 plugin) | ~261 µs |
| Middleware chain (3 plugins) | ~941 µs |
| Middleware chain (5 plugins) | ~1.32 ms |
| Memory write (1 KB) | ~14 ns |
| Memory write (100 KB) | ~1.4 µs |

Serialization

| Operation | Latency |
| --- | --- |
| Request (minimal) | ~118 ns |
| Request (full, 1 KB body) | ~921 ns |
| Response (1 KB body) | ~417 ns |

Spec Compilation

| Operation | Latency |
| --- | --- |
| Compile 10 operations | ~550 µs |
| Compile 50 operations | ~2.17 ms |
| Compile 100 operations | ~3.72 ms |

Run your own benchmarks:

cargo bench --workspace

Project Status

Barbacane is under active development. See ROADMAP.md for planned work and CHANGELOG.md for release history.

Contributing

Contributions are welcome! Please read CONTRIBUTING.md for guidelines.

License

Dual-licensed under AGPLv3 and a commercial license. See LICENSING.md for details.

Trademark

Barbacane is a trademark. The software is open source; the brand is not. See TRADEMARKS.md for usage guidelines.