⚡ Styx

The MCP-Native AI Gateway

Route requests to any AI provider through one universal endpoint.
Self-hosted. Open source. BYOK.


What is Styx?

Styx is an open-source AI gateway that sits between your app and AI providers. Send requests to OpenAI, Anthropic, Google, or Mistral — all through one OpenAI-compatible endpoint. Bring your own API keys, self-host on your infra, and get full visibility into every request.

The first AI gateway with native MCP (Model Context Protocol) support.

from openai import OpenAI

client = OpenAI(
    api_key="your-styx-api-key",
    base_url="http://localhost:8080/v1",  # ← Only change needed
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello from Styx"}],
)

Features

  • 🔌 MCP Native — Built-in MCP server. Connect Claude Code or Cursor in one command
  • 🔀 Universal Routing — One OpenAI-compatible endpoint for all providers
  • 🤖 styx:auto — Intelligent model routing: use "model": "styx:auto" and let Styx pick the right model based on request complexity
  • 🔑 BYOK — Bring your own API keys, encrypted at rest (Fernet/AES)
  • 📊 Dashboard — Track requests, costs, latency per project and model
  • 🔄 Fallbacks — Auto-failover between providers with circuit breakers
  • 💰 Billing — Built-in subscription and credit-based billing (Stripe)
  • 🧠 Semantic Cache — Similar questions return cached responses instantly
  • Smart Routing — ML classifier routes to the optimal model for each request
  • 🐳 Self-Hosted — Docker Compose, 5-minute setup
  • 🔒 Secure — HMAC key hashing, Fernet encryption, rate limiting, TLS
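For instance, the styx:auto feature above changes only the `model` field of an otherwise ordinary chat-completions request. A minimal sketch of the request body (payload shape per the standard OpenAI chat-completions format; endpoint and key handling as in the Quick Start):

```python
import json

# "styx:auto" is a special model name: instead of targeting one provider,
# it asks the gateway to score the request's complexity and pick a model.
body = {
    "model": "styx:auto",
    "messages": [{"role": "user", "content": "Hello from Styx"}],
}

# This JSON is what gets POSTed to the /v1/chat/completions endpoint.
print(json.dumps(body))
```

Any OpenAI-compatible client can send this; since OpenAI-compatible responses carry a `model` field, you can inspect which model the gateway actually selected.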

Prerequisites

  • Docker Engine 24+ and Docker Compose v2
  • At least one AI provider API key (OpenAI, Anthropic, Google, or Mistral)
  • Supabase account (free tier) — only for production mode (not needed for dev mode)

Quick Start

Option A: Setup Wizard (recommended)

git clone https://github.com/timmx7/styx.git
cd styx
./setup.sh                 # interactive wizard, generates .env
docker compose up -d --build   # first build: ~15-20 min; subsequent starts: ~60s

The wizard lets you choose between:

  • Dev mode — No Supabase needed, no authentication, instant start
  • Production mode — Full Supabase auth, account creation, API keys
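A dev-mode `.env` ends up minimal. As a sketch (SKIP_AUTH and OPENAI_API_KEY are the variables named in the Manual Setup section; any other variables your `.env.example` defines stay as they are):

```shell
# Dev mode: no Supabase, no authentication
SKIP_AUTH=true

# At least one provider key (OpenAI, Anthropic, Google, or Mistral)
OPENAI_API_KEY=sk-...
```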

Option B: Manual Setup

git clone https://github.com/timmx7/styx.git
cd styx
cp .env.example .env

Edit .env with:

  1. SKIP_AUTH=true for dev mode, or your Supabase configuration for production
  2. At least one AI provider key (e.g., OPENAI_API_KEY)

Then start the stack:

docker compose up -d --build   # first build: ~15-20 min; subsequent starts: ~60s

Access Points

Connect Claude Code

claude mcp add styx -- npx styx-mcp

Connect Cursor

Add to .cursor/mcp.json:

{
  "styx": {
    "command": "npx",
    "args": ["styx-mcp"],
    "env": { "STYX_API_KEY": "your-key" }
  }
}

Send your first request

curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer YOUR_STYX_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello from Styx"}]
  }'

Dev mode: skip the Authorization header — requests are accepted without an API key.

Use with any OpenAI SDK

// Node.js / TypeScript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: "your-styx-key",
  baseURL: "http://localhost:8080/v1",
});

# Python
from openai import OpenAI

client = OpenAI(
    api_key="your-styx-key",
    base_url="http://localhost:8080/v1",
)

Supported Providers

| Provider | Models |
| --- | --- |
| OpenAI | gpt-4.1, gpt-4.1-mini, gpt-4o, gpt-4o-mini, o3, o4-mini |
| Anthropic | claude-sonnet-4, claude-3-5-sonnet, claude-3-5-haiku, claude-3-haiku |
| Google | gemini-2.5-pro, gemini-2.5-flash, gemini-2.5-flash-lite, gemini-2.0-flash |
| Mistral | mistral-large, mistral-medium-3, mistral-small, codestral |
| Azure OpenAI | Same as OpenAI models, via Azure deployments |

Auto-routing: Any model matching the provider prefixes above (gpt-*, claude-*, gemini-*, mistral-*, o3*, o4*) is routed automatically — even models released after your last config update.
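The prefix rule reads roughly like this (an illustration only, not the router's actual Go code; the prefix list mirrors the paragraph above):

```python
# Map model-name prefixes to providers, as described above. Because matching
# is by prefix, a hypothetical future "gpt-5-nano" or "claude-opus-5" would
# still route correctly without any config change.
PREFIX_TO_PROVIDER = [
    ("gpt-", "openai"),
    ("o3", "openai"),
    ("o4", "openai"),
    ("claude-", "anthropic"),
    ("gemini-", "google"),
    ("mistral-", "mistral"),
]

def route(model: str) -> str:
    """Return the provider responsible for a model name."""
    for prefix, provider in PREFIX_TO_PROVIDER:
        if model.startswith(prefix):
            return provider
    raise ValueError(f"no provider prefix matches {model!r}")

print(route("gpt-4o"))          # openai
print(route("gemini-2.5-pro"))  # google
```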

Architecture

┌──────────┐     ┌──────────────┐     ┌───────────────┐
│  Client  │────▶│  Go Router   │────▶│  AI Provider  │
│  (app)   │◀────│  (port 8080) │◀────│  (OpenAI...)  │
└──────────┘     └──────┬───────┘     └───────────────┘
                        │
                 ┌──────▼───────┐
                 │  Python API  │
                 │  (port 8000) │
                 │ Auth/Billing │
                 └──────┬───────┘
                        │
           ┌────────────┼────────────┐
           │            │            │
     ┌─────▼────┐ ┌─────▼────┐ ┌─────▼────┐
     │ Postgres │ │  Redis   │ │ Next.js  │
     │   (DB)   │ │ (cache)  │ │   (UI)   │
     └──────────┘ └──────────┘ └──────────┘

Request flow:

Client request
    │
    ▼
Go Router (:8080) ──▶ Cache check ──▶ HIT? Return instantly
    │                                   MISS? Continue...
    ▼
Budget check ──▶ OVER LIMIT? Block + alert
    │              OK? Continue...
    ▼
Route to best provider (OpenAI / Anthropic / Google / Mistral)
    │
    ▼
Provider error? ──▶ Automatic fallback (circuit breaker)
    │
    ▼
Response to client + log to ClickHouse + update Redis counters
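In Python-flavored pseudocode the flow above looks roughly like this (a sketch only: `cache`, `budget`, and `providers` are illustrative stand-ins, and the real fast path is the Go router):

```python
def handle(request, cache, budget, providers):
    """Sketch of the request flow: cache -> budget -> route -> fallback."""
    # 1. Semantic cache: a sufficiently similar earlier request is a HIT.
    cached = cache.get(request)
    if cached is not None:
        return cached

    # 2. Budget check: over the limit means block (and alert).
    if not budget.allows(request):
        raise RuntimeError("budget exceeded")

    # 3. Try providers in order; a provider error trips the circuit
    #    breaker and falls over to the next provider.
    for provider in providers:
        try:
            response = provider.complete(request)
            break
        except Exception:
            continue
    else:
        raise RuntimeError("all providers failed")

    # 4. Store for future hits, then return (logging and usage counters
    #    omitted in this sketch).
    cache.put(request, response)
    return response
```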

Project Structure

styx/
├── router/          # Go reverse proxy — the fast path (<10ms overhead)
├── backend/         # Python FastAPI — auth, billing, business logic
├── dashboard/       # Next.js + Tailwind — web dashboard
├── classifier/      # ML request classifier (complexity scoring)
├── cache-service/   # Semantic cache (Qdrant + sentence-transformers)
├── sdk/             # Python & Node.js client SDKs
├── packages/        # MCP server, gateway CLI
├── infra/           # Docker, Helm, K8s, k6 load tests, Prometheus
└── docker-compose.yml

Comparison

| Feature | Styx | OpenRouter | LiteLLM | Portkey |
| --- | --- | --- | --- | --- |
| MCP Native | ✅ | | | |
| Self-Hosted | ✅ | | | |
| Open Source | ✅ Apache 2.0 | | | |
| Dashboard | ✅ Full | | | |
| BYOK | ✅ Encrypted | | | |
| Semantic Cache | ✅ | | | |
| Smart Routing | ✅ ML | | | |
| Circuit Breaker | ✅ | | | |
| One-Command Install | ✅ | | | |

Claude Code Plugin

Install the Styx plugin directly in Claude Code:

/plugin install styx@claude-plugin-directory

Or browse: /plugin > Discover > styx

This gives you /styx:setup, /styx:status, and the @styx-ops agent for managing your gateway from Claude Code.

MCP Connector

Styx includes a native MCP server. Connect it to Claude, Cursor, or any MCP-compatible client.

Local (stdio — requires npx)

Claude Code:

claude mcp add styx -- npx styx-mcp

Cursor: Add to .cursor/mcp.json:

{
  "styx": {
    "command": "npx",
    "args": ["styx-mcp"],
    "env": { "STYX_API_KEY": "your-key" }
  }
}

Remote MCP Server

Connect to a hosted Styx instance without local installation:

Claude.ai / Claude Desktop: Settings > Connectors > Add custom connector > URL: https://mcp.styxhq.com/mcp

Claude Code:

claude mcp add --transport http styx https://mcp.styxhq.com/mcp

See docs/DEPLOY_MCP_REMOTE.md for self-hosting the remote MCP server.

Examples

Check gateway health

Prompt: "Check if my AI gateway is healthy and which providers are connected" → Styx checks all provider connections, returns status and latency per provider, flags any issues.

Analyze spending

Prompt: "How much have I spent on AI APIs this month?" → Styx aggregates usage across providers, returns cost breakdown by model, shows cache savings.

Create a scoped API key

Prompt: "Create an API key for the marketing team limited to 1000 requests/day" → Styx generates a rate-limited key, returns the key and its configuration.

Contributing

We welcome contributions! Please see CONTRIBUTING.md.

License

Apache 2.0 — see LICENSE for details.

About

The MCP-Native AI Gateway — Route requests to any AI provider through one universal endpoint. Intelligent auto-routing, 65+ models, self-hosted. Open source.
