A headless Rust server that exposes Google AI Studio and Anthropic Console web sessions as standard OpenAI-compatible APIs. Deploy it on a VPS, run it as a systemd daemon, and manage it via CLI or Web UI.
```bash
# Build
cargo build --release -p antigravity-server

# Run
./target/release/antigravity-server
# → http://127.0.0.1:8045
```

```python
import openai

client = openai.OpenAI(
    api_key="sk-antigravity",
    base_url="http://127.0.0.1:8045/v1"
)

response = client.chat.completions.create(
    model="gemini-3-pro-high",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

```bash
# Claude Code CLI
export ANTHROPIC_API_KEY="sk-antigravity"
export ANTHROPIC_BASE_URL="http://127.0.0.1:8045"
claude
```

| Capability | Upstream (Tauri) | This Fork (Axum) |
|---|---|---|
| Deployment | Desktop GUI | Headless VPS daemon |
| Frontend | React + TypeScript | Leptos (Rust → WASM) |
| Rate Limiting | Reactive (retry on 429) | AIMD predictive |
| Account Rotation | Standard retry | Smart exclusion-based rotation |
| Reliability | Basic failover | Circuit breakers + health scores |
| Persistence | Direct file I/O | Actor loop (race-free) |
| Observability | Local UI only | Prometheus metrics + REST API |
| Audio/Video | Not supported | Full multimodal |
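"AIMD predictive" refers to additive-increase/multiplicative-decrease pacing: the request rate climbs gradually while upstream responses stay healthy and is cut sharply on a 429/503, so throttling is anticipated rather than merely retried. A minimal, generic Python sketch of the pattern (illustrative only; the fork's real implementation lives in `antigravity-core` and its parameters are not documented here):

```python
class AimdLimiter:
    """Generic AIMD pacing sketch: grow the allowed request rate additively
    on success, shrink it multiplicatively when the upstream throttles."""

    def __init__(self, initial_rate=1.0, max_rate=50.0, step=0.5, backoff=0.5):
        self.rate = initial_rate   # allowed requests/second (assumed unit)
        self.max_rate = max_rate
        self.step = step           # additive increase per successful call
        self.backoff = backoff     # multiplicative decrease on 429/503

    def on_success(self):
        self.rate = min(self.max_rate, self.rate + self.step)

    def on_throttle(self):
        self.rate = max(0.1, self.rate * self.backoff)

    def delay_seconds(self):
        # Sleep this long between requests to stay under the current rate.
        return 1.0 / self.rate
```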
When running multiple accounts (e.g., 15 Google AI Studio accounts), efficient load distribution becomes critical under high concurrency. This fork implements exclusion-based rotation: each request remembers which accounts have already failed it, so every retry lands on a fresh account.
Request lifecycle:
```
┌───────────────────────────────────────────────────────────────┐
│ Attempt 1: Account A (ultra-tier) → 503 Service Unavailable   │
│            attempted = {A}                                    │
├───────────────────────────────────────────────────────────────┤
│ Attempt 2: Account B (pro-tier)   → 429 Rate Limited          │
│            attempted = {A, B}                                 │
├───────────────────────────────────────────────────────────────┤
│ Attempt 3: Account C (free-tier)  → 200 OK ✓                  │
│            Response returned to client                        │
└───────────────────────────────────────────────────────────────┘
```
Benchmark (50 concurrent requests):
| Metric | Result |
|---|---|
| Accounts utilized | 14 different accounts |
| Rotation events | 362 distributed across pool |
| Success rate | 100% (50/50 HTTP 200) |
| Total time | 18.9 seconds |
The exclusion set is per-request (no cross-request contamination), inherently thread-safe (a request-local HashSet, never shared across tasks), and adds negligible overhead (~6KB worst case for 14 accounts).
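The same logic, condensed into illustrative Python rather than the fork's actual Rust (the account dictionaries, the `send_request` callable, and the status handling are assumptions made for the sketch):

```python
import random

def call_with_rotation(accounts, send_request, max_attempts=3):
    """Retry one client request across the pool, never reusing an
    account that has already failed *this* request."""
    attempted = set()  # per-request exclusion set, dropped when we return
    for _ in range(max_attempts):
        fresh = [a for a in accounts if a["id"] not in attempted]
        if not fresh:
            break  # pool exhausted for this request
        account = random.choice(fresh)  # real selection also weighs tier/health
        attempted.add(account["id"])
        response = send_request(account)
        if response.status_code == 200:
            return response
        # 429/5xx: fall through and retry on an account not yet attempted
    raise RuntimeError("all attempted accounts failed for this request")
```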
```
┌─────────────────┐      ┌─────────────────────┐      ┌────────────────────┐
│  Claude Code    │      │                     │      │   Google Gemini    │
│  OpenAI SDK     │ ───▶ │  Antigravity Proxy  │ ───▶ │   Anthropic API    │
│  Cursor / IDE   │      │  (localhost:8045)   │      │    (via OAuth)     │
└─────────────────┘      └─────────────────────┘      └────────────────────┘
```
Endpoints:
- `POST /v1/chat/completions` → OpenAI-compatible chat
- `POST /v1/messages` → Anthropic-compatible messages
- `GET /v1/models` → Available models
- `POST /v1/images/generations` → Imagen 3 (DALL-E compatible)
- `POST /v1/audio/transcriptions` → Whisper-compatible
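Because the image route follows the DALL-E-compatible shape, it can be driven with the same OpenAI SDK client shown above. A sketch, with the caveat that the model identifier is an assumption (query `GET /v1/models` for the names this server actually exposes):

```python
import openai

client = openai.OpenAI(
    api_key="sk-antigravity",
    base_url="http://127.0.0.1:8045/v1",
)

# "imagen-3" is a placeholder model name, not confirmed by this README.
image = client.images.generate(
    model="imagen-3",
    prompt="A watercolor lighthouse at dawn",
    n=1,
    size="1024x1024",
)
print(image.data[0].url or image.data[0].b64_json)
```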
Resilience API:
- `GET /api/resilience/health` → Account availability
- `GET /api/resilience/circuits` → Circuit breaker states
- `GET /api/resilience/aimd` → Rate limit telemetry
- `GET /api/metrics` → Prometheus metrics
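For quick checks outside of Prometheus, these endpoints can be polled directly. A small sketch using `requests`; the response schemas are not documented here, so it simply prints the raw JSON:

```python
import requests

BASE = "http://127.0.0.1:8045"

# Poll account availability, circuit breaker states, and AIMD telemetry.
for path in ("/api/resilience/health",
             "/api/resilience/circuits",
             "/api/resilience/aimd"):
    resp = requests.get(f"{BASE}{path}", timeout=5)
    resp.raise_for_status()
    print(path, resp.json())
```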
Audio (official OpenAI format):
```python
response = client.chat.completions.create(
    model="gemini-3-pro",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Transcribe this audio"},
            {"type": "input_audio", "input_audio": {"data": audio_b64, "format": "wav"}}
        ]
    }]
)
```

Video (Gemini extension):
```python
response = client.chat.completions.create(
    model="gemini-3-pro",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this video"},
            {"type": "video_url", "video_url": {"url": f"data:video/mp4;base64,{video_b64}"}}
        ]
    }]
)
```

Supported formats: wav, mp3, ogg, flac, m4a (audio) | mp4, mov, webm, avi (video)
```bash
antigravity-server account list          # List accounts with quotas
antigravity-server account add --file x  # Add account from JSON
antigravity-server account refresh all   # Refresh all quotas
antigravity-server warmup --all          # Warmup sessions
antigravity-server status                # Proxy statistics
antigravity-server config show           # Current config
```

Systemd (user service):
```ini
# ~/.config/systemd/user/antigravity.service
[Unit]
Description=Antigravity AI Gateway
After=network.target

[Service]
ExecStart=%h/.cargo/bin/antigravity-server
Restart=always
Environment=RUST_LOG=info
Environment=ANTIGRAVITY_PORT=8045

[Install]
WantedBy=default.target
```

```bash
systemctl --user enable --now antigravity
```

Environment:
| Variable | Default | Description |
|---|---|---|
| `ANTIGRAVITY_PORT` | `8045` | Server port |
| `ANTIGRAVITY_DATA_DIR` | `~/.antigravity_tools` | Data directory |
| `RUST_LOG` | `info` | Log level |
```
crates/
├── antigravity-types/    # Shared types, errors, models
├── antigravity-core/     # Business logic (proxy, AIMD, circuits)
└── antigravity-client/   # Rust SDK (auto-discovery, streaming)
antigravity-server/       # Axum HTTP server + CLI
src-leptos/               # Leptos WASM frontend
vendor/antigravity-upstream/  # Upstream reference (submodule)
```
Based on lbjlaq/Antigravity-Manager.
License: CC BY-NC-SA 4.0 β Non-commercial use only.
Built with Rust