This guide is for users who want source builds, Bun workflows, provider profiles, diagnostics, or more control over runtime behavior.
```shell
npm install -g @gitlawb/openclaude
```

Use Bun 1.3.11 or newer for source builds on Windows. Older Bun versions can fail during `bun run build`.
To build from source and link the CLI globally:

```shell
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
bun install
bun run build
npm link
```

To run from a clone in dev mode instead:

```shell
git clone https://node.gitlawb.com/z6MkqDnb7Siv3Cwj7pGJq4T5EsUisECqR8KpnDLwcaZq5TPr/openclaude.git
cd openclaude
bun install
bun run dev
```

To use OpenAI directly:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_MODEL=gpt-4o
```

`codexplan` maps to GPT-5.4 on the Codex backend with high reasoning. `codexspark` maps to GPT-5.3 Codex Spark for faster loops.
If you already use the Codex CLI, OpenClaude reads `~/.codex/auth.json` automatically. You can also point it elsewhere with `CODEX_AUTH_JSON_PATH` or override the token directly with `CODEX_API_KEY`.
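That lookup order can be sketched in plain shell. This is only an illustration of the documented precedence (assuming the explicit token wins over auth files); it is not OpenClaude's actual resolution code:

```shell
#!/bin/sh
# Sketch of the documented credential precedence (not OpenClaude's real code):
# an explicit CODEX_API_KEY overrides any auth file, and CODEX_AUTH_JSON_PATH
# overrides the default ~/.codex/auth.json location.
resolve_codex_auth() {
  if [ -n "$CODEX_API_KEY" ]; then
    echo "token:$CODEX_API_KEY"
  elif [ -n "$CODEX_AUTH_JSON_PATH" ] && [ -f "$CODEX_AUTH_JSON_PATH" ]; then
    echo "file:$CODEX_AUTH_JSON_PATH"
  elif [ -f "$HOME/.codex/auth.json" ]; then
    echo "file:$HOME/.codex/auth.json"
  else
    echo "none"
  fi
}

CODEX_API_KEY=sk-example resolve_codex_auth   # -> token:sk-example
```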
To launch against Codex:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_MODEL=codexplan
# optional if you do not already have ~/.codex/auth.json
export CODEX_API_KEY=...
openclaude
```

For DeepSeek:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-...
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
```

For OpenRouter:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-or-...
export OPENAI_BASE_URL=https://openrouter.ai/api/v1
export OPENAI_MODEL=google/gemini-2.0-flash-001
```

OpenRouter model availability changes over time. If a model stops working, try another current OpenRouter model before assuming the integration is broken.
For Ollama:

```shell
ollama pull llama3.3:70b

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.3:70b
```

For Atomic Chat:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://127.0.0.1:1337/v1
export OPENAI_MODEL=your-model-name
```

No API key is needed for Atomic Chat local models. Or use the profile launcher:

```shell
bun run dev:atomic-chat
```

Download Atomic Chat from atomic.chat. The app must be running with a model loaded before launching.
For any other local OpenAI-compatible server:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
```

For Together AI:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=...
export OPENAI_BASE_URL=https://api.together.xyz/v1
export OPENAI_MODEL=meta-llama/Llama-3.3-70B-Instruct-Turbo
```

For Groq:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=gsk_...
export OPENAI_BASE_URL=https://api.groq.com/openai/v1
export OPENAI_MODEL=llama-3.3-70b-versatile
```

For Mistral:

```shell
export CLAUDE_CODE_USE_MISTRAL=1
export MISTRAL_API_KEY=...
export MISTRAL_MODEL=mistral-large-latest
```

For Azure OpenAI:

```shell
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=your-azure-key
export OPENAI_BASE_URL=https://your-resource.openai.azure.com/openai/deployments/your-deployment/v1
export OPENAI_MODEL=gpt-4o
```

OpenClaude recognizes these environment variables:

| Variable | Required | Description |
|---|---|---|
| `CLAUDE_CODE_USE_OPENAI` | Yes | Set to `1` to enable the OpenAI provider |
| `OPENAI_API_KEY` | Yes* | Your API key (* not needed for local models like Ollama or Atomic Chat) |
| `OPENAI_MODEL` | Yes | Model name such as `gpt-4o`, `deepseek-chat`, or `llama3.3:70b` |
| `OPENAI_BASE_URL` | No | API endpoint, defaulting to `https://api.openai.com/v1` |
| `CODEX_API_KEY` | Codex only | Codex or ChatGPT access token override |
| `CODEX_AUTH_JSON_PATH` | Codex only | Path to a Codex CLI `auth.json` file |
| `CODEX_HOME` | Codex only | Alternative Codex home directory |
| `OPENCLAUDE_DISABLE_CO_AUTHORED_BY` | No | Suppress the default `Co-Authored-By` trailer in generated git commits |
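For example, to turn off the commit trailer. This assumes the variable follows the same `1`-to-enable convention as `CLAUDE_CODE_USE_OPENAI`; check your version's behavior:

```shell
# assumption: a value of 1 suppresses the Co-Authored-By trailer, matching
# the 1-to-enable convention of the other flags in this guide
export OPENCLAUDE_DISABLE_CO_AUTHORED_BY=1
```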
You can also use `ANTHROPIC_MODEL` to override the model name; `OPENAI_MODEL` takes priority when both are set.
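A quick illustration of that precedence, with hypothetical model names:

```shell
# OPENAI_MODEL wins when both variables are set
export ANTHROPIC_MODEL=some-fallback-model   # ignored while OPENAI_MODEL is set
export OPENAI_MODEL=gpt-4o                   # this is the model actually used
```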
Use these commands to validate your setup and catch mistakes early:
```shell
# quick startup sanity check
bun run smoke

# validate provider env + reachability
bun run doctor:runtime

# print machine-readable runtime diagnostics
bun run doctor:runtime:json

# persist a diagnostics report to reports/doctor-runtime.json
bun run doctor:report

# full local hardening check (smoke + runtime doctor)
bun run hardening:check

# strict hardening (includes project-wide typecheck)
bun run hardening:strict
```

Notes:

- `doctor:runtime` fails fast if `CLAUDE_CODE_USE_OPENAI=1` is set with a placeholder key or a missing key for non-local providers.
- Local providers such as `http://localhost:11434/v1`, `http://10.0.0.1:11434/v1`, and `http://127.0.0.1:1337/v1` can run without `OPENAI_API_KEY`.
- Codex profiles validate `CODEX_API_KEY` or the Codex CLI auth file and probe `POST /responses` instead of `GET /models`.
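The local-provider rule above can be restated as a small shell check. This is a hypothetical helper that mirrors the documented behavior, not the real `doctor:runtime` implementation:

```shell
#!/bin/sh
# Hypothetical restatement of the documented rule: hosted endpoints need a
# real API key, while known local endpoints may run without one.
needs_api_key() {
  case "${1:-https://api.openai.com/v1}" in
    http://localhost:*|http://127.0.0.1:*|http://10.*) return 1 ;;  # local: key optional
    *) return 0 ;;                                                  # hosted: key required
  esac
}

if needs_api_key "http://localhost:11434/v1"; then
  echo "key required"
else
  echo "key optional"
fi
# -> key optional
```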
Use profile launchers to avoid repeated environment setup:
```shell
# one-time profile bootstrap (prefer viable local Ollama, otherwise OpenAI)
bun run profile:init

# preview the best provider/model for your goal
bun run profile:recommend -- --goal coding --benchmark

# auto-apply the best available local/openai provider/model for your goal
bun run profile:auto -- --goal latency

# codex bootstrap (defaults to codexplan and ~/.codex/auth.json)
bun run profile:codex

# openai bootstrap with explicit key
bun run profile:init -- --provider openai --api-key sk-...

# ollama bootstrap with custom model
bun run profile:init -- --provider ollama --model llama3.1:8b

# ollama bootstrap with intelligent model auto-selection
bun run profile:init -- --provider ollama --goal coding

# atomic-chat bootstrap (auto-detects running model)
bun run profile:init -- --provider atomic-chat

# codex bootstrap with a fast model alias
bun run profile:init -- --provider codex --model codexspark

# launch using persisted profile (.openclaude-profile.json)
bun run dev:profile

# codex profile (uses CODEX_API_KEY or ~/.codex/auth.json)
bun run dev:codex

# OpenAI profile (requires OPENAI_API_KEY in your shell)
bun run dev:openai

# Ollama profile (defaults: localhost:11434, llama3.1:8b)
bun run dev:ollama

# Atomic Chat profile (Apple Silicon local LLMs at 127.0.0.1:1337)
bun run dev:atomic-chat
```

`profile:recommend` ranks installed Ollama models for latency, balanced, or coding goals, and `profile:auto` can persist the recommendation directly.
- If no profile exists yet, `dev:profile` uses the same goal-aware defaults when picking the initial model.
- Use `--provider ollama` when you want a local-only path. Auto mode falls back to OpenAI when no viable local chat model is installed.
- Use `--provider atomic-chat` when you want Atomic Chat as the local Apple Silicon provider.
- Use `profile:codex` or `--provider codex` when you want the ChatGPT Codex backend.
- `dev:openai`, `dev:ollama`, `dev:atomic-chat`, and `dev:codex` run `doctor:runtime` first and only launch the app if the checks pass.
- For `dev:ollama`, make sure Ollama is running locally before launch.
- For `dev:atomic-chat`, make sure Atomic Chat is running with a model loaded before launch.