This guide assumes a standard terminal (such as Terminal or iTerm) running a common shell like bash or zsh.
Install Node.js 20 or newer from:
https://nodejs.org/
Then check it:
node --version
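The printed version can also be checked programmatically. A minimal sketch, using a sample string in place of the real output (substitute `version="$(node --version)"` to check your actual install):

```shell
# Parse a Node.js version string and confirm the major version is at least 20.
# Sample string used for illustration; in practice use: version="$(node --version)"
version="v20.11.1"
major="${version#v}"       # drop the leading "v"
major="${major%%.*}"       # keep only the major component
if [ "$major" -ge 20 ]; then
  echo "OK: Node $version is new enough"
else
  echo "Too old: $version (need 20 or newer)"
fi
```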
npm --version

Install openclaude globally with npm:

npm install -g @gitlawb/openclaude

To use an OpenAI model, set the following variables. Replace sk-your-key-here with your real key.
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_MODEL=gpt-4o
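Before launching, you can optionally confirm the current shell actually has the key. This check is not part of openclaude; it just prints the first 8 characters so the full key stays out of your scrollback:

```shell
# Sanity check: is OPENAI_API_KEY set in this shell? Print only its first 8 characters.
if [ -n "$OPENAI_API_KEY" ]; then
  prefix="${OPENAI_API_KEY%"${OPENAI_API_KEY#????????}"}"  # first 8 chars, POSIX-safe
  echo "Key set: ${prefix}..."
else
  echo "OPENAI_API_KEY is empty"
fi
```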
openclaude

DeepSeek exposes an OpenAI-compatible API, so only the base URL and model change. Replace sk-your-key-here with your real key:

export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_API_KEY=sk-your-key-here
export OPENAI_BASE_URL=https://api.deepseek.com/v1
export OPENAI_MODEL=deepseek-chat
openclaude

To run a local model with Ollama, install Ollama first from:
https://ollama.com/download
Then run:
ollama pull llama3.1:8b
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=llama3.1:8b
openclaude

No API key is needed for Ollama local models.
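To confirm the Ollama server is up and see which model ids it exposes, you can query its OpenAI-compatible endpoint with `curl -s http://localhost:11434/v1/models`. The sketch below extracts the model ids from such a response using only grep and sed; a canned sample response stands in for the live one here:

```shell
# Sample of what GET http://localhost:11434/v1/models returns
# (fetch the real response with: curl -s http://localhost:11434/v1/models)
response='{"object":"list","data":[{"id":"llama3.1:8b","object":"model","owned_by":"library"}]}'
# Pull out each model id; OPENAI_MODEL must match one of these exactly
echo "$response" | grep -o '"id":"[^"]*"' | sed 's/"id":"\(.*\)"/\1/'
```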
Install LM Studio first from:
https://lmstudio.ai/
Then in LM Studio:
- Download a model (e.g., Llama 3.1 8B, Mistral 7B)
- Go to the "Developer" tab
- Select your model and enable the server via the toggle
Then run:
export CLAUDE_CODE_USE_OPENAI=1
export OPENAI_BASE_URL=http://localhost:1234/v1
export OPENAI_MODEL=your-model-name
# export OPENAI_API_KEY=lmstudio # optional: some users need a dummy key
openclaude

Replace your-model-name with the model name shown in LM Studio.
No API key is needed for LM Studio local models (but uncomment the OPENAI_API_KEY line if you hit auth errors).
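A quick way to rule out misconfiguration is to list which of the relevant variables are actually set in the current shell. A small sketch, using the variable names from the steps above:

```shell
# Report which openclaude-related variables are set in this shell
for v in CLAUDE_CODE_USE_OPENAI OPENAI_BASE_URL OPENAI_MODEL OPENAI_API_KEY; do
  eval "val=\${$v}"            # indirect lookup of the variable named in $v
  if [ -n "$val" ]; then
    echo "$v=$val"
  else
    echo "$v is not set"
  fi
done
```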
Close the terminal, open a new one, and try again:
openclaude

Check the basics:
- make sure the key is real
- make sure you copied it fully
- make sure Ollama is installed
- make sure Ollama is running
- make sure the model was pulled successfully
- make sure LM Studio is installed
- make sure LM Studio is running
- make sure the server is enabled (toggle on in the "Developer" tab)
- make sure a model is loaded in LM Studio
- make sure the model name matches what you set in OPENAI_MODEL
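If openclaude works until you open a new terminal, the exports probably live only in the old session. A sketch of persisting them in your shell profile; this assumes bash and ~/.bashrc (zsh users would use ~/.zshrc), and uses the OpenAI values from earlier as an example:

```shell
# Append the exports to the shell profile so every new terminal gets them.
# Assumes bash; use "$HOME/.zshrc" instead if your shell is zsh.
profile="$HOME/.bashrc"
{
  echo 'export CLAUDE_CODE_USE_OPENAI=1'
  echo 'export OPENAI_API_KEY=sk-your-key-here'   # replace with your real key
  echo 'export OPENAI_MODEL=gpt-4o'
} >> "$profile"
echo "Added openclaude settings to $profile"
```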
To update openclaude:

npm install -g @gitlawb/openclaude@latest

To uninstall it:

npm uninstall -g @gitlawb/openclaude

Use: