An extension of OpenHands that adds pre-configured settings, automatic code analysis via tldr-code, and multi-provider LLM support. No manual UI setup required.
- Docker Desktop installed and running
- An LLM API key from any supported provider (DeepSeek, OpenAI, Anthropic, etc.)
- Or Ollama for fully local, free usage
```
git clone <repo-url>
cd OpenHands_extended
```
```
cp .env.example .env
```
Edit .env and replace the placeholder:
```
LLM_API_KEY=sk-your-actual-api-key-here
```
This is the only required change. Everything else has working defaults.
The `.env` file is gitignored and never committed. Only `.env.example` (with placeholder values) is tracked.
```
docker compose up --build -d
```
Open http://localhost:3333 and start coding.
The first conversation takes a few minutes while OpenHands builds its sandbox image (~11GB). Subsequent conversations reuse it.
On Windows, double-click build.bat instead. It creates .env on first run, validates the API key, builds, and starts everything.
```
Host machine
├── Docker Desktop
│   └── openhands-llm-app container
│       ├── Web UI at http://localhost:3333
│       ├── Settings auto-generated from .env
│       ├── Hooks: auto-inject code context before each LLM call
│       └── Spawns sandbox containers (one per conversation)
│           ├── Agent runs commands here
│           ├── Code analysis tools (tldr) available
│           └── Project files mounted at /workspace
│
└── Ollama (optional, runs on host)
    └── Powers hook routing + can serve as the main LLM
```
What happens on each message:
- A hook intercepts the message and classifies the intent (keyword matching or LLM-based routing via Ollama)
- For basic queries: `tldr-code` extracts function/class signatures (40+ languages via Pygments)
- For advanced queries: `llm-tldr` runs call graphs, control/data flow, or semantic search (16 languages via tree-sitter, ~100ms with daemon)
- Compressed context is injected into the LLM prompt (significantly fewer tokens than raw code)
- The configured LLM (DeepSeek, GPT-4, Claude, etc.) responds with full code awareness
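The keyword-matching path of that first step can be sketched roughly as below. This is an illustrative sketch, not the actual hook code: the keyword lists and function name are assumptions, and only the three single-word strategies mentioned above are real.

```python
def classify_intent(query: str) -> str:
    """Pick a code-analysis strategy for a user query.

    Returns one of "search", "structure", or "extract",
    mirroring the single-word answers the routing LLM gives.
    """
    q = query.lower()
    # Structural questions: call graphs, control/data flow
    if any(kw in q for kw in ("call graph", "flow", "who calls", "depends on")):
        return "structure"
    # Interface questions: signatures, classes, functions
    if any(kw in q for kw in ("signature", "class", "function", "api")):
        return "extract"
    # Default: semantic search over the codebase
    return "search"
```

This is also what the hooks fall back to when no routing LLM is reachable, so classification never hard-fails.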
Place project code in the project/ folder. It is mounted into the container at /workspace:
OpenHands_extended/
├── project/ ← project code goes here (mounted at /workspace)
│ ├── webapp/
│ ├── api-server/
│ └── ...
To use a different directory, change the volume mount in docker-compose.yml:
```
volumes:
  - /path/to/code:/opt/workspace_base
```
All settings live in .env. After editing, restart with `docker compose up -d` (no rebuild needed).
Settings persistence: The .env file is the source of truth for LLM configuration. Changes made via the OpenHands web UI are preserved across container restarts, but LLM settings (model, API key, base URL) will be overwritten by .env values on each container start. Other UI settings (language, agent, etc.) are preserved.
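That override behavior amounts to a small merge at container start. A hypothetical sketch, assuming a dict-shaped settings store (the key names and function are illustrative, not the actual settings schema):

```python
import os

# LLM settings that .env always wins on; all other UI settings survive restarts
ENV_OVERRIDES = {
    "llm_model": "LLM_MODEL",
    "llm_api_key": "LLM_API_KEY",
    "llm_base_url": "LLM_BASE_URL",
}

def merge_settings(ui_settings: dict) -> dict:
    """Apply .env values over persisted UI settings on container start."""
    merged = dict(ui_settings)  # language, agent, etc. are kept as-is
    for key, env_var in ENV_OVERRIDES.items():
        value = os.environ.get(env_var)
        if value:  # only override when the variable is actually set
            merged[key] = value
    return merged
```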
The LLM that powers the coding agent:
```
# DeepSeek (default)
LLM_MODEL=deepseek/deepseek-reasoner
LLM_API_KEY=sk-your-deepseek-key
LLM_BASE_URL=https://api.deepseek.com

# OpenAI
LLM_MODEL=gpt-4o
LLM_API_KEY=sk-your-openai-key
LLM_BASE_URL=https://api.openai.com

# Anthropic
LLM_MODEL=anthropic/claude-sonnet-4-20250514
LLM_API_KEY=sk-ant-your-key
LLM_BASE_URL=https://api.anthropic.com

# Local Ollama (free, no API key needed)
LLM_MODEL=ollama/qwen3.5
```
A small, fast LLM that classifies queries to select the right code analysis strategy. It only returns a single word (search/structure/extract), so even a tiny model works:
```
# Ollama (free, default)
HOOKS_MODEL=ollama/qwen3.5:27b
OLLAMA_HOST=http://host.docker.internal:11434

# Or reuse DeepSeek
# HOOKS_MODEL=deepseek/deepseek-chat

# Or OpenAI
# HOOKS_MODEL=gpt-4o-mini

# Or Groq (fast, free tier)
# HOOKS_MODEL=groq/llama-3.1-8b-instant
```
If the routing LLM is unreachable, hooks fall back to keyword matching. Everything still works.
You can use the same LLM provider for both the main agent and hook routing. This simplifies configuration and can reuse API keys:
```
# DeepSeek for both
LLM_MODEL=deepseek/deepseek-reasoner
LLM_API_KEY=sk-your-deepseek-key
LLM_BASE_URL=https://api.deepseek.com
HOOKS_MODEL=deepseek/deepseek-chat
HOOKS_BASE_URL=https://api.deepseek.com
HOOKS_API_KEY=sk-your-deepseek-key  # Same key

# Ollama for both
LLM_MODEL=ollama/qwen3.5:27b
LLM_API_KEY=ollama
LLM_BASE_URL=http://host.docker.internal:11434/v1
HOOKS_MODEL=ollama/qwen3.5:27b
OLLAMA_HOST=http://host.docker.internal:11434
```
The validation script (run at startup) checks connectivity to both LLMs.
```
TLDR_MODE=both      # hook | tool | both (default)
TLDR_ROUTING=auto   # keyword | llm | auto (default)
EXTERNAL_PORT=3333
```
At startup, the container runs a validation script that checks connectivity to both LLMs. Results are logged to stdout. If the main LLM is unreachable, OpenHands will prompt for an API key in the UI. Hook LLM failures cause fallback to keyword routing.
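The decision logic (main LLM failure prompts in the UI, hook LLM failure degrades to keyword routing) boils down to something like this. The function and log wording are illustrative, not the actual validate_llm.py output:

```python
def validation_outcome(main_llm_ok: bool, hooks_llm_ok: bool) -> list:
    """Summarize startup connectivity checks as log lines."""
    lines = []
    if main_llm_ok:
        lines.append("main LLM: reachable")
    else:
        # OpenHands asks for an API key in the web UI instead of failing hard
        lines.append("main LLM: unreachable - UI will prompt for an API key")
    if hooks_llm_ok:
        lines.append("hooks LLM: reachable")
    else:
        # Hooks degrade gracefully to keyword-based routing
        lines.append("hooks LLM: unreachable - falling back to keyword routing")
    return lines
```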
You can also run validation manually:
```
docker exec openhands-llm-app python3 /app/tools/validate_llm.py
```

| Key | File | Purpose |
|---|---|---|
| `LLM_API_KEY` | `.env` | Main agent LLM (required) |
| `HOOKS_API_KEY` | `.env` | Query classification (optional, for cloud routing) |
Keys go in .env only. Never place keys in Dockerfile, docker-compose.yml, or any tracked file.
```
docker compose up --build -d    # Build and start
docker compose up -d            # Restart after .env changes (no rebuild)
docker compose down             # Stop
docker compose logs -f          # Follow logs
docker logs openhands-llm-app   # View container logs
```

```
OpenHands_extended/
├── project/            ← project code (mounted at /workspace in container)
├── .env                ← configuration (gitignored)
├── .env.example        ← configuration template
├── Dockerfile          ← container build
├── docker-compose.yml  ← container orchestration
├── build.bat           ← Windows one-click setup
├── entrypoint.sh       ← startup: settings generation, hook deployment, server start
├── hooks/              ← shell hooks (run on user messages and tool use)
├── hooks.json          ← hook event wiring
├── local_llm/          ← Python code analysis + hook classifier
├── microagents/        ← agent instruction files (injected into conversations)
├── skills/             ← agent skills
└── tools/              ← tldr CLI wrapper for code signature extraction
```
"Ollama not reachable" — Ollama is optional. If using Ollama for hook routing, ensure it is running (ollama serve). The main LLM works without Ollama.
"Port 3333 in use" — Set EXTERNAL_PORT=3334 in .env and restart.
First conversation is slow — Normal. OpenHands builds a sandbox runtime image on first use.
Agent can't find code analysis tools — The tldr tool is installed into the workspace on container startup. Start a new conversation after the container has fully initialized.
Hook routing always returns "search" — Verify HOOKS_MODEL matches an available model. For Ollama, use the exact model:tag from ollama list (e.g., ollama/qwen3.5:27b).
This project builds on the following open-source projects:
- OpenHands — An open platform for AI software developers as generalist agents. Created by All Hands AI. MIT License.
- llm-tldr — Advanced code analysis for LLM agents: call graphs, control/data flow, semantic search, daemon. Tree-sitter based, 16 languages. AGPL-3.0 License.
- tldr-code — Extracts function signatures from codebases for LLM context reduction. Pygments based, 40+ languages. Created by Chris Simoes. MIT License.
- Pygments — Generic syntax highlighter for 500+ languages. BSD 2-Clause License.
MIT