Extracted from the voideditor/void repo (originally at `void/agent-cli`) and published as a standalone tool. License: this repo carries forward the upstream Apache-2.0 license and third-party notices.
Standalone command-line tool that runs the same agent loop as Void's sidebar chat: send message → (optional tool call) → run tool → append result → send again, until the model responds without a tool call.
Use it from the terminal without opening Void/VS Code.
```sh
cd agent-cli
npm install
```

Set your API key (required):

```sh
export OPENAI_API_KEY=sk-...
```

For a local OpenAI-compatible server (e.g. Ollama):

```sh
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_API_KEY=noop
export OPENAI_MODEL=llama3.2
```

Single message (argument):

```sh
npm run dev -- "list all files in src"
# or after build:
node dist/cli.js "list all files in src"
```

Message from stdin:

```sh
echo "what's in package.json?" | npm run dev
```

Options:
- `--cwd <dir>` — Workspace root (default: current directory). Tools (`read_file`, `run_command`, etc.) run relative to this.
- `--model <name>` — Model name (default: `OPENAI_MODEL` or `gpt-4o-mini`).
- `--base-url <url>` — OpenAI-compatible API base URL (e.g. `http://localhost:11434/v1` for Ollama).
- `--system <text>` — Extra system prompt.
- `--system-file <path>` — Extra system prompt from a file (e.g. `.voidrules`).
- `--help`, `-h` — Show help.
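The two system-prompt flags presumably just get appended to the built-in system message. A minimal sketch of that merge, under that assumption; `buildSystemMessage` and the option field names are illustrative, not the CLI's actual internals:

```ts
// Sketch only (assumed behavior): fold --system / --system-file into the
// built-in system message. Names are illustrative, not the CLI's code.
import { readFileSync } from "node:fs";

function buildSystemMessage(
  base: string, // built-in message: role, workspace path, rules
  opts: { system?: string; systemFile?: string },
): string {
  const extras: string[] = [];
  if (opts.system) extras.push(opts.system);
  if (opts.systemFile) extras.push(readFileSync(opts.systemFile, "utf8"));
  return [base, ...extras].join("\n\n");
}
```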
Examples:
```sh
# OpenAI
OPENAI_API_KEY=sk-... npm run dev -- "summarize README.md"

# Ollama (local)
OPENAI_BASE_URL=http://localhost:11434/v1 OPENAI_API_KEY=noop npm run dev -- --model llama3.2 "list files in ."

# With workspace root and custom system prompt
npm run dev -- --cwd /path/to/project --system-file .voidrules "fix the bug in src/index.ts"
```

How it works:

- Builds a system message (role, workspace path, rules) and an optional extra prompt.
- Sends the user message (and any prior assistant + tool messages) to the OpenAI-compatible API with tools: `read_file`, `ls_dir`, `get_dir_tree`, `run_command`, `rewrite_file`.
- If the model returns a tool call, runs the tool (Node `fs`/`child_process`) and appends the result to the thread.
- Sends again with the updated thread; repeats until the model responds with no tool call.
- Prints the final assistant text to stdout; tool calls/results are logged to stderr.
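Concretely, the loop can be pictured like this. A minimal sketch using the official `openai` npm client; `toolDefs` and `runTool` are illustrative stand-ins (only `read_file` is spelled out), not the CLI's actual code:

```ts
// Minimal sketch of the agent loop (assumptions: the `openai` npm package,
// a `runTool` dispatcher for the five tools). Not the CLI's actual code.
import OpenAI from "openai";
import { readFile } from "node:fs/promises";

const toolDefs: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "read_file",
      description: "Read a file relative to the workspace root",
      parameters: {
        type: "object",
        properties: { path: { type: "string" } },
        required: ["path"],
      },
    },
  },
  // ...ls_dir, get_dir_tree, run_command, rewrite_file
];

async function runTool(name: string, args: any): Promise<string> {
  switch (name) {
    case "read_file":
      return await readFile(args.path, "utf8");
    default:
      return `unknown tool: ${name}`; // other tools elided in this sketch
  }
}

async function runAgent(client: OpenAI, model: string, userMessage: string): Promise<string> {
  const messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[] = [
    // In the real CLI the system message comes from prompts.chat_systemMessage.
    { role: "system", content: "You are a coding agent..." },
    { role: "user", content: userMessage },
  ];
  for (;;) {
    const res = await client.chat.completions.create({ model, messages, tools: toolDefs });
    const msg = res.choices[0].message;
    messages.push(msg); // keep the assistant turn, including any tool_calls
    if (!msg.tool_calls?.length) return msg.content ?? ""; // no tool call -> done
    for (const call of msg.tool_calls) {
      if (call.type !== "function") continue;
      console.error(`tool: ${call.function.name}`); // tool activity goes to stderr
      const result = await runTool(call.function.name, JSON.parse(call.function.arguments));
      messages.push({ role: "tool", tool_call_id: call.id, content: result });
    }
  }
}
```

Appending each result as a `role: "tool"` message keyed by `tool_call_id` is what lets the model see the output on the next send and decide whether to call another tool or answer.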
To build:

```sh
npm run build
```

Then run:

```sh
node dist/cli.js "your message"
```

Or link the bin:

```sh
npm link
void-agent "your message"
```

This folder is a standalone extraction of:
- Agent loop — `chatThreadService._runChatAgent` (send → tool call → run tool → send again).
- Message conversion — `convertToLLMMessageService` + `prepareMessages` (thread → OpenAI message shape).
- System prompt — `prompts.chat_systemMessage` + tool definitions.
- Send — OpenAI chat completions (streaming) + `tool_calls` parsing (from `sendLLMMessage.impl`).
- Tools — Node-based implementations of `read_file`, `ls_dir`, `get_dir_tree`, `run_command`, `rewrite_file` (from Void's builtin tools).
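To give a flavor of those Node-based tools, here is a hedged sketch of a `run_command`-style tool built on `node:child_process`; the timeout and output shape are assumptions, not the extracted implementation:

```ts
// Sketch of a run_command-style tool (assumed shape, not the extracted
// code): run a shell command under the workspace root and return its
// combined output as text for the model to read.
import { exec } from "node:child_process";
import { promisify } from "node:util";

const execAsync = promisify(exec);

async function runCommand(command: string, cwd: string): Promise<string> {
  try {
    const { stdout, stderr } = await execAsync(command, { cwd, timeout: 30_000 });
    return stdout + (stderr ? `\n[stderr]\n${stderr}` : "");
  } catch (err: any) {
    // Non-zero exit codes land here; exec attaches stdout/stderr to the error.
    return `command failed (exit ${err.code ?? "?"}):\n${err.stdout ?? ""}${err.stderr ?? ""}`;
  }
}
```

In this sketch, failures come back as text rather than exceptions, so the loop keeps running and the model can read the error and try something else.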
It does not depend on the rest of the Void repo (no VS Code, no Electron, no IPC). You can copy this folder elsewhere and use it as a CLI on its own.