zakstal/void-agent-cli
Void Agent CLI

Extracted from the voideditor/void repo (originally at void/agent-cli) and published as a standalone tool.

License: this repo carries forward the upstream Apache-2.0 license (LICENSE) and third-party notices (LICENSE-VS-Code.txt).

Standalone command-line tool that runs the same agent loop as Void's sidebar chat: send message → (optional tool call) → run tool → append result → send again, until the model responds without a tool call.

Use it from the terminal without opening Void/VS Code.

Setup

cd agent-cli
npm install

Set your API key (required):

export OPENAI_API_KEY=sk-...

For a local OpenAI-compatible server (e.g. Ollama):

export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_API_KEY=noop
export OPENAI_MODEL=llama3.2
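The environment variables above map onto the client configuration roughly as follows. This is a sketch of the resolution described in this README (API key required, `gpt-4o-mini` as the model default), not the CLI's actual source; the `ClientConfig` shape is hypothetical.

```typescript
// Sketch: resolve connection settings from the environment.
// Variable names are the ones documented above; the ClientConfig
// shape and exact resolution order are illustrative, not the real code.
interface ClientConfig {
  apiKey: string;
  baseURL?: string; // e.g. http://localhost:11434/v1 for Ollama
  model: string;
}

function configFromEnv(env: Record<string, string | undefined>): ClientConfig {
  const apiKey = env.OPENAI_API_KEY;
  if (!apiKey) throw new Error("OPENAI_API_KEY is required");
  return {
    apiKey,
    baseURL: env.OPENAI_BASE_URL, // undefined → the SDK's default endpoint
    model: env.OPENAI_MODEL ?? "gpt-4o-mini", // README's documented default
  };
}
```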

Usage

Single message (argument):

npm run dev -- "list all files in src"
# or after build:
node dist/cli.js "list all files in src"

Message from stdin:

echo "what's in package.json?" | npm run dev
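Internally, a CLI like this has to pick between the positional argument and piped stdin; a minimal sketch of that choice (the `resolveMessage` helper is hypothetical, not the CLI's real function):

```typescript
// Sketch: prefer the positional argument; fall back to piped stdin.
// `resolveMessage` is an illustrative name, not the CLI's actual API.
function resolveMessage(positional: string | undefined, stdinText: string): string {
  const msg = positional ?? stdinText.trim();
  if (!msg) throw new Error("no message provided (argument or stdin)");
  return msg;
}
```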

Options:

  • --cwd <dir> — Workspace root (default: current directory). Tools (read_file, run_command, etc.) run relative to this.
  • --model <name> — Model name (default: OPENAI_MODEL or gpt-4o-mini).
  • --base-url <url> — OpenAI-compatible API base URL (e.g. http://localhost:11434/v1 for Ollama).
  • --system <text> — Extra system prompt.
  • --system-file <path> — Extra system prompt from a file (e.g. .voidrules).
  • --help, -h — Show help.
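The flags above can be parsed with a loop like the following. This is a hypothetical minimal parser matching the documented flags; the `Options` shape and the real CLI's parsing details are assumptions.

```typescript
// Sketch: parse the README's flags into an options object.
// Flag names match the list above; the Options shape is illustrative.
interface Options {
  cwd: string;
  model?: string;
  baseUrl?: string;
  system?: string;
  systemFile?: string;
  help?: boolean;
  message?: string; // first positional token, if any
}

function parseArgs(argv: string[]): Options {
  const opts: Options = { cwd: "." }; // default: current directory
  for (let i = 0; i < argv.length; i++) {
    switch (argv[i]) {
      case "--cwd": opts.cwd = argv[++i]; break;
      case "--model": opts.model = argv[++i]; break;
      case "--base-url": opts.baseUrl = argv[++i]; break;
      case "--system": opts.system = argv[++i]; break;
      case "--system-file": opts.systemFile = argv[++i]; break;
      case "--help":
      case "-h": opts.help = true; break;
      default: opts.message = argv[i]; // non-flag token is the message
    }
  }
  return opts;
}
```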

Examples:

# OpenAI
OPENAI_API_KEY=sk-... npm run dev -- "summarize README.md"

# Ollama (local)
OPENAI_BASE_URL=http://localhost:11434/v1 OPENAI_API_KEY=noop npm run dev -- --model llama3.2 "list files in ."

# With workspace root and custom system prompt
npm run dev -- --cwd /path/to/project --system-file .voidrules "fix the bug in src/index.ts"

What it does

  1. Builds a system message (role, workspace path, rules) and optional extra prompt.
  2. Sends the user message (and any prior assistant + tool messages) to the OpenAI-compatible API with tools: read_file, ls_dir, get_dir_tree, run_command, rewrite_file.
  3. If the model returns a tool call, runs the tool (Node fs / child_process) and appends the result to the thread.
  4. Sends again with the updated thread; repeats until the model responds with no tool call.
  5. Prints the final assistant text to stdout; tool calls/results are logged to stderr.
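The loop in steps 2–4 can be sketched as follows. `sendToModel` and `runTool` are hypothetical stand-ins for the chat-completions call and the Node-based tool implementations; the real loop lives in the CLI's source.

```typescript
// Sketch of the agent loop in the steps above: send the thread, run any
// requested tool, append its result, and send again until the model
// answers without a tool call. All names here are illustrative.
type Message =
  | { role: "system" | "user" | "assistant"; content: string }
  | { role: "tool"; name: string; content: string };

interface ModelReply {
  content: string;
  toolCall?: { name: string; args: Record<string, unknown> };
}

async function runAgent(
  thread: Message[],
  sendToModel: (t: Message[]) => Promise<ModelReply>,
  runTool: (name: string, args: Record<string, unknown>) => Promise<string>,
): Promise<string> {
  for (;;) {
    const reply = await sendToModel(thread);
    thread.push({ role: "assistant", content: reply.content });
    if (!reply.toolCall) return reply.content; // step 5: final answer
    // Steps 3–4: run the tool, append the result, then send again.
    const result = await runTool(reply.toolCall.name, reply.toolCall.args);
    thread.push({ role: "tool", name: reply.toolCall.name, content: result });
  }
}
```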

Build

npm run build

Then run:

node dist/cli.js "your message"

Or link the bin:

npm link
void-agent "your message"

Relation to Void

This folder is a standalone extraction of:

  • Agent loop — chatThreadService._runChatAgent (send → tool call → run tool → send again).
  • Message conversion — convertToLLMMessageService + prepareMessages (thread → OpenAI message shape).
  • System prompt — prompts.chat_systemMessage + tool definitions.
  • Send — OpenAI chat completions (streaming) + tool_calls parsing (from sendLLMMessage.impl).
  • Tools — Node-based implementations of read_file, ls_dir, get_dir_tree, run_command, rewrite_file (from Void's builtin tools).
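For context, a tool definition in the OpenAI function-calling format looks roughly like this (a sketch for read_file; the parameter names are illustrative, not copied from Void's actual definitions):

```typescript
// Sketch: one tool definition (read_file) in the OpenAI tools format.
// The description and parameter names are illustrative assumptions.
const readFileTool = {
  type: "function" as const,
  function: {
    name: "read_file",
    description: "Read a file relative to the workspace root (--cwd)",
    parameters: {
      type: "object",
      properties: {
        path: { type: "string", description: "File path relative to --cwd" },
      },
      required: ["path"],
    },
  },
};
```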

It does not depend on the rest of the Void repo (no VS Code, no Electron, no IPC). You can copy this folder elsewhere and use it as a CLI on its own.
