Organized by agentic level. Each level builds on the previous.
- Level 0: LLM CLI
- Level 1: Enhanced Shell
- Level 2: Inline Assistant (@)
- Level 3: Terminal AI (llm-assistant)
- Level 4: Desktop AI
- Level 5: Agentic Coding
- Additional Tools
Core `llm` command-line tool. Works immediately after installation.
```shell
llm "What is the most widely used operating system for pentesters?"
llm -c "And for forensic analysts?"       # Continue last conversation
llm "Tell me about my OS: $(uname -a)"    # Include system info
llm "What is Docker?" | pbcopy            # Pipe to clipboard
```

```shell
llm chat      # New conversation
llm chat -c   # Continue last conversation

# Pipe help text, then chat about it
docker --help | llm
llm chat -c --md
# > What's the difference between 'run' and 'exec'?
```

```shell
llm "Explain Docker" --md             # Rich terminal rendering
llm "Top 5 Linux commands" --markdown # Full flag name
```

```shell
llm "Describe" -a https://example.com/poster.pdf
llm "Extract the text" -a image1.jpg -a image2.jpg
cat poster.pdf | llm 'describe image' -a -   # Stdin attachment
```

Azure limitation: Image attachments work, PDF attachments do not. Use `pdf:` fragments instead.
Load context from files, URLs, and repositories.

```shell
# Local files and URLs
llm -f /path/to/file.py "Explain this code"
llm -f https://example.com/article "Summarize"
llm -f site:https://example.com/blog "Extract key points"   # Smart extraction

# GitHub repositories and issues
llm -f github:user/repo "Analyze this codebase"
llm -f issue:simonw/llm/123 "Summarize this issue"
llm -f pr:simonw/llm/456 "Review this PR"

# PDFs, YouTube, arXiv
llm -f pdf:document.pdf "Summarize"
llm -f yt:https://youtube.com/watch?v=VIDEO_ID "Key points?"
llm -f arxiv:2310.06825 "Summarize the findings"

# Directories
llm -f dir:./src "Analyze the structure"

# Combine fragments
llm -f github:user/repo -f requirements.txt "Review dependencies"
```

URL types: `-f https://url` fetches raw content (APIs, JSON); `-f site:https://url` intelligently extracts article text.
The assistant template is auto-applied by default. Use `-t` only for different templates.

```shell
# Default (no -t needed)
llm "Your question"

# Fabric patterns
llm -t fabric:summarize -f site:https://example.com/article
llm -t fabric:analyze_threat_report -a report.pdf
llm -t fabric:create_stride_threat_model -f github:user/app
llm -t fabric:explain_code -f github:user/repo

# Custom templates
cat > ~/.config/io.datasette.llm/templates/mytemplate.yaml <<'EOF'
system: You are a PostgreSQL expert. Always provide SQL examples.
EOF
llm -t mytemplate "How do I create a composite index?"

# List templates
llm templates
```

Hybrid semantic + keyword search (ChromaDB + BM25).
```shell
llm rag add mydocs /path/to/files                     # Add documents
llm rag add mycode git:https://github.com/user/repo   # Add repo
llm rag search mydocs "how does auth work?"           # Search
llm -T 'rag("mydocs")' "Explain the auth"             # Use as tool
llm rag list                                          # List collections
llm rag rebuild mydocs                                # Rebuild index
```

Search modes: hybrid (default), vector (semantic only), keyword (BM25 only).
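Hybrid mode merges the vector and BM25 rankings into a single result list. As an illustration of how such fusion can work, here is reciprocal rank fusion with the common constant k=60 (not necessarily this plugin's exact formula; the document IDs are hypothetical):

```shell
# Two rankings for the same query, best match first (hypothetical IDs).
printf 'docA\ndocB\ndocC\n' > vector.txt   # semantic ranking
printf 'docB\ndocD\ndocA\n' > bm25.txt     # keyword (BM25) ranking

# RRF: each document scores sum(1 / (60 + rank)) over the lists it
# appears in; documents ranked well in both lists rise to the top.
awk '{ s[$1] += 1 / (60 + FNR) }
     END { for (d in s) printf "%s %.5f\n", d, s[d] }' vector.txt bm25.txt \
  | sort -k2,2 -rn
```

Here `docB` wins: it sits near the top of both rankings, beating `docA`, which leads only the vector list.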
```shell
llm code "Bash script to backup with timestamp" | tee backup.sh
llm code -c "add error handling" | tee backup.sh    # Iterate
llm code "SQL select users registered this month"   # Stdout

# Advanced: use fragments for context
llm code -f github:simonw/llm-hacker-news \
  "Write a new plugin called llm_video_frames.py..."

# Direct execution (use with caution)
python <(llm code "Fibonacci function, print first 10")
```

```shell
# AI commit messages
llm git-commit             # From staged changes
llm git-commit --tracked   # From all tracked changes

# Semantic sorting
cat names.txt | llm sort --query "Most suitable for a pet monkey?"
llm sort --query "Most technical" --top-k 5 topics.txt

# Text classification
echo "Exciting news!" | llm classify -c positive -c negative -c neutral
```

Requires Gemini provider and Go 1.22+.
```shell
imagemage generate "watercolor fox in snowy forest"
imagemage generate "banner image" --aspect=16:9 --count=3
imagemage edit photo.png "make it black and white"
imagemage icon "minimalist cloud logo"
```

```shell
echo '{"users": [{"name": "Alice"}, {"name": "Bob"}]}' | llm jq 'extract names'
curl -s https://api.github.com/repos/simonw/llm/issues | llm jq 'count by user.login, top 3'
```

Tools are AI-callable capabilities. The assistant template includes context and sandboxed_shell by default.
```shell
# Sandboxed shell (auto-available in assistant template)
llm "Check if docker is installed and show version"
llm "List files in /root" --td    # Show tool details
llm "Check kernel version" --ta   # Require approval

# File manipulation
llm -T Patch "Read config.yaml" --ta
llm -T Patch "Create hello.py with hello world" --ta
llm -T Patch "In config.yaml, change debug to true" --ta

# SQLite
llm -T 'SQLite("chinook.db")' "Top 5 best-selling artists" --td

# Chain limits
llm --cl 30 "Analyze system, find large files, suggest cleanup"
llm --cl 0 "Complex multi-step task"   # Unlimited (use with caution)
```

Default chain limit: 15 (configurable via `--cl`).
Shell integration adds automatic templates, keybindings, and session recording. Available after installation (level 2+).

The `llm` command is wrapped to auto-apply the assistant template and tools.

```shell
# These are equivalent:
llm "Your question"
command llm -t llm --tool context --tool sandboxed_shell "Your question"

# The wrapper skips the template when you specify -t, -s, -c, or --cid
llm -t fabric:summarize < report.txt   # Uses fabric template
llm -c "Follow up"                     # Continues (no re-apply)
```

Bypass with `command llm` for raw access.
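A minimal sketch of what such a wrapper function can look like (illustrative only; the real wrapper ships with the shell integration and may differ):

```shell
# Hypothetical wrapper: add the assistant template and default tools
# unless the caller already chose a template, system prompt, or
# continuation, in which case pass the arguments through untouched.
llm() {
  local arg
  for arg in "$@"; do
    case "$arg" in
      -t|-s|-c|--cid) command llm "$@"; return ;;   # user took control
    esac
  done
  command llm -t llm --tool context --tool sandboxed_shell "$@"
}
```

`command llm` bypasses the function itself, so the wrapper can never recurse.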
```shell
docker build -t myapp .           # Error occurs
wut                               # AI explains the error
wut "is this a security issue?"   # Specific question

# Follow up
llm chat -c
```

Type a description or partial command, press Ctrl+N:
```shell
# Type: find pdf files larger than 20MB
# Press: Ctrl+N
# Result: find . -type f -name "*.pdf" -size +20M

# Iterative: type a follow-up, press Ctrl+N again
# Type: But only on the same filesystem.
# Result: find . -type f -name "*.pdf" -size +20M -xdev
```

Also: `llm cmd "Find .sh files below /root"` for non-interactive use.
Every terminal session is automatically recorded via asciinema. Each tmux pane records independently.

```shell
context       # Last command output
context 5     # Last 5 commands
context all   # Entire session
context -e    # Export SESSION_LOG_FILE path
```

Storage: `/tmp/session_logs/asciinema/` (default, cleared on reboot) or `~/session_logs/asciinema/` (permanent).
Suppress the startup message: `export SESSION_LOG_SILENT=1`
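As a sketch of how a shell profile might start such a recording (`session_log_path` and `SESSION_LOG_DIR` are hypothetical names for illustration; the shipped integration handles this automatically):

```shell
# Build a unique cast path per shell/pane: timestamp plus PID.
session_log_path() {
  local dir="${SESSION_LOG_DIR:-/tmp/session_logs/asciinema}"
  mkdir -p "$dir"
  printf '%s/%s-%s.cast\n' "$dir" "$(date +%Y%m%d-%H%M%S)" "$$"
}

# A shell profile could then start recording, e.g.:
#   export SESSION_LOG_FILE="$(session_log_path)"
#   exec asciinema rec -q "$SESSION_LOG_FILE"
```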
Built into the assistant template. Just ask:

```shell
llm "What was the error in my last command?"
llm chat
# > Summarize what I did in this session
# > How do I fix the compilation error?
```

```shell
# Terminal 1: your work
context -e   # Copy the export command

# Terminal 2: AI assistant
export SESSION_LOG_FILE="/tmp/session_logs/asciinema/..."   # Paste
llm "What did the build script just do?"
llm chat     # Continuous assistance
```

AI assistant that works in any terminal. Each terminal keeps its own conversation. Backed by the shared daemon.
```shell
@ What's new about space travel?   # Query
@ Tell me more about Saturn        # Continue conversation
@ /new                             # Fresh conversation
@ /status                          # Session info
@ /help                            # Available commands
```

- Each terminal gets its own conversation (tracked by terminal ID)
- Daemon starts automatically on first use
- Idle timeout: 30 minutes
- Ctrl+G applies suggested commands from the AI
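A stable per-terminal key can be derived along these lines (an assumption for illustration; the actual daemon's scheme may differ, and `terminal_id` is a hypothetical name):

```shell
# Key conversations by pane/tty: tmux pane ID if available, else the
# tty device; hash it down to a short stable identifier.
terminal_id() {
  printf '%s' "${TMUX_PANE:-$(tty 2>/dev/null || echo unknown)}" \
    | sha256sum | cut -d' ' -f1 | cut -c1-12
}
```

The daemon can then map each identifier to its own conversation history.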
In Zsh, `@` at the start of a line enters LLM mode:
- Tab completion for `/slash-commands`
- Tab completion for `@fragments` (`@pdf:`, `@yt:`, `@arxiv:`, `@github:`, `@dir:`, `@file:`)
- `@` elsewhere in the line inserts a literal `@`
```shell
@ @pdf:document.pdf What does this say?
@ @yt:https://youtube.com/watch?v=ID Summarize this
@ @github:user/repo Explain the architecture
```

```shell
@ /quit     # Stop daemon
@ /status   # Check status
# Daemon auto-starts on next @ command
```

AI pair programming assistant for Terminator. This is where things get serious: knowledge bases, persistent memory, pentest reports, MCP servers, voice input/output, and a web UI.
```shell
llm assistant                 # Launch in Terminator
llm assistant azure/gpt-4.1   # With specific model
```

Auto-creates a split Exec terminal for command execution.

The AI suggests commands via the execute_in_terminal tool:
- The AI proposes a command
- You approve: `[y/n/e]` (yes/no/edit)
- The command runs in the Exec terminal
- Output is captured for the next AI iteration
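The approval step can be pictured as a small gate like this (an illustrative sketch, not the assistant's actual code; `approve_and_run` is a hypothetical name):

```shell
# Ask before running an AI-proposed command: y runs it, e lets you
# edit it first (bash readline), anything else skips it.
approve_and_run() {
  local cmd=$1 answer
  printf 'Run: %s [y/n/e] ' "$cmd" >&2
  read -r answer
  case "$answer" in
    y) eval "$cmd" ;;
    e) read -r -e -i "$cmd" -p 'Edit: ' cmd && eval "$cmd" ;;
    *) echo 'skipped' ;;
  esac
}
```

In the real assistant the command's output is additionally captured and fed back into the next AI iteration.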
Proactive terminal monitoring with user-defined goals.

```shell
/watch detect security issues      # Enable
/watch spot inefficient commands   # Different goal
/watch monitor logs for errors     # Log watching
/watch off                         # Disable
/watch                             # Show status
```

Hash-based change detection prevents duplicate alerts. The AI focuses on new content only.
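The change detection can be sketched as follows (assumed mechanism based on the description above; `watch_tick` is a hypothetical name):

```shell
LAST_HASH=""
watch_tick() {
  # $1: file containing the current pane snapshot
  local hash
  hash=$(sha256sum "$1" | cut -d' ' -f1)
  if [ "$hash" != "$LAST_HASH" ]; then
    LAST_HASH=$hash
    echo 'new content'   # here the AI would review it against the goal
  fi                     # unchanged pane: stay silent, no duplicate alert
}
```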
Persistent context files loaded into the system prompt.

```shell
/kb                            # List available KBs
/kb load pentest-checklist     # Load a KB
/kb load kb1,kb2,kb3           # Load multiple
/kb unload pentest-checklist   # Unload
/kb reload                     # Reload all
```

Location: `~/.config/llm-assistant/kb/` (markdown files).
Auto-load via `~/.config/llm-assistant/assistant-config.yaml`:

```yaml
knowledge_base:
  auto_load:
    - pentest-checklist
    - company-standards
```

Persistent notes across sessions.
```shell
/memory          # Show loaded memory
/memory global   # Global memory only
/memory local    # Project memory only
/memory reload   # Reload from disk

# Quick notes
# The customer uses OAuth2 for auth
# local Remember: DB migrations need approval
```

Locations: `~/.config/llm-assistant/AGENTS.md` (global), `./AGENTS.md` (project).
```shell
/squash        # Compress context (auto at 80%)
/rewind        # Interactive picker
/rewind -3     # Go back 3 turns
/rewind undo   # Restore last rewind
/copy          # Copy last response
/copy 5        # Copy last 5 responses
/copy all      # Copy entire conversation
/copy raw      # With markdown preserved
```

```shell
/mcp                          # List servers and status
/mcp load microsoft-learn     # Enable server
/mcp unload microsoft-learn   # Disable

# Available servers:
# microsoft-learn - Microsoft docs search
# aws-knowledge   - AWS docs search
# arxiv           - Paper search/download
# chrome-devtools - Browser DevTools (7 tools)
```

Capture and manage security findings during assessments.
```shell
/report init acme-webapp en             # New project (English)
/report init client-test de             # German findings
/report "SQL injection in login form"   # Capture finding
/report list                            # List findings
/report severity F001 8                 # Override severity
/report export                          # Word document (pandoc)
/report projects                        # List projects
/report open acme-webapp                # Switch project
```

The AI auto-generates: title, OWASP severity (1-9), description, remediation. Storage: `~/.config/llm-assistant/findings/`.
```shell
/voice             # Enable voice (auto-submit)
/voice off         # Disable
/voice clean       # Enable transcript cleanup
/voice clean off   # Disable cleanup
/speech            # Enable TTS (Vertex models only)
/speech off        # Disable
```

Voice model: Parakeet TDT INT8 (~600MB). TTS: Google Cloud Chirp3-HD.
Keybindings: Ctrl+Space (voice), Esc (stop TTS).
```shell
/screenshot            # Active window
/screenshot region     # Select region
/screenshot full       # Full screen
/screenshot rdp        # RDP window
/screenshot annotate   # Flameshot annotation
```

```shell
/web        # Open in browser (localhost:8741)
/web stop   # Stop web server
```

Real-time streaming, same conversation as the terminal.

- `!multi`: enter multi-line mode (finish with `!end`)
- `!fragment <name>`: attach an llm fragment to the conversation
| Command | Description |
|---|---|
| /help | Show commands |
| /clear | Clear history |
| /reset | Full reset (history + squash) |
| /model [name] | Switch model / list models |
| /info | Session info |
| /watch [goal] | Watch mode |
| /squash | Compress context |
| /rewind [n] | Rewind conversation |
| /kb [load\|unload\|reload] | Knowledge bases |
| /memory [reload\|global\|local] | Memory system |
| /mcp [load\|unload] | MCP servers |
| /report [...] | Pentest findings |
| /voice [on\|off\|clean] | Voice input |
| /speech [on\|off] | TTS output |
| /screenshot [mode] | Screenshot capture |
| /copy [n\|all\|raw] | Copy to clipboard |
| /web [stop] | Web companion |
| /quit | Exit |
AI-powered editing in the Micro terminal editor.

```shell
micro myfile.py
# Generate (no selection): position cursor, press Ctrl+E
llm write a fibonacci function
# Modify (with selection): select code, press Ctrl+E
llm add error handling to this function
# Use templates
llm -t llm-code implement quicksort in python
```

System-wide AI access outside the terminal.
| Hotkey | Action |
|---|---|
| Super+^ | Open popup (German keyboards) |
| Super+Shift+^ | Open with current selection |
| Super+` | Open popup (US keyboards) |
| Super+Shift+` | Open with current selection |
Features: action panel (Ctrl+K), image drag & drop, desktop context capture, browser fallback at localhost:8741.
See Desktop Integration for details.
Type triggers in any application:
| Trigger | Mode | Clipboard |
|---|---|---|
| :llm: | simple | no |
| :llmc: | simple | yes |
| :@: | assistant | no |
| :@c: | assistant | yes |
See Desktop Integration for details.
| Keyword | Mode |
|---|---|
| llm | simple query |
| @ | assistant with tools |
Launch via Ctrl+Space. Streaming responses, persistent conversations.
See Desktop Integration for details.
```shell
transcribe recording.mp3
transcribe video.mp4 -o transcript.txt
```

25 languages, multiple formats. See Desktop Integration.
Anthropic's agentic coding CLI. Installed at level 2+.

```shell
claude   # Launch Claude Code
```

Multi-provider routing proxy for Claude Code.

```shell
routed-claude   # Launch through router
```

| Mode | Primary | Web Search |
|---|---|---|
| Dual-Provider | Azure OpenAI | Gemini |
| Gemini-Only | Gemini | Gemini |
Route types: default, background, think, longContext, webSearch.
Config: ~/.claude-code-router/config.json.
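The config file's rough shape is sketched below; provider names, URLs, API keys, and model IDs are placeholders, so treat this as an illustration and check the router's own documentation for the exact schema:

```json
{
  "Providers": [
    {
      "name": "gemini",
      "api_base_url": "https://generativelanguage.googleapis.com/v1beta/models/",
      "api_key": "YOUR_KEY",
      "models": ["gemini-2.5-pro", "gemini-2.5-flash"]
    }
  ],
  "Router": {
    "default": "gemini,gemini-2.5-pro",
    "background": "gemini,gemini-2.5-flash",
    "think": "gemini,gemini-2.5-pro",
    "longContext": "gemini,gemini-2.5-pro",
    "webSearch": "gemini,gemini-2.5-flash"
  }
}
```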
Alternative agentic coding tool (Azure-only).
Custom tools in Bash, JavaScript, or Python via llm-functions.
```shell
git clone https://github.com/sigoden/llm-functions.git
cd llm-functions
cat > tools.txt <<EOF
get_current_weather.sh
execute_command.sh
EOF
argc build && argc check

# Use with llm
llm -T get_current_weather "What's the weather in Berlin?"
```

```shell
# gitingest (Python, feature-rich)
gitingest https://github.com/user/repo | llm "What does this do?"

# yek (Rust, 230x faster)
yek /path/to/repo | llm "Review architecture"

# files-to-prompt
files-to-prompt src/*.py | llm "Review for security issues"
files-to-prompt project/ -e py -e js -c > context.xml
```

```shell
curl -s https://api.github.com/repos/simonw/llm | \
  llm jq 'extract stars, forks, open issues' | \
  llm "Analyze project popularity"

tail -n 100 /var/log/syslog | llm "Identify errors and explain"

git log --since="30 days ago" | \
  llm "Prepare a timeline of development" --md
```