# limps

Local Intelligent MCP Planning Server — A document and planning layer for AI assistants. No subscriptions, no cloud. Point limps at any folder (local, synced, or in git). One shared source of truth across Claude, Cursor, Codex, and any MCP-compatible tool.
- Quick Start
- Features
- How I Use limps
- Health & Automation
- How You Can Use It
- Why limps?
- Installation
- Upgrading from v2
- Project Setup
- Client Setup
- Transport
- Daemon Management
- CLI Commands
- Configuration
- Environment Variables
- Troubleshooting
- MCP Tools
- Skills & Commands
- Extensions
- Obsidian Compatibility
- Development
- Used in Production
- Creating a feature plan
- Deep Dive
- What is MCP?
- License
## Quick Start

```bash
# Install globally
npm install -g @sudosandwich/limps

# Initialize in your project
cd ~/Documents/my-planning-docs
limps init

# Start the HTTP daemon
limps server start
# → Daemon starts on http://127.0.0.1:4269/mcp
# → PID file written to OS-standard location
# → Ready for MCP client connections

# Generate MCP client config
limps config print --client claude-code
# Copy the output to your MCP client config file
```

That's it. Your AI assistant now has access to your documents via HTTP transport. The folder can be anywhere—local, synced, or in a repo; limps does not require a git repository or a `plans/` directory.

Tip: `limps server status` always includes system-wide daemon discovery. If a project config is found (or passed via `--config`), it also reconciles the configured project target against that global list.
## Features

- Document CRUD + full-text search across any folder of Markdown files
- Plan + agent workflows with status tracking and task scoring
- Next-task suggestions with score breakdowns and bias tuning
- Sandboxed document processing via `process_doc(s)` helpers
- Multi-client support for Cursor, Claude, Codex, and more
- Extensions for domain-specific tooling (e.g., limps-headless)
- Knowledge graph — Entity extraction, hybrid retrieval, conflict detection, and graph-based suggestions
- Health automation — Staleness detection, code drift checks, status inference, and auto-fix proposals
- Advanced task scoring — Dependency-aware prioritization with per-plan/agent weight overrides
- MCP Registry — Published to the official MCP Registry (`registry.modelcontextprotocol.io`)
- Local only — Your data stays on disk (SQLite index + your files). No cloud, no subscription.
- Restart after changes — If you change the indexed folder or config, restart the MCP server (or rely on the file watcher) so the index and tools reflect the current state.
- Daemon management — The HTTP server runs as a background process. Use `limps server start`, `limps server stop`, and `limps server status` to manage the daemon lifecycle. PID files are stored in OS-standard directories for system-wide awareness.
- Sandboxed user code — `process_doc` and `process_docs` run your JavaScript in a QuickJS sandbox with time and memory limits; no network or Node APIs.
- One optional network call — `limps version --check` fetches from the npm registry to compare versions. All other commands (serve, init, list, search, create/update/delete docs, process_doc, etc.) do not contact the internet. Omit `version --check` if you want zero external calls.
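To verify the offline guarantee yourself, the opt-in check is easy to isolate; a minimal sketch (the exact output wording is an assumption):

```bash
# The single opt-in network call: compare the installed version against npm.
limps version --check

# Everything else stays local, e.g. search hits only the SQLite index:
limps docs search "auth"
```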
## How I Use limps

I use limps as a local planning layer across multiple AI tools, focused on create → read → update → closure for plans and tasks. The MCP server points at whatever directory I want (not necessarily a git repo), so any client reads and updates the same source of truth.
Typical flow:
- Point limps at a docs directory (any folder, local or synced).
- Use CLI + MCP tools to create plans/docs, read the current status, update tasks, and close work when done.
- Add the limps MCP entry to each client config so Cursor/Claude/Codex all see the same plans.
Commands and tools I use most often:
- Create: `limps init`, `create_plan`, `create_doc`
- Read: `list_plans`, `list_agents`, `list_docs`, `search_docs`, `get_plan_status`
- Update: `update_doc`, `update_task_status`, `manage_tags`
- Close: `update_task_status` (e.g., `PASS`), `delete_doc` if needed
- Analyze: `graph health`, `graph search`, `graph check`, `health check`
Full lists are below in "CLI Commands" and "MCP Tools."
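To make that loop concrete, here is a minimal CLI pass over a single plan; the plan name `0042-auth` is hypothetical:

```bash
cd ~/Documents/my-planning-docs
limps init                          # one-time: create .limps/config.json

limps plan list                     # read: which plans exist, and their status
limps plan status 0042-auth         # read: progress summary for one plan
limps plan next 0042-auth           # read: highest-priority available task

# Updates and closure go through the same MCP tools your clients use
# (e.g., update_task_status moving an agent from WIP to PASS), then:
limps plan scores --plan 0042-auth  # re-check prioritization afterwards
```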
## How You Can Use It

limps is designed to be generic and portable. Point it at any folder with Markdown files and use it from any MCP-compatible client. No git repo required. Not limited to planning—planning (plans, agents, task status) is one use case; the same layer gives you document CRUD, full-text search, and programmable processing on any indexed folder.
Common setups:
- Single project: One docs folder for a product.
- Multi-project: Each project gets its own `.limps/config.json`; pass `--config` to target a specific one.
- Shared team folder: Put plans in a shared location and review changes like code.
- Local-first: Keep everything on disk, no hosted service required.
Key ideas:
- Any folder — You choose the path; if there's no `plans/` subdir, the whole directory is indexed. Use generic tools (`list_docs`, `search_docs`, `create_doc`, `update_doc`, `delete_doc`, `process_doc`, `process_docs`) or plan-specific ones (`create_plan`, `list_plans`, `list_agents`, `get_plan_status`, `update_task_status`, `get_next_task`).
- One source of truth — MCP tools give structured access; multiple clients share the same docs.
## Why limps?

The problem: Each AI assistant maintains its own context. Planning documents, task status, and decisions get fragmented across Claude, Cursor, ChatGPT, and Copilot conversations.
The solution: limps provides a standardized MCP interface that any tool can access. Your docs live in one place—a folder you choose. Use git (or any sync) if you want version control; limps is not tied to a repository.
## Installation

```bash
npm install -g @sudosandwich/limps
```

## Upgrading from v2

v3 introduces major changes:
v3 uses HTTP transport exclusively. stdio transport has been removed.
Migration steps:
- Start the HTTP daemon for each project:

  ```bash
  limps server start --config /path/to/.limps/config.json
  ```

- Update MCP client configs — Replace stdio configs with HTTP transport:

  ```json
  { "mcpServers": { "limps-planning-myproject": { "transport": { "type": "http", "url": "http://127.0.0.1:4269/mcp" } } } }
  ```

  Use `limps config print` to generate the correct snippet.
v3 removes the centralized project registry. If you previously used limps config add, config use, or the --project flag:
- Run `limps init` in each project directory to create `.limps/config.json`.
- Update MCP client configs — Replace `--project <name>` with HTTP transport config (see above).
- Remove environment variable — `LIMPS_PROJECT` no longer exists. Use `MCP_PLANNING_CONFIG` to override the config path.
Removed commands: `config list`, `config use`, `config add`, `config remove`, `config set`, `config discover`, `config migrate`, `config sync-mcp`, `serve`.

Replaced by: `limps init` + `limps server start` + `limps config print`.
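Put together, a per-project migration might look like this (paths are placeholders):

```bash
cd /path/to/project
limps init                                       # creates .limps/config.json
limps server start --config .limps/config.json   # replaces the removed `serve`
limps config print --client claude-code          # paste the snippet into your client
```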
## Project Setup

```bash
cd ~/Documents/my-planning-docs
limps init
```

This creates `.limps/config.json` in the current directory and prints MCP client setup instructions.

You can also specify a path:

```bash
limps init ~/Documents/my-planning-docs
```

If the directory contains a `plans/` subdirectory, limps uses it. Otherwise, it indexes the entire directory.

Each project has its own `.limps/config.json`. Use `--config` to target a specific project:

```bash
limps plan list --config ~/docs/project-b/.limps/config.json
```

## Client Setup

After running `limps init`, you need to add a limps entry to your MCP client's config file. Use `limps config print` to generate the correct snippet for your client, then paste it into the appropriate config file:
```bash
limps config print --client cursor
limps config print --client claude-code
limps config print --client claude
```

The output tells you exactly what JSON (or TOML) to add and where the config file lives.
All clients connect to the HTTP daemon. Start the daemon first with limps server start, then configure your client.
### Cursor

Add to `.cursor/mcp.json` in your project:

```json
{
  "mcpServers": {
    "limps-planning-myproject": {
      "transport": {
        "type": "http",
        "url": "http://127.0.0.1:4269/mcp"
      }
    }
  }
}
```

### Claude Code

Add to `.mcp.json` in your project root:

```json
{
  "mcpServers": {
    "limps-planning-myproject": {
      "transport": {
        "type": "http",
        "url": "http://127.0.0.1:4269/mcp"
      }
    }
  }
}
```

### Claude Desktop

Add to `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "limps-planning-myproject": {
      "transport": {
        "type": "http",
        "url": "http://127.0.0.1:4269/mcp"
      }
    }
  }
}
```

### OpenAI Codex

Add to `~/.codex/config.toml`:

```toml
[mcp_servers.limps-planning-myproject.transport]
type = "http"
url = "http://127.0.0.1:4269/mcp"
```

### ChatGPT

ChatGPT requires a remote MCP server over HTTPS. Deploy limps behind an MCP-compatible HTTPS reverse proxy (nginx, Caddy, etc.) with authentication.

In ChatGPT → Settings → Connectors → Add custom connector:

- Server URL: `https://your-domain.example/mcp`
- Authentication: Configure as needed for your proxy

Print setup instructions:

```bash
limps config print --client chatgpt
```

## Transport

limps v3 uses HTTP transport exclusively via a persistent daemon. This allows multiple MCP clients to share a single server instance, avoiding file descriptor bloat from multiple stdio processes.
```bash
# Start the daemon
limps server start

# Check status (shows uptime, sessions, PID)
limps server status

# Stop the daemon
limps server stop
```

The daemon runs at `http://127.0.0.1:4269/mcp` by default. Use `limps config print` to generate the correct MCP client configuration:

```bash
limps config print --client claude-code
```

See Daemon Management for detailed lifecycle documentation.
All clients use HTTP transport. Example config:

```json
{
  "mcpServers": {
    "limps-planning-myproject": {
      "transport": {
        "type": "http",
        "url": "http://127.0.0.1:4269/mcp"
      }
    }
  }
}
```

Customize the HTTP server by adding a `"server"` section to your `config.json`:

| Option | Default | Description |
|---|---|---|
| `port` | `4269` | HTTP listen port |
| `host` | `127.0.0.1` | Bind address |
| `maxSessions` | `100` | Maximum concurrent MCP sessions |
| `sessionTimeoutMs` | `1800000` | Session idle timeout in ms (30 min) |
| `corsOrigin` | `""` (none) | CORS origin (`""`, `"*"`, or a URL) |
| `maxBodySize` | `10485760` | Max request body in bytes (10 MB) |
| `rateLimit` | `100` req/min | Rate limit per client IP |
Example custom server config:

```json
{
  "server": {
    "port": 8080,
    "host": "0.0.0.0"
  }
}
```

Note: PID files are stored in OS-standard application directories:

- macOS: `~/Library/Application Support/limps/pids/`
- Linux: `$XDG_DATA_HOME/limps/pids/` or `~/.local/share/limps/pids/`
- Windows: `%APPDATA%/limps/pids/`

This enables `limps server status` to perform system-wide daemon discovery from any directory. When a limps config is found for the current directory (or passed via `--config`), the CLI also reports and reconciles that project's configured target.

- Remote clients: Use an MCP-compatible HTTPS proxy (e.g., for ChatGPT).
## Daemon Management

limps v3 uses a persistent HTTP daemon with system-wide awareness. PID files are stored in OS-standard directories, allowing you to manage and discover daemons from any directory on your system.

PID files are stored in platform-specific application data directories:

- macOS: `~/Library/Application Support/limps/pids/`
- Linux: `$XDG_DATA_HOME/limps/pids/` (or `~/.local/share/limps/pids/` if `XDG_DATA_HOME` is not set)
- Windows: `%APPDATA%/limps/pids/`
Each PID file is named by port number (`limps-{port}.pid`) to enable system-wide discovery. Example PID file structure:

```json
{
  "pid": 12345,
  "port": 4269,
  "host": "127.0.0.1",
  "startedAt": "2026-02-08T12:00:00.000Z",
  "configPath": "/path/to/project/.limps/config.json",
  "logPath": "/Users/you/Library/Application Support/limps/logs/limps-4269.log"
}
```

This port-based naming allows `limps server status` to find all running daemons across different projects without needing a config file.
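Because each PID file is plain JSON named by port, you can script discovery yourself. A sketch using `jq`, assuming the macOS path:

```bash
for f in ~/Library/Application\ Support/limps/pids/limps-*.pid; do
  jq -r '"\(.host):\(.port) (PID \(.pid)) -> \(.configPath)"' "$f"
done
```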
Daemon logs are written to OS-standard application log directories:

- macOS: `~/Library/Application Support/limps/logs/`
- Linux: `$XDG_DATA_HOME/limps/logs/` (or `~/.local/share/limps/logs/` if `XDG_DATA_HOME` is not set)
- Windows: `%APPDATA%/limps/logs/`
Daemon logs are intentionally operational-only: limps redacts uncaught exception/rejection payloads and does not persist raw AI prompt/response content. Daemon log files are append-only and are not auto-rotated; if you run long-lived daemons, rotate or truncate these files with your system tooling.
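One simple rotation approach that keeps the daemon's open file handle valid: copy the log aside, then truncate it in place (macOS path and port shown; adjust per the table above):

```bash
LOG=~/Library/Application\ Support/limps/logs/limps-4269.log
cp "$LOG" "$LOG.$(date +%Y%m%d)"  # keep a dated copy
: > "$LOG"                        # truncate in place; the daemon keeps appending
```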
Background mode (default):

```bash
limps server start
# → Daemon starts on http://127.0.0.1:4269/mcp
# → PID file written to OS-standard location
# → Logs written to OS-standard log file (append mode)
# → Process detaches and runs in background
```

Foreground mode (debugging):

```bash
limps server start --foreground
# → Runs in foreground (blocks terminal)
# → Logs appear in stderr
# → Useful for debugging startup issues
# → Still creates PID file for discovery
```

Custom port/host (via config):

Configure `server.port` and `server.host` in your `.limps/config.json`:

```json
{
  "server": {
    "port": 8080,
    "host": "0.0.0.0"
  }
}
```

Then start normally:

```bash
limps server start
# → Starts using server.port/server.host from config
# → PID file: limps-8080.pid
```

The start command performs health verification by polling the `/health` endpoint for up to 5 seconds, issuing repeated HTTP requests. Each individual health-check request has its own shorter timeout (for example, ~1000ms). If any request fails during this window, you'll see one of these error codes:
- TIMEOUT — A single health-check HTTP request exceeded its per-request timeout (e.g., ~1000ms). The daemon may be slow to start or system resources may be constrained. Try `limps server start --foreground` to see logs.
- NETWORK_ERROR — Cannot connect to daemon. The port may be blocked or already in use by another process.
- NON_200_STATUS — Health endpoint returned a non-200 status code. Check daemon logs with foreground mode.
- INVALID_RESPONSE — Health endpoint responded, but the response was invalid or could not be parsed as expected (for example, malformed or missing required fields).
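You can reproduce the probe by hand to see which failure mode applies; the curl timeout below stands in for the internal per-request timeout:

```bash
# Poll /health for up to ~5 seconds, one attempt per second.
for i in 1 2 3 4 5; do
  if curl -fsS --max-time 1 http://127.0.0.1:4269/health; then
    echo " -> healthy"
    break
  fi
  sleep 1
done
```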
With project config (reconciled with global discovery):

```bash
# From within a project directory with .limps/config.json
limps server status
# Project target:
#   limps server is running
#   PID: 12345 | 127.0.0.1:4269
#   Uptime: 2h 15m
#   Sessions: 3
#   Log: /Users/you/Library/Application Support/limps/logs/limps-4269.log
#   Project target is present in system-wide daemon discovery.
# System-wide daemons:
#   127.0.0.1:4269 (PID 12345) [project target]
#     Uptime: 2h 15m | Sessions: 3
#     Log: /Users/you/Library/Application Support/limps/logs/limps-4269.log

# Or specify config explicitly
limps server status --config /path/to/.limps/config.json
```

Without project config (global discovery only):

```bash
# From a directory without a limps config
cd /tmp
limps server status
# Found 2 running daemons:
#   127.0.0.1:4269 (PID 12345)
#     Uptime: 2h 15m | Sessions: 3
#     Log: /Users/you/Library/Application Support/limps/logs/limps-4269.log
#   127.0.0.1:8080 (PID 67890)
#     Uptime: 45m 30s | Sessions: 1
#     Log: /Users/you/Library/Application Support/limps/logs/limps-8080.log
```

When `limps server status` cannot resolve a config file in the current directory (and no `--config` is provided), it reports global daemon discovery only. When a config is found, it reports both the configured project target and the global daemon list.
```bash
# From the project directory (where your .limps config lives):
limps server stop
# → Gracefully shuts down daemon
# → Closes all MCP sessions
# → Stops file watchers
# → Removes PID file
# → Process exits

# Or from any directory, by specifying the config explicitly:
limps server stop --config /path/to/.limps/config.json
```

The stop command is project-specific and resolves the config to determine which daemon to stop. The daemon performs a graceful shutdown by:
- Closing all active MCP sessions
- Shutting down file watchers
- Removing the PID file
- Exiting the process
If you try to start a daemon on a port that's already in use, limps will detect the conflict and provide resolution guidance:

```bash
limps server start
# Error: Port 4269 is already in use.
#   Process using port: node (PID 12345)
#   Command: /usr/local/bin/node /usr/local/bin/limps server start
#
#   To stop the process: kill 12345
#   Or use a different port: limps server start --port <port>
```

On systems with `lsof` available (macOS, Linux), limps can identify which process is using the port and show its command line. If `lsof` is not available, you'll see a simpler error message suggesting a different port.

Use foreground mode for debugging, Docker deployments, or CI/CD pipelines:

```bash
limps server start --foreground
```

Use cases:
- Debugging — See server logs in real-time to diagnose startup issues
- Docker — Keep container alive with the daemon as the main process
- CI/CD — Run tests against a limps daemon without background processes
Behavior differences from background mode:
- Logs to stderr instead of being silent
- Blocks the terminal (press Ctrl+C to stop)
- Still creates a PID file for discovery by other processes
- Responds to SIGINT (Ctrl+C) and SIGTERM for graceful shutdown
The HTTP daemon exposes a `/health` endpoint for monitoring and health checks:

```bash
curl http://127.0.0.1:4269/health
```

Example response:

```json
{
  "status": "ok",
  "sessions": 3,
  "uptime": 8145,
  "pid": 12345,
  "sessionTimeoutMs": 1800000
}
```

HTTP status codes:

- 200 — Daemon is healthy and accepting connections
- 429 — Rate limit exceeded (the rate limiter may return this before the request reaches `/health`)
Sessions automatically expire after 30 minutes of inactivity (configurable via sessionTimeoutMs). When a session expires, MCP clients receive a specific error response indicating they should reconnect.
Session Expiration Response:
When a session expires or is closed, subsequent requests with that session ID return:
```json
{
  "error": "Session expired",
  "code": "SESSION_EXPIRED",
  "message": "Session expired due to timeout. Please reconnect without session ID.",
  "expiredAt": "2026-02-11T10:30:00.000Z"
}
```

Headers:

- `X-Session-Expired: true`
- `X-Session-Expired-Reason: timeout` (or `closed`, `deleted`)
MCP Client Reconnection Flow:
When receiving SESSION_EXPIRED, clients should:
- Clear the stored session ID — Remove the cached `mcp-session-id`
- Create a new session — Send POST to `/mcp` without the session ID header
- Store the new session ID — Save the new `mcp-session-id` from response headers
- Retry the original request — Resubmit with the new session ID
Configuration:
Adjust session timeout in .limps/config.json:
```json
{
  "server": {
    "sessionTimeoutMs": 3600000  // 1 hour (default: 1800000 = 30 min)
  }
}
```

Set to `0` to disable timeout (sessions persist until server restart).

Expired Session Tracking:

The server tracks expired sessions for 24 hours to help clients distinguish between "session expired" vs "session never existed":

- `SESSION_EXPIRED` — Session previously existed but timed out (client should reconnect)
- `SESSION_NOT_FOUND` — Session ID was never valid (possible server restart or invalid ID)
Use this endpoint for:
- Monitoring daemon health in scripts or dashboards
- Verifying daemon is running before connecting MCP clients
- Automated health checks in orchestration tools (Kubernetes, Docker Compose)
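For example, a cron-friendly probe that fails when the daemon is down or unhealthy (a sketch using `jq`; swap in your own alerting):

```bash
#!/usr/bin/env bash
status=$(curl -fsS --max-time 3 http://127.0.0.1:4269/health | jq -r '.status' 2>/dev/null)
if [ "$status" != "ok" ]; then
  echo "limps daemon unhealthy (status: ${status:-unreachable})" >&2
  exit 1
fi
```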
You can run multiple limps daemons for different projects by configuring a distinct port in each project's config:

```bash
# Project A with default port (4269)
cd ~/projects/project-a
# .limps/config.json has server.port: 4269 (or uses default)
limps server start
# → Running on http://127.0.0.1:4269/mcp

# Project B with custom port (8080)
cd ~/projects/project-b
# .limps/config.json has server.port: 8080
limps server start
# → Running on http://127.0.0.1:8080/mcp
```

Each daemon has its own PID file:

- `limps-4269.pid` — Project A
- `limps-8080.pid` — Project B

Discover all running daemons (run from a directory without a limps config):

```bash
cd /tmp
limps server status
# Found 2 running daemons:
#   127.0.0.1:4269 (PID 12345)
#     Uptime: 2h 15m | Sessions: 3
#   127.0.0.1:8080 (PID 67890)
#     Uptime: 45m 30s | Sessions: 1
```

Each MCP client can connect to a different daemon by configuring the corresponding URL in its config file.
## CLI Commands

```bash
limps plan list                             # List all plans with status
limps plan agents <plan>                    # List agents in a plan
limps plan status <plan>                    # Show plan progress summary
limps plan next <plan>                      # Get highest-priority available task
limps plan score --plan <plan> --agent <n>  # Score a single task
limps plan scores --plan <plan>             # Score all available tasks in a plan

limps docs list [path]                      # List files/directories
limps docs search <query>                   # Search indexed docs
limps docs process [path] --code "<js>"     # Process docs with JavaScript
```

```bash
limps server start                          # Start HTTP daemon
limps server status                         # Show daemon status
limps server stop                           # Stop HTTP daemon
```

```bash
limps init [path]                           # Initialize new project
limps config show                           # Display current config
limps config print                          # Print MCP client config snippets
limps completion zsh                        # Generate Zsh tab-completion script
```

```bash
limps health check                          # Aggregate all health signals
limps health staleness [plan]               # Find stale plans/agents
limps health inference [plan]               # Suggest status updates
limps proposals [plan]                      # List auto-fix proposals
limps proposals apply <id>                  # Apply a proposal
limps proposals apply-safe                  # Apply all safe proposals
```

```bash
limps graph reindex                         # Build/rebuild graph
limps graph health                          # Graph stats and conflicts
limps graph search <query>                  # Search entities
limps graph trace <entity>                  # Trace relationships
limps graph entity <id>                     # Entity details
limps graph overlap                         # Find overlapping features
limps graph check [type]                    # Run conflict detection
limps graph suggest <type>                  # Graph-based suggestions
limps graph watch                           # Watch and update incrementally
```

```bash
limps plan scores --plan <plan>             # Score all agents in a plan
limps plan score --plan <plan> --agent <n>  # Score a single task
limps plan repair [--fix]                   # Check/fix agent frontmatter
```

## Configuration

Config lives at `.limps/config.json` in your project directory, created by `limps init`.
```json
{
  "plansPath": "./plans",
  "docsPaths": ["."],
  "fileExtensions": [".md"],
  "dataPath": ".limps/data",
  "extensions": ["@sudosandwich/limps-headless"],
  "tools": {
    "allowlist": ["list_docs", "search_docs"]
  },
  "scoring": {
    "weights": { "dependency": 40, "priority": 30, "workload": 30 },
    "biases": {}
  }
}
```

| Option | Description |
|---|---|
| `plansPath` | Directory for structured plans (`NNNN-name/` with agents) |
| `docsPaths` | Additional directories to index |
| `fileExtensions` | File types to index (default: `.md`) |
| `dataPath` | SQLite database location |
| `tools` | Tool allowlist/denylist filtering |
| `extensions` | Extension packages to load |
| `scoring` | Task prioritization weights and biases |
| `server` | HTTP daemon settings (port, host, CORS, sessions, timeout) |
| `graph` | Knowledge graph settings (e.g., entity extraction options) |
| `retrieval` | Search recipe configuration for hybrid retrieval |
## Environment Variables

| Variable | Description | Example |
|---|---|---|
| `MCP_PLANNING_CONFIG` | Path to config file (overrides default discovery) | `MCP_PLANNING_CONFIG=./my-config.json limps server bridge` |
| `LIMPS_ALLOWED_TOOLS` | Comma-separated allowlist; only these tools are registered | `LIMPS_ALLOWED_TOOLS="list_docs,search_docs"` |
| `LIMPS_DISABLED_TOOLS` | Comma-separated denylist; tools to hide | `LIMPS_DISABLED_TOOLS="process_doc,process_docs"` |

Precedence: `config.tools` overrides the env vars. If an allowlist is set, the denylist is ignored.
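For instance, to register only a read-only tool surface without editing config.json (tool names from the MCP Tools table; assumes the daemon process reads these variables at startup):

```bash
LIMPS_ALLOWED_TOOLS="list_docs,search_docs,list_plans,get_plan_status" limps server start
# Remember: if config.tools is set in .limps/config.json, it overrides these env vars.
```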
"Port already in use" error:
If you see this error, another process is using the port:
limps server start
# Error: Port 4269 is already in use.
# Process using port: node (PID 12345)Resolution:
- Kill the existing process:
kill 12345 - Or use a different port:
limps server start --port 8080 - Check if it's another limps daemon:
limps server status(if so, uselimps server stopfirst)
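To see for yourself what owns the port before killing anything, assuming `lsof` is available (macOS, Linux):

```bash
lsof -nP -iTCP:4269 -sTCP:LISTEN
# COMMAND   PID  USER   ...  NAME
# node    12345  you    ...  127.0.0.1:4269 (LISTEN)
```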
"Daemon may have failed to start" error:
If the daemon starts but doesn't respond to health checks:
limps server start
# Error: Daemon may have failed to start. Check logs or try: limps server start --foregroundResolution:
- Check daemon log path:
limps server status(or run foreground mode:limps server start --foreground) - Check for permission issues: Ensure you have write access to the PID directory
- Verify port is accessible: Try
curl http://127.0.0.1:4269/health - Enable debug logging:
DEBUG=1 limps server start --foreground
Permission issues with PID directory:

If you can't create PID files:

```bash
# macOS
ls -la ~/Library/Application\ Support/limps/pids/

# Linux
ls -la ~/.local/share/limps/pids/

# Windows
dir %APPDATA%\limps\pids
```

Ensure the directory exists and you have write permissions. If not, create it manually:

```bash
# macOS
mkdir -p ~/Library/Application\ Support/limps/pids

# Linux
mkdir -p ~/.local/share/limps/pids

# Windows
mkdir %APPDATA%\limps\pids
```

TIMEOUT error:

The daemon did not respond within the configured timeout. Each health-check request has its own timeout (for example, 1000ms during the final `limps server start` check and 3000ms for `limps server status`), and during startup limps will poll for up to about 5 seconds before reporting "Daemon may have failed to start".
Common causes:
- System resource constraints (high CPU/memory usage)
- Slow filesystem (especially for index initialization)
- Large document corpus requiring time to index
Resolution:

- Check system resources: `top` or Activity Monitor
- Wait a bit longer and retry: `limps server status`
- Run in foreground to see progress: `limps server start --foreground`
NETWORK_ERROR:
Cannot establish connection to the daemon.
Common causes:
- Port is blocked by firewall
- Daemon crashed after starting
- Incorrect host/port configuration
Resolution:

- Verify the daemon is running: `limps server status`
- Check firewall settings for port 4269
- Try `curl http://127.0.0.1:4269/health` manually
- Check daemon logs: see the `Log:` path in `limps server status` output
limps automatically cleans up stale PID files when:

- Running `limps server status` (discovers and removes stale files)
- Running `limps server start` (removes the stale file for the target port)
- The daemon shuts down gracefully with `limps server stop`

If you need to manually clean up PID files:

```bash
# macOS
rm ~/Library/Application\ Support/limps/pids/limps-*.pid

# Linux
rm ~/.local/share/limps/pids/limps-*.pid

# Windows
del %APPDATA%\limps\pids\limps-*.pid
```

When to manually clean up:

- After a system crash or forced shutdown
- If `limps server start` reports a daemon is running but it's not
- Before uninstalling limps
If you accidentally try to start a second daemon on the same port:

```bash
limps server start
# Error: limps daemon already running (PID 12345 on 127.0.0.1:4269). Run 'limps server stop' first.
```

This is expected behavior — limps prevents multiple daemons on the same port using PID-based locking.

Resolution:

- Check all running daemons: `limps server status`
- Stop the existing daemon: `limps server stop`
- Or start on a different port: `limps server start --port 8080`
If MCP clients can't connect to the daemon, verify connectivity step by step:

1. Check daemon status:

   ```bash
   limps server status
   # Should show daemon running with healthy status
   ```

2. Verify the health endpoint:

   ```bash
   curl http://127.0.0.1:4269/health
   # Should return JSON with status "ok"
   ```

3. Verify the MCP endpoint:

   ```bash
   curl -X POST http://127.0.0.1:4269/mcp \
     -H "Content-Type: application/json" \
     -d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{}}}'
   # Should return MCP initialize response
   ```

4. Enable debug logging:

   ```bash
   DEBUG=1 limps server start --foreground
   # Watch for connection attempts and errors
   ```

5. Check the MCP client config:

   Ensure the URL in your client config matches the daemon:

   ```json
   {
     "mcpServers": {
       "limps-planning-myproject": {
         "transport": {
           "type": "http",
           "url": "http://127.0.0.1:4269/mcp"
         }
       }
     }
   }
   ```

Problem: limps is using the wrong config file
If `limps config path` shows a different config than expected, use the diagnostic command to understand why:

```bash
limps config show-resolution
```

This shows all three priority levels for config resolution:

- CLI `--config` argument (highest priority)
- `MCP_PLANNING_CONFIG` environment variable (second priority)
- Local `.limps/config.json` (searches up from cwd)

Common causes:

- Environment variable set: Check if `MCP_PLANNING_CONFIG` is set in your shell or IDE. This takes priority over local config files.

  ```bash
  # Check if set
  echo $MCP_PLANNING_CONFIG

  # Unset if needed
  unset MCP_PLANNING_CONFIG
  ```

- Wrong working directory: Config search starts from your current working directory. Make sure you're in the right directory when running limps commands.

  ```bash
  # Check current directory
  pwd

  # Navigate to your project
  cd /path/to/your/project
  ```

- Missing `.limps/config.json`: The config file must be in a `.limps` subdirectory, not at the project root.

  ```bash
  # Correct location
  /path/to/project/.limps/config.json

  # Wrong - won't be found
  /path/to/project/config.json
  ```
Quick fixes:

```bash
# Override with explicit path
limps plan list --config /path/to/project/.limps/config.json

# Or set environment variable
export MCP_PLANNING_CONFIG=/path/to/project/.limps/config.json

# Or initialize a new config in current directory
limps init
```

## MCP Tools

limps exposes MCP tools for AI assistants:
| Category | Tools |
|---|---|
| Documents | process_doc, process_docs, create_doc, update_doc, delete_doc, list_docs, search_docs, manage_tags, open_document_in_cursor |
| Plans | create_plan, list_plans, list_agents, get_plan_status |
| Tasks | get_next_task, update_task_status, configure_scoring |
| Health | check_staleness, check_drift, infer_status, get_proposals, apply_proposal |
| Knowledge Graph | graph (unified: health, search, trace, entity, overlap, reindex, check, suggest) |
### Knowledge Graph

The knowledge graph builds a structured, queryable representation of your planning documents. It extracts 6 entity types (plan, agent, feature, file, tag, concept) and their relationships (ownership, dependency, modification, tagging, conceptual links). Use it to find conflicts, trace dependencies, and get graph-based suggestions.

```bash
# Build the graph from plan files
limps graph reindex

# Check graph health and conflicts
limps graph health --json

# Search entities
limps graph search "auth" --json

# Trace relationships
limps graph trace plan:0042 --direction down

# Detect conflicts (file contention, circular deps, stale WIP)
limps graph check --json

# Get graph-based suggestions
limps graph suggest dependency-order
```

See Knowledge Graph Architecture and CLI Reference for details.
## Health & Automation

limps includes automated health checks that detect issues and suggest fixes:

- Staleness — Flags plans/agents not updated within configurable thresholds
- Code drift — Detects when agent frontmatter references files that no longer exist
- Status inference — Suggests status changes based on dependency completion and body content
- Proposals — Aggregates all suggestions into reviewable, apply-able fixes

```bash
limps health check --json    # Run all checks
limps proposals apply-safe   # Auto-apply safe fixes
```

## Skills & Commands

This repo ships Claude Code slash commands in `.claude/commands/` and a Vercel Skills skill in `skills/limps-planning`.
Claude Code commands (available automatically when limps is your working directory):
| Command | Description |
|---|---|
| `/create-feature-plan` | Create a full TDD plan with agents |
| `/run-agent` | Pick up and execute the next agent |
| `/close-feature-agent` | Mark an agent PASS and clean up |
| `/update-feature-plan` | Revise an existing plan |
| `/audit-plan` | Audit a plan for completeness |
| `/list-feature-plans` | List all plans with status |
| `/plan-list-agents` | List agents in a plan |
| `/plan-check-status` | Check plan progress |
| `/pr-create` | Create a PR from the current branch |
| `/pr-check-and-fix` | Fix CI failures and update PR |
| `/pr-comments` | Review and respond to PR comments |
| `/review-branch` | General code review of current branch |
| `/review-mcp` | Review code for MCP/LLM safety |
| `/attack-cli-mcp` | Stress-test CLI + MCP for robustness |
Vercel Skills (for other AI IDEs):

Install the limps planning skill to get AI-powered guidance for plan creation, agent workflows, and task management:

```bash
# Install only the limps planning skill (recommended for consumers)
npx skills add https://github.com/sudosandwich/limps/tree/main/.claude/skills/limps-plan-operations

# Or install all available skills
npx skills add sudosandwich/limps
```

Available Skills:

| Skill | Description |
|---|---|
| `limps-plan-operations` | Plan identification, artifact loading, distillation rules, and lifecycle guidance using limps MCP tools |
| `mcp-code-review` | Security-focused code review for MCP servers and LLM safety |
| `branch-code-review` | General code review for design, maintainability, and correctness |
| `git-commit-best-practices` | Conventional commits and repository best practices |
See `skills.yaml` for the complete manifest of the `.claude/skills` packages installed via `npx skills add` above. The separate `skills/limps-planning/` package in this repo is a legacy distribution; new consumers should prefer the `.claude/skills` method.
## Extensions

Extensions add MCP tools and resources. Install from npm:

```bash
npm install -g @sudosandwich/limps-headless
```

Add to config:

```json
{
  "extensions": ["@sudosandwich/limps-headless"],
  "limps-headless": {
    "cacheDir": "~/Library/Application Support/limps-headless"
  }
}
```

Available extensions:

- `@sudosandwich/limps-headless` — Headless UI contract extraction, semantic analysis, and drift detection (Radix UI and Base UI migration).
## Obsidian Compatibility

The `@sudosandwich/limps-obsidian-plugin` package provides deep Obsidian integration:
- Persistent MCP session to the limps daemon with auto-reconnect, keepalive, and CLI fallback
- Document management — create, update, delete plan documents from within Obsidian
- Task management — get next task, update task status (GAP/WIP/PASS/BLOCKED)
- Search & proposals — full-text search, proposal review, and auto-apply for safe fix types
- Health Hub — sidebar view with daemon, graph, link, and MCP status
- Directed graph view — interactive 2D/3D force graph with clickable nodes
- Scheduled health checks — periodic staleness, drift, and conflict detection
- Vault automation — auto-reindex when plan files change, event-driven refresh
- Editor diagnostics — inline link validation
- Graph sync to Obsidian surfaces (`.md`, `.canvas`, `.base`)
Full YAML frontmatter support, tag management (frontmatter and inline #tag), and automatic exclusion of .obsidian/, .git/, node_modules/.
See packages/limps-obsidian-plugin/README.md for setup instructions, commands, and settings reference.
## Development

```bash
git clone https://github.com/paulbreuler/limps.git
cd limps
npm install
npm run build
npm test
```

This is a monorepo with:

- `packages/limps` — Core MCP server
- `packages/limps-headless` — Headless UI extension (Radix/Base UI contract extraction and audit)
## Used in Production

limps manages planning for runi, using a separate folder (in this case a git repo) for plans.
## Creating a feature plan

The fastest way is the `/create-feature-plan` slash command (Claude Code) — it handles numbering, doc creation, and agent distillation automatically via MCP tools. See `.claude/commands/create-feature-plan.md` for the full spec.

You can also run the same steps manually with MCP tools:

- `list_plans` → determine the next plan number
- `create_plan` → scaffold the plan directory
- `create_doc` → add plan, interfaces, README, and agent files
- `update_task_status` → track progress
Plans follow this layout:

```text
NNNN-descriptive-name/
├── README.md
├── NNNN-descriptive-name-plan.md
├── interfaces.md
└── agents/
    ├── 000_agent_infrastructure.agent.md
    ├── 001_agent_feature-a.agent.md
    └── ...
```

Numbered prefixes keep plans and agents lexicographically ordered. `get_next_task` uses the agent number (plus dependency and workload scores) to suggest what to work on next.
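With the layout above you can ask for the next task directly from the CLI; the plan name is hypothetical:

```bash
limps plan next 0042-auth
# Expect the lowest-numbered agent whose depends_on entries are satisfied,
# with a score breakdown (dependency / priority / workload).
```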
## Deep Dive

### Plan Structure

```text
plans/
├── 0001-feature-name/
│   ├── 0001-feature-name-plan.md   # Main plan with specs
│   ├── interfaces.md               # Interface contracts
│   ├── README.md                   # Status index
│   └── agents/                     # Task files
│       ├── 000-setup.md
│       ├── 001-implement.md
│       └── 002-test.md
└── 0002-another-feature/
    └── ...
```

Agent files use frontmatter to track status:

```yaml
---
status: GAP | WIP | PASS | BLOCKED
persona: coder | reviewer | pm | customer
depends_on: ["000-setup"]
files:
  - src/components/Feature.tsx
---
```

### Task Scoring Algorithm
get_next_task returns tasks scored by:
| Component | Max Points | Description |
|---|---|---|
| Dependency | 40 | All dependencies satisfied = 40, else 0 |
| Priority | 30 | Based on agent number (lower = higher priority) |
| Workload | 30 | Based on file count (fewer = higher score) |
Biases adjust final scores:

```json
{
  "scoring": {
    "biases": {
      "plans": { "0030-urgent-feature": 20 },
      "personas": { "coder": 5, "reviewer": -10 },
      "statuses": { "GAP": 5, "WIP": -5 }
    }
  }
}
```

### RLM (Recursive Language Model) Support
process_doc and process_docs execute JavaScript in a secure QuickJS sandbox. User-provided code is statically validated and cannot use require, import, eval, fetch, XMLHttpRequest, WebSocket, process, timers, or other host/network APIs—so it cannot make external calls or access the host.
```javascript
await process_doc({
  path: "plans/0001-feature/plan.md",
  code: `
    const features = extractFeatures(doc.content);
    return features.filter(f => f.status === 'GAP');
  `,
});
```

Available extractors:

- `extractSections()` — Markdown headings
- `extractFrontmatter()` — YAML frontmatter
- `extractFeatures()` — Plan features with status
- `extractAgents()` — Agent metadata
- `extractCodeBlocks()` — Fenced code blocks
LLM sub-queries (opt-in):

```javascript
await process_doc({
  path: "plans/0001/plan.md",
  code: "extractFeatures(doc.content)",
  sub_query: "Summarize each feature",
  allow_llm: true,
  llm_policy: "force", // or 'auto' (skips small results)
});
```

### MCP Resources

Progressive disclosure via resources:

| Resource | Description |
|---|---|
| `plans://index` | List of all plans (minimal) |
| `plans://summary` | Plan summaries with key info |
| `plans://full` | Full plan documents |
| `decisions://log` | Decision log entries |
### Example: Custom Cursor Commands

Create `.cursor/commands/run-agent.md`:

```markdown
# Run Agent

Start work on the next available task.

## Instructions

1. Use `get_next_task` to find the highest-priority task
2. Use `process_doc` to read the agent file
3. Use `update_task_status` to mark it WIP
4. Follow the agent's instructions
```

This integrates with limps MCP tools for seamless task management.
## What is MCP?

Model Context Protocol is a standardized protocol for AI applications to connect to external systems. Originally from Anthropic (Nov 2024), now part of the Linux Foundation's Agentic AI Foundation.

## License

MIT

