Coderrr is a dual-architecture AI coding agent built from a Python FastAPI backend and a Node.js CLI frontend. This document explains the system design, data flow, and key architectural decisions.
File: main.py
Responsibilities:
- Interface with AI models (Mistral AI / GitHub Models)
- Enforce JSON response schema
- Handle API authentication
- Process chat requests
- Return structured plans
Key Features:
- FastAPI for high-performance async API
- Dynamic mistralai import with fallback
- Environment-based configuration
- CORS enabled for local development
Endpoint: `POST /chat`

```json
{
  "prompt": "user request",
  "temperature": 0.2,
  "max_tokens": 2000,
  "top_p": 1.0
}
```
Response:

```json
{
  "response": "JSON-formatted plan"
}
```
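A minimal client-side sketch of the request shape above. The helper name and default values are assumptions for illustration, not part of the actual CLI code:

```javascript
// Hypothetical helper that builds a /chat request body matching the
// documented schema. Defaults mirror the example values above.
function buildChatRequest(prompt, { temperature = 0.2, maxTokens = 2000, topP = 1.0 } = {}) {
  if (typeof prompt !== 'string' || prompt.length === 0) {
    throw new Error('prompt must be a non-empty string');
  }
  return {
    prompt,
    temperature,
    max_tokens: maxTokens,
    top_p: topP,
  };
}

// Sending it to the backend (Node 18+ global fetch), e.g.:
// const res = await fetch('http://localhost:5000/chat', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(buildChatRequest('add a README')),
// });
// const { response } = await res.json(); // JSON-formatted plan as a string
```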
Entry Points:
- `bin/coderrr.js` - Modern commander-based CLI
- `bin/coderrr-cli.js` - Blessed-based TUI (legacy)
Core Modules (src/):

Agent (`src/agent.js`):
- Backend communication
- Plan parsing and execution
- Codebase scanning integration
- Auto-testing coordination

File Operations (`src/fileOps.js`):
- Create, read, update, patch, delete files
- Automatic directory creation
- Path resolution (relative → absolute)

Executor:
- Safe command execution
- User permission prompts
- Live stdout/stderr streaming
- Shell configuration (PowerShell on Windows)

TodoManager:
- Parse plans into visual TODO lists
- Track progress (pending → in-progress → completed)
- Visual indicators (○ ⋯ ✓)

Scanner:
- Recursive file discovery
- Smart filtering (ignore node_modules, etc.)
- Caching (1-minute TTL)
- File search by pattern

Git integration:
- Git repository detection
- Automatic checkpoint commits before operations
- Auto-commit of successful changes
- Interactive rollback menu
- Uncommitted-changes detection

UI (`src/ui.js`):
- Chalk-based colored output
- Ora spinners
- Inquirer prompts
- Status indicators
```
┌─────────────────────────────────────────────────────────────┐
│                         User Input                          │
└─────────────────────┬───────────────────────────────────────┘
                      │
                      ▼
       ┌─────────────────────────────┐
       │   Agent.process(request)    │
       │   - Load codebase context   │
       │   - Prepare enhanced prompt │
       └─────────────┬───────────────┘
                     │
                     ▼
       ┌─────────────────────────────┐
       │   Backend POST /chat        │
       │   - Call Mistral AI         │
       │   - Enforce JSON schema     │
       └─────────────┬───────────────┘
                     │
                     ▼
       ┌─────────────────────────────┐
       │  Agent.parseJsonResponse()  │
       │  - Extract JSON from text   │
       │  - Handle markdown blocks   │
       └─────────────┬───────────────┘
                     │
                     ▼
       ┌─────────────────────────────┐
       │  TodoManager.parseTodos()   │
       │  - Create visual TODO list  │
       │  - Display to user          │
       └─────────────┬───────────────┘
                     │
                     ▼
       ┌─────────────────────────────┐
       │     Agent.executePlan()     │
       │     For each action:        │
       │     ├─ File op → FileOps    │
       │     └─ Command → Executor   │
       └─────────────┬───────────────┘
                     │
                     ▼
       ┌─────────────────────────────┐
       │      Agent.runTests()       │
       │  - Detect test framework    │
       │  - Execute tests            │
       └─────────────┬───────────────┘
                     │
                     ▼
                ┌─────────┐
                │  Done!  │
                └─────────┘
```
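The flow above can be sketched as a linear async pipeline. This is a simplified illustration with injected stage functions (the `stages` parameter and its callback names are assumptions), not the actual Agent implementation:

```javascript
// Simplified sketch of the request lifecycle shown in the diagram.
// Each stage is injected so the pipeline itself stays testable; the real
// Agent wires these to the backend, TodoManager, FileOps, and Executor.
async function processRequest(request, stages) {
  const context = await stages.scanCodebase();            // load codebase context
  const raw = await stages.callBackend(request, context); // POST /chat
  const plan = stages.parsePlan(raw);                     // extract JSON plan
  stages.showTodos(plan);                                 // visual TODO list
  await stages.executePlan(plan);                         // file ops + commands
  await stages.runTests();                                // auto-test phase
  return plan;
}
```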
The AI returns plans in this strict format:

```json
{
  "explanation": "Brief summary of the plan",
  "plan": [
    {
      "action": "create_file|update_file|patch_file|delete_file|read_file|run_command",
      "path": "relative/path/to/file.js",
      "content": "full file content",
      "command": "shell command",
      "summary": "one-line description"
    }
  ]
}
```

Supported Actions:
- `create_file` - Create a new file with content
- `update_file` - Replace an entire file
- `patch_file` - Modify specific parts of a file
- `delete_file` - Remove a file
- `read_file` - Read and display a file
- `run_command` - Execute a shell command
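One way to route these six actions is a dispatch table. The sketch below is illustrative (handler bodies are stubs; in the real code file actions go to FileOps and `run_command` to the Executor):

```javascript
// Illustrative dispatch table for the six documented plan actions.
const actionHandlers = {
  create_file: (step) => `create ${step.path}`,
  update_file: (step) => `update ${step.path}`,
  patch_file:  (step) => `patch ${step.path}`,
  delete_file: (step) => `delete ${step.path}`,
  read_file:   (step) => `read ${step.path}`,
  run_command: (step) => `run ${step.command}`,
};

function dispatch(step) {
  const handler = actionHandlers[step.action];
  if (!handler) throw new Error(`Unknown action: ${step.action}`);
  return handler(step);
}
```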
The scanner gives the AI full awareness of the project structure to prevent filename mismatches.
1. Scan Phase

   On the first `Agent.process()` call:

   ```
   scanner.scan() → {
     structure: [...],  // All files/dirs
     files: {...},      // File metadata
     summary: {...}     // Stats
   }
   ```

2. Cache Phase
   - Results cached for 60 seconds
   - Subsequent requests use the cache
   - Manual refresh available

3. Context Enhancement

   ```
   enhancedPrompt = `${userPrompt}

   EXISTING PROJECT STRUCTURE:
   - src/agent.js (31KB)
   - src/fileOps.js (8KB)
   ...

   Use EXACT filenames from above.`
   ```
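A sketch of that context-enhancement step as a function. The function name and the exact listing format are assumptions based on the example above:

```javascript
// Hypothetical sketch: append the scanned file list to the user prompt so
// the model uses exact filenames. `scan.structure` entries are assumed to
// carry { path, size } as produced by the scanner.
function buildEnhancedPrompt(userPrompt, scan) {
  const listing = scan.structure
    .map((f) => `- ${f.path} (${Math.round(f.size / 1024)}KB)`)
    .join('\n');
  return `${userPrompt}\n\nEXISTING PROJECT STRUCTURE:\n${listing}\n\nUse EXACT filenames from above.`;
}
```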
Directories:
`node_modules`, `env`, `.venv`, `__pycache__`, `.git`, `dist`, `build`, `coverage`, `vendor`, `.next`, `.nuxt`
Files:
`.DS_Store`, `Thumbs.db`, `package-lock.json`, `yarn.lock`, `.env`, `.gitignore`
Size Limit: 500KB per file
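The filtering rules above can be sketched as a single predicate. The helper name and the `entry` shape are assumptions for illustration:

```javascript
// Sketch of the scanner's filtering rules: skip ignored directories and
// files, and skip any file over the 500KB limit.
const IGNORED_DIRS = new Set([
  'node_modules', 'env', '.venv', '__pycache__', '.git',
  'dist', 'build', 'coverage', 'vendor', '.next', '.nuxt',
]);
const IGNORED_FILES = new Set([
  '.DS_Store', 'Thumbs.db', 'package-lock.json', 'yarn.lock', '.env', '.gitignore',
]);
const MAX_FILE_SIZE = 500 * 1024; // 500KB per-file limit

function shouldScan(entry) {
  // entry: { name, isDirectory, size }
  if (entry.isDirectory) return !IGNORED_DIRS.has(entry.name);
  if (IGNORED_FILES.has(entry.name)) return false;
  return entry.size <= MAX_FILE_SIZE;
}
```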
All commands require user approval:
```javascript
executor.execute(command, {
  requirePermission: true  // Always enforced
})
```

Flow:

- Display the command to the user
- Prompt for confirmation (Y/n)
- Execute if approved
- Show live output
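The approval gate can be sketched with the prompt injected as a callback, so the flow is visible without Inquirer. Names here are assumptions; in the real CLI `confirm` is an interactive Y/n prompt and `run` spawns the shell command:

```javascript
// Sketch of the permission flow: show the command, ask for approval,
// execute only if approved.
async function executeWithPermission(command, { confirm, run }) {
  console.log(`About to run: ${command}`); // display command to user
  const approved = await confirm(`Run "${command}"? (Y/n)`);
  if (!approved) return { skipped: true };
  return run(command); // live stdout/stderr streaming would happen here
}
```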
- All secrets in `.env`
- No hardcoded credentials
- Backend URL configurable
- API keys never logged
- Paths resolved to absolute
- Parent directories auto-created
- Operations logged
- User can review changes
Required:

```
GITHUB_TOKEN=xxx               # GitHub Models API key
# OR
MISTRAL_API_KEY=xxx            # Mistral AI API key
```

Optional:

```
MISTRAL_ENDPOINT=https://...   # API endpoint
MISTRAL_MODEL=mistral-large    # Model name
CODERRR_BACKEND=http://...     # Backend URL
TIMEOUT_MS=120000              # Request timeout
```

Port: 5000 (default)
Host: localhost (secure)
Reload: Enabled in development
Command:

```
uvicorn main:app --reload --port 5000
```

Automatic test detection after successful plan execution:
| Framework | Detection | Command |
|---|---|---|
| JavaScript | `package.json` | `npm test` |
| Python | `pytest.ini` or `tests/` | `pytest` |
| Go | `go.mod` | `go test ./...` |
| Rust | `Cargo.toml` | `cargo test` |
| Java | `pom.xml` or `build.gradle` | `mvn test` / `gradle test` |
```javascript
if (error.code === 'ECONNREFUSED') {
  ui.error('Cannot connect to backend')
  ui.warning('Start backend: uvicorn main:app --reload --port 5000')
}
```

Three-tier parsing strategy:
1. Direct `JSON.parse()`
2. Extract from markdown code blocks
3. Find the first `{...}` object

On failure:
- Graceful fallback
- Clear error messages
- Option to continue with the next operation
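The three tiers can be sketched as a single function (an illustrative reading of the strategy above, not the exact `Agent.parseJsonResponse()` source):

```javascript
// Sketch of the three-tier parsing strategy.
function parseJsonResponse(text) {
  // Tier 1: direct parse
  try { return JSON.parse(text); } catch { /* fall through */ }
  // Tier 2: extract from a markdown code block
  const block = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  if (block) {
    try { return JSON.parse(block[1]); } catch { /* fall through */ }
  }
  // Tier 3: find the first {...} span
  const first = text.indexOf('{');
  const last = text.lastIndexOf('}');
  if (first !== -1 && last > first) {
    try { return JSON.parse(text.slice(first, last + 1)); } catch { /* fall through */ }
  }
  return null; // graceful fallback; the caller reports a clear error
}
```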
- Codebase scan: 60-second TTL
- Typical scan time: <10ms for 25 files
- Memory: ~200KB cached data
- Async FastAPI: Non-blocking I/O
- Streaming: Response streaming planned
- Timeout: 120s default
- Lazy loading: Modules loaded on demand
- Spinners: Async UI updates
- Parallel ops: File operations can run in parallel
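The 60-second scan cache can be sketched as a tiny TTL wrapper. The class name is an assumption, and the time source is injected here only for testability; the real scanner presumably uses `Date.now()` directly:

```javascript
// Minimal single-entry TTL cache matching the 60-second scanner cache.
class TtlCache {
  constructor(ttlMs, now = Date.now) {
    this.ttlMs = ttlMs;
    this.now = now;
    this.entry = null;
  }
  get() {
    if (this.entry && this.now() - this.entry.at < this.ttlMs) return this.entry.value;
    return null; // expired or never set
  }
  set(value) {
    this.entry = { at: this.now(), value };
  }
  invalidate() { // manual refresh path
    this.entry = null;
  }
}
```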
Adding a new file operation:

```javascript
// In src/fileOps.js
class FileOperations {
  async newOperation(params) {
    // Implementation
  }

  async execute(operation) {
    switch (operation.action) {
      case 'new_operation':
        return await this.newOperation(operation);
    }
  }
}
```

Adding new test-framework detection:

```javascript
// In src/agent.js
async runTests() {
  const testCommands = [
    // Add new detection
    { file: 'newtest.config', command: 'newtest run' }
  ];
}
```

Change `CODERRR_BACKEND`:
```
CODERRR_BACKEND=https://my-custom-backend.com
```

The backend must implement the `/chat` endpoint with the same schema.
- Backend: `uvicorn` logs to stdout
- Frontend: UI messages via `ui.js`
- Errors: caught and displayed with context
```
# Enable verbose logging
DEBUG=* coderrr exec "task"
```

```
# Run all tests
npm test

# Specific tests
node test/test-scanner.js
node test/test-agent-scanner.js
node test/test-connection.js
```

- Always use environment variables for configuration
- Never bypass permission prompts for safety
- Keep scanner cache to 1 minute for balance
- Review AI-generated code before committing
- Run tests after making changes
- Update documentation when adding features
- Follow existing patterns for consistency
Planned improvements:
- WebSocket streaming for real-time responses
- Plugin system for custom operations
- Multi-backend support (OpenAI, Claude)
- Semantic code search
- Dependency analysis
- Incremental scanning
- Undo/redo operations
For implementation details, see .github/copilot-instructions.md