Most users don't need this! The default hosted backend works great. But if you want to run your own backend for privacy, customization, or development, follow this guide.
- Privacy: Keep your code and AI interactions private
- Customization: Use different AI models or providers
- Development: Contribute to the backend or test changes
- Cost Control: Use your own API keys and control costs
Prerequisites:
- Python 3.8 or higher
- pip (Python package manager)
- An AI provider API key (GitHub Models, Mistral AI, etc.)
Clone the repository:

```bash
git clone https://github.com/Akash-nath29/Coderrr.git
cd Coderrr
```

Create and activate a virtual environment.

Windows:

```powershell
python -m venv env
.\env\Scripts\Activate.ps1
```

Linux/Mac:

```bash
python3 -m venv env
source env/bin/activate
```

Install dependencies:

```bash
pip install -r backend/requirements.txt
```

Create backend/.env file:

```bash
cp backend/.env.example backend/.env
```

Edit backend/.env with your API credentials:
```env
# Choose ONE authentication method:

# Option 1: GitHub Models (Free tier available)
GITHUB_TOKEN=ghp_your_github_token_here

# Option 2: Mistral AI Direct
# MISTRAL_API_KEY=your_mistral_api_key_here

# Model Configuration
MISTRAL_ENDPOINT=https://models.inference.ai.azure.com
MISTRAL_MODEL=mistral-large-2411

# Server Configuration
TIMEOUT_MS=120000
```

GitHub Models (Recommended for Free Tier):
- Go to https://github.com/settings/tokens
- Click "Generate new token" → "Generate new token (classic)"
- Select scopes: `read:user` (minimal)
- Copy the token and paste it into `GITHUB_TOKEN`
Mistral AI:
- Go to https://console.mistral.ai/
- Sign up and navigate to API Keys
- Create a new API key
- Copy the key and paste it into `MISTRAL_API_KEY`
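Once populated, backend/.env is just plain KEY=VALUE lines with `#` comments. The backend most likely loads it with a library such as python-dotenv; the stdlib sketch below is only an illustration of the file format, not the backend's actual loader:

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, skipping blanks and # comments."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # ignore comments and empty lines
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

if __name__ == "__main__":
    sample = "# comment\nMISTRAL_MODEL=mistral-large-2411\nTIMEOUT_MS=120000\n"
    print(parse_env(sample))
```

Note that commented-out keys (like the unused `MISTRAL_API_KEY` line above) are simply skipped, which is why you can keep both auth options in the file and uncomment only one.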
Development Mode (with auto-reload):
```bash
cd backend
uvicorn main:app --reload --port 5000
```

Production Mode:

```bash
cd backend
uvicorn main:app --host 0.0.0.0 --port 5000 --workers 4
```

The backend will be available at http://localhost:5000.
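Before wiring up the CLI, you can confirm the server is answering. A small stdlib check (this assumes the root path returns JSON when the backend is healthy, as described in the troubleshooting section):

```python
import json
import urllib.request
import urllib.error

def backend_alive(url: str, timeout: float = 5.0):
    """Return the backend's JSON response, or None if it is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return json.loads(resp.read().decode())
    except (urllib.error.URLError, ValueError):
        # URLError covers connection refused/timeouts; ValueError covers non-JSON bodies
        return None

if __name__ == "__main__":
    info = backend_alive("http://localhost:5000")
    if info:
        print("backend up:", info)
    else:
        print("backend unreachable")
```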
Create ~/.coderrr/.env:
Windows:
```powershell
mkdir $HOME\.coderrr
echo CODERRR_BACKEND=http://localhost:5000 > $HOME\.coderrr\.env
```

Linux/Mac:

```bash
mkdir -p ~/.coderrr
echo "CODERRR_BACKEND=http://localhost:5000" > ~/.coderrr/.env
```

Test the setup:

```bash
coderrr exec "Create a hello world script"
```

For production deployment, see our Deployment Guide, which covers:
- Docker deployment
- Cloud hosting (AWS, Azure, GCP)
- Vercel/Netlify deployment
- PM2 process management
- Nginx reverse proxy setup
Error: ModuleNotFoundError: No module named 'fastapi'
- Solution: Make sure you activated the virtual environment and ran `pip install -r backend/requirements.txt`
Error: Address already in use
- Solution: Port 5000 is already taken. Either kill the process using port 5000 or start the server on a different port:

```bash
uvicorn main:app --port 5001
```

Then update `~/.coderrr/.env`:

```env
CODERRR_BACKEND=http://localhost:5001
```
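To see which port is free before restarting, a quick stdlib probe (illustrative only, not part of Coderrr):

```python
import socket

def port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success, an errno on failure
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    for candidate in (5000, 5001):
        state = "in use" if port_open("127.0.0.1", candidate) else "free"
        print(f"port {candidate}: {state}")
```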
Error: Failed to communicate with backend: ECONNREFUSED
- Check the backend is running: `curl http://localhost:5000` should return a JSON response
- Check `~/.coderrr/.env` has the correct `CODERRR_BACKEND` URL
- Check your firewall isn't blocking port 5000
Error: 401 Unauthorized
- Verify your API key is correct in `backend/.env`
- For GitHub token: check it has the required permissions
- For Mistral: verify the key is active on their console
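For GitHub tokens specifically, you can check validity directly: GitHub's REST API returns 200 on `/user` for a valid token and 401 otherwise. A small sketch using only the standard library (the Bearer header format follows GitHub's documented auth scheme):

```python
import os
import urllib.request
import urllib.error

def github_token_headers(token: str) -> dict:
    """Auth headers for a GitHub API request (Bearer scheme)."""
    return {"Authorization": f"Bearer {token}", "Accept": "application/vnd.github+json"}

def check_github_token(token: str) -> bool:
    """Return True if GitHub accepts the token (HTTP 200 on /user)."""
    req = urllib.request.Request("https://api.github.com/user",
                                 headers=github_token_headers(token))
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 401 means the token is invalid, expired, or revoked

if __name__ == "__main__":
    print("token valid:", check_github_token(os.environ["GITHUB_TOKEN"]))
```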
Edit backend/.env:
```env
# Use GPT-4 via Azure
MISTRAL_ENDPOINT=https://your-azure-endpoint.openai.azure.com
MISTRAL_MODEL=gpt-4

# Use Claude via custom endpoint
MISTRAL_ENDPOINT=https://api.anthropic.com
MISTRAL_MODEL=claude-3-opus
```

Note: You may need to modify backend/main.py for providers other than Mistral/GitHub Models.
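As a rough illustration of what such a modification involves: assuming the target provider speaks an OpenAI-compatible chat-completions API (as the default GitHub Models endpoint does), the request the backend sends boils down to a URL plus a JSON body. This sketch is not Coderrr's actual code, and `build_chat_request` is a hypothetical helper:

```python
import json

def build_chat_request(model: str, prompt: str, endpoint: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for an OpenAI-compatible /chat/completions call.

    Illustrative only -- the real request shape lives in backend/main.py,
    and providers like Anthropic use a different wire format.
    """
    url = endpoint.rstrip("/") + "/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

if __name__ == "__main__":
    url, body = build_chat_request("mistral-large-2411", "hello",
                                   "https://models.inference.ai.azure.com")
    print(url)
    print(body.decode())
```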
For large requests, raise the timeout:

```env
TIMEOUT_MS=300000  # 5 minutes
```

Add to backend/main.py:
```python
from fastapi import FastAPI, Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded

# Key rate limits by client IP address
limiter = Limiter(key_func=get_remote_address)
app = FastAPI()
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.post("/chat")
@limiter.limit("10/minute")  # at most 10 requests per minute per client
async def chat(request: Request, body: ChatRequest):
    # ... existing code
```

Note that slowapi requires the decorated endpoint to accept a `Request` parameter, so `Request` must be imported from fastapi.

Security best practices:

- Never commit `.env` files to git
- Use strong API keys and rotate them regularly
- Run backend on localhost only unless you need remote access
- Enable firewall if exposing backend to internet
- Use HTTPS in production with reverse proxy
- Monitor API usage to prevent unexpected costs
Remember: Most users don't need to self-host! The default hosted backend at https://coderrr-backend.vercel.app works great for normal use.