This guide covers deploying the Chapters MVP backend to production.

Prerequisites:
- Docker and Docker Compose
- PostgreSQL 15+ with pgvector extension
- Redis 7+
- OpenAI API key
- S3-compatible storage (AWS S3 or Cloudflare R2)
- Domain name (optional but recommended)
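You can verify the CLI prerequisites are on your PATH before starting — a minimal sketch (the tool list is an example; adjust it to your setup):

```shell
# check-prereqs.sh — verify required CLI tools are on PATH (sketch)

need() {
  # Succeed if the command exists; report and fail if not.
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1" >&2
    return 1
  fi
}

# Example: check the tools this guide relies on (adjust the list).
for tool in docker docker-compose psql redis-cli; do
  need "$tool" || true
done
```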
Create a `.env.production` file:

```bash
# Database (use managed PostgreSQL service)
DATABASE_URL=postgresql://user:password@host:5432/chapters

# Redis (use managed Redis service)
REDIS_URL=redis://host:6379/0

# JWT (generate secure key)
SECRET_KEY=<generate-with-openssl-rand-hex-32>
ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=15
REFRESH_TOKEN_EXPIRE_DAYS=7

# OpenAI
OPENAI_API_KEY=sk-your-production-key

# S3 Storage (Cloudflare R2 recommended)
S3_BUCKET=chapters-production
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_ENDPOINT_URL=https://account-id.r2.cloudflarestorage.com
S3_REGION=auto

# Sentry (recommended for production)
SENTRY_DSN=https://your-sentry-dsn

# App
DEBUG=false
```

Generate secrets:

```bash
# Generate SECRET_KEY
openssl rand -hex 32

# Generate S3 credentials (from your provider)
# AWS: IAM Console
# Cloudflare R2: Dashboard > R2 > Manage R2 API Tokens
```

**Docker Compose** — best for small-scale deployments and staging environments.
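Before deploying, it is worth checking that `.env.production` actually contains every key the template defines. A minimal sketch (the required-key list here is an assumption; trim it to what your deployment treats as mandatory):

```shell
# check-env.sh — report required keys missing from an env file (sketch)

required_keys="DATABASE_URL REDIS_URL SECRET_KEY OPENAI_API_KEY S3_BUCKET S3_ACCESS_KEY S3_SECRET_KEY S3_ENDPOINT_URL"

missing_vars() {
  # Usage: missing_vars <env-file>; prints each required key the file lacks.
  file=$1
  for key in $required_keys; do
    grep -q "^${key}=" "$file" || echo "$key"
  done
}

# Example usage:
# missing_vars .env.production
```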
```bash
# 1. Clone repository
git clone <your-repo>
cd chapters

# 2. Set up environment
cp .env.production backend/.env

# 3. Start services
docker-compose --profile full up -d

# 4. Run migrations
docker-compose exec backend alembic upgrade head

# 5. Check logs
docker-compose logs -f backend
```

Pros: simple, all-in-one. Cons: single server, limited scaling.
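Once the stack is up (step 3), you may want to wait until the backend actually answers before running migrations. A small retry helper, sketched with the probe command passed in so it works with curl or anything else; the localhost:8000 URL in the usage note is an assumption about the backend's port:

```shell
# wait-for.sh — retry a probe command until it succeeds (sketch)

wait_for_ok() {
  # Usage: wait_for_ok <attempts> <command...>
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  return 1
}

# Example usage (assumption: backend listens on localhost:8000):
# wait_for_ok 30 curl -fsS http://localhost:8000/health
```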
**Cloud platforms** — best for production deployments that need to scale.

Backend hosting:

- Railway: Easy deployment, auto-scaling
- Render: Free tier available, good for MVP
- Fly.io: Global edge deployment
- AWS ECS/Fargate: Enterprise-grade
- Google Cloud Run: Serverless containers

Managed PostgreSQL:

- Supabase: PostgreSQL with pgvector support
- Neon: Serverless PostgreSQL
- AWS RDS: Managed PostgreSQL
- DigitalOcean Managed Database

Managed Redis:

- Upstash: Serverless Redis
- Redis Cloud: Managed Redis
- AWS ElastiCache: Enterprise Redis

Object storage:

- Cloudflare R2: S3-compatible, no egress fees (recommended)
- AWS S3: Industry standard
- DigitalOcean Spaces: Simple S3-compatible
**Kubernetes** — best for large-scale, multi-region deployments.

See the k8s/ directory for Kubernetes manifests (to be created).
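Until those manifests exist, a minimal Deployment for the backend might look like the sketch below. The image name, Secret name, and port 8000 are assumptions — adjust them to your build:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: your-registry/chapters-backend:latest  # assumption
          envFrom:
            - secretRef:
                name: chapters-env  # e.g. created from .env.production
          ports:
            - containerPort: 8000   # assumption: backend port
```

A Service and Ingress would sit in front of this; the Secret can be created with `kubectl create secret generic chapters-env --from-env-file=.env.production`.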
Railway is recommended for quick MVP deployment.

```bash
# Install the CLI and log in
npm install -g @railway/cli
railway login

# Initialize the project
railway init

# Add PostgreSQL
railway add --database postgres

# Add Redis
railway add --database redis

# Deploy backend
railway up
```

In the Railway dashboard:

- Go to your backend service
- Add all environment variables from `.env.production`
- Railway will auto-populate DATABASE_URL and REDIS_URL

Run migrations:

```bash
railway run alembic upgrade head
```

Subsequent deploys go out on push:

```bash
git push railway main
```

On Render:

- Go to the Render dashboard
- Click "New +" → "Web Service"
- Connect your GitHub repository
- Select `backend` as the root directory

Service settings:

```
Name: chapters-api
Environment: Docker
Docker Command: (leave default)
```

Then add the data services:

- Click "New +" → "PostgreSQL"
- Copy the connection string
- Click "New +" → "Redis"
- Copy the connection string
Add all variables from .env.production
Render will auto-deploy on git push.
Enable pgvector:

```sql
-- Connect to your database
CREATE EXTENSION IF NOT EXISTS vector;
```

Run migrations:

```bash
# Local
cd backend
alembic upgrade head

# Docker
docker-compose exec backend alembic upgrade head

# Railway
railway run alembic upgrade head

# Render (via shell)
alembic upgrade head
```

Verify the schema:

```sql
-- Check tables
\dt

-- Should see 21 tables:
-- users, books, chapters, chapter_blocks, drafts, draft_blocks,
-- notes, footnotes, hearts, follows, bookmarks, margins,
-- between_the_lines_threads, between_the_lines_invites,
-- between_the_lines_messages, between_the_lines_pins,
-- blocks, reports, chapter_embeddings, user_taste_profiles
```

Smoke-test the API:

```bash
curl https://your-domain.com/health
# Expected: {"status": "healthy"}

curl https://your-domain.com/
# Expected: Welcome message
```

Sentry is already integrated; just add SENTRY_DSN to the environment:

```bash
SENTRY_DSN=https://your-sentry-dsn
```

Logs are automatically structured and sent to stdout.
View logs:

```bash
# Docker
docker-compose logs -f backend

# Railway
railway logs
```

On Render, view logs in the dashboard.

Metrics worth monitoring:

- API response times
- Error rates
- Database connection pool
- Redis connection status
- OpenAI API usage
- S3 upload success rate
Security checklist:

- Change SECRET_KEY from the default
- Set DEBUG=false in production
- Use HTTPS (SSL/TLS)
- Enable CORS only for your domains
- Use managed database with backups
- Enable database SSL connections
- Rotate API keys regularly
- Set up rate limiting (already implemented)
- Enable Sentry for error tracking
- Use environment variables (never commit secrets)
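The first two checklist items are easy to enforce mechanically — a sketch that fails if DEBUG is not false or SECRET_KEY does not look like the output of `openssl rand -hex 32` (the 64-hex-char check is an assumption based on the template above):

```shell
# audit-env.sh — basic production env sanity checks (sketch)

audit_env() {
  # Usage: audit_env <env-file>; returns non-zero on any failed check.
  file=$1
  ok=0
  grep -q '^DEBUG=false$' "$file" || { echo "DEBUG must be false"; ok=1; }
  # A SECRET_KEY from `openssl rand -hex 32` is 64 lowercase hex chars.
  grep -Eq '^SECRET_KEY=[0-9a-f]{64}$' "$file" || { echo "SECRET_KEY looks unset or weak"; ok=1; }
  return "$ok"
}

# Example usage:
# audit_env backend/.env
```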
```bash
# Manual backup
pg_dump $DATABASE_URL > backup.sql

# Restore
psql $DATABASE_URL < backup.sql
```

Recommended: use a managed database with automatic backups.
- Railway: Automatic daily backups
- Render: Automatic backups on paid plans
- AWS RDS: Automated backups with point-in-time recovery
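For the manual pg_dump route, a timestamped, compressed variant is a small step up — a sketch; the filename scheme is just a suggestion:

```shell
# backup.sh — timestamped, compressed manual backup (sketch)

backup_name() {
  # e.g. chapters-20240101T120000Z.sql.gz
  echo "chapters-$(date -u +%Y%m%dT%H%M%SZ).sql.gz"
}

run_backup() {
  # Only attempt a dump when DATABASE_URL is set.
  [ -n "${DATABASE_URL:-}" ] || { echo "DATABASE_URL not set; skipping" >&2; return 1; }
  pg_dump "$DATABASE_URL" | gzip > "$(backup_name)"
}

# Restore later with:
# gunzip -c chapters-<timestamp>.sql.gz | psql "$DATABASE_URL"
```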
S3/R2 data is already redundant. Consider:

- Enabling versioning
- Setting up lifecycle policies
- Cross-region replication (for critical data)
The backend is stateless and can scale horizontally:

```bash
# Docker Compose
docker-compose up --scale backend=3

# Kubernetes
kubectl scale deployment backend --replicas=3
```

Database scaling:

- Use connection pooling (already configured)
- Add read replicas for read-heavy workloads
- Consider pgBouncer for connection management
Redis scaling:

- Use Redis Cluster for high availability
- Consider Redis Sentinel for automatic failover

CDN:

- Use Cloudflare CDN in front of R2
- Or AWS CloudFront in front of S3
Migration failures:

```bash
# Check current version
alembic current

# Rollback one version
alembic downgrade -1

# Try again
alembic upgrade head
```

If you hit OpenAI rate limits:

- Upgrade your OpenAI plan
- Implement request queuing
- Add user-facing rate limit messages
Slow database queries:

- Add database indexes (already optimized)
- Enable query caching
- Add read replicas

High memory usage:

- Increase container memory limits
- Optimize database queries
- Add pagination to large result sets
- ✅ Verify all endpoints work
- ✅ Test authentication flow
- ✅ Create test user and chapter
- ✅ Test AI features (Muse)
- ✅ Test media upload
- ✅ Monitor error rates in Sentry
- ✅ Set up alerts for critical errors
- ✅ Document API for mobile team
- ✅ Begin mobile app development
Once deployed, API docs are available at:

- Swagger UI: https://your-domain.com/docs
- ReDoc: https://your-domain.com/redoc
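For the mobile team, the machine-readable spec behind those docs can be saved locally for client generation. The `/openapi.json` path is an assumption that this backend follows the FastAPI convention:

```shell
# fetch-openapi.sh — save the OpenAPI spec for client codegen (sketch)

save_openapi() {
  # Usage: save_openapi <base-url> <output-file>
  curl -fsS "$1/openapi.json" -o "$2"
}

# Example usage:
# save_openapi https://your-domain.com openapi.json
```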
Deploy the frontend to Netlify:

1. **Connect repository**
   - Go to the Netlify dashboard
   - Click "Add new site" → "Import an existing project"
   - Connect GitHub and select the repository
2. **Configure build**
   - Base directory: `frontend`
   - Build command: `npm run build`
   - Publish directory: `frontend/.next`
3. **Environment variables**
   - `NEXT_PUBLIC_API_URL=https://your-backend-api.com`
4. **Deploy**
   - Netlify auto-deploys on git push to main
   - Custom domain: Site settings → Domain management
Already configured in the project root (netlify.toml):

```toml
[build]
  base = "frontend"
  command = "npm run build"
  publish = ".next"
```

Install and configure EAS for the mobile app:

```bash
npm install -g eas-cli
eas login

cd mobile
eas build:configure
```

Create eas.json:

```json
{
  "build": {
    "production": {
      "env": {
        "API_URL": "https://your-backend-api.com"
      }
    }
  }
}
```

Build:

```bash
# iOS
eas build --platform ios --profile production

# Android
eas build --platform android --profile production
```

Submit to the stores:

```bash
# iOS App Store
eas submit --platform ios

# Google Play Store
eas submit --platform android
```

Over-the-air updates:

```bash
# Push updates without app store review
eas update --branch production
```

For deployment issues:
- Check logs first
- Review this guide
- Check service status pages
- Contact platform support