From 266b37762deaa849a0988ce0b7bce5b003a7e043 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Iva=CC=81n=20Raskovsky?= Date: Mon, 23 Mar 2026 18:34:45 +0000 Subject: [PATCH] Migrate session management from custom script to orca Replace the custom session-mgmt worktree system with orca for Docker-isolated development sessions. Update all documentation, simplify database scripts for shared PostgreSQL, and add orchestrator.yml configuration. --- CLAUDE.md | 31 ++- SESSIONS.md | 265 +++++++++-------------- backend/.env.example | 26 +-- backend/CLAUDE.md | 41 ++-- backend/docker-compose.yml | 40 ++-- backend/scripts/README.md | 168 +++++++------- backend/scripts/migrate-prod-to-dev.sh | 35 +-- backend/scripts/migrate_rds_to_sqlite.py | 22 +- backend/tally/settings.py | 5 +- orchestrator.yml | 57 +++++ 10 files changed, 341 insertions(+), 349 deletions(-) create mode 100644 orchestrator.yml diff --git a/CLAUDE.md b/CLAUDE.md index 618fcae5..3f7a73be 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -13,23 +13,32 @@ This applies to ALL work — backend, frontend, research, everything. No excepti **Important**: This project has detailed documentation for faster development: - **Backend Documentation**: See `backend/CLAUDE.md` for Django structure, API endpoints, models, and patterns - **Frontend Documentation**: See `frontend/CLAUDE.md` for Svelte 5 structure, routes, components, and API integration -- **Session Management**: See `SESSIONS.md` for the Git worktree-based development session system +- **Session Management**: See `SESSIONS.md` for orca-based development sessions with Docker isolation ⚠️ **Keep documentation updated**: When making changes to the codebase, update the relevant CLAUDE.md file to help future development sessions. -## Development Sessions & Worktrees -This project uses a custom session management system with Git worktrees. 
See `SESSIONS.md` for complete documentation on:
-- Creating and managing parallel development sessions
-- Port allocation and tmux window management
-- Claude Code session persistence across worktrees
-- Troubleshooting common issues
+## Development Sessions with Orca
+This project uses [orca](https://github.com/rasca/orca) for Docker-isolated development sessions. See `SESSIONS.md` for complete documentation.
+
+Each session gets: git worktree, Docker container, tmux session, and its own PostgreSQL database.

Quick commands:
```bash
-./session-mgmt add <name>     # Create new session
-./session-mgmt list           # Show all sessions
-./session-mgmt resume [name]  # Resume after restart
-./session-mgmt remove <name>  # Remove session
+orca add <name>       # Create new session
+orca list             # Show all sessions
+orca attach <name>    # Attach to tmux session
+orca resume [name]    # Resume after restart
+orca remove <name>    # Remove session
+orca pr <number>      # Create session from PR
+```
+
+Configuration lives in `orchestrator.yml` at the project root.
+
+### Database
+A shared PostgreSQL container (`tally-postgres`) serves all sessions:
+```bash
+cd backend && docker compose up -d db            # Start PostgreSQL
+cd backend/scripts && ./migrate-prod-to-dev.sh   # Sync production data
+```

## Svelte 5 Important Notes
diff --git a/SESSIONS.md b/SESSIONS.md
index 18c2acdc..eaabd656 100644
--- a/SESSIONS.md
+++ b/SESSIONS.md
@@ -1,184 +1,135 @@
-# Points Session Management System Documentation
+# Development Sessions with Orca

## Overview
-The Points project uses a custom session management system that allows multiple parallel development sessions using Git worktrees, tmux, and Claude Code. Each session gets its own isolated environment with dedicated ports and persistent Claude conversations.
-## Key Components
+This project uses [orca](https://github.com/rasca/orca) for Docker-isolated development sessions. Each session gets its own git worktree, Docker container, tmux session, and PostgreSQL database.

-### 1. 
The `session-mgmt` Script -Location: `/Users/rasca/Dev/tally/session-mgmt` +## Prerequisites -This is the main management script with the following commands: -- `./session-mgmt add ` - Creates a new session with worktree, branch, and tmux windows -- `./session-mgmt remove ` - Removes a session completely (worktree, branch, tmux) -- `./session-mgmt resume [name]` - Resumes session(s) after system restart -- `./session-mgmt list` - Shows all active sessions with their ports +- **orca** installed: `curl -fsSL https://raw.githubusercontent.com/rasca/orca/main/install.sh | bash` +- **Docker Desktop** running +- **orca base image** built: `orca build` -### 2. Session Structure -Each session consists of: -- **Git Worktree**: Located at `~/Dev/tally-/` -- **Git Branch**: Named exactly as the session (e.g., `feature-auth`, not `session-feature-auth`) -- **Tmux Session**: Named `tally-` with 5 windows: - - `backend:` - Django server running on the specified port - - `frontend:` - Svelte dev server running on the specified port - - `claude` - Claude Code session running in worktree root - - `shell` - Interactive shell with environment activated and ready - - `cli` - Command line ready for one-off commands (env activated, not executed) - -### 3. Port Management -- Backend ports start at 8000 and increment (8001, 8002, etc.) -- Frontend ports start at 5000 and increment (5001, 5002, etc.) -- Port assignments are tracked in `.tally-sessions` file to avoid conflicts -- The script automatically finds the next available ports when creating new sessions - -### 4. Session Configuration File -Location: `/Users/rasca/Dev/tally/.tally-sessions` - -JSON file tracking all sessions: -```json -{ - "session-name": { - "backend_port": 8000, - "frontend_port": 5000, - "worktree": "/Users/rasca/Dev/tally-session-name", - "branch": "session-name" - } -} -``` +## Quick Start + +```bash +# Start the shared PostgreSQL container (one-time) +cd backend && docker compose up -d db && cd .. 
+ +# Create a new session +orca add feature-auth + +# Attach to the tmux session +orca attach feature-auth + +# List all sessions +orca list + +# Stop a session (preserves worktree + volumes) +orca stop feature-auth + +# Resume after restart +orca resume feature-auth -## Automatic Setup Features - -When creating a new session, the script automatically: - -### 1. Creates Symlinks -- `frontend/node_modules` → symlinked to main project's node_modules (avoids reinstalling) - -### 2. Copies Resources (Each Session Gets Its Own) -- `backend/db.sqlite3` → copied from main project (independent database per session) -- `backend/.env` - Copied from main project (contains SECRET_KEY and other env vars) -- `frontend/.env` - Copied from main project and updated with session-specific backend URL - - Automatically sets `VITE_API_URL=http://localhost:` for proper API routing - -### 3. Sets Up Claude Code -- Each Claude session uses a persistent name: `tally-` -- When resuming: uses `--resume` flag to continue previous conversation -- Working directory is set to the worktree root - -### 4. Activates Virtual Environment -All commands use `workon tally` to activate the Python virtual environment before executing - -## Claude Tool Permissions - -### Where Permissions Are Stored -Claude Code stores tool permissions in `~/.claude.json` under the `projects` section: -```json -{ - "projects": { - "/Users/rasca/Dev/tally": { - "allowedTools": [ - "Bash(npm install:*)", - "Bash(python manage.py:*)", - // ... other allowed tools - ] - } - } -} +# Remove everything +orca remove feature-auth ``` -### Permission Inheritance Issue -Currently, each new worktree is treated as a separate project by Claude Code, so permissions need to be set up for each worktree. The script should copy the `allowedTools` array from the main project to each new worktree project in `~/.claude.json`. 
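Each session's `DATABASE_URL` follows the `tally_<session>` naming convention configured in `orchestrator.yml`. As a rough sketch of that mapping (illustrative only — `session_database_url` is a hypothetical helper for this document, not part of orca):

```python
# Sketch of the per-session database naming applied by orchestrator.yml's
# env_substitutions. orca performs this substitution itself; this helper
# exists only to illustrate the convention.
def session_database_url(session: str,
                         user: str = "tally_user",
                         password: str = "tally_password",
                         host: str = "host.docker.internal",
                         port: int = 5432) -> str:
    """Return the DATABASE_URL an orca session named `session` would get."""
    return f"postgresql://{user}:{password}@{host}:{port}/tally_{session}"

print(session_database_url("feature_auth"))
# postgresql://tally_user:tally_password@host.docker.internal:5432/tally_feature_auth
```

The same pattern covers PR sessions (e.g. a session named `pr_123` maps to the `tally_pr_123` database shown in the architecture diagram).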
+## Session Structure -## Common Workflows +Each session consists of: +- **Git Worktree**: `~/Dev/tally-/` +- **Git Branch**: Named after the session +- **Docker Container**: `orca-tally-` +- **Tmux Session**: `tally-` with windows: + - `backend:` — Django server + - `frontend:` — Svelte dev server + - `django-shell` — Django interactive shell + - `claude` — Claude Code session + - `shell` — Interactive shell + - `cli` — For one-off commands + +## Configuration + +All session configuration lives in `orchestrator.yml` at the project root. Key settings: +- **Ports**: Backend starts at 8000, frontend at 5000 (auto-incremented per session) +- **Database**: Each session gets `DATABASE_URL` set to `tally_` database +- **Setup files**: `.env` files are copied and updated per-session automatically + +## Database Management + +### Architecture + +A single shared PostgreSQL container serves all sessions: -### Creating a New Feature Session -```bash -./tally-session add feature-payment -tmux attach -t tally-feature-payment -# Work on your feature across backend, frontend, and Claude windows ``` +Docker: tally-postgres (port 5432, postgres:17) + ├── tally_main ← main worktree + ├── tally_template ← production snapshot + ├── tally_feature_x ← orca session + └── tally_pr_123 ← PR session +``` + +### Start PostgreSQL -### After System Restart ```bash -# Resume all sessions -./tally-session resume +cd backend && docker compose up -d db +``` + +### Sync Production Data -# Or resume specific session -./tally-session resume feature-payment +```bash +cd backend/scripts +./migrate-prod-to-dev.sh --download # Download from RDS +./migrate-prod-to-dev.sh --upload # Restore to tally_template +./migrate-prod-to-dev.sh --setup # Run migrations + create admin ``` -### Cleaning Up +### Create Session Database from Template + +After syncing prod data, create instant clones for new sessions: + ```bash -./tally-session remove feature-payment -# This removes the worktree, branch, and kills the 
tmux session +docker compose -f backend/docker-compose.yml exec db \ + psql -U tally_user -c 'CREATE DATABASE "tally-feature-x" TEMPLATE tally_template' ``` -## Troubleshooting +### Database Status -### Port Already in Use -If you get "port already in use" errors: -1. Check `.tally-sessions` file for port assignments -2. Make sure the script's port allocation is working correctly -3. Kill any orphaned processes using `lsof -i :PORT | grep LISTEN` - -### Missing Dependencies -If backend or frontend fail to start: -1. Check that `.env` files were copied correctly -2. Verify symlinks are intact (node_modules, db.sqlite3) -3. Ensure virtual environment is activated (`workon tally`) - -### Claude Sessions -- Session names follow pattern: `tally-` -- Use `--resume` flag to continue previous conversations -- Each worktree needs its own tool permissions in `~/.claude.json` - -## Important Notes for Claude (AI Assistant) - -When working with this system: -1. **Always use the worktree path** - Each session runs in its own worktree at `~/Dev/tally-/` -2. **Port awareness** - Check `.tally-sessions` for assigned ports before suggesting localhost URLs -3. **Virtual environment** - All Python commands need `workon tally` first -4. **Independent databases** - Each session has its own database copy, changes don't affect other sessions -5. **Shared node_modules** - Node modules are symlinked to save space (read-only, shared) -6. **Git operations** - Each worktree has its own branch; commits don't affect other sessions -7. **Tool permissions** - New worktrees may need permission setup if not inherited from main project -8. 
**Frontend API URL** - Each session's frontend automatically points to its own backend port - -## File Structure Example +```bash +docker compose -f backend/docker-compose.yml exec db \ + psql -U tally_user -c "\l" | grep tally ``` -~/Dev/ -├── tally/ # Main repository -│ ├── backend/ -│ │ ├── .env # Original env file -│ │ └── db.sqlite3 # Original database -│ ├── frontend/ -│ │ ├── .env # Original env file -│ │ └── node_modules/ # Original node modules -│ ├── tally-session # Management script -│ └── .tally-sessions # Session tracking file -│ -├── tally-feature-auth/ # Worktree for feature-auth session -│ ├── backend/ -│ │ ├── .env # Copied from main -│ │ └── db.sqlite3 # Independent copy (not shared) -│ └── frontend/ -│ ├── .env # Copied & updated with backend port -│ └── node_modules/ # Symlink to main (shared) -│ -└── tally-bugfix-ui/ # Another worktree - └── ... # Same structure + +## PR Reviews + +```bash +orca pr 42 # Create session from PR #42 +orca attach pr-42 # Attach to it +orca update-pr 42 # Pull latest changes +orca remove pr-42 # Clean up ``` -## Session Commands Reference +## Port Allocation + +Ports are auto-allocated globally across all orca sessions: +- Backend: 8000, 8001, 8002, ... +- Frontend: 5000, 5001, 5002, ... -Inside each tmux session, the windows and their states are: -- **backend:**: Running Django server - `python manage.py runserver ` -- **frontend:**: Running Svelte dev server - `npm run dev -- --port ` -- **claude**: Running Claude Code session - `claude tally-` (or with `--resume`) -- **shell**: Interactive shell ready for use (environment activated, in worktree root) -- **cli**: Command line with environment ready but not executed (press Enter to run commands) +Port assignments are visible via `orca list` and in tmux window names. + +## Troubleshooting -All windows have: -- Virtual environment activated: `workon tally` -- Working directory set: `cd ~/Dev/tally-/[appropriate-dir]` +### Django Can't Connect to Database +1. 
Ensure the `tally-postgres` container is running: `docker ps | grep tally-postgres`
2. Check that the database exists: `docker compose -f backend/docker-compose.yml exec db psql -U tally_user -l`
3. From inside an orca container, the host is `host.docker.internal`, not `localhost`

-The port numbers are shown in the window names for easy reference. \ No newline at end of file
+### Port Already in Use
+Check active sessions with `orca list`. Ports are tracked in `~/.orca/sessions.json`.
+
+### Container Won't Start
+```bash
+orca stop <session>     # Force stop
+orca resume <session>   # Restart fresh
+```
diff --git a/backend/.env.example b/backend/.env.example
index 9bf2b07e..0b803977 100644
--- a/backend/.env.example
+++ b/backend/.env.example
@@ -4,26 +4,18 @@
 DEBUG=True
 ALLOWED_HOSTS=localhost,127.0.0.1

 # Database
-# Leave DATABASE_URL empty for SQLite (development)
-# For production, set to your database URL (e.g., postgres://user:pass@host:port/dbname)
-DATABASE_URL=
-
-# RDS Database Connection (for migration script)
-# Required only when running migrate_rds_to_sqlite.py
-# Provide database credentials in one of these ways:
-#
-# Option 1: Full database URL in .env
-# RDS_DATABASE_URL=postgresql://username:password@host:port/database
+# Local development uses a shared PostgreSQL container managed by orca.
+# Each orca session gets its own database (tally_<session>).
+# The main worktree uses tally_main.
 #
-# Option 2: AWS SSM Parameter Store (set parameter name)
-# RDS_DATABASE_URL_PARAM=/tally/prod/database_url
+# From inside orca Docker containers, use host.docker.internal to reach the host PostgreSQL container.
+# If running directly on the host (without orca), use localhost instead. 
#
-# Option 3: AWS Secrets Manager (set secret name)
-# RDS_SECRET_NAME=my-rds-secret
+# Start the PostgreSQL container: docker compose -f backend/docker-compose.yml up -d db
+DATABASE_URL=postgresql://tally_user:tally_password@host.docker.internal:5432/tally_main

-# Migration Settings
-# Password to set for all users when importing from RDS
-RESET_PASSWORD=testpassword123
+# For production, set to your RDS URL:
+# DATABASE_URL=postgresql://username:password@host:port/database

 # CORS and CSRF Settings
 CORS_ALLOWED_ORIGINS=https://your-frontend-domain.com,https://another-domain.com
diff --git a/backend/CLAUDE.md b/backend/CLAUDE.md
index 57b1ea5e..ef9f4550 100644
--- a/backend/CLAUDE.md
+++ b/backend/CLAUDE.md
@@ -93,31 +93,34 @@ backend/

### Database & Migrations
- **Migrations**: `{app}/migrations/`
-- **Database**: SQLite by default, configured in settings.py
+- **Database**: PostgreSQL via shared `tally-postgres` Docker container (see `docker-compose.yml`)
+  - Each orca session gets its own database (`tally_<session>`)
+  - Main worktree uses `tally_main`
+  - Production snapshots restored to `tally_template`
+  - `DATABASE_URL` is set per-session by `orchestrator.yml`
- **Run migrations**: `python manage.py migrate`
- **Create migrations**: `python manage.py makemigrations`

### Database Migration from Production
- **Script**: `backend/scripts/migrate-prod-to-dev.sh`
- **Documentation**: `backend/scripts/README.md`
-- **Purpose**: Sync production PostgreSQL database to local/dev environment
+- **Purpose**: Sync production PostgreSQL to local `tally_template` database
- **Prerequisites**:
+  - `tally-postgres` container running (`docker compose up -d db`)
  - Virtual environment activated
  - AWS CLI configured with Parameter Store access
-  - Docker installed (for database operations)

**Usage:**
```bash
-# Navigate to scripts directory
cd backend/scripts

-# Download production database only (safest option)
+# Download production database
./migrate-prod-to-dev.sh --download

-# Upload latest dump to dev database
+# Restore to tally_template in local PostgreSQL container
./migrate-prod-to-dev.sh --upload

-# Run Django migrations and create admin user only
+# Run Django migrations and create admin user
./migrate-prod-to-dev.sh --setup

# Full migration (download + upload + setup)
@@ -125,23 +128,21 @@ cd backend/scripts
```

**What it does:**
-1. Fetches production database credentials from AWS Parameter Store (`/tally/prod/database_url`)
-2. Downloads production data to `backend/backups/` using Docker
-3. Restores to development database (local PostgreSQL or AWS dev instance)
+1. Fetches production credentials from AWS Parameter Store (`/tally/prod/database_url`)
+2. Downloads production data to `backend/backups/` via Docker
+3. Restores to `tally_template` database in the local `tally-postgres` container
4. Runs Django migrations
-5. Creates/updates admin user (`dev@genlayer.foundation` / `password`) with Steward role
+5. Creates admin user (`dev@genlayer.foundation` / `password`) with Steward role

-**Notes:**
-- Uses Docker to avoid PostgreSQL version mismatch issues
-- Modular operation allows partial runs (download, upload, setup separately)
-- Creates timestamped backups in `backend/backups/`
-- See `backend/scripts/README.md` for detailed setup and troubleshooting
+**Creating session databases from template:**
+```bash
+docker compose exec db psql -U tally_user -c 'CREATE DATABASE "tally_feature_x" TEMPLATE tally_template'
+```

-### RDS to SQLite Migration
+### RDS to SQLite Migration (Deprecated)
- **Script**: `backend/scripts/migrate_rds_to_sqlite.py`
-- **Purpose**: Convert production PostgreSQL to local SQLite for development
-- **Usage**: `python scripts/migrate_rds_to_sqlite.py` (from backend directory)
-- **Notes**: Resets all passwords to 'pass', excludes leaderboard entries, backs up existing db.sqlite3
+- **Status**: Deprecated — use the PostgreSQL workflow above instead
+- **Purpose**: Legacy script that converted production 
PostgreSQL to local SQLite ## API Endpoints Summary diff --git a/backend/docker-compose.yml b/backend/docker-compose.yml index 9d0cbd02..434ed693 100644 --- a/backend/docker-compose.yml +++ b/backend/docker-compose.yml @@ -1,33 +1,31 @@ version: '3.8' +# Shared PostgreSQL container for local development. +# Used by all orca sessions — each session gets its own database. +# +# Start: docker compose up -d db +# Stop: docker compose down +# Status: docker compose ps +# +# Databases are created automatically by orca via orchestrator.yml env_substitutions. +# To create a database manually: +# docker compose exec db createdb -U tally_user tally_ +# To create from template (instant clone of production snapshot): +# docker compose exec db psql -U tally_user -c 'CREATE DATABASE "tally_" TEMPLATE tally_template' + services: db: - image: postgres:15 + image: postgres:17 + container_name: tally-postgres environment: - POSTGRES_DB: tally_db + POSTGRES_DB: tally_main POSTGRES_USER: tally_user POSTGRES_PASSWORD: tally_password volumes: - - postgres_data:/var/lib/postgresql/data + - tally_pgdata:/var/lib/postgresql/data ports: - "5432:5432" - - web: - build: . - ports: - - "8000:8000" - environment: - - DEBUG=True - - SECRET_KEY=your-development-secret-key - - DATABASE_URL=postgresql://tally_user:tally_password@db:5432/tally_db - - ALLOWED_HOSTS=localhost,127.0.0.1,0.0.0.0 - - FRONTEND_URL=http://localhost:5173 - - SIWE_DOMAIN=localhost - - CSRF_TRUSTED_ORIGINS=http://localhost:5173,http://127.0.0.1:5173 - depends_on: - - db - volumes: - - .:/app + restart: unless-stopped volumes: - postgres_data: \ No newline at end of file + tally_pgdata: diff --git a/backend/scripts/README.md b/backend/scripts/README.md index 7a184ba0..10dde809 100644 --- a/backend/scripts/README.md +++ b/backend/scripts/README.md @@ -1,83 +1,79 @@ # Database Migration Scripts -Scripts for migrating Tally production database to development environment. 
+Scripts for syncing the Tally production database to local development. -## Prerequisites +## Local PostgreSQL Setup -1. **Virtual Environment** must be activated: +Local development uses a shared PostgreSQL container. All orca sessions connect to the same container, each with its own database. + +### Architecture +``` +Docker: tally-postgres (port 5432, postgres:17) + ├── tally_main ← main worktree + ├── tally_template ← production snapshot (restored via migrate-prod-to-dev.sh) + ├── tally_feature_x ← orca session (created automatically) + └── tally_pr_123 ← PR session (created automatically) +``` + +### Start the PostgreSQL Container + +```bash +cd backend +docker compose up -d db +``` + +Container credentials (used by all local databases): +- **User**: `tally_user` +- **Password**: `tally_password` +- **Port**: `5432` + +## Syncing Production Data + +### Prerequisites + +1. **Virtual Environment** activated: ```bash - # If using virtualenvwrapper: - workon your-tally-env - - # If using venv: source backend/env/bin/activate ``` - -2. **AWS CLI configured** with access to Parameter Store: +2. **AWS CLI configured** with Parameter Store access: ```bash aws configure ``` +3. **Docker** installed and the `tally-postgres` container running -3. **AWS Parameters** must be set up (see below) +### Quick Sync Workflow -4. 
**Required tools**: - - Docker (for database operations) - - Python 3 with Django environment (activated) - - AWS CLI +```bash +cd backend/scripts -## AWS Parameter Store Setup +# Download production database (creates timestamped backup in backend/backups/) +./migrate-prod-to-dev.sh --download -The scripts expect database URLs stored as single parameters in AWS Systems Manager Parameter Store: +# Restore to tally_template database in local container +./migrate-prod-to-dev.sh --upload -### Production Parameter -``` -/tally/prod/database_url # Full PostgreSQL URL (SecureString) -``` +# Run Django migrations and create admin user +./migrate-prod-to-dev.sh --setup -### Development Parameter (Optional) -``` -/tally-backend/dev/database_url # Full PostgreSQL URL for dev environment (SecureString) +# Or do all three at once: +./migrate-prod-to-dev.sh ``` -### Setting Parameters +### Creating Session Databases from Template -To set parameters in AWS: +After syncing production data to `tally_template`, create instant clones for orca sessions: ```bash -# Set production database URL -aws ssm put-parameter \ - --name "/tally/prod/database_url" \ - --value "postgresql://username:password@host:port/database" \ - --type "SecureString" \ - --overwrite - -# Set development database URL (optional) -aws ssm put-parameter \ - --name "/tally-backend/dev/database_url" \ - --value "postgresql://tally_dev:password@host:port/tally_dev" \ - --type "SecureString" \ - --overwrite -``` - -The database URL format is: `postgresql://username:password@host:port/database_name` - -If the development database URL is not set in AWS, the script will use local defaults (localhost, postgres user) and prompt for the password. +# Create a database for a session (instant via PostgreSQL template) +docker compose exec db psql -U tally_user -c 'CREATE DATABASE "tally-feature-x" TEMPLATE tally_template' -## Usage - -**IMPORTANT**: Always activate your virtual environment first! 
- -```bash -# Activate your virtual environment -workon your-tally-env # or source backend/env/bin/activate - -# Navigate to scripts directory -cd backend/scripts +# Drop a session database +docker compose exec db psql -U tally_user -c 'DROP DATABASE IF EXISTS "tally-feature-x"' ``` -### Migration Script Options +Note: orca sessions get their `DATABASE_URL` set automatically via `orchestrator.yml` env_substitutions. You only need to create the database manually if the template exists and you want an instant clone. -The migration script (`migrate-prod-to-dev.sh`) supports modular operations: +### Script Options ```bash # Show help and all options @@ -86,7 +82,7 @@ The migration script (`migrate-prod-to-dev.sh`) supports modular operations: # Download production database only ./migrate-prod-to-dev.sh --download -# Upload last dump to dev database +# Upload last dump to local PostgreSQL (tally_template) ./migrate-prod-to-dev.sh --upload # Upload specific backup file @@ -99,60 +95,48 @@ The migration script (`migrate-prod-to-dev.sh`) supports modular operations: ./migrate-prod-to-dev.sh ``` -### Common Workflows - -```bash -# First time setup -./migrate-prod-to-dev.sh # Full migration - -# Re-run just the setup after fixing issues -./migrate-prod-to-dev.sh --setup - -# Use existing backup without re-downloading -./migrate-prod-to-dev.sh --upload -./migrate-prod-to-dev.sh --setup - -# Download fresh backup for later use -./migrate-prod-to-dev.sh --download -``` - -The script uses Docker containers with matching PostgreSQL versions to avoid version mismatch issues. - ## What the Script Does 1. **Fetch credentials** from AWS Parameter Store 2. **Backup production database** to `backend/backups/` directory -3. **Drop and recreate** development database (with confirmation) -4. **Restore production data** to development +3. **Drop and recreate** the `tally_template` database (with confirmation) +4. **Restore production data** to `tally_template` 5. 
**Run Django migrations** 6. **Create admin user**: - Email: `dev@genlayer.foundation` - Password: `password` - Roles: Steward and Superuser -## Security Notes +## AWS Parameter Store Setup + +The scripts expect database URLs stored as parameters in AWS Systems Manager: + +``` +/tally/prod/database_url # Production PostgreSQL URL (SecureString) +/tally-backend/dev/database_url # Optional: remote dev database URL +``` + +If the dev parameter is not set, the script defaults to the local `tally-postgres` container. + +## Deprecated: SQLite Migration -- Production credentials are fetched from AWS Parameter Store (never hardcoded) -- Backups are stored locally in `backend/backups/` (add to .gitignore) -- The admin user password is intentionally simple for development only -- Never use these scripts in production environments +The old `migrate_rds_to_sqlite.py` script is deprecated. It converted production PostgreSQL to SQLite, which was slow (1+ hour) and had behavioral differences. Use the PostgreSQL workflow above instead. ## Troubleshooting +### Container Not Running +```bash +docker compose -f backend/docker-compose.yml up -d db +docker compose -f backend/docker-compose.yml ps +``` + ### pg_dump Version Mismatch +The script uses Docker containers with the correct PostgreSQL version automatically. -The script automatically uses Docker containers with the correct PostgreSQL version to avoid mismatch issues. +### Connection from orca Container +From inside orca Docker containers, use `host.docker.internal` instead of `localhost` to reach the PostgreSQL container. This is set automatically by `orchestrator.yml`. ### AWS Credentials Error - -If you get AWS credential errors: 1. Run `aws configure` to set up your credentials 2. Ensure your AWS user has permissions to read from Parameter Store 3. Check the parameter paths are correct for your environment - -### Connection Issues - -If you can't connect to the database: -1. 
Check network connectivity to production database -2. Verify firewall/security group rules allow your IP -3. Ensure database credentials are correct in AWS Parameter Store \ No newline at end of file diff --git a/backend/scripts/migrate-prod-to-dev.sh b/backend/scripts/migrate-prod-to-dev.sh index 105c97d3..7334974c 100755 --- a/backend/scripts/migrate-prod-to-dev.sh +++ b/backend/scripts/migrate-prod-to-dev.sh @@ -332,16 +332,13 @@ upload_to_development() { DEV_DB_NAME=${HOST_PORT_DB#*/} echo -e "${GREEN}Using dev database from AWS${NC}" else - # Use local defaults if not in AWS - echo -e "${YELLOW}No dev database URL in AWS, using local defaults${NC}" + # Use local defaults matching the shared tally-postgres container + echo -e "${YELLOW}No dev database URL in AWS, using local PostgreSQL container defaults${NC}" DEV_DB_HOST="localhost" DEV_DB_PORT="5432" - DEV_DB_NAME="tally_dev" - DEV_DB_USER="postgres" - - echo -e "${YELLOW}Enter password for local development database user ($DEV_DB_USER):${NC}" - read -s DEV_DB_PASSWORD - echo "" + DEV_DB_NAME="tally_template" + DEV_DB_USER="tally_user" + DEV_DB_PASSWORD="tally_password" fi echo -e "${GREEN}Development database configuration:${NC}" @@ -454,26 +451,8 @@ setup_django() { DEV_DATABASE_URL=$(get_parameter "$DEV_PARAM_PATH") if [ -z "$DEV_DATABASE_URL" ]; then - echo -e "${YELLOW}No dev database URL in AWS, building from local defaults${NC}" - echo -e "${YELLOW}Enter dev database connection details:${NC}" - - read -p "Host (default: localhost): " DEV_DB_HOST - DEV_DB_HOST=${DEV_DB_HOST:-localhost} - - read -p "Port (default: 5432): " DEV_DB_PORT - DEV_DB_PORT=${DEV_DB_PORT:-5432} - - read -p "Database name (default: tally_dev): " DEV_DB_NAME - DEV_DB_NAME=${DEV_DB_NAME:-tally_dev} - - read -p "Username (default: postgres): " DEV_DB_USER - DEV_DB_USER=${DEV_DB_USER:-postgres} - - echo -n "Password: " - read -s DEV_DB_PASSWORD - echo "" - - 
DEV_DATABASE_URL="postgresql://${DEV_DB_USER}:${DEV_DB_PASSWORD}@${DEV_DB_HOST}:${DEV_DB_PORT}/${DEV_DB_NAME}" + echo -e "${YELLOW}No dev database URL in AWS, using local PostgreSQL container defaults${NC}" + DEV_DATABASE_URL="postgresql://tally_user:tally_password@localhost:5432/tally_template" fi echo -e "${GREEN}Using database: $DEV_DATABASE_URL${NC}" diff --git a/backend/scripts/migrate_rds_to_sqlite.py b/backend/scripts/migrate_rds_to_sqlite.py index de107d3e..7844e155 100755 --- a/backend/scripts/migrate_rds_to_sqlite.py +++ b/backend/scripts/migrate_rds_to_sqlite.py @@ -1,5 +1,16 @@ #!/usr/bin/env python """ +DEPRECATED: This script converts production PostgreSQL to local SQLite. + +The recommended workflow now uses native PostgreSQL for local development: + cd backend/scripts + ./migrate-prod-to-dev.sh --download # Download prod dump + ./migrate-prod-to-dev.sh --upload # Restore to local PostgreSQL container + +See backend/scripts/README.md for the full PostgreSQL-based workflow. + +--- + Migrate data from RDS PostgreSQL to local SQLite database. Requirements (.env file): @@ -8,7 +19,7 @@ RDS_DATABASE_URL_PARAM: SSM parameter name containing database URL (e.g., /tally/prod/database_url) or RDS_SECRET_NAME: Secrets Manager secret name containing database URL - + RESET_PASSWORD: New password for all users (optional, defaults to 'pass') Usage: @@ -17,6 +28,15 @@ import os import sys +import warnings + +warnings.warn( + "migrate_rds_to_sqlite.py is deprecated. " + "Use migrate-prod-to-dev.sh with the local PostgreSQL container instead. 
" + "See backend/scripts/README.md for details.", + DeprecationWarning, + stacklevel=1, +) import subprocess import json from datetime import datetime diff --git a/backend/tally/settings.py b/backend/tally/settings.py index a84d1774..4a07aa03 100644 --- a/backend/tally/settings.py +++ b/backend/tally/settings.py @@ -118,15 +118,16 @@ def get_required_env(key): # https://docs.djangoproject.com/en/5.2/ref/settings/#databases # Database configuration +# DATABASE_URL is set in .env — local dev uses PostgreSQL via tally-postgres container. +# Falls back to SQLite only if DATABASE_URL is not set. DATABASE_URL = os.environ.get('DATABASE_URL') if DATABASE_URL: - # Production database (RDS) import dj_database_url DATABASES = { 'default': dj_database_url.parse(DATABASE_URL) } else: - # Development database (SQLite) + # Fallback: SQLite (not recommended — use PostgreSQL for production parity) DATABASES = { 'default': { 'ENGINE': 'django.db.backends.sqlite3', diff --git a/orchestrator.yml b/orchestrator.yml new file mode 100644 index 00000000..94ae9c7d --- /dev/null +++ b/orchestrator.yml @@ -0,0 +1,57 @@ +project: tally +base_branch: dev + +worktree: + enabled: true + +setup: + copy: + - backend/.env + - frontend/.env + - .claude/settings.local.json + env_substitutions: + "backend/.env": + DATABASE_URL: "postgresql://tally_user:tally_password@host.docker.internal:5432/tally_${session}" + BACKEND_URL: "http://localhost:${backend_port}" + FRONTEND_URL: "http://localhost:${frontend_port}" + "frontend/.env": + VITE_API_URL: "http://localhost:${backend_port}" + +docker: + python_requirements: backend/requirements.txt + node_install: [frontend] + volumes: + node_modules: /workspace/frontend/node_modules + +ports: + backend: { start: 8000 } + frontend: { start: 5000 } + +windows: + - name: "backend:${backend_port}" + directory: backend + command: "python manage.py runserver 0.0.0.0:${backend_port}" + + - name: "frontend:${frontend_port}" + directory: frontend + command: "npm run dev 
-- --port ${frontend_port} --host 0.0.0.0" + + - name: django-shell + directory: backend + command: "python manage.py shell" + + - name: claude + directory: . + command: "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 claude --dangerously-skip-permissions" + resume_command: "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1 claude --dangerously-skip-permissions --resume" + + - name: shell + directory: . + command: "" + + - name: cli + directory: . + command: "" + +env: + - ANTHROPIC_API_KEY
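The `${session}`, `${backend_port}`, and `${frontend_port}` placeholders in the `env_substitutions` and `windows` sections above use shell-style `${var}` syntax. How one session's values expand can be sketched with Python's `string.Template` (an illustration of the substitution semantics, not orca's actual implementation; the example values are assumptions based on the `ports` section, where backend starts at 8000 and frontend at 5000):

```python
from string import Template

# Example values orca might assign to a first session named "feature_auth".
# The exact port allocation is up to orca; these follow the configured starts.
values = {"session": "feature_auth", "backend_port": "8000", "frontend_port": "5000"}

# Expand the DATABASE_URL template from env_substitutions.
database_url = Template(
    "postgresql://tally_user:tally_password@host.docker.internal:5432/tally_${session}"
).substitute(values)

# Expand the backend window's runserver command.
runserver = Template(
    "python manage.py runserver 0.0.0.0:${backend_port}"
).substitute(values)

print(database_url)
print(runserver)
```

Running this prints the per-session database URL ending in `tally_feature_auth` and the runserver command bound to port 8000, matching what each session's tmux windows and `.env` files receive.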