An AI music production assistant, audio engineer, and music teacher for the REAPER Digital Audio Workstation. Built in Go with a security-first design.
- REAPER DAW control: Read project state, insert MIDI, control transport, manipulate FX, execute Lua — all through AI-driven MCP tools
- Sample library: Scan, analyze, and auto-classify audio samples using Essentia — search by category, type, character, and find similar sounds via MFCC similarity
- Render + analysis pipeline: Render audio from REAPER, then analyze loudness, spectrum, pitch, rhythm, timbre, dynamics, and stereo image — the AI "hears" through structured data
- Audio analysis: Native WAV transient detection (no REAPER dependency)
- Security-first: Principle of least privilege — AI can only use explicitly defined MCP tools
- Engine-agnostic: Works with OpenCode (75+ LLM providers)
- Sandboxed scripting: Starlark-based custom skills with secure secret handling
- Docker-native: Two-user security model (`goreap-system`/`goreap-ai`), easy deployment
- Admin UI: Vue 3 web interface for session management, scripts, schedules, and secrets
- Scheduling: Cron-based job scheduler for automated scripts and agent prompts
GoReap controls REAPER via the reaserve plugin — a standalone C++ REAPER extension that exposes REAPER's API over TCP using JSON-RPC 2.0:
- The AI decides to interact with REAPER (e.g., "add a kick drum pattern")
- GoReap sends a structured command over TCP (e.g., `track.add {"name": "Kick"}`)
- The reaserve plugin executes it on REAPER's main thread via the C API
- The result is returned as JSON over the same TCP connection
All REAPER API complexity is handled inside the plugin — the AI works with semantic operations like `add_track`, `set_track_property`, and `add_fx`, never raw Lua.
With the reaserve plugin, the AI gets 36+ structured tools:
| Category | Tools |
|---|---|
| Reads | get_project_state, get_selected_item_data, get_fx_parameters, list_items, get_midi_notes, list_markers, list_sends, list_envelope_points |
| Tracks | add_track, remove_track, set_track_property |
| FX | add_fx, remove_fx, set_fx_parameter, enable_fx, disable_fx |
| Items/MIDI | move_item, resize_item, split_item, delete_item, insert_midi_pattern |
| Markers | add_marker, remove_marker |
| Routing | add_send, remove_send |
| Envelopes | add_envelope_point |
| Transport | transport_control (play/stop/record/rewind) |
| Rendering | render_tracks (master mixdown, stems, or selected tracks — by bars, seconds, or full project) |
| Analysis | analyze_loudness, analyze_spectrum, analyze_pitch, analyze_rhythm, analyze_timbre, analyze_dynamics, analyze_stereo |
| Audio | analyze_audio_transients (native Go, no REAPER needed) |
| Samples | search_samples, find_similar_samples, get_sample_info, browse_sample_library, insert_audio_sample, load_sampler |
| Project | get_project_context, save_project_context (per-project memory, auto-detected from REAPER) |
| Escape hatch | execute_lua_mutation (raw Lua for anything the structured tools don't cover) |
Plus workspace, web fetch, Starlark scripting, model management, and schedule tools.
- REAPER with the reaserve plugin installed (download binary → copy to `UserPlugins/` → restart REAPER)
- Docker and Docker Compose
```bash
# 1. Copy the sample env file and fill in your values
cp .env.sample .env
# Required: ANTHROPIC_API_KEY, HOST_WORKSPACE_PATH

# 2. Build and start
docker compose up --build

# 3. Open the admin UI
open http://localhost:8888
```

The docker-compose.yml mounts:
- `${HOST_WORKSPACE_PATH}:/workspace` — persistent config and AI data
- `${REAPER_PROJECTS_PATH}:/mnt/reaper:ro` — audio file access for transient analysis
- `${SAMPLES_PATH}:/mnt/samples:ro` — sample library for scanning and analysis (optional)
```bash
# Build the admin UI first (required for Go tests and embedding)
cd admin-ui && npm install && npm run build && cd ..

# Build the Go binary
make build

# Run tests
make test

# Run locally
export ANTHROPIC_API_KEY=your_key
./goreap start
```

GoReap reads `secure/config.yaml` from the workspace directory:
```yaml
engine:
  type: opencode
  provider: anthropic
  model: claude-sonnet-4-20250514
  port: 4098
workspace:
  path: /workspace
reaper:
  enabled: true
  host: "192.168.1.100"
  port: 9876                     # reaserve TCP port (default)
samples:
  enabled: true
  host_path: /srv/audio/samples  # Path on host (sent to REAPER)
starlark:
  max_execution_ms: 30000
  max_memory_mb: 128
```

Environment variable overrides: `REAPER_ENABLED`, `REAPER_HOST`, `REAPER_PORT`, `ANTHROPIC_API_KEY`, `WORKSPACE_PATH`, `ADMIN_BIND`.
Place these files in `<workspace>/ai-data/` to customize the AI:
- `SOUL.md` — Identity and personality (ships with a REAPER expert persona)
- `USER.md` — User preferences and context
- `MEMORY.md` — Persistent memory across sessions
```
goreap/
├── cmd/
│   ├── goreap/          # Main orchestrator CLI
│   ├── admin/           # Standalone admin server
│   └── mcp-server/      # Standalone MCP server
├── internal/
│   ├── reaper/          # REAPER DAW bridge via reaserve TCP plugin
│   ├── mcp/             # MCP server & tools (28+ REAPER tools in plugin mode)
│   ├── orchestrator/    # Component coordination
│   ├── config/          # YAML config loading
│   ├── context/         # Context injection (SOUL/USER/MEMORY)
│   ├── engine/          # AI engine abstraction (OpenCode)
│   ├── admin/           # Admin UI server + JWT auth
│   ├── samples/         # Sample library (SQLite, scanner, analyzer, tagger)
│   ├── render/          # Render + analysis pipeline (Lua gen, Essentia)
│   ├── scheduler/       # Cron job scheduler
│   ├── starlark/        # Sandboxed scripting
│   ├── health/          # Health checks & Prometheus metrics
│   ├── ratelimit/       # Token bucket rate limiting
│   └── logging/         # Structured logging
├── admin-ui/            # Vue 3 + Naive UI admin interface
├── tools/               # Python analysis script (Essentia audio analysis)
├── templates/           # Default config, SOUL.md, USER.md, MEMORY.md
├── Dockerfile           # Multi-stage build with two-user security
├── docker-compose.yml   # Docker Compose with REAPER bridge volumes
└── docker-entrypoint.sh # Orchestrator + OpenCode startup
```
LLMs cannot process audio. They "hear" through structured data. GoReap's render pipeline lets the AI capture audio from REAPER and analyze it:
- The AI makes changes in REAPER (add FX, adjust levels, write MIDI)
- `render_tracks` captures audio — master mixdown, per-track stems, or selected tracks
- Analysis tools extract structured data: loudness (LUFS), frequency spectrum, pitch/key, rhythm, timbre, dynamics, stereo image
- The AI reasons about the data and iterates
Analysis tools work on any WAV file — renders, samples, or project audio. Each supports resolution control: `full` (whole-file summary), `per_beat`, `per_bar`, or `per_second` for time-series data.
Renders are stored at `<workspace>/renders/<project>/<render_id>/` and can be browsed, played, and managed from the admin UI's Renders page.
GoReap can scan, analyze, and auto-classify a sample library so the AI can search and insert samples by type, character, and genre.
How it works:
- Mount your sample folder into the container (`SAMPLES_PATH` in `.env`)
- Start a scan from the admin UI — GoReap indexes files, extracts audio features via Essentia, and auto-tags using calibrated classification rules
- The AI searches with MCP tools like `search_samples category=drum type=kick character=["punchy","dark"]`
- Matching samples can be inserted directly into REAPER via `insert_audio_sample` or loaded into ReaSamplOmatic5000 via `load_sampler`
Classification: Two-stage — broad category first (drum, bass, synth, vocal, fx, loop, instrument), then deep sub-classification for drums (kick, snare, hihat, etc.) using filename/folder parsing + audio feature thresholds calibrated from real experiment data. Character tags (punchy, dark, bright, tight, 808, tonal, etc.) are derived from spectral and temporal features.
Similarity search: Uses MFCC cosine similarity — ask the AI "find me more kicks like this one" and it will return the most timbrally similar samples.
GoReap includes a sandboxed scripting system using Starlark (a Python-like language). Scripts can make HTTP requests, parse JSON, and access secrets securely.
```python
# @description: Get current weather for a city
# @secrets: WEATHER_API_KEY
def get_weather(city):
    api_key = secrets.get("WEATHER_API_KEY")
    url = format("https://api.weatherapi.com/v1/current.json?key=%s&q=%s", api_key, city)
    resp = http.get(url)
    data = json.decode(resp["body"])
    return {
        "city": data["location"]["name"],
        "temp_c": data["current"]["temp_c"],
        "condition": data["current"]["condition"]["text"],
    }
```

The AI never sees secret values — they are replaced with `[REDACTED:NAME]` in all output.
- `GET /health` — Detailed health status
- `GET /healthz` — Kubernetes-style probe
- `GET /ready` — Readiness check
- `GET /metrics` — Prometheus metrics (`goreap_*` prefix)
MIT