SoundFlare is an open-source observability and validation platform built primarily for Trillet AI and LiveKit-compatible voice AI agents. It acts as a "flight recorder" for your voice assistants, providing real-time monitoring, automated evaluation, and deep analysis to ensure reliable and accurate voice experiences.
- AI Validation Engine: Automatically detects hallucinations, incorrect actions, and API failures in real-time.
- Real-Time Metrics: Track latency (TTFT), token costs, and success rates across your voice stack.
- Automated Evaluations: Stress-test your agents with AI-generated callers that mimic human behavior and edge cases.
- Voice Bug Reporting: Flag issues naturally using custom voice commands during live testing.
- Deep Integration: Purpose-built for Trillet AI with native LiveKit compatibility.
We've designed the local setup to be as simple as possible using Docker Compose. This spins up a fully self-contained stack including the SoundFlare dashboard, a local Supabase backend (Auth, Database, API), and a gateway.
- Docker & Docker Compose installed and running.
Simply run:
```bash
git clone https://github.com/TrilletAI/Soundflare.git
cd Soundflare
./scripts/docker-start.sh
```

That's it! The script will:

- Auto-generate `.env.docker` with unique JWT keys (if it doesn't exist)
- Display your credentials in the terminal
- Save credentials to `.docker-credentials.txt` for later reference
- Start all Docker services
Your credentials are shown on first run and saved to `.docker-credentials.txt`.

- Dashboard: http://localhost:8000
- Supabase API: http://localhost:54321
Default Dashboard Login:

- Email: `admin@soundflare.ai`
- Password: `password123`
Your API Keys:
Check `.docker-credentials.txt` in the project root for your unique Supabase keys (anon key & service role key). These are auto-generated on first run.
SoundFlare uses two separate environment files to avoid confusion between local Docker development and production deployments. We recommend using Docker for local testing to get going quickly, and a dedicated .env for production/Vercel hosting.
`.env.docker` (local Docker development):

- Purpose: Used exclusively by `docker-compose.yml` for local development
- Auto-generated: Created by `./scripts/docker-start.sh`
- Contains: Local Supabase URLs and JWT tokens signed with your custom secret
- Why separate?: Ensures everyone's local Docker environment has properly matched JWT keys
- Git: Ignored by `.gitignore` (never commit this file)
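For context on "JWT tokens signed with your custom secret": a Supabase anon or service-role key is just an HS256 JWT whose `role` claim is signed with `JWT_SECRET`. Here is a minimal stdlib sketch of roughly what the start script generates — the exact claims SoundFlare's script emits may differ:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_supabase_key(jwt_secret: str, role: str, years: int = 10) -> str:
    """Mint an HS256 JWT with the given role claim, signed with jwt_secret."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    now = int(time.time())
    payload = b64url(json.dumps({
        "role": role,  # "anon" or "service_role"
        "iss": "supabase",
        "iat": now,
        "exp": now + years * 365 * 24 * 3600,
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(jwt_secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

anon_key = make_supabase_key("your-custom-jwt-secret", "anon")
print(anon_key.count("."))  # → 2 (three dot-separated JWT segments)
```

This is why regenerating `JWT_SECRET` invalidates the old anon and service-role keys: the signature in the third segment no longer verifies.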
Example `.env.docker`:

```bash
NEXT_PUBLIC_SUPABASE_URL=http://localhost:54321
SUPABASE_INTERNAL_URL=http://kong:54321
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGc... # Auto-generated
SUPABASE_SERVICE_ROLE_KEY=eyJhbGc... # Auto-generated
JWT_SECRET=your-custom-jwt-secret
```

`.env` (production):

- Purpose: Used for production deployments (Vercel, Cloud, etc.) and your own Supabase instance
- Manual setup: You configure this for your managed/self-hosted Supabase
- Contains: Remote Supabase URLs, API keys, and production secrets
- Why separate?: Prevents mixing local Docker configs with production credentials
- Git: Ignored by `.gitignore` (never commit this file)
To create `.env`:

```bash
cp .env.example .env
# Edit with your production Supabase credentials
```

Required for Production:

- `NEXT_PUBLIC_SUPABASE_URL`: Your managed Supabase URL
- `NEXT_PUBLIC_SUPABASE_ANON_KEY`: Your managed Supabase Anon Key
- `SUPABASE_SERVICE_ROLE_KEY`: Your managed Supabase Service Role Key
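Putting the required keys together, a minimal production `.env` might look like this (all values below are placeholders — use the keys from your own Supabase dashboard):

```bash
NEXT_PUBLIC_SUPABASE_URL=https://your-project-ref.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGc...   # from your Supabase dashboard
SUPABASE_SERVICE_ROLE_KEY=eyJhbGc...       # server-side only, never expose to the browser
```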
Optional for AI Features:

- `OPENAI_API_KEY`: For AI-powered transcript analysis
Optional for AI Call Reviews (Vertex AI / Gemini):
The AI Call Review feature uses Google Gemini via Vertex AI to automatically detect hallucinations, wrong actions, and API failures in your call logs. To enable it:
- Create a Google Cloud project and enable the Vertex AI API
- Create a service account with the `Vertex AI User` role
- Download the service account JSON key file
- Provide credentials using one of the two options below:
Option A — Vercel / Serverless (recommended for hosting):
Base64-encode the JSON key and set it as an env var. This avoids needing a file on disk.
```bash
# Generate the base64 string:
cat your-credentials.json | base64 -w 0

# Then set in Vercel / .env:
GOOGLE_CREDENTIALS_JSON=eyJ0eXBlIjoic2VydmljZV9hY2NvdW50Ii...
GOOGLE_CLOUD_PROJECT_ID=your-gcp-project-id
```

Option B — File-based (local dev / VMs):
Place the key file at `src/credentials/google-credentials.json`, or point to it with an env var:

```bash
GOOGLE_APPLICATION_CREDENTIALS=/path/to/your-credentials.json
GOOGLE_CLOUD_PROJECT_ID=your-gcp-project-id
```

Optional overrides (defaults shown):

```bash
GOOGLE_CLOUD_LOCATION=us-central1
GOOGLE_GEMINI_MODEL=gemini-2.5-flash
```

Note: The `src/credentials/` directory is git-ignored. Never commit credential files to the repository.
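If you're curious how the base64 option works, the application only needs to reverse the encoding at startup — nothing is written to disk. A minimal sketch (the env var names match those above; SoundFlare's actual loading code may differ):

```python
import base64
import json
import os

def load_google_credentials() -> dict:
    """Decode the base64-encoded service-account JSON from the environment."""
    raw = os.environ["GOOGLE_CREDENTIALS_JSON"]
    return json.loads(base64.b64decode(raw))

# Demo with a stand-in payload (a real key file has many more fields):
os.environ["GOOGLE_CREDENTIALS_JSON"] = base64.b64encode(
    json.dumps({"type": "service_account", "project_id": "your-gcp-project-id"}).encode()
).decode()

print(load_google_credentials()["type"])  # → service_account
```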
- Docker Compose reads only `.env.docker` (auto-generated by `./scripts/docker-start.sh`)
- Vercel / Next.js reads `.env` for production deployments
- Your unique JWT keys are saved in `.docker-credentials.txt` for easy reference
- All credential files are git-ignored to protect your secrets
- To regenerate keys: delete `.env.docker` and run `./scripts/docker-start.sh` again
Use these commands from the project root. All Docker commands require `--env-file .env.docker` to avoid warnings about unset variables.

```bash
./scripts/docker-start.sh
```

This handles everything: env generation, build, start, schema migration, and seeding.
| Scenario | Command |
|---|---|
| Changed application code (components, API routes, etc.) | `docker compose --env-file .env.docker up -d --build web` |
| Changed `.env.docker` (runtime vars like API keys) | `docker compose --env-file .env.docker up -d web` |
| Changed `NEXT_PUBLIC_*` env vars | `docker compose --env-file .env.docker up -d --build web` |
| Changed database schema | `docker exec -i soundflare-db psql -U postgres -d postgres < database/setup-supabase.sql && docker restart soundflare-rest` |
| Start all services | `docker compose --env-file .env.docker up -d` |
| Stop all services | `docker compose down` |
| Full reset (wipes database) | `docker compose down -v && ./scripts/docker-start.sh` |
| Regenerate JWT keys | `rm .env.docker && ./scripts/docker-start.sh` |
Why `--build` for code changes? The Next.js app is compiled at Docker build time into a standalone production bundle. Without `--build`, the container runs the old compiled code.

Why `--build` for `NEXT_PUBLIC_*` vars? Variables prefixed with `NEXT_PUBLIC_` are inlined at build time by Next.js. Changing them at runtime has no effect — you must rebuild.

Why NO `--build` for other env vars? Runtime-only vars (like `SUPABASE_SERVICE_ROLE_KEY`, `OPENAI_API_KEY`, `TRILLET_EVALS_MASTER_KEY`) are read at request time from the container environment. Restarting the container picks them up without a rebuild.
```bash
# All services
docker compose --env-file .env.docker logs -f

# Web app only (API routes, errors)
docker compose --env-file .env.docker logs -f web

# Auth service
docker compose --env-file .env.docker logs -f auth

# Database
docker compose --env-file .env.docker logs -f db
```

Note: The web container runs a production build. Only `console.error` and `console.warn` appear in the logs — `console.log` is stripped. Use `console.error` for important diagnostics.
| Service | Container | Port | Purpose |
|---|---|---|---|
| `web` | `soundflare-web` | 8000 | Next.js dashboard + API routes |
| `db` | `soundflare-db` | 5432 | PostgreSQL database |
| `auth` | `soundflare-auth` | 9999 | Supabase Auth (GoTrue) |
| `rest` | `soundflare-rest` | 3000 | Supabase API (PostgREST) |
| `kong` | `soundflare-kong` | 54321 | API gateway (unified Supabase URL) |
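To quickly verify the stack is actually listening on these ports, a few lines of Python will do. This is a convenience sketch using the port numbers from the table above, not part of SoundFlare itself:

```python
import socket

# Service → host port, as published by docker-compose.yml
SERVICES = {"web": 8000, "db": 5432, "auth": 9999, "rest": 3000, "kong": 54321}

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in SERVICES.items():
    status = "up" if is_listening("127.0.0.1", port) else "down"
    print(f"{name:5} :{port:<6} {status}")
```

If `kong` (port 54321) is down, the dashboard cannot reach Supabase at all, so check that container first.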
If you cannot log in with the default credentials, or if the database container had issues starting:
- Reset the Environment:

  ```bash
  docker compose down -v
  ./scripts/docker-start.sh
  ```

- Check Logs:

  ```bash
  docker compose --env-file .env.docker logs -f web
  docker compose --env-file .env.docker logs -f auth
  docker compose --env-file .env.docker logs -f kong
  ```

- View Your Credentials:

  ```bash
  cat .docker-credentials.txt
  ```
If you see `AuthApiError: Invalid Refresh Token` in the web logs after regenerating JWT keys, clear your browser cookies for `localhost:8000` and log in again. Old session tokens are invalid after key rotation.

If you see network errors in the browser console, ensure you are accessing the dashboard at `http://localhost:8000` exactly. The local gateway is configured to allow requests from this origin.
Connect your Python-based LiveKit/Trillet agent using the SoundFlare SDK.
Coming Soon: The `soundflare` pip package is being prepared for release.
```bash
pip install soundflare
```

```python
import os

from soundflare import LivekitObserve
from livekit.agents import AgentSession, JobContext

# Initialize
soundflare = LivekitObserve(
    agent_id="YOUR_AGENT_ID",
    apikey="YOUR_API_KEY"  # Generate this in the SoundFlare Dashboard
)

async def entrypoint(ctx: JobContext):
    session = AgentSession(...)

    # Start monitoring
    session_id = soundflare.start_session(session=session)

    # Export data on shutdown
    async def on_shutdown():
        await soundflare.export(session_id)

    ctx.add_shutdown_callback(on_shutdown)

    await session.start(...)
```

This project is licensed under the MIT License.
- GitHub: Report issues