A RAG-based support assistant for Bisq 2, providing automated support through a chat interface.
This project consists of the following components:
- API Service: FastAPI-based backend implementing the RAG (Retrieval Augmented Generation) system
- Web Frontend: Next.js web application for the chat interface
- Bisq Integration: Connection to Bisq 2 API for support chat data
- Monitoring: Prometheus and Grafana for system monitoring
This project includes two primary methods for getting started:
- Local Development: Use the `run-local.sh` script for a fully containerized local environment on your machine.
- Production Deployment: Use the `scripts/deploy.sh` script for initial setup on a dedicated server.
For developing on your local machine, the project provides a comprehensive Docker Compose setup that mirrors the production environment.
Prerequisites: Docker, Git, Python 3.11+, Node.js 20+, Java 17+.
To start the local environment, use the provided shell script:
```bash
# This script handles building all containers and starting them in the correct order.
./run-local.sh
```

This is the only command needed for local development. It uses `docker/.env` for secrets and `docker-compose.local.yml` for development-specific configurations like hot-reloading. The scripts in the `scripts/` directory are not intended for local use.
If you want Dockerized support-agent services to use a manually started Bisq2 headless API on the host:
```bash
BISQ_API_URL=http://host.docker.internal:8090 ./run-local.sh
```

Important: the Bisq2 API process must be reachable from Docker (bind to host `0.0.0.0`, not only `127.0.0.1`).
For Matrix local testing, keep public support sync and staff notifications separated:
```bash
MATRIX_SYNC_ROOMS=!ilodKeOTMMMDTlGhkf:matrix.org
MATRIX_STAFF_ROOM=!KQdmdCuJsNAjLhkIre:matrix.org
# local testing can reuse the same room for alerts
MATRIX_ALERT_ROOM=!KQdmdCuJsNAjLhkIre:matrix.org
```

If the Bisq2 API runs with `authorizationRequired=true`, enable authenticated support-agent access:

```bash
BISQ_API_AUTH_ENABLED=true
# Prefer durable client credentials in production...
BISQ_API_CLIENT_ID=...
BISQ_API_CLIENT_SECRET=...
# ...and use pairing data only for first bootstrap if needed:
# BISQ_API_PAIRING_CODE_ID=...
# BISQ_API_PAIRING_QR_FILE=/path/to/pairing_qr_code.txt
```

Note: the current Bisq2 permission mapping must include `/api/v1/support/*` for fully authorized support export/send.
This project is designed to be deployed via Docker on a dedicated server. The scripts/ directory contains the necessary automation for installation, updates, and management.
The scripts/deploy.sh script is the main entrypoint for setting up a new production server. It performs the following actions:
- Creates a dedicated application user (`bisq-support`).
- Clones the repository into `/opt/bisq-support`.
- Creates all necessary data and logging directories.
- Sets the correct file ownership to the `bisq-support` user.
- Builds and starts all Docker containers.
To run it, execute the script from your server:
```bash
curl -sSL https://raw.githubusercontent.com/bisq-network/bisq2-support-agent/main/scripts/deploy.sh | sudo bash
```

After deployment, you must configure the following settings in `/opt/bisq-support/docker/.env`:
- CORS Settings - Required to access the admin interface:

  ```bash
  # Update CORS_ORIGINS to include your server IP/domain:
  CORS_ORIGINS=http://localhost:3000,http://127.0.0.1:3000,http://YOUR_SERVER_IP
  ```

- Privacy and Security Settings (recommended for GDPR compliance):

  ```bash
  # Data retention period in days (default: 30)
  DATA_RETENTION_DAYS=30
  # Enable privacy-preserving features (recommended for production)
  ENABLE_PRIVACY_MODE=true
  # Enable PII detection in logs to prevent logging sensitive data
  PII_DETECTION_ENABLED=true
  # Log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
  LOG_LEVEL=INFO
  ```

- Cookie Security - For Tor/.onion deployments:

  ```bash
  # Set to false for HTTP/Tor deployments (default: true for HTTPS)
  COOKIE_SECURE=false
  ```

- Security Headers (enabled by default in production):
  - The production deployment uses `docker/nginx/conf.d/default.prod.conf` with security headers enabled
  - Headers include: CSP, X-Frame-Options, X-Content-Type-Options, Referrer-Policy, Permissions-Policy
  - Configuration is automatically applied via `docker-compose.yml`
  - No additional setup required - security headers work out of the box

- Dark-deploy new operator surfaces first:

  ```bash
  TRUST_MONITOR_ENABLED=false
  MATRIX_CHATOPS_ENABLED=false
  BISQ2_CHATOPS_ENABLED=false
  BISQ_API_AUTH_STATE_SECRET=replace-with-a-generated-secret
  ```

  Later canaries:

  ```bash
  TRUST_MONITOR_ENABLED=true
  TRUST_MONITOR_ALERT_SURFACE=admin_ui
  TRUST_MONITOR_ACTOR_KEY_SECRET=replace-with-a-generated-secret
  MATRIX_CHATOPS_ENABLED=true
  MATRIX_CHATOPS_ROOM_IDS=!your-staff-room:matrix.org
  ```

- Restart services after making changes:

  ```bash
  cd /opt/bisq-support/scripts/
  ./restart.sh
  ```

- Access admin interface: Navigate to `http://YOUR_SERVER_IP/admin` and use the `ADMIN_API_KEY` from the `.env` file.
Optional: Tor Hidden Service Deployment
To expose the application as a Tor hidden service (.onion address) for private, censorship-resistant access:
This is completely optional - the application works perfectly without Tor configuration.
📖 See the complete guide: Tor Hidden Service Deployment
Quick overview:
- Install Tor daemon on the host system
- Configure hidden service in `/etc/tor/torrc`
- Add `.onion` address to the `TOR_HIDDEN_SERVICE` environment variable
- Set `COOKIE_SECURE=false` for `.onion` deployments
- Restart services
The guide includes:
- ✅ Step-by-step deployment instructions
- ✅ Security hardening configuration
- ✅ Backup and recovery procedures
- ✅ Monitoring and troubleshooting
- ✅ Automated security testing
Once deployed, the production application should be managed using the following scripts located in /opt/bisq-support/scripts/:
- `start.sh`: Starts the application containers using the production configuration.
- `stop.sh`: Stops and removes the application containers.
- `restart.sh`: Performs a graceful stop followed by a start.
- `cleanup_old_data.sh`: Cleans up personal data older than `DATA_RETENTION_DAYS` (GDPR compliance).
These scripts are location-aware and source their configuration from a production environment file (/etc/bisq-support/deploy.env). They should not be used for local development.
Note: If you encounter "Permission denied" errors when running scripts, ensure they are executable:
```bash
chmod +x /opt/bisq-support/scripts/*.sh
```

To automatically clean up personal data on production servers, set up a cron job:
```bash
# Edit root crontab
sudo crontab -e

# Add this line to run cleanup daily at 2 AM
0 2 * * * /opt/bisq-support/scripts/cleanup_old_data.sh >> /var/log/bisq-data-cleanup.log 2>&1
```

The cleanup script:
- Removes raw chat data older than `DATA_RETENTION_DAYS` (default: 30 days)
- Preserves anonymized FAQs permanently
- Respects the `ENABLE_PRIVACY_MODE` setting
- Can be run manually with the `--dry-run` flag to preview deletions
The scripts/update.sh script handles pulling the latest changes from the Git repository, rebuilding Docker images, and restarting the services. It includes a rollback mechanism in case of failure.
The deployment script may make scripts executable, which Git sees as a file modification. The `update.sh` script is designed to handle this by stashing changes. If you encounter issues, you can resolve them manually by resetting the branch: `git fetch origin && git reset --hard origin/main`.
- Kernel Updates: The script may warn about a pending kernel upgrade. It is safe to proceed, but you should `sudo reboot` after the deployment is complete.
- Environment Variables: Ensure all required variables are exported correctly before running scripts with `sudo -E`. For a full list, see the `deploy.sh` script and the Environment Configuration doc.
The project uses the following data directories within api/data/:
- `wiki/`: Contains wiki documents for the RAG knowledge base.
- `feedback.db`: SQLite database storing user feedback (automatically created on first run).
- `faqs.db`: SQLite database storing FAQs (authoritative source, automatically created on first run).
- `unified_training.db`: SQLite database storing unified FAQ candidates, review decisions, calibration state, and learning thresholds. This is the training pipeline's source of truth.
- `bm25_vocabulary.json`: BM25 vocabulary used by the hybrid retriever (runtime-generated).
- `qdrant_index_metadata.json`: Qdrant index build metadata (runtime-generated).
These are automatically created during deployment. For local development, create the wiki directory if needed: `mkdir -p api/data/wiki`.
Legacy note: `faq_candidates.db` and `unified_candidates.db` may still exist on older environments, but the current application does not use them at runtime. The live unified training and review pipeline reads and writes `unified_training.db`.
The feedback system has been migrated from JSONL files to SQLite for better data integrity and query performance:
- SQLite Database: `api/data/feedback.db` - primary feedback storage (automatically created)
- Database Schema: Includes tables for feedback entries, conversation history, metadata, and issues
- Migration: Existing JSONL feedback files can be migrated using `python -m app.scripts.migrate_feedback_to_sqlite`
- Permissions: The database file must be writable by the API container user (UID 1001, the `bisq-support` user in production). If you encounter permission errors, fix ownership with `sudo chown 1001:1001 api/data/feedback.db`
For new deployments, no migration is needed - the database will be created automatically on first startup.
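For readers who want to inspect or prototype against such a database, here is a minimal sketch of what a schema of this shape can look like. The table and column names below are hypothetical illustrations of the documented contents; the real schema is created by the API on first startup and is authoritative.

```python
import sqlite3

# Hypothetical schema, for illustration only: the actual tables are
# created automatically by the API service on first startup.
SCHEMA = """
CREATE TABLE IF NOT EXISTS feedback (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    question TEXT NOT NULL,
    answer TEXT NOT NULL,
    helpful INTEGER NOT NULL,          -- 1 = positive, 0 = negative
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE IF NOT EXISTS conversation_history (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    feedback_id INTEGER REFERENCES feedback(id),
    role TEXT NOT NULL,                -- "user" or "assistant"
    content TEXT NOT NULL
);
"""


def open_feedback_db(path: str) -> sqlite3.Connection:
    """Open (and, if needed, initialize) a feedback database."""
    conn = sqlite3.connect(path)
    conn.executescript(SCHEMA)
    return conn
```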
The FAQ extraction system needs to identify which users in the support chat are official support agents. This is configured using the SUPPORT_AGENT_NICKNAMES environment variable.
Add support agent nicknames to your environment configuration:
```bash
# Single support agent
SUPPORT_AGENT_NICKNAMES=suddenwhipvapor

# Multiple support agents (comma-separated)
SUPPORT_AGENT_NICKNAMES=suddenwhipvapor,strayorigin,toruk-makto
```

Important Notes:
- Required for FAQ extraction: If not configured, no messages will be marked as support messages
- No fallback behavior: The system will NOT automatically detect support agents if this is not configured
- Case-sensitive: Nicknames must match exactly as they appear in the support chat
- Comma-separated: Use commas to separate multiple nicknames (no spaces recommended)
When processing support chat conversations:
- Messages from configured nicknames are marked as support messages
- Support messages are used to identify Q&A conversations for FAQ extraction
- Only conversations with both user questions and support answers are extracted as FAQs
If SUPPORT_AGENT_NICKNAMES is not configured:
- ❌ No messages will be marked as support messages
- ❌ No FAQs will be extracted from conversations
⚠️ The system will operate normally for answering questions, but won't learn from new support chats
The RAG system uses a hybrid retrieval pipeline combining:
- Metadata Filtering: Protocol-based prioritization (Bisq Easy vs Bisq 1)
- Keyword Search: BM25 sparse vectors for exact term matching
- Semantic Search: Dense vector embeddings for meaning-based retrieval
- Weighted Fusion: 60% semantic + 40% keyword scoring (configurable via `HYBRID_SEMANTIC_WEIGHT` / `HYBRID_KEYWORD_WEIGHT`)
```
User Query → Version Detection → Multi-Stage Protocol Filtering
  → Hybrid Search (Semantic 60% + Keyword 40%)
  → Deduplication → Optional ColBERT Reranking
  → Context Assembly → LLM Generation
```
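The fusion step can be sketched as follows. This is a simplified stand-in for the Qdrant-side fusion: the function name is hypothetical, and input scores are assumed to be already normalized to a comparable range.

```python
import os

# Defaults match the documented 60/40 split, overridable via the
# HYBRID_SEMANTIC_WEIGHT / HYBRID_KEYWORD_WEIGHT environment variables.
SEMANTIC_WEIGHT = float(os.environ.get("HYBRID_SEMANTIC_WEIGHT", "0.6"))
KEYWORD_WEIGHT = float(os.environ.get("HYBRID_KEYWORD_WEIGHT", "0.4"))


def fuse_scores(
    semantic: dict[str, float], keyword: dict[str, float]
) -> list[tuple[str, float]]:
    """Combine per-document semantic and keyword scores and return
    (doc_id, fused_score) pairs ranked by the fused score."""
    doc_ids = set(semantic) | set(keyword)
    fused = {
        doc_id: SEMANTIC_WEIGHT * semantic.get(doc_id, 0.0)
        + KEYWORD_WEIGHT * keyword.get(doc_id, 0.0)
        for doc_id in doc_ids
    }
    return sorted(fused.items(), key=lambda item: item[1], reverse=True)
```

A document found by only one retriever still gets a fused score (the missing side contributes 0), so exact-term hits and purely semantic hits can both surface.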
Key Parameters:
- Qdrant hybrid retrieval (dense + sparse vectors) with protocol-aware staged filtering
- BM25 with K1=1.5, B=0.75
- Embedding model: `text-embedding-3-small` (1536 dimensions)
- Hybrid weighting defaults: semantic 0.6 + keyword 0.4
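For reference, Okapi BM25 term weighting with the parameters above looks like this. This is a from-scratch sketch for intuition; production stores BM25 as Qdrant sparse vectors rather than running code like this.

```python
import math

K1, B = 1.5, 0.75  # the documented BM25 parameters


def bm25_score(query_terms: list[str], doc: list[str], corpus: list[list[str]]) -> float:
    """Okapi BM25 score of a tokenized document against query terms,
    with document frequencies taken from the tokenized corpus."""
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)
        if df == 0:
            continue  # term unseen in corpus contributes nothing
        idf = math.log(1 + (n - df + 0.5) / (df + 0.5))
        tf = doc.count(term)
        # B controls document-length normalization, K1 term-frequency saturation.
        denom = tf + K1 * (1 - B + B * len(doc) / avgdl)
        score += idf * tf * (K1 + 1) / denom
    return score
```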
For detailed architecture, see RAG Architecture.
- Location: `api/data/wiki/`
- Purpose: Primary knowledge base with structured Bisq 1/2 documentation
- Processing: XML dumps → `process_wiki_dump.py` → JSONL with metadata
- Source Weight: 1.1 (slightly higher than FAQs)
- Protocol Tagging: Documents auto-categorized as `bisq_easy`, `multisig_v1`, or `all`
Metadata Structure:
```json
{
  "title": "Document Title",
  "category": "bisq2|bisq1|general",
  "type": "wiki",
  "source_weight": 1.1,
  "protocol": "bisq_easy|multisig_v1|all"
}
```

- Storage: SQLite database (`api/data/faqs.db`) - authoritative source
- Extraction: Automatic from support chat via the Training Pipeline
- Manual Addition: Admin interface at `/admin/manage-faqs`
- Source Weight: 1.0
For Bisq Easy queries (default):
| Stage | Filter | k | Trigger |
|---|---|---|---|
| 1 | `protocol="bisq_easy"` | 6 | Always |
| 2 | `protocol="all"` | 4 | If < 4 docs |
| 3 | `protocol="multisig_v1"` | 2 | If < 3 docs |
For Bisq 1 queries:
| Stage | Filter | k | Trigger |
|---|---|---|---|
| 1 | `protocol="multisig_v1"` | 4 | Always |
| 2 | `protocol="all"` | 2 | If < 3 docs |
When the API service starts, it processes wiki and FAQ sources, then ensures the Qdrant hybrid index is up to date (Qdrant data persists in the Docker volume bisq2-qdrant-data).
```bash
# Add new wiki content
cp your_document.md api/data/wiki/
docker compose -f docker/docker-compose.yml restart api
```

For FAQ extraction details, see the FAQ Extraction Documentation.
The project includes Prometheus and Grafana for monitoring.
- Prometheus: Collects metrics from the API and web services.
- Grafana: Provides dashboards for visualizing metrics.
For details on securing these services, see the Monitoring Security Guide.
This project uses pre-commit hooks to automatically enforce code quality standards. The hooks run before each commit to check formatting, imports, types, and tests.
Setting up pre-commit hooks:
```bash
# Install pre-commit (if not already installed)
cd api
pip install pre-commit

# Install the git hooks
pre-commit install

# (Optional) Run hooks on all files to verify setup
pre-commit run --all-files
```

What runs on each commit:
- ✅ black - Python code formatting
- ✅ isort - Import sorting
- ✅ mypy - Type checking
- ✅ flake8 - Code linting
- ✅ pytest - Fast tests (non-slow tests only)
- ✅ File checks (trailing whitespace, end-of-file, YAML/JSON syntax)
Bypassing hooks (use sparingly):
```bash
# Skip all hooks for a single commit (only when necessary)
git commit --no-verify -m "Emergency fix"
```

Note: The same checks run in CI, so bypassing hooks locally will cause CI failures.
When you need to update Python dependencies or if GitHub Actions fails with "requirements.txt is not up to date":
❌ Don't do this (creates platform-specific dependencies):

```bash
pip-compile api/requirements.in -o api/requirements.txt
```

✅ Do this instead (creates cross-platform compatible dependencies):

```bash
# Use Docker to generate requirements.txt in the same Linux environment as CI
docker compose -f docker/docker-compose.yml -f docker/docker-compose.dev.yml run --build --rm api pip-compile api/requirements.in -o api/requirements.txt --upgrade --no-strip-extras
```

This ensures the generated requirements.txt is compatible with both your local development environment and the Linux-based GitHub Actions CI environment.
- Add the package to `api/requirements.in`
- Regenerate `requirements.txt` using the Docker command above
- Restart the API service: `./run-local.sh`, or restart the containers manually
See the troubleshooting guide for solutions to common problems.
It covers `bisq2-api` connection issues, Docker configuration, and more.
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add <descriptive message>'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.