
Synaplan - AI-Powered Knowledge Management System

AI-powered knowledge management with chat, document processing, and RAG (Retrieval-Augmented Generation).

🚀 Quick Start

Prerequisites

  • Docker & Docker Compose
  • Git

Installation

git clone <repository-url>
cd synaplan-dev

# Quick start (models download on-demand)
docker compose up -d

# Or: Pre-download AI models during startup
AUTO_DOWNLOAD_MODELS=true docker compose up -d

What happens automatically:

  • ✅ Creates .env from .env.example (Docker Compose variables)
  • ✅ Creates backend/.env and frontend/.env (app-specific configs)
  • ✅ Installs dependencies (Composer, npm)
  • ✅ Generates JWT keypair for authentication
  • ✅ Creates database schema (migrations)
  • ✅ Loads test users and fixtures (if database is empty)
  • ✅ Starts all services
  • ✅ System ready in ~40 seconds!

First startup takes ~40 seconds because:

  • Database initialization: ~5s
  • Schema creation: ~2s
  • Fixtures loading: ~3s
  • Cache warming: ~2s
  • Dependency installation and service startup make up the rest: ~40s total (one-time setup)

Subsequent restarts take ~15 seconds (no fixtures needed).

AI Model Download Behavior:

By default, AI models are NOT downloaded automatically. They download on-demand when first used.

Option 1: Quick Start (Recommended for Development)

docker compose up -d
  • ⚡ Fast startup: ~40s (first run), ~15s (subsequent)
  • 📥 Models: Download automatically when you first send a chat message (~2-3 minutes)
  • 💡 Best for: Development, testing, quick demos
  • 🎯 System is immediately usable for login, file uploads, user management

Option 2: Pre-download Models

AUTO_DOWNLOAD_MODELS=true docker compose up -d
  • 🔄 Backend ready: Still ~40 seconds
  • 📦 Models download in background: mistral:7b (4.1GB) + bge-m3 (670MB)
  • ⏱️ Total download time: ~5-10 minutes (depends on internet speed)
  • ✅ AI chat ready as soon as the models finish downloading
  • 💡 Best for: Production, demos where AI must work immediately

Check download progress:

docker compose logs -f backend | grep -i "model\|background"

When to use which option:

  • Development/Testing: Use default (on-demand download)
  • Production/Demos: Use AUTO_DOWNLOAD_MODELS=true
  • CI/CD: Build a custom image with pre-downloaded models
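For the CI/CD route, one common pattern is to bake the models into a custom image layer. A minimal, untested sketch (the ollama/ollama base image and the temporary-server trick are assumptions, not part of this repository; the model names match this setup's defaults):

```dockerfile
# Hypothetical sketch: pre-download models into a custom Ollama image.
# Adjust base image tag and model list to your needs.
FROM ollama/ollama:latest

# Start a temporary server, pull the models, then let the layer commit
# with the downloaded weights included.
RUN ollama serve & \
    sleep 5 && \
    ollama pull mistral:7b && \
    ollama pull bge-m3
```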

🌐 Access

Service     | URL                    | Description
----------- | ---------------------- | -------------------
Frontend    | http://localhost:5173  | Vue.js Web App
Backend API | http://localhost:8000  | Symfony REST API
phpMyAdmin  | http://localhost:8082  | Database Management
MailHog     | http://localhost:8025  | Email Testing
Ollama      | http://localhost:11435 | AI Models API

👤 Test Users

Email              | Password | Level
------------------ | -------- | --------
admin@synaplan.com | admin123 | BUSINESS
demo@synaplan.com  | demo123  | PRO
test@example.com   | test123  | NEW

🧠 RAG System

The system includes a full RAG (Retrieval-Augmented Generation) pipeline:

  • Upload: Multi-level processing (Extract Only, Extract + Vectorize, Full Analysis)
  • Extraction: Tika (documents), Tesseract OCR (images), Whisper (audio)
  • Vectorization: bge-m3 embeddings (1024 dimensions) via Ollama
  • Storage: Native MariaDB VECTOR type with VEC_DISTANCE_COSINE similarity search
  • Search: Semantic search UI with configurable thresholds and group filtering
  • Sharing: Private by default, public sharing with optional expiry

🎙️ Audio Transcription

Audio files are automatically transcribed using Whisper.cpp when uploaded:

  • Supported formats: mp3, wav, ogg, m4a, opus, flac, webm, aac, wma
  • Automatic conversion: FFmpeg converts all audio to optimal format (16kHz mono WAV)
  • Models: tiny, base (default), small, medium, large - configurable via .env
  • Setup:
    • Docker: Pre-installed, download models on first run
    • Local: Install whisper.cpp and FFmpeg, configure paths in .env
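The automatic conversion step can be sketched as assembling an FFmpeg command line like the one below (an illustration, not the project's service: file names are examples, and the real code reads the binary path from FFMPEG_BINARY):

```python
# Sketch of the 16 kHz mono WAV conversion described above.
def build_ffmpeg_cmd(ffmpeg: str, src: str, dst: str) -> list[str]:
    return [
        ffmpeg,
        "-i", src,           # any supported input: mp3, ogg, m4a, ...
        "-ar", "16000",      # resample to 16 kHz (what Whisper expects)
        "-ac", "1",          # downmix to mono
        "-c:a", "pcm_s16le", # 16-bit PCM WAV
        dst,
    ]

print(" ".join(build_ffmpeg_cmd("/usr/bin/ffmpeg", "in.mp3", "out.wav")))
```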

Environment variables (see .env.example):

WHISPER_BINARY=/usr/local/bin/whisper    # Whisper.cpp binary path
WHISPER_MODELS_PATH=/var/www/html/var/whisper  # Model storage
WHISPER_DEFAULT_MODEL=base               # tiny|base|small|medium|large
WHISPER_ENABLED=true                     # Enable/disable transcription
FFMPEG_BINARY=/usr/bin/ffmpeg           # FFmpeg for audio conversion

If Whisper is unavailable, audio processing is skipped gracefully (no errors).

📱 WhatsApp Business API Integration

SynaPlan integrates with Meta's official WhatsApp Business API for bidirectional messaging.

Setup:

  1. Create WhatsApp Business Account: Meta Business Suite
  2. Get Credentials: Access Token, Phone Number ID, Business Account ID
  3. Set Environment Variables:
WHATSAPP_ACCESS_TOKEN=your_access_token
WHATSAPP_PHONE_NUMBER_ID=your_phone_number_id
WHATSAPP_BUSINESS_ACCOUNT_ID=your_business_account_id
WHATSAPP_WEBHOOK_VERIFY_TOKEN=your_verify_token
WHATSAPP_ENABLED=true
  4. Configure Webhook in Meta:
    • Callback URL: https://your-domain.com/api/v1/webhooks/whatsapp
    • Verify Token: Same as WHATSAPP_WEBHOOK_VERIFY_TOKEN
    • Subscribe to: messages
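During webhook configuration, Meta verifies the callback URL with a GET handshake using its standard hub.* query parameters. A sketch of the check the endpoint has to implement (illustrative Python, not the project's actual Symfony controller):

```python
# Sketch of Meta's webhook verification handshake (GET request).
# Meta sends hub.mode, hub.verify_token and hub.challenge; the endpoint
# must echo hub.challenge back only if the token matches.
def verify_webhook(params: dict, expected_token: str):
    if (params.get("hub.mode") == "subscribe"
            and params.get("hub.verify_token") == expected_token):
        return 200, params.get("hub.challenge", "")
    return 403, "Forbidden"

status, body = verify_webhook(
    {"hub.mode": "subscribe", "hub.verify_token": "secret", "hub.challenge": "42"},
    "secret",
)
print(status, body)  # 200 42
```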

Phone Verification (Required):

Users must verify their phone number via WhatsApp to unlock full features:

  • ANONYMOUS (not verified): 10 messages, 2 images (very limited)
  • NEW (verified): 50 messages, 5 images, 2 videos
  • PRO/TEAM/BUSINESS: Full subscription limits

Verification Flow:

  1. User enters phone number in web interface
  2. 6-digit code sent via WhatsApp
  3. User confirms code
  4. Phone linked to account → full access
  5. User can remove link anytime
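Steps 2-3 of the flow can be sketched as follows (illustrative only; the 10-minute code TTL is an assumption for the example, not documented behavior):

```python
# Illustrative sketch of 6-digit code generation and validation.
import secrets
from datetime import datetime, timedelta

def generate_code() -> str:
    # Cryptographically random, zero-padded: "000000".."999999"
    return f"{secrets.randbelow(1_000_000):06d}"

def is_valid(submitted: str, expected: str, issued_at: datetime,
             ttl_minutes: int = 10) -> bool:
    """Code must match and not be older than the (assumed) TTL."""
    fresh = datetime.utcnow() - issued_at <= timedelta(minutes=ttl_minutes)
    return submitted == expected and fresh

code = generate_code()
print(len(code), code.isdigit())  # 6 True
```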

Supported Features:

  • ✅ Text Messages (send & receive)
  • ✅ Media Messages (images, audio, video, documents)
  • ✅ Audio Transcription (via Whisper.cpp)
  • ✅ Phone Verification System
  • ✅ Full AI Pipeline (PreProcessor → Classifier → Handler)
  • ✅ Rate Limiting per subscription level
  • ✅ Message status tracking

Message Flow:

WhatsApp User → Meta Webhook → /api/v1/webhooks/whatsapp
  → Message Entity → PreProcessor (files, audio transcription)
  → Classifier (sorting, tool detection) → InferenceRouter
  → AI Handler (Chat/RAG/Tools) → Response → WhatsApp

📧 Email Channel Integration

SynaPlan supports email-based AI conversations with smart chat context management.

Email Addresses:

  • General: smart@synaplan.com - Creates general chat conversation
  • Keyword-based: smart+keyword@synaplan.com - Creates dedicated chat context
    • Example: smart+project@synaplan.com for project discussions
    • Example: smart+support@synaplan.com for support tickets

Features:

  • ✅ Automatic User Detection: Registered users get their own rate limits
  • ✅ Anonymous Email Support: Unknown senders get ANONYMOUS limits
  • ✅ Chat Context: Email threads become chat conversations
  • ✅ Spam Protection:
    • Max 10 emails/hour per unknown address
    • Automatic blacklisting for spammers
  • ✅ Email Threading: Replies stay in the same chat context
  • ✅ Unified Rate Limits: Same limits across Email, WhatsApp, Web

How It Works:

User sends email to smart@synaplan.com
  → System checks if email is registered user
  → If yes: Use user's rate limits
  → If no: Create anonymous user with ANONYMOUS limits
  → Parse keyword from recipient (smart+keyword@)
  → Find or create chat context
  → Process through AI pipeline
  → Send response via email (TODO: requires SMTP)

Rate Limits (Unified):

  • Registered User Email = User's subscription limits
  • Unknown Email = ANONYMOUS limits (10 messages total)
  • Spam Detection: Auto-blacklist after 10 emails/hour

🔌 External Channel Integration (Generic)

The API also supports other external channels via webhooks authenticated with API keys:

Setup:

  1. Create API Key: POST /api/v1/apikeys (requires JWT login)

    { "name": "Email Integration", "scopes": ["webhooks:*"] }

    Returns: sk_abc123... (store securely - shown only once!)

  2. Use Webhooks: Send messages via API key authentication

    • Header: X-API-Key: sk_abc123... or
    • Query: ?api_key=sk_abc123...

Endpoints:

  • Email: POST /api/v1/webhooks/email
  • WhatsApp: POST /api/v1/webhooks/whatsapp
  • Generic: POST /api/v1/webhooks/generic

Example (Email):

curl -X POST https://your-domain.com/api/v1/webhooks/email \
  -H "X-API-Key: sk_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "from": "user@example.com",
    "subject": "Question",
    "body": "Hello, how can I help?"
  }'

Response: AI-generated reply based on message content

API Key Management:

  • GET /api/v1/apikeys - List keys
  • POST /api/v1/apikeys - Create key
  • PATCH /api/v1/apikeys/{id} - Update (activate/deactivate)
  • DELETE /api/v1/apikeys/{id} - Revoke key

📁 Project Structure

synaplan-dev/
├── _devextras/          # Development extras
├── _docker/             # Docker configurations
│   ├── backend/         # Backend Dockerfile & scripts
│   └── frontend/        # Frontend Dockerfile & nginx
├── backend/             # Symfony Backend (PHP 8.3)
├── frontend/            # Vue.js Frontend
└── docker-compose.yml   # Main orchestration

⚙️ Environment Configuration

Environment files are auto-generated on first start:

  • backend/.env.local (auto-created by backend container, only if not exists)
  • frontend/.env.docker (auto-created by frontend container)

Note: .env.local is never overwritten. To reset it, delete the file and restart the container.

Example files provided:

  • backend/.env.docker.example (reference)
  • frontend/.env.docker.example (reference)

🛠️ Development

# View logs
docker compose logs -f

# Restart services
docker compose restart backend
docker compose restart frontend

# Reset database (deletes all data!)
docker compose down -v
docker compose up -d

# Run migrations
docker compose exec backend php bin/console doctrine:migrations:migrate

# Install packages
docker compose exec backend composer require <package>
docker compose exec frontend npm install <package>

🤖 AI Models

Models are downloaded on-demand when first used:

  • mistral:7b - Main chat model (4.1 GB) - Downloaded on first chat
  • bge-m3 - Embedding model for RAG (2.2 GB) - Downloaded when using document search

Pre-download Models (Recommended)

To download models during startup (in background):

AUTO_DOWNLOAD_MODELS=true docker compose up -d

The backend starts immediately while models download in parallel. Monitor progress:

docker compose logs -f backend

You'll see messages like:

  • [Background] ⏳ Model 'mistral:7b' download in progress...
  • [Background] ✅ Model 'mistral:7b' downloaded successfully!

✨ Features

  • ✅ AI Chat: Multiple providers (Ollama, OpenAI, Anthropic, Groq, Gemini)
  • ✅ RAG System: Semantic search with MariaDB VECTOR + bge-m3 embeddings (1024 dim)
  • ✅ Document Processing: PDF, Word, Excel, Images (Tika + OCR)
  • ✅ Audio Transcription: Whisper.cpp integration
  • ✅ File Management: Upload, share (public/private), organize with expiry
  • ✅ App Modes: Easy mode (simplified) and Advanced mode (full features)
  • ✅ Security: Private files by default, secure sharing with tokens
  • ✅ Multi-user: Role-based access with JWT authentication
  • ✅ Responsive UI: Vue.js 3 + TypeScript + Tailwind CSS

📄 License

See LICENSE

About

Creating a new setup for a BE/FE separation of code. Migrating Synaplan into Release 2.0 with Symfony, Vue and Triton as local inference.
