Last updated: 2025-11-26
- Node.js ≥ 18, Docker + Docker Compose, Git.
- Llama Prompt Guard 2 model (downloaded via script).
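The Node.js version requirement above is worth checking before installing. A minimal sketch of such a check; the `node_ok` helper is hypothetical (not part of the repo) and parses a `node --version`-style string:

```bash
# Hypothetical helper: check that a version string like "v18.19.0"
# satisfies the Node.js >= 18 prerequisite.
node_ok() {
  major=${1#v}          # strip the leading "v"
  major=${major%%.*}    # keep only the major component
  [ "$major" -ge 18 ]
}

# Example with a literal version string; locally you would pass
# "$(node --version)" instead.
node_ok "v18.19.0" && echo "Node.js version OK"
```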
- Clone repo:

```bash
git clone https://github.com/tbartel74/Vigil-Guard.git
cd Vigil-Guard
```

- Download model (after accepting the Meta license):

```bash
./scripts/download-llama-model.sh  # saves to ../vigil-llm-models/
```

- Run installer:

```bash
chmod +x install.sh
./install.sh
```

- Start services (docker-compose):

```bash
docker-compose up -d
```

- Smoke test heuristics (Branch A):

```bash
curl -X POST http://localhost:5005/analyze \
  -H "Content-Type: application/json" \
  -d '{"text":"Hello world","request_id":"qs-smoke"}'
```

- Smoke test workflow webhook:

```bash
curl -X POST http://localhost:5678/webhook/vigil-guard-2 \
  -H "Content-Type: application/json" \
  -d '{"chatInput":"Hello world","sessionId":"demo-1"}'
```

- Web UI: 5173 (dev) / 80 via Caddy (`/ui/`), backend: 8787.
- Workflow (n8n): 5678 (`/n8n/`), webhook: `/webhook/vigil-guard-2`.
- PII API: 5001, Language Detector: 5002.
- Heuristics: 5005, Semantic: 5006, LLM Safety Engine: 8000.
- ClickHouse: 8123 (HTTP), Grafana: 3001.
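Hand-writing the JSON bodies for the smoke tests above gets error-prone once the text contains special characters. A rough sketch of a payload builder; the `vg_payload` helper is hypothetical and assumes its inputs contain no double quotes or backslashes (no JSON escaping is performed):

```bash
# Hypothetical helper: build the /analyze request body from two arguments.
# Assumes $1 and $2 contain no double quotes or backslashes.
vg_payload() {
  printf '{"text":"%s","request_id":"%s"}' "$1" "$2"
}

# Produces the same body as the heuristics smoke test:
vg_payload "Hello world" "qs-smoke"

# Usage with curl (services must be running):
#   curl -X POST http://localhost:5005/analyze \
#     -H "Content-Type: application/json" \
#     -d "$(vg_payload "Hello world" "qs-smoke")"
```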
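With the port list above, a quick liveness sweep can confirm which services came up after `docker-compose up -d`. A sketch, assuming `curl` is installed; the `check_port` helper and the choice of `/` as the probe path are assumptions (individual services may only answer on specific paths):

```bash
# Hypothetical helper: report whether anything answers HTTP on localhost:$1.
check_port() {
  if curl -fsS -o /dev/null --max-time 2 "http://localhost:$1/" 2>/dev/null; then
    echo "port $1: up"
  else
    echo "port $1: down"
  fi
}

# Ports from the list above.
for p in 5001 5002 5005 5006 8000 5678 8123 3001; do
  check_port "$p"
done
```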
- Check architecture: docs/architecture/pipeline.md
- Configure: docs/config/unified-config.md, docs/config/heuristics.md
- Run tests: docs/tests/index.md