
Quickstart (5 minutes)

Last updated: 2025-11-26

Requirements

  • Node.js ≥ 18, Docker + Docker Compose, Git.
  • Llama Prompt Guard 2 model (downloaded via script).
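The Node.js ≥ 18 requirement can be verified before installing. A minimal sketch; the `node_major` helper is ours for illustration, not part of the repo:

```shell
# Extract the major version from a Node.js version string like "v18.19.0".
node_major() {
  local v="${1#v}"          # strip the leading "v"
  printf '%s\n' "${v%%.*}"  # keep everything before the first dot
}

# Shown here with a fixed string; on a real machine use "$(node --version)".
major="$(node_major "v18.19.0")"
if [ "$major" -ge 18 ]; then
  echo "Node.js OK (major $major)"
else
  echo "Node.js too old (major $major); need >= 18" >&2
fi
```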

Steps

  1. Clone the repo:
git clone https://github.com/tbartel74/Vigil-Guard.git
cd Vigil-Guard
  2. Download the model (after accepting the Meta license):
./scripts/download-llama-model.sh   # saves to ../vigil-llm-models/
  3. Run the installer:
chmod +x install.sh
./install.sh
  4. Start the services with Docker Compose:
docker-compose up -d
  5. Smoke-test the heuristics service (Branch A):
curl -X POST http://localhost:5005/analyze \
  -H "Content-Type: application/json" \
  -d '{"text":"Hello world","request_id":"qs-smoke"}'
  6. Smoke-test the workflow webhook:
curl -X POST http://localhost:5678/webhook/vigil-guard-2 \
  -H "Content-Type: application/json" \
  -d '{"chatInput":"Hello world","sessionId":"demo-1"}'
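Once the services are up, the two smoke tests above can be wrapped in a small pass/fail helper. A sketch, assuming only that a healthy endpoint answers with an HTTP 2xx status; the `smoke` helper is illustrative, not part of the repo:

```shell
# POST a JSON body to an endpoint and report PASS/FAIL based on HTTP status.
# curl -f makes non-2xx responses (and connection failures) return nonzero.
smoke() {
  local name="$1" url="$2" body="$3"
  if curl -sf --max-time 5 -X POST "$url" \
       -H "Content-Type: application/json" -d "$body" >/dev/null; then
    echo "PASS $name"
  else
    echo "FAIL $name"
  fi
}

smoke heuristics http://localhost:5005/analyze \
  '{"text":"Hello world","request_id":"qs-smoke"}'
smoke webhook http://localhost:5678/webhook/vigil-guard-2 \
  '{"chatInput":"Hello world","sessionId":"demo-1"}'
```

If either line prints FAIL, check `docker-compose ps` and the container logs before moving on.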

Key ports

  • Web UI: 5173 (dev) or 80 via Caddy (/ui/); backend API: 8787.
  • Workflow (n8n): 5678 (/n8n/), webhook /webhook/vigil-guard-2.
  • PII API: 5001, Language Detector: 5002.
  • Heuristics: 5005, Semantic: 5006, LLM Safety Engine: 8000.
  • ClickHouse: 8123 (HTTP), Grafana: 3001.
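To confirm which of these ports are actually listening on localhost, a quick sketch using bash's `/dev/tcp` redirection (a bash feature, not a real file); the `check_port` helper and the flat port list are ours, mirroring the table above:

```shell
# Print "open <port>" or "closed <port>" for a TCP port on 127.0.0.1.
check_port() {
  local port="$1"
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "open $port"
  else
    echo "closed $port"
  fi
}

# Key Vigil-Guard ports from the list above.
for p in 5173 8787 5678 5001 5002 5005 5006 8000 8123 3001; do
  check_port "$p"
done
```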

What next

  • Check architecture: docs/architecture/pipeline.md
  • Configure: docs/config/unified-config.md, docs/config/heuristics.md
  • Run tests: docs/tests/index.md