# ============================================================
# ShieldByte — Environment Variables
# ============================================================
# Copy this file to .env and fill in each value.
# cp .env.example .env
# ============================================================
# ── SUPABASE ─────────────────────────────────────────────────
# 1. Go to https://supabase.com → open your project
# 2. Click ⚙️ Project Settings → API
# 3. Copy "Project URL" → paste below as SUPABASE_URL
# 4. Copy the "service_role" key (NOT the anon/publishable key):
# → It's in the same API page, below the anon key
# → Click the 👁️ eye icon to reveal it
# → ⚠️ The anon key won't work for inserts with RLS enabled
SUPABASE_URL=
SUPABASE_SERVICE_ROLE_KEY=
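# Optional sanity check once both values are filled in (this assumes the
# standard Supabase REST endpoint; a valid key should return JSON, an
# invalid one a 401):
#   curl -H "apikey: $SUPABASE_SERVICE_ROLE_KEY" "$SUPABASE_URL/rest/v1/"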
# ── NEWSAPI ──────────────────────────────────────────────────
# 1. Go to https://newsapi.org → click "Get API Key"
# 2. Sign up with email (free tier: 100 requests/day, articles from the last 30 days only)
# 3. Copy the key from your dashboard
NEWSAPI_KEY=
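# Optional check against NewsAPI's documented top-headlines endpoint
# (counts toward your daily quota):
#   curl "https://newsapi.org/v2/top-headlines?country=us&apiKey=$NEWSAPI_KEY"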
# ── GROQ (AI Classification) ────────────────────────────────
# 1. Go to https://console.groq.com → sign up / log in
# 2. Go to API Keys → "Create API Key"
# 3. Copy the key (free = 14,400 req/day)
GROQ_API_KEY=
# ── GEMINI ───────────────────────────────────────────────────
# Gemini setup:
# 1. Go to https://aistudio.google.com/apikey
# 2. Create a Gemini API key
# 3. Paste it below
# 4. Leave the model names as-is unless you want to override them
GEMINI_API_KEY=
GEMINI_CLASSIFIER_MODEL=gemini-2.0-flash
GEMINI_MISSION_MODEL=gemini-2.0-flash
GEMINI_FEEDBACK_MODEL=gemini-2.0-flash
# ── OLLAMA (Optional Local Fallback) ─────────────────────────
# Ollama setup (optional local-first alternative to hosted models):
# 1. Install Ollama and pull a local model such as:
# ollama pull qwen2.5:7b
# 2. Start Ollama locally (default API: http://127.0.0.1:11434)
# 3. Set one or more model names below to make ShieldByte use Ollama as the last fallback
OLLAMA_BASE_URL=http://127.0.0.1:11434
OLLAMA_CLASSIFIER_MODEL=
OLLAMA_MISSION_MODEL=
OLLAMA_FEEDBACK_MODEL=
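# Quick check that Ollama is reachable on the default port (lists the
# models you have pulled):
#   curl http://127.0.0.1:11434/api/tags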
# ── OPENROUTER (Hosted Failover) ─────────────────────────────
# OpenRouter setup (primary hosted failover chain):
# 1. Go to https://openrouter.ai/keys and create an API key
# 2. Add one or more comma-separated model IDs per task below
# 3. ShieldByte will try each configured OpenRouter model in order before falling back
OPENROUTER_API_KEY=
OPENROUTER_API_KEY_2=
OPENROUTER_API_KEY_3=
OPENROUTER_CLASSIFIER_MODELS=
OPENROUTER_MISSION_MODELS=
OPENROUTER_FEEDBACK_MODELS=
# Optional Claude-focused chains via OpenRouter:
OPENROUTER_CLAUDE_CLASSIFIER_MODELS=
OPENROUTER_CLAUDE_MISSION_MODELS=
OPENROUTER_CLAUDE_FEEDBACK_MODELS=
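# Example of a comma-separated chain (illustrative model IDs; see
# https://openrouter.ai/models for what your key can actually access):
#   OPENROUTER_CLASSIFIER_MODELS=meta-llama/llama-3.1-8b-instruct,mistralai/mistral-7b-instruct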
# ── PHASE 1 MODEL ────────────────────────────────────────────
# Optional Phase 1 supervised category model path.
# If omitted, ShieldByte looks for tmp/phase1-category-model.json
PHASE1_CATEGORY_MODEL_PATH=
# ── API SECRET (Player Token Auth) ──────────────────────────
# Used to sign player tokens via HMAC-SHA256 to prevent API impersonation.
# Generate: node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
API_SECRET=your-api-secret-here
# ── CRON SECRET ──────────────────────────────────────────────
# A random string to protect cron endpoints from public access.
# Generate: node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
CRON_SECRET=your-random-secret-here