Real-time mission-control simulation with safety-gated LLM planning, retrieval-augmented memory, and a live operator UI.
- Architecture paper
- Portfolio case study
Live telemetry, safety-gated planning, operator controls, and mission chat.
ARIA is a full-stack AI system, not just a chat demo:
- Simulates parafoil landing scenarios from telemetry CSV streams.
- Generates structured plans at 1 Hz using an LLM + retrieval + safety gate.
- Stores episodic decisions and distills cross-run lessons in SQLite/FTS.
- Supports operator-in-the-loop controls: approve, modify, reject.
Primary model is `openai/gpt-oss-120b` with automatic fallback to `openrouter/aurora-alpha`.
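The primary/fallback routing can be sketched as a try-in-order policy. This is illustrative only: `complete_with_fallback` and the stub client are hypothetical names, and the real policy lives in `backend/llm/`.

```python
# Sketch of primary/fallback model routing (hypothetical names;
# the project's actual policy lives in backend/llm/).
PRIMARY = "openai/gpt-oss-120b"
FALLBACK = "openrouter/aurora-alpha"

def complete_with_fallback(call, prompt, models=(PRIMARY, FALLBACK), retries=1):
    """Try each model in order; `call(model, prompt)` is any
    OpenAI-compatible completion function."""
    last_err = None
    for model in models:
        for _ in range(retries + 1):
            try:
                return model, call(model, prompt)
            except Exception as e:  # provider/network errors in practice
                last_err = e
    raise RuntimeError(f"all models failed: {last_err}")

# Example with a stub client whose primary model times out:
def stub(model, prompt):
    if model == PRIMARY:
        raise TimeoutError("provider timeout")
    return f"plan from {model}"

used, out = complete_with_fallback(stub, "plan now")
# used == FALLBACK
```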
```
Telemetry CSVs (20 Hz)
  -> Playback service (FastAPI)
  -> SSE stream (/api/events/stream)
  -> Next.js UI (gauges, timeline, plan panel)
```
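The UI consumes this stream as standard Server-Sent Events: frames separated by blank lines, each with optional `event:` and `data:` fields. A minimal stdlib parser sketch (the event names and payload fields shown are illustrative, not the project's exact schema):

```python
import json

def parse_sse(text):
    """Parse a raw SSE body into (event, data) pairs."""
    events = []
    for frame in text.split("\n\n"):
        event, data_lines = "message", []
        for line in frame.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if data_lines:
            events.append((event, json.loads("\n".join(data_lines))))
    return events

# Two example frames, as the playback service might emit them:
raw = (
    "event: telemetry\n"
    'data: {"alt_m": 1200.5, "vz_ms": -4.2}\n'
    "\n"
    "event: plan_proposed\n"
    'data: {"action": "flare", "confidence": 0.82}\n'
)
events = parse_sse(raw)
```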
```
Planner tick (1 Hz)
  -> Working memory composer (recent events + lessons + docs)
  -> Token budget governor
  -> LLM JSON plan
  -> Safety gate checks
  -> plan_proposed SSE + episodic log write
```
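The tick above can be sketched end-to-end. Everything here is illustrative: the LLM is stubbed, and the limits are toy values, not the project's actual gate (which lives under `backend/services/`).

```python
import json

SAFETY_LIMITS = {"max_bank_deg": 30}  # toy limit, not the real gate

def compose_working_memory(recent_events, lessons, docs, max_items=5):
    # Keep only the most recent context to respect the token budget.
    return {
        "events": recent_events[-max_items:],
        "lessons": lessons[:max_items],
        "docs": docs[:max_items],
    }

def safety_gate(plan):
    """Reject plans that violate hard limits before they reach the UI."""
    if abs(plan.get("bank_deg", 0)) > SAFETY_LIMITS["max_bank_deg"]:
        return False, "bank angle exceeds limit"
    return True, "ok"

def planner_tick(llm, recent_events, lessons, docs):
    memory = compose_working_memory(recent_events, lessons, docs)
    plan = json.loads(llm(json.dumps(memory)))  # LLM must return strict JSON
    ok, reason = safety_gate(plan)
    return {"plan": plan, "approved": ok, "reason": reason}

# Stub LLM returning a structured plan:
stub_llm = lambda prompt: '{"action": "turn", "bank_deg": 15, "rationale": "align with wind"}'
result = planner_tick(stub_llm, recent_events=[{"alt_m": 900}], lessons=[], docs=[])
```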
Main modules:
- Backend API: `backend/api/`
- Planner/services: `backend/services/`
- Memory/retrieval: `backend/aria/memory/`
- LLM abstraction: `backend/llm/`
- Frontend app: `frontend/`
- Scenario data: `data/telemetry/`, `data/aria.sqlite`
- Backend: FastAPI, SQLite (FTS5), pandas, async SSE
- Frontend: Next.js (App Router), React, Tailwind, Recharts
- LLM routing: OpenAI-compatible client with provider/fallback policy
- Memory: episodic log + distilled semantic lessons + doc retrieval
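Lesson retrieval over SQLite FTS5 can be sketched with the stdlib alone. The schema below is an assumption for illustration; the project's actual tables live in `data/aria.sqlite`.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Illustrative schema, not the project's real one:
con.execute("CREATE VIRTUAL TABLE lessons USING fts5(run_id, text)")
con.executemany(
    "INSERT INTO lessons VALUES (?, ?)",
    [
        ("run-01", "flare too early in gusty wind causes hard landing"),
        ("run-02", "approve heading changes above 500 m altitude"),
    ],
)

def retrieve_lessons(query, k=3):
    """Return the k best-matching lessons, ranked by FTS5's built-in bm25 rank."""
    return con.execute(
        "SELECT run_id, text FROM lessons WHERE lessons MATCH ? "
        "ORDER BY rank LIMIT ?",
        (query, k),
    ).fetchall()

hits = retrieve_lessons("wind")
```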
- Python 3.11+
- Node 20+ (recommended) and pnpm
Backend (`backend/.env`):

```shell
cp backend/.env.example backend/.env
```

Set at minimum:

```
LLM_PROVIDER=openrouter
OPENROUTER_API_KEY=...
OPENAI_BASE_URL=https://openrouter.ai/api/v1
MODEL_NAME=openai/gpt-oss-120b
FALLBACK_MODEL_NAME=openrouter/aurora-alpha
```

Frontend (`frontend/.env.local`):

```shell
cp frontend/.env.local.example frontend/.env.local
```

Typical value:

```
NEXT_PUBLIC_API_BASE=http://localhost:8000
```

Install dependencies and run:

```shell
cd frontend && pnpm i && cd ..
cd backend && pip install -r requirements.txt && cd ..
pnpm dev
```

Services:
- API: http://localhost:8000
- Web: http://localhost:3000
- Open http://localhost:3000.
- Click `Start` on any scenario.
- Verify:
  - Gauges update in real time.
  - Timeline receives run events.
  - ARIA Plan populates and updates each second.
- Use Plan actions (`Approve`, `Modify`, `Reject`) to simulate operator decisions.
- Use Mission Chat to query lessons/docs.
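An operator decision amounts to a small state transition plus an episodic log entry. A sketch with hypothetical names (the real system writes to SQLite rather than a list):

```python
VALID_ACTIONS = {"approve", "modify", "reject"}

def apply_operator_decision(plan, action, episodic_log, modification=None):
    """Record the operator's decision on a proposed plan.
    `episodic_log` is any list-like sink for illustration."""
    if action not in VALID_ACTIONS:
        raise ValueError(f"unknown action: {action}")
    final_plan = dict(plan)
    if action == "modify" and modification:
        final_plan.update(modification)
    episodic_log.append({"action": action, "proposed": plan, "final": final_plan})
    # A rejected plan is never executed:
    return final_plan if action != "reject" else None

log = []
plan = {"action": "turn", "bank_deg": 25}
out = apply_operator_decision(plan, "modify", log, {"bank_deg": 15})
```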
- `GET /api/events/stream`: SSE telemetry, plans, metrics, anomalies
- `POST /api/start?scenario=<key>`: start playback
- `POST /api/stop`: stop playback
- `GET /api/status`: playback status
- `POST /api/admin/flags`: ablations (`use_docs`, `use_lessons`, `use_gate`)
- `POST /api/plan/now`: one-shot planning call
- `POST /api/chat`: mission chat
- Isolated LLM layer (`backend/llm/`) with provider and retry/fallback policy.
- Backward-compatible retriever behavior for schema drift in existing SQLite files.
- Safety-gate enforcement before plan display.
- Thin compatibility facade in `backend/aria/agent.py` to avoid breaking imports during refactor.
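The schema-drift tolerance can be sketched by probing a table's columns before querying it. The table and column names here (`lessons`, `score`) are assumptions for illustration, not the project's actual schema.

```python
import sqlite3

def table_columns(con, table):
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt, pk).
    return {row[1] for row in con.execute(f"PRAGMA table_info({table})")}

def load_lessons(con):
    """Read lessons from older or newer schema versions alike.
    Older files may lack the (assumed) `score` column."""
    cols = table_columns(con, "lessons")
    if "score" in cols:
        return con.execute("SELECT text, score FROM lessons").fetchall()
    return [(text, None) for (text,) in con.execute("SELECT text FROM lessons")]

# Simulate an old-schema file without `score`:
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lessons (text TEXT)")
con.execute("INSERT INTO lessons VALUES ('avoid late flare')")
rows = load_lessons(con)
```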
- `OPENROUTER_API_KEY is not set`
  - Put key in `backend/.env`, not frontend env.
  - Restart API after edits.
- Plan panel shows `No plan yet`
  - Check API logs for planner/retriever exceptions.
  - Test `POST /api/plan/now` directly.
- Tokenizers fork warning on shutdown
  - Set `TOKENIZERS_PARALLELISM=false` in `backend/.env`.
MIT for code in this repository.
Files under data/docs retain their original licenses/sources.