Autonomous Agent Runtime & Intelligence Desktop
The conversation window is just an interface. The organism runs underneath.
| Platform | File | Notes |
|---|---|---|
| Linux (any distro) | Sovereign Engine-0.1.0.AppImage | `chmod +x` and run |
| Debian / Ubuntu | sovereign-engine_0.1.0_amd64.deb | `sudo dpkg -i` |
| Source / All platforms | Sovereign_Engine_Core_v0.1.0.zip | `bash install.sh` |
| Windows | Coming soon | install.bat + start.bat ready |
Requires Python 3.11+ on your machine. The installer creates an isolated
`.venv` automatically on first launch — nothing touches your system Python.
The Sovereign Engine Core is a production-hardened, zero-trust autonomous agent runtime. It establishes a complete multi-LLM operating environment designed to run entirely on your local hardware.
Moving beyond generic chat wrappers, the Sovereign Engine functions as a living software organism — with decentralized daemon architecture, asynchronous memory ingestion, deterministic telemetry via the Execution Ledger, and zero-trust payload containment that prevents autonomous agents from irreversibly mutating your system.
```bash
chmod +x "Sovereign Engine-0.1.0.AppImage"
./"Sovereign Engine-0.1.0.AppImage"
```

```bash
sudo dpkg -i sovereign-engine_0.1.0_amd64.deb
# Launch from your applications menu or:
sovereign-engine
```

```bash
unzip Sovereign_Engine_Core_v0.1.0.zip
cd Sovereign_Engine_Core_v0.1.0/Sovereign_Engine_Core
bash install.sh
```

The installer creates a `.venv`, installs dependencies, generates a `.env` from the example, and boots the engine. On subsequent launches just run `bash start.sh`.
```bash
git clone https://github.com/NovasPlace/Sovereign_Engine_Core.git
cd Sovereign_Engine_Core
bash install.sh
```

Open Configuration Mode in the UI to set API keys visually, or edit `.env` directly:
```bash
# Add any combination — engine auto-routes based on task type
GEMINI_API_KEY="your-key"
OPENAI_API_KEY="sk-..."
ANTHROPIC_API_KEY="sk-ant-..."
# Optional: point to a local Ollama instance
OLLAMA_HOST="http://127.0.0.1:11434"
```

No keys? No problem. If Ollama is installed and running locally, the engine auto-detects and uses it with zero configuration.
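The zero-config detection described above amounts to probing the local Ollama daemon. A minimal sketch, assuming the standard Ollama `/api/tags` endpoint (the function name `ollama_available` is illustrative, not the engine's actual API):

```python
# Hypothetical probe for a local Ollama daemon; not the engine's real code.
import urllib.request


def ollama_available(host: str = "http://127.0.0.1:11434") -> bool:
    # /api/tags lists locally pulled models; any 200 response
    # means the daemon is up and usable with zero configuration.
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=1) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused / timeout: no local Ollama, fall back to API keys.
        return False
```

If this returns `True`, local models can be slotted into the routing priority lists with no keys in `.env` at all.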
When `ACTIVE_MODEL` is set to `auto` (default), the engine classifies each task and picks the best available model:
| Task Type | Detection | Model Priority |
|---|---|---|
| Simple | Casual prompts, short queries | Gemini Flash → GPT-4o-mini → Claude Haiku → Ollama |
| Code | `function`, `debug`, `python`, `sql`, `regex`, … | deepseek-coder (local) → GPT-4o → Gemini 2.5 Pro |
| Heavy | `analyze`, `architecture`, `research`, >60 words | Gemini 2.5 Pro → GPT-4o → Claude Opus → large local |
You can always override by selecting a specific model in the UI dropdown.
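The classify-then-route flow in the table above can be sketched as follows. This is a minimal illustration under assumed names (`classify_task`, `select_model`, the model identifiers), not the engine's actual router:

```python
# Hypothetical sketch of the auto-routing logic; names are illustrative.
CODE_KEYWORDS = {"function", "debug", "python", "sql", "regex"}
HEAVY_KEYWORDS = {"analyze", "architecture", "research"}

# Priority lists mirror the table: first available model wins.
PRIORITY = {
    "simple": ["gemini-flash", "gpt-4o-mini", "claude-haiku", "ollama"],
    "code": ["deepseek-coder", "gpt-4o", "gemini-2.5-pro"],
    "heavy": ["gemini-2.5-pro", "gpt-4o", "claude-opus", "large-local"],
}


def classify_task(prompt: str) -> str:
    words = [w.strip(",.?!") for w in prompt.lower().split()]
    if any(w in CODE_KEYWORDS for w in words):
        return "code"
    if len(words) > 60 or any(w in HEAVY_KEYWORDS for w in words):
        return "heavy"
    return "simple"


def select_model(prompt: str, available: set[str]) -> str:
    # Walk the priority list for the classified task type and
    # return the first model whose provider is actually configured.
    for model in PRIORITY[classify_task(prompt)]:
        if model in available:
            return model
    raise RuntimeError("no configured provider for this task type")
```

Selecting a specific model in the UI dropdown simply bypasses `classify_task` and pins the choice.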
All agent file access is governed by a Workspace Jail. The `<read>` and `<write>` tools enforce `is_in_jail(path)` bounds with a 10MB OOM cap and symlink resolution blocks. Dangerous binaries (`rm`, `curl`, `pip`, etc.) require explicit operator approval before execution. Safety == Trust.
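A jail check of this shape can be sketched in a few lines. This is an assumed implementation for illustration (only the `is_in_jail` name comes from the docs; the jail root path and `safe_read` helper are hypothetical):

```python
# Hypothetical Workspace Jail sketch; only is_in_jail is named in the docs.
from pathlib import Path

JAIL_ROOT = Path("/home/agent/workspace").resolve()  # assumed jail root
MAX_BYTES = 10 * 1024 * 1024  # 10MB read/write cap


def is_in_jail(path: str) -> bool:
    # resolve() follows symlinks and collapses "..", so a link or
    # traversal escaping the workspace is rejected even when the
    # literal path string looks safe.
    resolved = Path(path).resolve()
    return resolved == JAIL_ROOT or JAIL_ROOT in resolved.parents


def safe_read(path: str) -> bytes:
    if not is_in_jail(path):
        raise PermissionError(f"{path} is outside the workspace jail")
    p = Path(path)
    if p.stat().st_size > MAX_BYTES:
        raise ValueError("file exceeds 10MB cap")
    return p.read_bytes()
```

Resolving before comparing is the key design choice: it closes both the `../` traversal and the symlink escape in one step.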
Natively supports Gemini, OpenAI, Anthropic, and local Ollama instances. The engine handles protocol normalization — agents hot-swap across providers transparently. Placeholder keys in `.env` are correctly ignored and never sent to APIs.
All context, decisions, and execution traces are journaled into a PostgreSQL schema (SQLite fallback on fresh installs). Features hot/warm session recovery, execution event ledger, and a fully decoupled async memory router that protects the UI thread from database latency.
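The decoupling described above is essentially a queue between the request path and the database. A minimal sketch under assumed names (`MemoryRouter`, an in-memory list standing in for the PostgreSQL/SQLite backend):

```python
# Hypothetical sketch of a decoupled async memory router: writes are
# queued on the hot path and persisted by a background drain, so the
# UI thread never blocks on database latency. Names are illustrative.
import asyncio


class MemoryRouter:
    def __init__(self) -> None:
        self.queue: asyncio.Queue[dict] = asyncio.Queue()
        self.ledger: list[dict] = []  # stand-in for the SQL backend

    def record(self, event: dict) -> None:
        # Called from the request path: O(1), never touches the DB.
        self.queue.put_nowait(event)

    async def drain(self) -> None:
        # Background task: persist queued events to the ledger.
        while not self.queue.empty():
            self.ledger.append(await self.queue.get())


async def demo() -> int:
    router = MemoryRouter()
    router.record({"type": "execution", "ok": True})
    router.record({"type": "decision", "model": "auto"})
    await router.drain()
    return len(router.ledger)
```

In the real engine the drain side would write to PostgreSQL (or the SQLite fallback); the point is that the producer side stays non-blocking.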
The agent operates with real system agency via strict XML-tagged tools:
| Tool | What it does |
|---|---|
| `<execute>` | Spawns bash subprocesses (quarantine-gated) |
| `<read>` / `<write>` | Reads and rewrites source files (jail-bound, 10MB cap) |
| `<search>` | Live DuckDuckGo scraping to bypass knowledge cutoffs |
| `<fetch>` | Strips and reads raw website HTML |
| `<list_dir>` | Maps file system topologies |
| `<search_dir>` | Wildcard file discovery |
| `<grep>` | Deep text search inside codebases |
| `<system>` | OS telemetry — kernel info, datetime, hardware |
If a capability is missing, the agent writes and immediately executes custom scripts to extend itself.
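Because the tools are strict XML tags, extracting calls from a model reply is a small matching problem. A sketch of one way to do it (the regex approach and `extract_tool_calls` name are assumptions, not the engine's actual parser):

```python
# Hypothetical dispatcher input stage for the XML-tagged tools above.
# Tag names match the table; the parsing strategy is an assumption.
import re

TOOL_RE = re.compile(
    r"<(execute|read|write|search|fetch|list_dir|search_dir|grep|system)>"
    r"(.*?)</\1>",  # backreference forces a matching closing tag
    re.DOTALL,
)


def extract_tool_calls(reply: str) -> list[tuple[str, str]]:
    # Returns (tool, payload) pairs in the order they appear.
    return [(m.group(1), m.group(2).strip()) for m in TOOL_RE.finditer(reply)]
```

Each extracted pair would then pass through the jail and quarantine gates before anything runs.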
Five visual themes swappable in real-time via CSS variables:
- 🟢 Bioforge Green — terminal moss
- 🔵 Gemini Forge — deep space indigo with azure particle fog
- 🟣 Neon Noir — hyper-magenta and cyan outrun
- ❄️ Ghost Protocol — clinical arctic blue on charcoal
- 🟠 Cyber Obsidian — burnished amber corporate intelligence
```
┌─────────────────────────────────────────────────────────┐
│                  sov_electron / main.js                 │
│   (Electron Desktop Wrapper — AppImage / .deb / .exe)   │
│   First-run: auto-installs .venv and dependencies       │
└──────────────────────────┬──────────────────────────────┘
                           │ spawns start.sh
                           ▼
┌─────────────────────────────────────────────────────────┐
│                   start.sh Guardian                     │
│   Kill → Verify → Launch backend → Watch health loop    │
└──────────────────────────┬──────────────────────────────┘
                           │ uvicorn
                           ▼
┌─────────────────────────────────────────────────────────┐
│                       main.py                           │
│        FastAPI Server + Smart Inference Router          │
│   Task classifier → model selector → provider call      │
└──────────────────────────┬──────────────────────────────┘
                           │
              ┌────────────┴─────────────┐
              ▼                          ▼
┌─────────────────────┐    ┌─────────────────────────────┐
│   memory_api.py     │    │        store.py             │
│ (Memory & Events)   │    │  PostgreSQL / SQLite fabric │
└─────────────────────┘    └─────────────────────────────┘
```
```bash
cd sov_electron
npm install

# Linux
npm run dist:linux   # → dist/Sovereign Engine-x.x.x.AppImage + .deb

# Windows (run on Windows or CI)
npm run dist:win     # → dist/Sovereign Engine Setup x.x.x.exe

# Both
npm run dist:all
```

Axiom: The Execution Proof Law — the organism cannot claim success without raw execution output proving it. Confidence without evidence is hallucination.