The first Integrated Development Environment where AI isn't just a copilot—it's your engineering team.
AstraForge is a standalone, sovereign IDE built on Electron and React, designed to host a specialized panel of autonomous AI agents that debate, architect, and implement code changes through consensus.
Status (Feb 2026): 10 bugs found and fixed in the Feb 2026 audit. Electron terminal wired via IPC/PTY — now fully functional in the packaged app. 57 test suites, 188 tests — all passing. OpenRouter/Anthropic/OpenAI providers confirmed working end-to-end. See REPAIR_REPORT.md for full details.
Unlike standard chatbots, AstraForge orchestrates a team of five specialized agents:
- Nexus (Orchestrator): Manages the workflow and proposes solutions.
- Vanguard (Security): Audits code for vulnerabilities and security risks.
- Prism (Product): Ensures user alignment and product value.
- Helix (AI Systems): Architects robust AI-native patterns.
- Cipher (Implementation): Writes the actual production code.
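The panel above can be pictured as a simple roster. This is an illustrative sketch, not the repo's actual types — the `Agent` shape and `findAgent` helper are assumptions for demonstration:

```typescript
// Hypothetical roster type; the real agent implementations live in src/core/agents/.
interface Agent {
  name: string;
  role: string;
  duty: string;
}

const PANEL: Agent[] = [
  { name: 'Nexus', role: 'Orchestrator', duty: 'manages workflow, proposes solutions' },
  { name: 'Vanguard', role: 'Security', duty: 'audits code for vulnerabilities' },
  { name: 'Prism', role: 'Product', duty: 'ensures user alignment and product value' },
  { name: 'Helix', role: 'AI Systems', duty: 'architects robust AI-native patterns' },
  { name: 'Cipher', role: 'Implementation', duty: 'writes production code' },
];

// Look up an agent by name (case-insensitive).
function findAgent(name: string): Agent | undefined {
  return PANEL.find((a) => a.name.toLowerCase() === name.toLowerCase());
}
```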
A dedicated visual interface for reviewing, diffing, and applying complex multi-file changes generated by the agent swarm. Watch as your AI team debates, votes, and generates code in real time.
Connect to your preferred LLM providers:
- OpenAI (GPT-4, GPT-4o, etc.)
- Anthropic (Claude 3.5 Sonnet, Claude 3 Opus)
- Grok (xAI)
- OpenRouter (Access 100+ models)
- Ollama (100% Local, 100% Free)
- LM-Studio (Local models with OpenAI-compatible API)
Your code, your keys, your local environment. AstraForge is designed to run fully local or self-hosted. No data leaves your machine unless you choose to use cloud APIs.
Deploy the full environment instantly.
```bash
# Clone the repository
git clone https://github.com/up2itnow0822/AstraForge-the-App.git
cd AstraForge-the-App

# Create your .env file
cp example.env .env
# Edit .env and add your API keys

# Build and run
docker compose up --build
```

Access the IDE at http://localhost:3000
Run directly from source for development.
```bash
# Install dependencies
npm install

# Start development server (Vite + Electron)
npm run dev
```

Build the Electron application for distribution.
```bash
# Build everything
npm run build

# Create distributable
npm run dist
```

Configure your LLM providers in the Settings modal (gear icon) or via environment variables:
```env
# Global API Keys (in .env file)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GROK_API_KEY=xai-...
OPENROUTER_API_KEY=sk-or-v1-...

# Local Models
OLLAMA_ENDPOINT=http://127.0.0.1:11434
OLLAMA_MODEL=llama3
```

Each agent can be configured with a different provider and model via the "Models" tab in Settings.
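To illustrate how these environment variables might map to a provider choice, here is a minimal sketch. The `resolveProvider` function and its precedence order are assumptions for demonstration, not the app's actual configuration logic (which lives in `src/core/config/`); only the variable names come from the list above.

```typescript
// Hypothetical provider resolution: pick the first provider whose key is set,
// falling back to local Ollama, which needs no API key.
type Provider = 'openai' | 'anthropic' | 'grok' | 'openrouter' | 'ollama';

function resolveProvider(env: Record<string, string | undefined>): Provider {
  if (env.OPENAI_API_KEY) return 'openai';
  if (env.ANTHROPIC_API_KEY) return 'anthropic';
  if (env.GROK_API_KEY) return 'grok';
  if (env.OPENROUTER_API_KEY) return 'openrouter';
  return 'ollama'; // 100% local fallback
}
```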
- Enter a Task: Type your development task in the "New Task" box
- Generate: Click "Generate" to start the AI debate process
- Watch the Debate: Agents will propose, critique, and vote on solutions
- Review Changes: Once consensus is reached, review generated code in the Composer
- Apply: Click "Apply Changes to Disk" to write the files
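The voting in step 3 can be sketched as a simple majority check across the five agents. This is illustrative only — the actual consensus rules live in `src/core/debate/` and may weight votes differently:

```typescript
// Hypothetical consensus check: a proposal passes once more than half
// of the panel approves it.
function hasConsensus(votes: Record<string, boolean>, panelSize = 5): boolean {
  const approvals = Object.values(votes).filter(Boolean).length;
  return approvals > panelSize / 2;
}
```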
```
AstraForge/
├── src/
│   ├── core/            # Core business logic
│   │   ├── agents/      # LLM Agent implementations
│   │   ├── debate/      # Debate & consensus system
│   │   └── config/      # Configuration management
│   ├── renderer/        # React UI components
│   │   ├── components/  # UI components
│   │   └── api/         # Bridge to server
│   └── main/            # Server & Electron main
├── specs/               # Technical specifications
└── tests/               # Test suites
```
We believe in High-Agency AI. Tools should handle the implementation details while humans direct the architectural intent. AstraForge is the realization of this 'Sovereign AI' vision.
```bash
# Run all tests
npm test

# Run with coverage
npm run test:coverage

# Run in watch mode
npm run test:watch
```

```bash
# Check for issues
npm run lint

# Auto-fix issues
npm run lint:fix
```

See CONTRIBUTING.md for guidelines.
MIT
Current Version: 1.0.0 (Alpha)
- 5-agent consensus debate flow (Nexus, Vanguard, Prism, Helix, Cipher)
- All LLM providers: OpenRouter, OpenAI, Anthropic, Grok, Ollama, LM-Studio
- Synthesis → User Approval Gate → Code Generation pipeline
- Composer view: file diff, apply-to-disk
- Settings modal: per-agent provider/model/key config
- Connection testing (server/web mode)
- Server mode (Express + Socket.io): full-featured
- Electron mode: full debate flow, approval gate, code generation
- Terminal (Electron) — FIXED: XTerminal now spawns a real PTY via `node-pty` in the main process and streams I/O to the renderer through Electron IPC (`terminal:create`, `terminal:data`, `terminal:write`, `terminal:resize`, `terminal:exit`). Works in both packaged Electron builds and dev mode.
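The terminal bridge above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: only the channel names come from the description, while the handler wiring and the minimal `PtyProcess`/`ipcMain` shapes are assumptions (in the real app they come from `node-pty` and Electron's `ipcMain`).

```typescript
// Channel names from the terminal fix description.
const TERMINAL_CHANNELS = {
  create: 'terminal:create',
  data: 'terminal:data',
  write: 'terminal:write',
  resize: 'terminal:resize',
  exit: 'terminal:exit',
} as const;

// Minimal assumed shape of a node-pty process.
type PtyProcess = {
  onData(cb: (data: string) => void): void;
  onExit(cb: (e: { exitCode: number }) => void): void;
  write(data: string): void;
  resize(cols: number, rows: number): void;
};

// Hypothetical main-process wiring: dependencies are injected so the sketch
// stays self-contained (the real code imports 'electron' and 'node-pty').
function wireTerminal(
  ipcMain: { on(channel: string, cb: (event: any, payload: any) => void): void },
  spawnPty: (cols: number, rows: number) => PtyProcess,
): void {
  ipcMain.on(TERMINAL_CHANNELS.create, (event, opts: { cols: number; rows: number }) => {
    const proc = spawnPty(opts.cols, opts.rows);
    // Stream PTY output and exit status to the renderer...
    proc.onData((d) => event.sender.send(TERMINAL_CHANNELS.data, d));
    proc.onExit(({ exitCode }) => event.sender.send(TERMINAL_CHANNELS.exit, exitCode));
    // ...and forward renderer keystrokes and resizes back to the PTY.
    ipcMain.on(TERMINAL_CHANNELS.write, (_e, d: string) => proc.write(d));
    ipcMain.on(TERMINAL_CHANNELS.resize, (_e, s: { cols: number; rows: number }) =>
      proc.resize(s.cols, s.rows),
    );
  });
}
```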
- Agent hot-reload: Changing provider/model in Settings requires app restart
- Memory module: Vector DB integration (LanceDB) is a dependency but not yet connected to the UI
- Connection test (Electron): Not available in packaged app; use web mode for testing
See REPAIR_REPORT.md for the full audit and roadmap.
This is an alpha release. Core functionality is implemented and tested. Feedback and contributions welcome!