```
      *
     /|
    / |
   /  |
  *   *       _    ___ ____      _
 /|  /|      / \  |_ _|  _ \    / \
/ | / |     / _ \  | || |_) |  / _ \
/ |/  |    / ___ \ | ||  _ <  / ___ \
*---*---*  /_/   \_\___|_| \_\/_/   \_\
```
Autonomous Intelligent Robot Agent - A complete AI-powered system built on NixOS with local LLM capabilities, MCP protocol support, and modern tooling.
AIRA is a fully reproducible, declarative NixOS system that provides:
- 🤖 Ollama - Local LLM engine with llama3.2:3b model
- 🌐 Agent Gateway - MCP Router for tool orchestration
- 💻 Open WebUI - Modern web interface for LLM interaction
- 🖥️ aichat - Terminal UI client for chat
- 🔧 MCP Servers - Filesystem, Nix, systemd, and shell tool servers
- 📦 QEMU/ISO Images - Ready-to-use VM and installation images
| Component | Description | Port/Access |
|---|---|---|
| Ollama | Local LLM inference engine | localhost:11434 |
| Agent Gateway | MCP protocol router | localhost:8081 |
| Open WebUI | Web-based chat interface | localhost:8080 |
| aichat | Terminal UI client | CLI tool |
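With the VM running, each of these can be spot-checked from the host. `/api/tags` is Ollama's model-listing endpoint; for the web services the sketch below just reads the HTTP status code (exact codes may vary by route, and `|| true` keeps a script alive while a service is still starting):

```sh
# Spot-check the services from the host (VM must be running)
curl -s http://localhost:11434/api/tags || true                            # Ollama: JSON list of models
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080 || true     # Open WebUI HTTP status
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8081 || true     # Agent Gateway HTTP status
```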
- Filesystem - Safe file operations with directory whitelisting
- Nix - System configuration and package management
- Systemd - Service control and monitoring
- Shell - Sandboxed command execution
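MCP servers speak JSON-RPC over stdio, and the standard `tools/list` method asks a server which tools it exposes. A minimal sketch of such a request (the server binary name is a placeholder, not AIRA's actual executable):

```sh
# The tools/list request every MCP server must answer:
request='{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
printf '%s\n' "$request"
# Pipe it to a server binary to see its tool catalogue, e.g.:
#   printf '%s\n' "$request" | <mcp-server-binary>
```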
- Linux system (x86_64)
- Nix with flakes enabled
- 4GB+ RAM
- 20GB+ free disk space
- Install Nix (if not already installed):

  ```sh
  curl --proto '=https' --tlsv1.2 -sSf -L https://install.determinate.systems/nix | sh -s -- install
  ```

- Clone the repository:

  ```sh
  git clone https://github.com/akagi-dev/aira.git
  cd aira
  ```

- Build the QEMU VM:

  ```sh
  make vm
  ```

- Run the VM:

  ```sh
  make run
  ```

- Access the system:
  - Open WebUI: http://localhost:8080
  - SSH: `ssh aira@localhost -p 2222` (password: `aira`)
  - Ollama API: http://localhost:11434
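The services can take a little while to come up after boot; a small polling helper saves guessing (illustrative, not part of the AIRA tooling):

```sh
# Poll a URL until it answers or we give up; returns non-zero on timeout.
wait_for_url() {
  url=$1; tries=${2:-60}; i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -sf -o /dev/null "$url"; then
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# e.g.: wait_for_url http://localhost:8080 120 && echo "Open WebUI is up"
```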
Build and burn the ISO for bare-metal installation:

```sh
make iso
# Result will be in result/iso/
```

```
┌───────────────────────────────────────────────────────────────┐
│                          AIRA System                          │
├───────────────────────────────────────────────────────────────┤
│                                                               │
│     ┌──────────────┐          ┌──────────────┐                │
│     │  Open WebUI  │◄────────►│  aichat TUI  │                │
│     └──────┬───────┘          └──────┬───────┘                │
│            │                         │                        │
│            └────────────┬────────────┘                        │
│                         ▼                                     │
│                ┌─────────────────┐                            │
│                │  Agent Gateway  │                            │
│                │  (MCP Router)   │                            │
│                └────────┬────────┘                            │
│                         │                                     │
│            ┌────────────┴────────────┐                        │
│            ▼                         ▼                        │
│     ┌─────────────┐           ┌─────────────┐                 │
│     │   Ollama    │           │ MCP Servers │                 │
│     │    (LLM)    │           │ • filesystem│                 │
│     │             │           │ • nix       │                 │
│     └─────────────┘           │ • systemd   │                 │
│                               │ • shell     │                 │
│                               └─────────────┘                 │
│                                                               │
│                          NixOS 24.11                          │
└───────────────────────────────────────────────────────────────┘
```
```sh
make vm            # Build QEMU VM image
make iso           # Build ISO installer
make run           # Run VM with graphics
make run-headless  # Run VM in headless mode
make test          # Run integration tests
make clean         # Clean build artifacts
make update        # Update flake inputs
make check         # Check flake validity
make dev           # Enter development shell
```

```
aira/
├── flake.nix              # Nix flake configuration
├── configuration.nix      # Base NixOS configuration
├── modules/               # Service modules
│   ├── ollama.nix
│   ├── agent-gateway.nix
│   ├── open-webui.nix
│   ├── aichat.nix
│   └── mcp-servers/       # MCP protocol servers
│       ├── filesystem.nix
│       ├── nix.nix
│       ├── systemd.nix
│       └── shell.nix
├── images/                # Image builders
│   ├── qemu.nix           # VM image
│   └── iso.nix            # ISO image
├── scripts/               # Helper scripts
│   └── test-system.sh
└── tests/                 # Integration tests
    └── integration/
```
The system is fully declarative. To modify:

- Edit `configuration.nix` or module files
- Rebuild inside the VM:

  ```sh
  sudo nixos-rebuild switch
  ```

Or rebuild the entire VM:

```sh
make clean && make vm && make run
```

Run integration tests:

```sh
make test
```

Individual test scripts are in `tests/integration/`:

- `test_ollama.sh` - Ollama service tests
- `test_agent_gateway.sh` - Agent Gateway tests
- `test_mcp_servers.sh` - MCP server tests
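A sketch of what one of these checks boils down to (the function name is illustrative; `/api/tags` is Ollama's real model-listing endpoint):

```sh
# Minimal health assertion in the spirit of test_ollama.sh (illustrative).
check_ollama() {
  if curl -sf http://localhost:11434/api/tags >/dev/null 2>&1; then
    echo "PASS: Ollama responds"
  else
    echo "FAIL: Ollama API not reachable on :11434"
  fi
}
check_ollama
```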
Edit `configuration.nix`:

```nix
services.aira.ollama.model = "llama3.1:8b"; # or any other model
```

Edit `images/qemu.nix`:

```nix
virtualisation = {
  memorySize = 8192;  # 8GB RAM
  cores = 8;          # 8 CPU cores
  diskSize = 40960;   # 40GB disk
};
```

Control filesystem access in `modules/mcp-servers/filesystem.nix`:

```nix
services.aira.mcp.filesystem.allowedDirectories = [
  "/home"
  "/tmp"
  "/your/custom/path"
];
```

- Access http://localhost:8080
- Create an account (local only)
- Start chatting with the LLM
- Use MCP tools for system operations
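The `allowedDirectories` whitelist shown earlier amounts to a prefix check on each requested path. A minimal sketch of that logic (illustrative names, not the server's actual code):

```sh
# Prefix check in the spirit of the filesystem server's whitelist (illustrative).
allowed_dirs="/home /tmp"

is_allowed() {
  path=$1
  for dir in $allowed_dirs; do
    case $path in
      "$dir"|"$dir"/*) return 0 ;;
    esac
  done
  return 1
}

is_allowed /tmp/notes.txt && echo "allowed"   # → allowed
is_allowed /etc/shadow    || echo "denied"    # → denied
```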
SSH into the VM and run:

```sh
aichat
> Hello! Can you help me list files?
```

Query the Ollama API directly:

```sh
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Explain NixOS in one sentence",
  "stream": false
}'
```

- Default password is `aira` - **CHANGE IT IN PRODUCTION**
- MCP servers use whitelists and sandboxing
- Services run with minimal privileges
- Firewall enabled with only necessary ports open
For production use:
- Change user password
- Disable password authentication for SSH
- Configure proper firewall rules
- Review MCP server permissions
- Use secrets management for sensitive data
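For the SSH and firewall items, the corresponding NixOS options look roughly like this (the option names are standard NixOS, but verify the port list against your own setup):

```nix
# Hardening sketch for configuration.nix (adjust to your needs)
services.openssh.settings.PasswordAuthentication = false;
services.openssh.settings.PermitRootLogin = "no";
networking.firewall.enable = true;
networking.firewall.allowedTCPPorts = [ 8080 ]; # expose only what you use
```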
Contributions are welcome! Please:
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
Minimum:

- CPU: 2 cores, 1.4 GHz
- RAM: 4GB
- Disk: 20GB
- KVM support (optional but recommended)

Recommended:

- CPU: 4+ cores, 2.0+ GHz
- RAM: 8GB+
- Disk: 40GB+
- KVM acceleration enabled
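Whether KVM acceleration is available on the host can be checked directly (the message wording is illustrative):

```sh
# KVM shows up as /dev/kvm when virtualization is enabled in BIOS and the kernel.
if [ -e /dev/kvm ]; then
  echo "KVM available: VM will run with hardware acceleration"
else
  echo "KVM not available: VM will fall back to slow emulation"
fi
```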
This project is licensed under the terms specified in the LICENSE file.
Q: Why NixOS?
A: NixOS provides reproducible, declarative system configuration. Every component is version-controlled and can be rolled back.
Q: Can I use different LLM models?
A: Yes! Change the services.aira.ollama.model option to any model supported by Ollama.
Q: Does this work on ARM/Apple Silicon?
A: Currently only x86_64 is supported. ARM support is planned.
Q: How much RAM do I really need?
A: 4GB minimum for llama3.2:3b, but 8GB+ recommended for larger models and better performance.
Q: Can I deploy this to a server?
A: Yes! Build the ISO and install on bare metal, or use the configuration directly with NixOS.
- ARM/aarch64 support
- Additional MCP servers (git, docker, etc.)
- Web-based configuration UI
- Multi-model support
- Clustering and distributed inference
- Integration with external services
Built with ❤️ using NixOS