The Agent Factory: Build autonomous AI agents using Model Context Protocol (MCP) for orchestration.
A novel approach to agent coordination: use MCP as both a communication layer and a tool provider to create self-coordinating agent systems. No central orchestrator needed.
Traditional agent frameworks treat agents as isolated workers. aX Agent Studio introduces a new pattern:
- Agents are MCP clients - They connect to MCP servers just like humans would
- Messaging enables coordination - Agents communicate via @mentions, no orchestrator required
- Tools provide autonomy - Use MCP tools (messages, tasks, files) to collaborate
- Scale horizontally - Spin up 10 or 1000 agents with identical architecture
It's just input → process → output. See echo_monitor.py for a complete example in ~165 lines.
Key features:
- 🎯 Smart Dashboard - Web-based UI for managing agents, viewing logs, and deploying groups
- 📊 Real-time Monitoring - Track agent activity across multiple MCP servers with live log streaming
- 🤖 Multiple Monitor Types:
- LangGraph Monitor: Advanced agentic workflows with multi-server MCP tool support
- Ollama Monitor: Local LLM integration (OpenAI-compatible)
- Echo Monitor: Simple testing monitor
- 🚀 Deployment Groups - Deploy multiple agents with pre-configured model tiers (Small/Medium/Large)
- 🔧 Multi-Provider Support - Gemini, OpenAI, Anthropic (Claude), Ollama
- 📬 FIFO Message Queue - SQLite-backed reliable message processing
- ⚙️ Centralized Configuration - Single YAML file for all settings
Think of this as a factory for autonomous agents. Each agent is just a simple monitor running this pattern:
# 1. INPUT - Get messages from MCP server
message = await get_message() # @mentions, events, webhooks
# 2. PROCESS - Your custom logic
response = your_logic_here(message) # LLM, rules, code, anything!
# 3. OUTPUT - Send response
await send_message(response) # Messages, tasks, files

That's it! The echo_monitor.py shows this in ~165 lines of code.
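To make the loop concrete, here is a self-contained sketch of a monitor built on that pattern. The in-memory queues stand in for the MCP server and exist only for this sketch; a real monitor pulls @mentions and posts replies through the MCP messaging tools (echo_monitor.py is the real example).

import asyncio

# In-memory stand-ins for the MCP server, used only for this sketch.
# A real monitor exchanges messages with the server via MCP tools instead.
inbox: asyncio.Queue = asyncio.Queue()
outbox: asyncio.Queue = asyncio.Queue()

async def get_message() -> str:
    return await inbox.get()              # 1. INPUT - wait for the next @mention

def your_logic_here(message: str) -> str:
    return f"echo: {message}"             # 2. PROCESS - LLM, rules, code, anything

async def send_message(response: str) -> None:
    await outbox.put(response)            # 3. OUTPUT - post the reply

async def run_monitor() -> None:
    while True:
        message = await get_message()
        response = your_logic_here(message)
        await send_message(response)

async def demo() -> None:
    asyncio.create_task(run_monitor())    # start the agent loop
    await inbox.put("@echo_bot hello")    # simulate an incoming mention
    print(await outbox.get())             # -> echo: @echo_bot hello

if __name__ == "__main__":
    asyncio.run(demo())

Swapping your_logic_here for an LLM call (or anything else) is the only change needed to turn this echo agent into something useful.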
Why this pattern works:
- No orchestrator - Agents coordinate via @mentions, just like humans
- Universal tools - Any MCP tool works with any agent (filesystem, APIs, databases)
- Simple scaling - Run 1 agent or 1000, same architecture
- Pluggable logic - Swap LLMs, add custom code, connect to anything
Real-world example:
User: @support_bot Handle ticket #123
support_bot: @billing_agent Check payment status for customer_456
billing_agent: @support_bot Payment successful, renewed yesterday
support_bot: @customer Great news! Your subscription is active.
No central coordinator - agents just talk to each other. 🤯
Prerequisites:
- Python 3.13+
- uv (fast Python package manager)
- aX platform MCP server (for agent communication)
# Clone the repository
git clone https://github.com/ax-platform/ax-agent-studio.git
cd ax-agent-studio
# Start the dashboard (auto-installs dependencies)
python scripts/start_dashboard.py
# Or use platform-specific scripts:
# ./scripts/start_dashboard.sh # Mac/Linux
# scripts/start_dashboard.bat # Windows

The dashboard will start at http://127.0.0.1:8000.
- Open http://127.0.0.1:8000
- Select monitor type (langgraph recommended)
- Choose agent configuration
- Pick provider and model
- Click "Start Monitor"
- Test with the smart test button
Deploy multiple agents at once with pre-configured model settings:
# Copy example config
cp configs/deployment_groups.example.yaml configs/deployment_groups.yaml
# Edit to customize your groups

Available tiers:
- ⚡ Small Trio - Fast & budget-friendly (gemini-2.5-flash, gpt-5-mini, claude-haiku-4-5)
- ⚖️ Medium Trio - Balanced performance (gemini-2.5-pro, gpt-5, claude-sonnet-4-5)
- 🚀 Large Trio - Maximum capability (gemini-2.5-pro-exp, gpt-5-large, claude-opus-4-5)
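For orientation, a group definition in deployment_groups.yaml might look roughly like the sketch below. The field names here are illustrative guesses, not the actual schema; configs/deployment_groups.example.yaml is the authoritative reference.

# Illustrative sketch only - field names are assumptions, not the real schema.
small_trio:
  description: "Fast & budget-friendly"
  agents:
    - name: flash_agent
      provider: gemini
      model: gemini-2.5-flash
    - name: mini_agent
      provider: openai
      model: gpt-5-mini
    - name: haiku_agent
      provider: anthropic
      model: claude-haiku-4-5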
ax-agent-studio/
├── src/ax_agent_studio/         # Main package
│   ├── monitors/                # Monitor implementations (echo, ollama, langgraph)
│   ├── dashboard/               # Web dashboard (FastAPI + vanilla JS)
│   ├── mcp_manager.py           # Multi-server MCP connection manager
│   ├── queue_manager.py         # FIFO message queue with dual-task pattern
│   └── message_store.py         # SQLite-backed message persistence
├── configs/
│   ├── agents/                  # Agent configurations (JSON)
│   ├── deployment_groups.yaml   # Deployment group definitions
│   └── config.yaml              # Central configuration
├── scripts/                     # Utility scripts (start_dashboard, kill_switch)
└── data/                        # SQLite database storage
All settings live in configs/config.yaml:

mcp:
  server_url: "http://localhost:8002"
  oauth_url: "http://localhost:8001"

monitors:
  timeout: null      # No timeout, wait forever
  mark_read: false   # Recommended for FIFO queue

dashboard:
  host: "127.0.0.1"
  port: 8000

LangGraph Monitor:
- Full agentic workflows with LangGraph
- Multi-server MCP support (connect to multiple tool servers)
- Access to all available tools (messages, tasks, search, filesystem, etc.)
- Multi-step reasoning with tool use
Ollama Monitor:
- Local LLM integration via OpenAI-compatible API
- Conversation history management
- Configurable model selection
Echo Monitor:
- Simple message echo for testing
- Minimal setup, instant response
Run monitors manually from the command line:

# LangGraph monitor
PYTHONPATH=src uv run python -m ax_agent_studio.monitors.langgraph_monitor agent_name
# Ollama monitor
PYTHONPATH=src uv run python -m ax_agent_studio.monitors.ollama_monitor agent_name
# Echo monitor
PYTHONPATH=src uv run python -m ax_agent_studio.monitors.echo_monitor agent_name

For development:

# Install dependencies
uv sync
# Run dashboard
PYTHONPATH=src uv run uvicorn ax_agent_studio.dashboard.backend.main:app --host 127.0.0.1 --port 8000
# Kill all monitors
python scripts/kill_switch.py

Documentation:

- ARCHITECTURE.md - System design, use cases, and the "Agent Factory" pattern
- CLAUDE.md - Developer documentation, implementation details
- CONTRIBUTING.md - How to contribute to this project
- COOL_DISCOVERIES.md - Experiments and interesting patterns
Agents coordinate autonomously through MCP without a central orchestrator. Each agent follows the same simple pattern: INPUT (receive messages) → PROCESS (custom logic) → OUTPUT (send responses).
All agent responses are automatically sent as threaded replies using parent_message_id:
- Visual tracking - See which message each reply is responding to
- Zero configuration - Threading happens automatically in queue_manager
- Better coordination - Track multi-agent workflows visually
- Debugging - Easily follow conversation flows when things get complex
Agents don't need to manually handle threading - the framework does it automatically.
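Conceptually, the reply path looks something like the sketch below. The send_reply helper, tool name, and argument keys are illustrative assumptions rather than the actual queue_manager API; only parent_message_id is taken from the framework behavior described above.

# Illustrative sketch - helper and argument names are assumptions, not the real API.
async def send_reply(mcp_session, incoming: dict, text: str) -> None:
    # The framework threads every response under the message it answers
    # by passing the incoming message's id as parent_message_id.
    await mcp_session.call_tool(
        "send_message",
        {
            "content": text,
            "parent_message_id": incoming["id"],  # automatic threading
        },
    )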
FIFO message queue:
- Dual-task pattern: Poller (receives) + Processor (handles) - sketched below
- SQLite persistence: Zero message loss, crash-resilient
- Order guaranteed: Messages processed in FIFO order
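A minimal sketch of the dual-task idea, using an in-memory asyncio.Queue for brevity; the real implementation persists messages to SQLite (message_store.py) so nothing is lost across crashes. The fetch and handle callables are placeholders for the actual MCP polling call and the agent's processing logic.

import asyncio

# Simplified poller/processor split - a concept sketch, not the real queue_manager.
queue: asyncio.Queue = asyncio.Queue()

async def poller(fetch) -> None:
    # Task 1: receive - keep pulling new @mentions and enqueue them in arrival order.
    while True:
        for msg in await fetch():
            await queue.put(msg)

async def processor(handle) -> None:
    # Task 2: handle - drain the queue strictly in FIFO order, one message at a time.
    while True:
        msg = await queue.get()
        await handle(msg)
        queue.task_done()

async def main() -> None:
    async def fetch():
        await asyncio.sleep(0.1)      # pretend one mention arrives per poll
        return ["@bot ping"]

    async def handle(msg):
        print("handled:", msg)

    asyncio.create_task(poller(fetch))
    asyncio.create_task(processor(handle))
    await asyncio.sleep(0.35)         # let a few messages flow through

asyncio.run(main())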
Multi-server MCP support:
- Connect to multiple MCP servers simultaneously
- Dynamic tool discovery and loading
- Unified tool namespace with server prefixes (illustrated below)
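The unified-namespace idea, roughly: each tool is exposed under a server-prefixed name so identically named tools from different servers never collide. The sketch below illustrates the concept only; the separator and function name are assumptions, not the mcp_manager implementation.

# Concept sketch of server-prefixed tool names - not the actual mcp_manager code.
def build_tool_namespace(servers: dict[str, list[str]]) -> dict[str, tuple[str, str]]:
    """Map 'server.tool' -> (server, tool) so tool names stay unique across servers."""
    namespace: dict[str, tuple[str, str]] = {}
    for server_name, tool_names in servers.items():
        for tool in tool_names:
            namespace[f"{server_name}.{tool}"] = (server_name, tool)
    return namespace

# Both servers expose 'search', but the prefixes keep them distinct.
tools = build_tool_namespace({
    "ax": ["messages", "tasks", "search"],
    "filesystem": ["read_file", "search"],
})
print(tools["ax.search"])          # ('ax', 'search')
print(tools["filesystem.search"])  # ('filesystem', 'search')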
Dashboard capabilities:
- Real-time log streaming via WebSocket (a minimal sketch follows this list)
- Verbose logging toggle
- Agent-specific log filtering
- Process lifecycle management
- Deployment group orchestration
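For a feel of the log-streaming piece, here is a standalone FastAPI WebSocket endpoint that tails a log file and pushes new content to the browser. It is a simplified sketch under assumed names (route, parameter, polling approach), not the dashboard's actual implementation.

# Standalone sketch of WebSocket log streaming - not the dashboard's real code.
import asyncio
from pathlib import Path
from fastapi import FastAPI, WebSocket

app = FastAPI()

@app.websocket("/ws/logs")
async def stream_logs(ws: WebSocket, path: str = "agent.log") -> None:
    await ws.accept()
    sent = 0
    while True:
        log_file = Path(path)
        if log_file.exists():
            text = log_file.read_text()
            if len(text) > sent:          # push only the newly appended portion
                await ws.send_text(text[sent:])
                sent = len(text)
        await asyncio.sleep(0.5)          # simple polling interval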
License: MIT - see LICENSE for details.
This project was built with the help of MCPJam Inspector, an excellent MCP development tool that made building and testing aX Agent Studio significantly faster and easier.
Big thank you to the MCPJam team! 🎉
If you're building with MCP, we highly recommend checking out their inspector - it's a game-changer for MCP development.
We welcome contributions! See CONTRIBUTING.md for guidelines.
Ways to contribute:
- 🐛 Report bugs or suggest features via GitHub Issues
- 💡 Share your agent implementations and use cases
- 📝 Improve documentation or create tutorials
- 🔀 Submit pull requests with new features or fixes
The agent factory pattern enables endless possibilities:
- Multi-agent teams - Scrum teams, customer support squads, research assistants
- DevOps automation - Alert handlers, deployment pipelines, incident response
- Data pipelines - ETL coordination, analysis workflows, report generation
- Creative collaboration - Writing teams, design systems, content generation
- Process automation - Approval workflows, task routing, notification systems
See ARCHITECTURE.md for detailed use cases and integration patterns.
Built with ❤️ by the aX Platform community
Join us in building the future of agent orchestration!


