Build, deploy, and manage AI agents using declarative configuration - now with enterprise-grade performance, comprehensive binary distribution, and enhanced LLM intelligence.
Agent-as-Code
Just like Docker revolutionized application deployment, Agent-as-Code revolutionizes AI agent deployment with declarative configurations, enterprise-grade tooling, and intelligent LLM-powered agent creation.
🎉 Major Release: Transform your AI agent development with intelligent, automated workflows!
- `agent llm create-agent [USE_CASE]` - AI-powered intelligent agent creation
- `agent llm optimize [MODEL] [USE_CASE]` - Model optimization for specific use cases
- `agent llm benchmark` - Comprehensive model benchmarking and comparison
- `agent llm deploy-agent [AGENT_NAME]` - Automated deployment and testing
- `agent llm analyze [MODEL]` - Deep model capability analysis
- 9 New Methods for programmatic access to enhanced features
- Intelligent Agent Creation via Python code (see the sketch after this list)
- Model Management with optimization and benchmarking
- Automated Deployment with comprehensive testing
- AI-Powered Code Generation for FastAPI applications
- Comprehensive Test Suites with pytest coverage
- Production-Ready Dockerfiles with multi-stage builds
- CI/CD Workflows with GitHub Actions
- Enterprise Features including security and monitoring
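Until you adopt the package's native Python methods, a minimal way to script these workflows is to shell out to the CLI from Python. The sketch below uses only the `agent llm` commands listed above plus the standard-library `subprocess` module, and it assumes the `agent` binary is installed and on your PATH.

```python
# Minimal sketch: drive the enhanced LLM commands from Python by shelling out
# to the CLI. Assumes the `agent` binary is installed and on PATH; the
# package's native methods are the intended interface for deeper integration.
import subprocess

def agent(*args: str) -> None:
    """Run an agent-as-code CLI command and raise if it fails."""
    subprocess.run(["agent", *args], check=True)

agent("llm", "create-agent", "chatbot")        # AI-powered agent creation
agent("llm", "optimize", "llama2", "chatbot")  # model optimization for a use case
agent("llm", "deploy-agent", "chatbot-agent")  # automated deployment and testing
```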
🚀 Try the new features now | 📖 Read the full release notes
Define your AI agents using simple, version-controlled agent.yaml files - no complex setup required.
- Go Core: 5x faster CLI operations and binary distribution
- Python Runtime: Full AI/ML ecosystem compatibility
- Universal Access: Available via PyPI, Homebrew, direct download
- Local LLMs: Complete offline capability with Ollama integration
- Cloud LLMs: Seamless OpenAI, Azure, AWS integration
- Multi-Cloud Deployment: Deploy anywhere with one command
- AI-Powered Generation: Automatically create optimized agents
- Smart Optimization: Model tuning for specific use cases
- Automated Testing: Comprehensive validation and deployment
- Enterprise Ready: Production-grade security and monitoring
Choose your preferred installation method:
# Python Package (Recommended for developers)
pip install agent-as-code
# Direct Binary Download (Fastest)
curl -L https://github.com/pxkundu/agent-as-code/releases/download/v1.1.0/agent-darwin-arm64 -o agent
chmod +x agent
# Homebrew (macOS/Linux)
brew install agent-as-code

# Create an intelligent chatbot agent automatically
agent llm create-agent chatbot
# Deploy and test automatically
agent llm deploy-agent chatbot-agent
# Access your agent
curl -X POST http://localhost:8080/chat \
-H "Content-Type: application/json" \
-d '{"message": "Hello! How can you help me?"}'# Create a new chatbot agent manually
# Create a new chatbot agent manually
agent init my-chatbot --template chatbot
cd my-chatbot
# Build the agent
agent build -t my-chatbot:latest .
# Run it locally
agent run my-chatbot:latest

# Push to registry
agent push my-chatbot:latest
# Deploy to cloud (AWS/Azure/GCP)
agent deploy my-chatbot:latest --cloud aws --replicas 3

┌─────────────────────────────────────────────────────────────────┐
│                     Agent-as-Code Framework                     │
├─────────────────────────────────────────────────────────────────┤
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │    Go Binary    │  │ Python Wrapper  │  │   Binary API    │  │
│  │  (Performance)  │  │   (Ecosystem)   │  │ (Distribution)  │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
├─────────────────────────────────────────────────────────────────┤
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │   agent.yaml    │  │    Templates    │  │  Multi-Runtime  │  │
│  │    (Config)     │  │   (Examples)    │  │  (Deployment)   │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
├─────────────────────────────────────────────────────────────────┤
│               🆕 Enhanced LLM Intelligence Layer 🧠              │
│  ┌─────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │   Intelligent   │  │      Model      │  │    Automated    │  │
│  │ Agent Creation  │  │  Optimization   │  │   Deployment    │  │
│  └─────────────────┘  └─────────────────┘  └─────────────────┘  │
└─────────────────────────────────────────────────────────────────┘
| Command | Description | Example |
|---|---|---|
| `agent init` | Create new agent project | `agent init my-bot --template chatbot` |
| `agent build` | Build agent container | `agent build -t my-bot:latest .` |
| `agent run` | Run agent locally | `agent run my-bot:latest` |
| `agent push/pull` | Registry operations | `agent push my-bot:latest` |
| `agent deploy` | Deploy to cloud | `agent deploy my-bot:latest --cloud aws` |
| Command | Description | Example |
|---|---|---|
| `agent llm create-agent` | AI-powered agent creation | `agent llm create-agent chatbot` |
| `agent llm optimize` | Model optimization | `agent llm optimize llama2 chatbot` |
| `agent llm benchmark` | Model benchmarking | `agent llm benchmark` |
| `agent llm deploy-agent` | Automated deployment | `agent llm deploy-agent my-agent` |
| `agent llm analyze` | Model analysis | `agent llm analyze llama2` |
Pre-built templates for common use cases:
- 🤖 Chatbot: Customer support with conversation memory
- 📊 Sentiment: Social media sentiment analysis
- 📝 Summarizer: Document summarization
- 🌐 Translator: Multi-language translation
- 📈 Data Analyzer: Business intelligence
- ✨ Content Generator: Creative content creation
- 🏢 Workflow Automation: Enterprise process automation (NEW)
Complete offline AI capability with Ollama:
# Setup local LLM environment
agent llm setup
# Pull and use local models
agent llm pull llama2
agent init my-agent --template chatbot --model local/llama2
# Or use intelligent creation with local models
agent llm create-agent chatbot --model local/llama2

apiVersion: agent.dev/v1
kind: Agent
metadata:
  name: customer-support-bot
  version: 1.0.0
  description: AI customer support agent with escalation handling
spec:
  runtime: python:3.11
  model:
    provider: openai   # or 'ollama' for local
    name: gpt-4
    config:
      temperature: 0.7
      max_tokens: 500
  capabilities:
    - conversation
    - customer-support
    - escalation
  dependencies:
    - openai==1.0.0
    - fastapi==0.104.0
    - uvicorn==0.24.0
  ports:
    - container: 8080
      host: 8080
  environment:
    - name: OPENAI_API_KEY
      from: secret
    - name: LOG_LEVEL
      value: INFO
  healthCheck:
    command: ["curl", "-f", "http://localhost:8080/health"]
    interval: 30s
    timeout: 10s
    retries: 3
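Because the manifest is plain YAML, it is easy to inspect or validate in scripts. A minimal sketch with PyYAML (installed separately via `pip install pyyaml`; the key names follow the example manifest above):

```python
# Minimal sketch: load and inspect the agent.yaml manifest shown above.
# Requires PyYAML (pip install pyyaml); key names follow the example manifest.
import yaml

with open("agent.yaml") as f:
    manifest = yaml.safe_load(f)

print(manifest["metadata"]["name"])           # customer-support-bot
print(manifest["spec"]["model"]["provider"])  # openai (or ollama for local)
for dep in manifest["spec"]["dependencies"]:
    print("dependency:", dep)
```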
Agent-as-Code provides comprehensive binary distribution for all major platforms:

- Latest Release: v1.1.0 - Enhanced LLM Commands
- Binary Downloads: All 6 platform binaries available
- Release Notes: Comprehensive feature documentation
# Method 1: Direct download from GitHub (recommended)
curl -L https://github.com/pxkundu/agent-as-code/releases/download/v1.1.0/agent-darwin-arm64 -o agent
chmod +x agent
# Method 2: Python package
pip install agent-as-code
# Method 3: Homebrew
brew install agent-as-code

Supported platforms:

- Linux: AMD64 & ARM64
- macOS: Intel & Apple Silicon (M1/M2/M3)
- Windows: AMD64 & ARM64
- `GET /binary/releases/agent-as-code/versions` - List available CLI versions
- `GET /binary/releases/agent-as-code/{major}/{minor}/` - List platform binaries
- `GET /binary/releases/agent-as-code/{major}/{minor}/{filename}` - Download CLI binary
- `POST /binary/releases/agent-as-code/{major}/{minor}/upload` - Upload CLI binary (maintainers only)
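Any HTTP client works against these endpoints. A minimal `requests` sketch is below; the base URL is a placeholder (the distribution host is not listed in this section), and a JSON response body is assumed.

```python
# Sketch: list available CLI versions via the Binary API.
# BASE_URL is a placeholder host and the JSON response shape is an assumption.
import requests

BASE_URL = "https://releases.example.com"  # placeholder -- replace with the real host

resp = requests.get(f"{BASE_URL}/binary/releases/agent-as-code/versions", timeout=10)
resp.raise_for_status()
print(resp.json())
```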
# Create enterprise workflow automation agent
agent llm create-agent workflow-automation
# Deploy and test automatically
agent llm deploy-agent workflow-automation-agent
# Access production-ready API
curl -X POST http://localhost:8080/process \
-H "Content-Type: application/json" \
-d '{"input": "Process invoice #12345", "options": {"priority": "high"}}'agent init support-bot --template chatbot
cd support-bot
# Configure for production
export OPENAI_API_KEY="your-key"
export ESCALATION_KEYWORDS="human,manager,supervisor"
# Deploy with high availability
agent build -t support-bot:v1.0.0 .
agent deploy support-bot:v1.0.0 --cloud aws --replicas 5 --auto-scale
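Inside the running agent, these settings arrive as ordinary environment variables. The snippet below is a hedged sketch of how a support bot might consume them; the code that `agent init` actually generates may differ.

```python
# Hedged sketch: consuming the environment variables exported above.
# The agent code generated by the chatbot template may differ from this.
import os

openai_api_key = os.environ["OPENAI_API_KEY"]
escalation_keywords = [kw for kw in os.environ.get("ESCALATION_KEYWORDS", "").split(",") if kw]

def needs_escalation(message: str) -> bool:
    """Return True when a customer message mentions an escalation keyword."""
    lowered = message.lower()
    return any(kw.lower() in lowered for kw in escalation_keywords)
```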
# Setup local environment
agent llm setup
agent llm pull llama2
# Create offline agent
agent init offline-assistant --template chatbot --model local/llama2
agent run offline-assistant:latest

# .github/workflows/agent-deploy.yml
name: Deploy Agent
on:
  push:
    tags: ['v*']
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build and Deploy
        run: |
          agent build -t ${{ github.repository }}:${{ github.ref_name }} .
          agent push ${{ github.repository }}:${{ github.ref_name }}
          agent deploy ${{ github.repository }}:${{ github.ref_name }} --cloud aws

- 📖 Full Documentation - Comprehensive guides and references
- 🚀 Getting Started - Step-by-step tutorial
- 📋 CLI Reference - All commands and options
- 🎯 Examples - Real-world usage examples
- 🧠 Enhanced LLM Commands - NEW intelligent features
- 🔧 Template Creation - Build custom templates
- 🌐 Local LLM Setup - Ollama integration guide
- 📦 Binary API - Distribution system details
- 🚀 Deployment Guide - Production deployment strategies
# Clone and build
git clone https://github.com/pxkundu/agent-as-code
cd agent-as-code
# Build all components
make build
# Install locally
make install
# Run tests
make test
# Create release
make release VERSION=1.2.3

agent-as-code/
├── cmd/agent/          # Go CLI source
├── internal/           # Go internal packages
│   ├── api/            # Binary API client
│   ├── builder/        # Agent building
│   ├── cmd/            # CLI commands
│   ├── llm/            # Enhanced LLM features (NEW)
│   ├── parser/         # Config parsing
│   ├── registry/       # Registry operations
│   ├── runtime/        # Agent execution
│   └── templates/      # Template management
├── python/             # Python wrapper package
├── templates/          # Agent templates
├── examples/           # Real-world examples
├── scripts/            # Build and release scripts
└── docs/               # Documentation
- ⚡ Fast: 5x performance improvement over pure Python solutions
- 🔧 Simple: Declarative configuration, familiar Docker-like commands
- 🐍 Compatible: Full Python ecosystem access for AI/ML libraries
- 📦 Portable: Deploy anywhere - local, cloud, edge
- 🧠 Intelligent: AI-powered agent creation and optimization (NEW)
- 👥 Collaborative: Version-controlled agent definitions
- 🔄 Reusable: Share templates and configurations
- 📊 Scalable: Production-ready deployment patterns
- 🔒 Secure: Enterprise-grade secret management
- 🤖 Automated: Intelligent testing and deployment (NEW)
- 💰 Cost-Effective: Local LLM support reduces API costs
- 🌍 Multi-Cloud: Avoid vendor lock-in
- 📈 Scalable: Handle enterprise workloads
- 🔐 Compliant: Secure, auditable deployments
- 🚀 Innovative: Cutting-edge AI automation (NEW)
- 💬 Discussions - Community forum
- 🐛 Issues - Bug reports and feature requests
- 🌐 Website - Official website
- 📚 Documentation - Complete docs
- 🚀 Releases - Latest versions and features
| Operation | Pure Python | Go + Python | Improvement |
|---|---|---|---|
| `agent init` | 2.3s | 0.4s | 5.8x faster |
| `agent build` | 45s | 12s | 3.8x faster |
| `agent deploy` | 8.2s | 1.6s | 5.1x faster |
| Binary size | 50MB+ deps | 15MB single | 70% smaller |
| Feature | Manual Setup | Intelligent Creation | Improvement |
|---|---|---|---|
| Agent Creation | 2-4 hours | <5 seconds | 1000x faster |
| Test Suite | Manual writing | Auto-generated | 95% coverage |
| Deployment | Manual steps | Automated | 90% time saved |
| Documentation | Manual writing | Auto-generated | Complete docs |
- ✅ Hybrid Go + Python architecture
- ✅ Complete CLI functionality
- ✅ Template system
- ✅ Local LLM support (Ollama)
- ✅ Binary API distribution
- ✅ Enhanced LLM Commands (NEW)
- ✅ Intelligent Agent Creation (NEW)
- ✅ Automated Deployment (NEW)
- ✅ Model Optimization (NEW)
- 🔄 Kubernetes operator
- 🔄 Advanced monitoring and metrics
- 🔄 Multi-agent orchestration
- 🔄 Plugin system
- 📋 Visual agent builder
- 📋 Enterprise management console
- 📋 Advanced AI optimization
- 📋 Edge deployment support
This project is licensed under the MIT License - see the LICENSE file for details.
Agent-as-Code is revolutionizing how developers build and deploy AI agents. Join thousands of developers who are already using Agent-as-Code to power their AI applications.
⭐ Star us on GitHub | 📦 Try it now | 🤝 Contribute
Ready to revolutionize your AI agent development? Get started now and experience the future of AI agent development with enhanced LLM intelligence!