The official `openai` and `anthropic` Python libraries read these automatically.

---

## 🦙 Ollama & Local Models

litecrew works with **any OpenAI-compatible API**, including Ollama, LM Studio, vLLM, and more.

```python
import openai
from litecrew import Agent

# Point to your local Ollama server
openai.base_url = "http://localhost:11434/v1"
openai.api_key = "ollama"  # Ollama doesn't need a real key

# Use any local model
agent = Agent(
    name="local",
    model="llama3.2",  # or mistral, qwen2.5, phi3, etc.
    system="You are a helpful assistant."
)

response = agent("Explain quantum computing in simple terms.")
print(response)
```

**Or use environment variables:**

```bash
export OPENAI_BASE_URL="http://localhost:11434/v1"
export OPENAI_API_KEY="ollama"
```
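
The `openai` SDK resolves these variables when a client is created, falling back to the hosted API when they are unset. A minimal sketch of the equivalent lookup (illustrative only, not the SDK's actual implementation):

```python
import os

# Environment variables first, hosted API default as the fallback --
# the same precedence the openai SDK applies.
base_url = os.environ.get("OPENAI_BASE_URL", "https://api.openai.com/v1")
api_key = os.environ.get("OPENAI_API_KEY", "")

print(base_url)
```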

**Supported local providers:**

| Provider | Base URL | Notes |
|----------|----------|-------|
| Ollama | `http://localhost:11434/v1` | Most popular |
| LM Studio | `http://localhost:1234/v1` | GUI-based |
| vLLM | `http://localhost:8000/v1` | Production-grade |
| LocalAI | `http://localhost:8080/v1` | Docker-friendly |
| text-generation-webui | `http://localhost:5000/v1` | With OpenAI extension |

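The defaults in the table can be captured in a small lookup table. A hypothetical convenience helper (not part of litecrew) for picking an endpoint by provider name:

```python
# Default OpenAI-compatible endpoints for common local runtimes.
# (Hypothetical helper -- not part of litecrew's API.)
LOCAL_PROVIDERS = {
    "ollama": "http://localhost:11434/v1",
    "lm-studio": "http://localhost:1234/v1",
    "vllm": "http://localhost:8000/v1",
    "localai": "http://localhost:8080/v1",
    "text-generation-webui": "http://localhost:5000/v1",
}

def base_url_for(provider: str) -> str:
    """Return the default base URL for a known local provider."""
    try:
        return LOCAL_PROVIDERS[provider.lower()]
    except KeyError:
        raise ValueError(f"unknown provider: {provider}") from None

print(base_url_for("ollama"))  # http://localhost:11434/v1
```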
**Mix cloud and local:**

```python
# Use local for research (free), cloud for final output (quality)
researcher = Agent("researcher", model="llama3.2")  # Local via Ollama
writer = Agent("writer", model="gpt-4o")            # Cloud via OpenAI
```

---

## 🎯 What litecrew IS

A **minimal orchestration layer** for simple multi-agent workflows.