
🚀 Quick Start - FlowLLM Python SDK

Get Started in 5 Minutes!

⚠️ Having issues? See TROUBLESHOOTING.md and SETUP_STATUS.md

1. Install Poetry (if not installed)

Option A: Standard Install

curl -sSL https://install.python-poetry.org | python3 -

Option B: If Poetry fails (Homebrew Python)

brew install python@3.11
curl -sSL https://install.python-poetry.org | python3 -

Option C: Use pip instead (no Poetry)

python3 -m pip install -r requirements.txt --user

2. Run Setup Script

cd /path/to/flowllm-python
./setup.sh

3. Add API Keys

# Edit .env file
nano .env

# Add your OpenAI key (at minimum):
OPENAI_API_KEY=sk-proj-YOUR-KEY-HERE
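
If the key doesn't seem to be picked up, a quick stdlib-only check confirms the variable is visible to Python (a hypothetical helper for debugging, not part of the SDK - it assumes the key was exported into the shell or loaded from `.env`):

```python
import os

def openai_key_present() -> bool:
    """Return True if OPENAI_API_KEY looks set (illustrative check only)."""
    key = os.getenv("OPENAI_API_KEY", "")
    # OpenAI keys conventionally start with "sk-"
    return key.startswith("sk-")

if __name__ == "__main__":
    print("OPENAI_API_KEY set:", openai_key_present())
```

If this prints `False`, make sure you exported the variable or that your setup loads `.env` before the SDK runs.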

4. Test It!

# Activate virtual environment
poetry shell

# Test imports
python test_imports.py

# Run an example
python examples/basic_agent.py

What You Can Do Now

Simple Agent

import asyncio
from flowllm import define_agent
from flowllm.providers import OpenAIProvider

async def main():
    agent = define_agent(
        provider=OpenAIProvider(model="gpt-4o-mini"),
        system_prompt="You are a helpful assistant."
    )
    
    response = await agent.execute("What is Python?")
    print(response.content)

asyncio.run(main())

With Custom Tools

import asyncio
from datetime import datetime

from flowllm import define_agent, define_tool
from flowllm.providers import OpenAIProvider

@define_tool(name="get_time", description="Get current time")
async def get_time() -> str:
    return datetime.now().strftime("%H:%M:%S")

async def main():
    agent = define_agent(
        provider=OpenAIProvider(model="gpt-4"),
        tools=[get_time]
    )
    response = await agent.execute("What time is it?")
    print(response.content)

asyncio.run(main())
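For intuition, here is a rough sketch of the pattern a tool decorator like `define_tool` typically follows: attaching metadata to an async function so the agent can describe it to the model. The names and details below are illustrative only, not FlowLLM's actual implementation:

```python
import functools

def sketch_tool(name: str, description: str):
    """Hypothetical tool decorator: wraps an async function and
    attaches the metadata an agent would need to expose it as a tool."""
    def wrap(fn):
        @functools.wraps(fn)
        async def inner(*args, **kwargs):
            return await fn(*args, **kwargs)
        # Metadata the agent can read when building the tool schema
        inner.tool_name = name
        inner.tool_description = description
        return inner
    return wrap

@sketch_tool(name="echo", description="Echo the input back")
async def echo(text: str) -> str:
    return text
```

The decorated function stays awaitable, so the agent can both inspect its metadata and call it during a tool-use turn.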

Streaming

# Inside an async function, with an agent defined as above:
async for chunk in agent.stream("Tell me a story"):
    print(chunk.content, end="", flush=True)

Available Examples

  1. basic_agent.py - Simple conversation
  2. custom_tools.py - Using custom tools
  3. streaming.py - Stream responses
  4. conversation.py - Multi-turn chat
  5. multi_provider.py - Test all providers

Documentation

  • Getting Started: docs/getting_started.md
  • Build Plan: PYTHON_BUILD_PLAN.md
  • Build Summary: BUILD_SUMMARY.md
  • Progress: BUILD_PROGRESS.md

Features

✅ 3 LLM Providers (OpenAI, Anthropic, Gemini)
✅ 3 Memory Strategies
✅ Custom Tools
✅ Streaming
✅ Cost Tracking
✅ Retry Logic
✅ Type-Safe
✅ Async/Await
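
As an illustration of what "Retry Logic" generally means for async LLM calls, here is a generic retry helper with exponential backoff and jitter. This is a sketch only; FlowLLM's built-in retry behavior may differ:

```python
import asyncio
import random

async def with_retries(coro_fn, attempts: int = 3, base_delay: float = 0.5):
    """Call an async function, retrying failures with exponential backoff.

    Illustrative helper, not FlowLLM's actual implementation.
    """
    for attempt in range(attempts):
        try:
            return await coro_fn()
        except Exception:
            # Re-raise once the final attempt has failed
            if attempt == attempts - 1:
                raise
            # Exponential backoff with a little jitter to avoid thundering herds
            await asyncio.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

You would wrap a call like `agent.execute(...)` in a small async lambda-style function and pass it to `with_retries` when hitting rate limits or transient network errors.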

Need Help?

Check BUILD_SUMMARY.md for a complete overview!

Happy building! 🤖