Thank you for your interest in contributing to FlowLLM! 🎉
1. Fork the repository

   ```bash
   git clone https://github.com/flowllm/flowllm-python.git
   cd flowllm-python
   ```

2. Install dependencies

   ```bash
   # Install Poetry
   curl -sSL https://install.python-poetry.org | python3 -

   # Install project dependencies
   poetry install

   # Activate virtual environment
   poetry shell
   ```

3. Set up environment

   ```bash
   cp .env.example .env
   # Add your API keys for testing
   ```
We use several tools to maintain code quality:
```bash
# Format code
poetry run black .

# Lint code
poetry run ruff check .

# Type checking
poetry run mypy flowllm

# Run all checks
poetry run black . && poetry run ruff check . && poetry run mypy flowllm
```

To run the test suite:

```bash
# Run all tests
poetry run pytest

# Run with coverage
poetry run pytest --cov=flowllm --cov-report=html

# Run specific test file
poetry run pytest tests/test_providers.py

# Run specific test
poetry run pytest tests/test_providers.py::test_openai_provider
```
1. Create a branch

   ```bash
   git checkout -b feature/your-feature-name
   ```

2. Make your changes

   - Write clean, documented code
   - Add tests for new features
   - Update documentation as needed

3. Test your changes

   ```bash
   poetry run pytest
   poetry run mypy flowllm
   ```

4. Commit your changes

   ```bash
   git add .
   git commit -m "feat: add your feature description"
   ```

   We follow Conventional Commits:

   - `feat:` New feature
   - `fix:` Bug fix
   - `docs:` Documentation changes
   - `test:` Test changes
   - `refactor:` Code refactoring
   - `chore:` Maintenance tasks

5. Push and create a Pull Request

   ```bash
   git push origin feature/your-feature-name
   ```
- Follow PEP 8 guidelines
- Use type hints for all functions
- Write docstrings for classes and functions
- Keep functions small and focused
Example:
```python
from typing import Optional


async def example_function(param: str, optional: Optional[int] = None) -> dict:
    """
    Brief description of what the function does.

    Args:
        param: Description of param
        optional: Description of optional parameter

    Returns:
        Description of return value

    Raises:
        ValueError: When something goes wrong
    """
    # Implementation
    return {"result": "value"}
```

To add a new provider:

- Create a new file in `flowllm/providers/`
- Inherit from `BaseLLMProvider`
- Implement required methods
- Add tests in `tests/test_providers.py`
- Update documentation
- Discuss the feature in an issue first
- Write tests before implementation (TDD)
- Update relevant documentation
- Add examples if applicable
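The test-first point in practice: sketch the failing test before the code exists, then write the minimal implementation that makes it pass. All names here (`truncate_prompt`) are hypothetical, purely to show the shape of the loop.

```python
# Step 1: write the test first. It fails (NameError) until the
# function below exists -- that failure is the starting point of TDD.
def test_truncate_prompt():
    assert truncate_prompt("hello world", max_chars=5) == "hello"


# Step 2: write the minimal implementation that makes the test pass.
def truncate_prompt(text: str, max_chars: int) -> str:
    """Trim a prompt to at most max_chars characters."""
    return text[:max_chars]
```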
```
flowllm-python/
├── flowllm/          # Main package
│   ├── core/         # Core functionality (agent, memory, tools)
│   ├── providers/    # LLM provider implementations
│   ├── mcp/          # MCP integration
│   └── utils/        # Utility functions
├── tests/            # Test files
├── examples/         # Example scripts
├── docs/             # Documentation
└── pyproject.toml    # Project configuration
```
- Write tests for all new features
- Maintain test coverage above 80%
- Use pytest fixtures for common setup
- Mock external API calls
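The last point can be sketched with `unittest.mock.AsyncMock`; the `ask` helper and its client are hypothetical, purely to show the pattern of substituting a mock for a real API call.

```python
import asyncio
from unittest.mock import AsyncMock


async def ask(client, prompt: str) -> str:
    """Toy function under test: delegates to an external client."""
    return await client.complete(prompt)


def test_ask_with_mock():
    # AsyncMock stands in for the real client, so no network call is made.
    client = AsyncMock()
    client.complete.return_value = "mocked reply"

    result = asyncio.run(ask(client, "Hello"))

    assert result == "mocked reply"
    client.complete.assert_awaited_once_with("Hello")
```

Mocking at this seam keeps tests fast, deterministic, and free of API-key requirements.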
Example test:
```python
import pytest

from flowllm import define_agent
from flowllm.providers import OpenAIProvider


@pytest.mark.asyncio
async def test_agent_execution():
    """Test basic agent execution."""
    agent = define_agent(
        provider=OpenAIProvider(model="gpt-4o-mini"),
        system_prompt="You are helpful.",
    )
    response = await agent.execute("Hello")
    assert response.content is not None
    assert len(response.content) > 0
```

- Update README.md for major features
- Add examples for new functionality
- Update ARCHITECTURE.md for structural changes
- Keep QUICKSTART.md simple and beginner-friendly
- Open an issue for bugs or feature requests
- Start a discussion for questions or ideas
- Check existing issues and PRs first
- Be respectful and inclusive
- Provide constructive feedback
- Help others learn and grow
- Follow GitHub's Community Guidelines
By contributing, you agree that your contributions will be licensed under the MIT License.