# Contributing to FlowLLM

Thank you for your interest in contributing to FlowLLM! This document provides guidelines for contributing to the project.

## Prerequisites
- Node.js 18 or higher
- npm 9 or higher
- Git
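Before installing anything, you can confirm your Node.js meets the minimum. The snippet below parses a version string of the form `vMAJOR.MINOR.PATCH`; the string is hard-coded here for illustration, but in practice you would use the output of `node --version`:

```shell
# Check that a Node.js version string meets the v18+ requirement.
# Hard-coded for illustration; in practice use: version="$(node --version)"
version="v18.19.0"
major="${version#v}"      # strip the leading "v" -> 18.19.0
major="${major%%.*}"      # keep only the major component -> 18
if [ "$major" -ge 18 ]; then
  echo "Node.js version OK"
else
  echo "Node.js 18 or higher is required"
fi
```

The same pattern works for checking `npm --version` against the npm 9 requirement.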
## Getting Started

1. Clone the repository:

   ```bash
   git clone https://github.com/flowllm/flowllm.git
   cd flowllm
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Build all packages:

   ```bash
   npm run build
   ```

4. Run tests:

   ```bash
   npm test
   ```
## Project Structure

```
flowllm/
├── packages/
│   ├── core/       # Core agent framework
│   ├── providers/  # LLM provider implementations
│   ├── mcp/        # MCP integration
│   └── flowllm/    # Main SDK package
├── examples/       # Usage examples
├── docs/           # Documentation
└── tests/          # Integration tests
```
## Development Workflow

1. Create a feature branch:

   ```bash
   git checkout -b feature/your-feature-name
   ```

2. Make your changes:

   - Write code following the existing style
   - Add tests for new functionality
   - Update documentation as needed

3. Run tests and linting:

   ```bash
   npm test
   npm run lint
   ```

4. Commit your changes:

   ```bash
   git commit -m "feat: add new feature"
   ```

   Follow Conventional Commits:

   - `feat:` New feature
   - `fix:` Bug fix
   - `docs:` Documentation changes
   - `test:` Test additions or changes
   - `refactor:` Code refactoring
   - `chore:` Maintenance tasks

5. Push and create a pull request:

   ```bash
   git push origin feature/your-feature-name
   ```
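The commit-message convention can be checked mechanically. The sketch below is illustrative only: `isConventionalCommit` is a hypothetical helper, not part of FlowLLM, and the project may instead enforce the format with a tool such as commitlint:

```typescript
// Hypothetical helper that checks the "type(scope): description" shape
// of a Conventional Commits message. Illustrative only; not part of FlowLLM.
const COMMIT_TYPES = ['feat', 'fix', 'docs', 'test', 'refactor', 'chore'];

export function isConventionalCommit(message: string): boolean {
  // Matches e.g. "feat: add new feature" or "fix(core): handle empty responses"
  const pattern = new RegExp(`^(${COMMIT_TYPES.join('|')})(\\([\\w-]+\\))?: .+`);
  return pattern.test(message);
}
```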
## Code Style

- Use TypeScript for all code
- Enable strict mode
- Provide type annotations for public APIs
- Avoid `any` types when possible
- Use 2 spaces for indentation
- Use single quotes for strings
- Add semicolons
- Follow existing patterns in the codebase
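As a quick illustration of these conventions, here is a small function written in the expected style: two-space indentation, single quotes, semicolons, and explicit types on the public API instead of `any`. The `backoffDelays` helper is hypothetical, used only to demonstrate the style:

```typescript
// Hypothetical helper illustrating the style rules above:
// 2-space indentation, single quotes, semicolons, no `any`.
export interface RetryOptions {
  maxAttempts: number;
  delayMs: number;
}

export function backoffDelays(options: RetryOptions): number[] {
  const delays: number[] = [];
  for (let i = 0; i < options.maxAttempts; i++) {
    delays.push(options.delayMs * 2 ** i); // exponential backoff
  }
  return delays;
}
```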
## Testing

- Write unit tests for all new functionality
- Aim for 80%+ code coverage
- Use descriptive test names
- Test both success and error cases
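To illustrate covering both success and error cases, here is a framework-free sketch; `parseModelId` is a hypothetical function under test, and the project may run such checks through a test runner such as Jest or Vitest rather than bare assertions:

```typescript
// Hypothetical function under test: splits "provider/model" ids.
function parseModelId(id: string): { provider: string; model: string } {
  const [provider, model] = id.split('/');
  if (!provider || !model) {
    throw new Error(`Invalid model id: ${id}`);
  }
  return { provider, model };
}

// Success case: a well-formed id is split into provider and model.
const parsed = parseModelId('openai/gpt-4');
console.assert(parsed.provider === 'openai' && parsed.model === 'gpt-4');

// Error case: a malformed id throws instead of returning garbage.
let threw = false;
try {
  parseModelId('gpt-4');
} catch {
  threw = true;
}
console.assert(threw, 'expected malformed id to throw');
```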
## Documentation

- Add JSDoc comments for public APIs
- Update README files when adding features
- Provide examples for new functionality
- Keep documentation clear and concise
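For the JSDoc guideline above, a documented public API might look like the following. The `estimateCost` helper is hypothetical, shown only to illustrate the comment style:

```typescript
/**
 * Estimates the cost of an LLM call in USD.
 *
 * Hypothetical helper used to illustrate JSDoc style; not part of FlowLLM.
 *
 * @param promptTokens - Number of tokens in the prompt
 * @param completionTokens - Number of tokens in the completion
 * @param costPerToken - Per-token prices in USD for prompt and completion
 * @returns Total estimated cost in USD
 */
export function estimateCost(
  promptTokens: number,
  completionTokens: number,
  costPerToken: { prompt: number; completion: number }
): number {
  return promptTokens * costPerToken.prompt + completionTokens * costPerToken.completion;
}
```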
## Adding a New Provider

To add support for a new LLM provider:

1. Create a new file in `packages/providers/src/`
2. Implement the `LLMProvider` interface
3. Add cost tracking and token counting
4. Write tests
5. Export from `packages/providers/src/index.ts`
6. Update documentation
Example:

```typescript
export class NewProvider implements LLMProvider {
  public readonly name = 'new-provider';

  async chat(messages: Message[], config: LLMConfig): Promise<LLMResponse> {
    // Implementation
  }

  async *stream(messages: Message[], config: LLMConfig): AsyncIterable<StreamChunk> {
    // Implementation
  }

  countTokens(text: string, model: string): number {
    // Implementation
  }

  getMaxTokens(model: string): number {
    // Implementation
  }

  getCostPerToken(model: string): { prompt: number; completion: number } {
    // Implementation
  }
}
```

## Pull Request Process

- Ensure all tests pass
- Update documentation
- Add examples if applicable
- Request review from maintainers
- Address feedback
- Once approved, your PR will be merged
## Code Review

When reviewing PRs, consider:
- Correctness: Does the code work as intended?
- Tests: Are there adequate tests?
- Documentation: Is the code well-documented?
- Style: Does it follow project conventions?
- Performance: Are there any performance concerns?
- Security: Are there any security issues?
## Getting Help

- Documentation: Check `docs/README.md`
- Examples: See `examples/`
- Issues: Browse GitHub Issues
- Discussions: Join GitHub Discussions
## License

By contributing to FlowLLM, you agree that your contributions will be licensed under the MIT License.