bielik-tools

A collection of supplementary tools for working with Bielik language models.

Getting Started

Clone the repository:

git clone https://github.com/speakleash/bielik-tools.git
cd bielik-tools

Explore available tools and examples in their respective subdirectories.

Structured Outputs

Bielik models have been trained to generate structured outputs. Once the model is running in vLLM, you can try the structured_output.py example to generate structured outputs using OpenAI's Completions and Chat APIs.
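As a rough illustration of what structured_output.py does, the request below constrains decoding with vLLM's guided_json field via the OpenAI-compatible endpoint. This is a minimal standard-library sketch, not the repository's actual script; the server URL, model name, prompt, and schema are assumptions.

```python
# Minimal sketch, standard library only; structured_output.py in the repo
# likely uses the OpenAI SDK instead. URL, model name, and schema are assumed.
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1"

# JSON Schema that the model's output must conform to.
schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "population": {"type": "integer"},
    },
    "required": ["city", "population"],
}

def build_request(prompt: str) -> dict:
    # vLLM's OpenAI-compatible server accepts a guided_json field that
    # constrains generation to the given JSON Schema.
    return {
        "model": "Bielik-11B-v2.5-Instruct",
        "messages": [{"role": "user", "content": prompt}],
        "guided_json": schema,
    }

if __name__ == "__main__":
    body = json.dumps(build_request("Podaj największe miasto w Polsce.")).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```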

Tool Calling

To use function/tool calling, you need to enable the extended chat template. This can be done using the provided advanced chat template and tool parser. Start vLLM with the following command:

vllm serve Bielik-11B-v2.5-Instruct \
    --enable-auto-tool-choice \
    --tool-parser-plugin ./bielik-tools/tools/bielik_vllm_tool_parser.py \
    --tool-call-parser bielik \
    --chat-template ./bielik-tools/tools/bielik_advanced_chat_template.jinja

Then, run tool_calling.py or tool_calling_streaming.py to see how tool calling works in practice.
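For orientation, a tool-calling exchange follows the standard OpenAI chat shape: the request carries tool definitions, and the parsed response carries tool_calls with JSON-encoded arguments. The sketch below is a hedged illustration of that shape; the get_weather tool is hypothetical, and tool_calling.py defines its own tools.

```python
# Hedged sketch of a tool-calling request and response parsing.
# The get_weather tool is hypothetical, not part of the repository.
import json

def weather_tool_spec() -> dict:
    """Return a tool definition in the OpenAI function-calling format."""
    return {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Returns the current weather for a given city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }

def build_tool_request(prompt: str) -> dict:
    """Build a chat request that lets the model choose when to call a tool."""
    return {
        "model": "Bielik-11B-v2.5-Instruct",
        "messages": [{"role": "user", "content": prompt}],
        "tools": [weather_tool_spec()],
        "tool_choice": "auto",
    }

def extract_tool_calls(response: dict) -> list:
    """Pull (name, arguments) pairs out of a chat completion response dict."""
    message = response["choices"][0]["message"]
    return [
        (c["function"]["name"], json.loads(c["function"]["arguments"]))
        for c in message.get("tool_calls") or []
    ]
```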

Reasoning

Reasoning is currently available only in the Bielik 11B v2.5 Instruct model and is considered an experimental feature. When enabled, the model works through complex questions step by step before producing its final answer. To try it out, start vLLM with the following command:

vllm serve Bielik-11B-v2.5-Instruct \
    --chat-template ./bielik-tools/tools/bielik_advanced_chat_template.jinja \
    --reasoning-parser deepseek_r1 \
    --enable-reasoning

Then, run reasoning_streaming.py to see how the model performs in reasoning mode.
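When the reasoning parser is active, streamed deltas carry the model's reasoning in a separate reasoning_content field alongside the final answer in content. The helper below sketches how a client might accumulate both from a stream of chunk dicts; the exact chunk shape is an assumption based on vLLM's OpenAI-compatible schema, and reasoning_streaming.py may do this differently.

```python
# Sketch of separating reasoning from the final answer in streamed chunks.
# The chunk layout is assumed from vLLM's OpenAI-compatible delta format.
def split_reasoning(chunks: list) -> tuple:
    """Accumulate reasoning text and answer text from streamed delta dicts."""
    reasoning, answer = [], []
    for chunk in chunks:
        delta = chunk["choices"][0]["delta"]
        if delta.get("reasoning_content"):
            reasoning.append(delta["reasoning_content"])
        if delta.get("content"):
            answer.append(delta["content"])
    return "".join(reasoning), "".join(answer)
```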

Multi-Agent with CrewAI

For this example to work, you need a Tavily API key. Create a .env file with the following contents:

BASE_URL=http://0.0.0.0:8000/v1
MODEL_NAME=path_or_hf_repo
API_KEY=token-abc123
TAVILY_API_KEY=tvly-apikey123123123123

Start vLLM:

vllm serve Bielik-11B-v2.5-Instruct \
    --enable-auto-tool-choice \
    --tool-parser-plugin ./bielik-tools/tools/bielik_vllm_tool_parser.py \
    --tool-call-parser bielik \
    --chat-template ./bielik-tools/tools/bielik_advanced_chat_template.jinja \
    --port 8000 --host 0.0.0.0 \
    --api-key token-abc123

Then, run crewai_to_file.py. The final report will be written to bielik_output/atrakcje.md.
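Before running the example, it can help to verify that all four variables from the .env file are present. This is a small standard-library sketch of that check; crewai_to_file.py itself presumably loads the file with python-dotenv or CrewAI's own configuration.

```python
# Sketch of validating the .env values this example expects; the actual
# script likely uses python-dotenv rather than this hand-rolled parser.
REQUIRED = ["BASE_URL", "MODEL_NAME", "API_KEY", "TAVILY_API_KEY"]

def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

def missing_vars(values: dict) -> list:
    """Return the names of required variables absent from the parsed .env."""
    return [name for name in REQUIRED if name not in values]
```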
