This script facilitates interaction with the Ollama API, supporting single-message responses, chat sessions, and multimodal inputs (including images). It allows users to easily integrate Large Language Model (LLM) interactions into their applications.
- Single Message and Chat Capabilities: Send a single message or initiate a chat session with the model.
- Multimodal Support: Supports sending images along with text prompts to multimodal models.
- Streaming Responses: Option to stream responses for real-time interaction.
- Error Handling: Robust error handling and informative messages for troubleshooting.
- Environment Integration: Automatically fetches API URL from environment variables for ease of configuration.
To use this script, you will need:
- Python 3.6 or higher.
- The `requests` library installed. Install it via pip if you haven't already: `pip install requests`
- Access to the Ollama API and a valid API key if required.
- Clone the repository:

```bash
git clone https://github.com/ronigold/Ollama.git
```

- Navigate to the cloned directory:

```bash
cd Ollama
```

- Ensure you have Python and the required packages installed.
Set the API_URL environment variable to point to your Ollama API endpoint. You can set it temporarily in the terminal:
```bash
export API_URL='http://localhost:11434/api'
```

Or permanently by adding it to your `.bashrc` or `.zshrc`.
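The script reads this variable from the environment (see the Environment Integration feature above). The snippet below is a minimal sketch of how such a lookup typically works, assuming a fallback to the default local Ollama address when the variable is unset; the script's own handling may differ:

```python
import os

# Read the Ollama endpoint from the environment; fall back to the
# default local Ollama address if API_URL has not been set.
API_URL = os.environ.get('API_URL', 'http://localhost:11434/api')
print(f'Using Ollama API at: {API_URL}')
```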
To interact with the API, call the `interact_with_ollama` function from your Python environment:
- For Sending Text Prompts:

```python
interact_with_ollama(model='llama3', prompt='Why is the sky blue?')
```
- For a Chat Session:

```python
messages = [
    {'role': 'user', 'content': 'What is machine learning?'},
    {'role': 'assistant', 'content': 'Machine learning is a field of AI that enables systems to learn and improve from experience without being explicitly programmed.'},
    {'role': 'user', 'content': 'Interesting! How do these systems learn without being explicitly programmed?'},
]
interact_with_ollama(model='openhermes2.5-mistral', messages=messages)
```
- For Multimodal Interaction:

```python
interact_with_ollama(model='llava', prompt='What is strange about this image?',
                     image_path='path/to/image.jpg')
```
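Under the hood, calls like those above translate into HTTP requests against the standard Ollama endpoints (`/api/generate` for prompts and `/api/chat` for message histories). The snippet below is a hand-written sketch using `requests` directly, not the script's actual implementation; note that the Ollama API expects images as base64-encoded strings:

```python
import base64
import requests

API_URL = 'http://localhost:11434/api'  # same value as the API_URL environment variable

# Plain text prompt: POST /api/generate with a model name and prompt.
resp = requests.post(f'{API_URL}/generate',
                     json={'model': 'llama3',
                           'prompt': 'Why is the sky blue?',
                           'stream': False})
print(resp.json()['response'])

# Chat session: POST /api/chat with a list of role/content messages.
resp = requests.post(f'{API_URL}/chat',
                     json={'model': 'openhermes2.5-mistral',
                           'messages': [{'role': 'user', 'content': 'What is machine learning?'}],
                           'stream': False})
print(resp.json()['message']['content'])

# Multimodal prompt: attach the image as a base64-encoded string.
with open('path/to/image.jpg', 'rb') as f:
    image_b64 = base64.b64encode(f.read()).decode('utf-8')

resp = requests.post(f'{API_URL}/generate',
                     json={'model': 'llava',
                           'prompt': 'What is strange about this image?',
                           'images': [image_b64],
                           'stream': False})
print(resp.json()['response'])
```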
You can customize how the outputs are handled by passing a custom function as `output_handler`. By default, it prints messages to the console.
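For example, assuming the handler is called with each piece of generated text as a string (check the script's inline documentation for the exact callback signature), you could collect the output instead of printing it:

```python
collected = []

def collect_output(text):
    # Hypothetical handler: append each chunk of model output to a list
    # instead of printing it to the console.
    collected.append(text)

interact_with_ollama(model='llama3', prompt='Why is the sky blue?',
                     output_handler=collect_output)
print(''.join(collected))
```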
Refer to the inline comments in the script for detailed documentation on each function and parameter.
For more information on setting up and using the Ollama API, refer to the blog post: Running Llama 3 on Personal Linux Hardware (GPU/CPU).
Contributions to the script are welcome! Please fork the repository and submit a pull request with your enhancements.
Distributed under the MIT License. See LICENSE for more information.
