Open-WebUI-Functions is a collection of Python-based functions that extend Open WebUI with additional pipelines, filters, and integrations. These functions make it easier to connect external AI providers, process data, and tailor the Open WebUI experience to real-world workflows.
- Overview
- Version
- Features
- Project structure
- Prerequisites
- Installation
- Security features
- Pipelines
- Filters
- Integrations
- Contributing
- License
- Support
- Star history
## Overview

This repository focuses on reusable Python functions for Open WebUI. It includes provider-specific pipelines, request and response filters, optional analytics helpers, secure secret handling, and both streaming and non-streaming integrations.
## Version

| Branch | Version |
|---|---|
| `main` | |
| `hotfix/*` | |
| `release/*` | |
| `dev` | |
| `feature/*` | |
These badges are generated and updated automatically by the GitVersion Badge workflow for all GitFlow branch types.
## Features

- **Custom pipelines**: Extend Open WebUI with AI processing pipelines, including model inference and data transformations.
- **Filters for data processing**: Apply custom filtering logic to refine, manipulate, or preprocess input and output data.
- **Azure AI support**: Seamlessly connect Open WebUI with Azure OpenAI and other Azure AI models.
- **N8N workflow integration**: Enable interactions with N8N for automation.
- **Flexible configuration**: Use environment variables to adjust function settings dynamically.
- **Streaming and non-streaming support**: Handle both real-time and batch processing efficiently.
- **Secure API key management**: Automatically encrypt sensitive information such as API keys.
## Project structure

```
.
├── docs/
│   ├── azure-ai-citations.md
│   ├── azure-ai-integration.md
│   ├── google-gemini-integration.md
│   ├── infomaniak-integration.md
│   ├── n8n-integration.md
│   ├── n8n-tool-usage-display.md
│   └── setup-azure-log-analytics.md
├── filters/
│   ├── google_search_tool.py
│   ├── time_token_tracker.py
│   └── vertex_ai_search_tool.py
└── pipelines/
    ├── azure/
    ├── google/
    ├── infomaniak/
    └── n8n/
```
## Prerequisites

> [!IMPORTANT]
> To use these functions, make sure the following requirements are met:
>
> - **An active Open WebUI instance**: You must have Open WebUI installed and running.
> - **Required AI services (if applicable)**: Some pipelines depend on external AI services, such as Azure AI.
> - **Admin access**: You must have administrator privileges in Open WebUI to install functions.
## Installation

> [!TIP]
> Follow these steps to install and configure functions in Open WebUI.
1. **Ensure admin access**

   > [!NOTE]
   > You must be an admin in Open WebUI to install functions.

2. **Open Admin Settings**

   Navigate to the Admin Settings section in Open WebUI.

3. **Open the Functions tab**

   Go to the Functions tab in the admin panel.

4. **Create a new function**

   - Click **Add New Function**.
   - Copy the function code from this repository and paste it into the function editor.

5. **Set environment variables if required**

   Some functions require API keys or provider-specific configuration through environment variables.

   > [!IMPORTANT]
   > Set `WEBUI_SECRET_KEY` for secure encryption of sensitive API keys. This is required for the encryption features to work properly.

6. **Save and activate**

   Save the function, and it will be available inside Open WebUI.
## Security features

> [!WARNING]
> **API key security**: Always use encryption for sensitive information such as API keys.

The functions include a built-in encryption mechanism for sensitive information:

- **Automatic encryption**: API keys and other sensitive data are automatically encrypted when stored.
- **Encrypted storage**: Values are stored with an `encrypted:` prefix followed by the encrypted data.
- **Transparent usage**: Encryption and decryption happen automatically when values are accessed.
- **No extra configuration required**: Everything works out of the box when `WEBUI_SECRET_KEY` is set.
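To make the storage convention above concrete, here is an illustrative stdlib-only sketch of the `encrypted:` prefix pattern. This is **not** the repository's actual cipher (a real implementation should use proper authenticated encryption); the helper names and the SHA-256 keystream are my own stand-ins, chosen only to show the store/load flow and the transparent plaintext pass-through.

```python
# Illustrative sketch of the "encrypted:" prefix convention, NOT the actual
# Open-WebUI-Functions implementation. A real scheme should use authenticated
# encryption (e.g. Fernet); the XOR keystream here only demonstrates the flow.
import base64
import hashlib

PREFIX = "encrypted:"

def _keystream(secret: bytes, length: int) -> bytes:
    # Hypothetical helper: expand the secret into a keystream with SHA-256.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(secret + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def store_value(plain: str, secret: str) -> str:
    # Encrypt on write and mark the value with the "encrypted:" prefix.
    data = plain.encode()
    ks = _keystream(secret.encode(), len(data))
    cipher = bytes(a ^ b for a, b in zip(data, ks))
    return PREFIX + base64.b64encode(cipher).decode()

def load_value(stored: str, secret: str) -> str:
    # Transparent read: plaintext values pass through unchanged.
    if not stored.startswith(PREFIX):
        return stored
    cipher = base64.b64decode(stored[len(PREFIX):])
    ks = _keystream(secret.encode(), len(cipher))
    return bytes(a ^ b for a, b in zip(cipher, ks)).decode()
```

The round trip is symmetric: `load_value(store_value(key, secret), secret)` returns the original key, while a value without the prefix is returned as-is.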
> [!IMPORTANT]
> To enable encryption, set the `WEBUI_SECRET_KEY` environment variable:

```shell
# Set this in your Open WebUI environment or .env file
WEBUI_SECRET_KEY="your-secure-random-string"
```

## Pipelines

> [!NOTE]
> Pipelines are processing functions that extend Open WebUI with custom AI models, external integrations, and data manipulation logic.
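The general shape of such a pipe function can be sketched as follows. This is a minimal illustrative skeleton, not one of the pipelines in this repository; the valve names (`API_KEY`, `MODEL`) and the echo behavior are placeholders.

```python
# Minimal sketch of an Open WebUI pipe function (illustrative only).
# "Valves" are the admin-configurable settings exposed in the function editor.
from pydantic import BaseModel, Field


class Pipe:
    class Valves(BaseModel):
        # Hypothetical settings; real pipelines define provider-specific valves.
        API_KEY: str = Field(default="", description="Provider API key")
        MODEL: str = Field(default="example-model")

    def __init__(self):
        self.valves = self.Valves()

    def pipe(self, body: dict) -> str:
        # A real pipeline would forward body["messages"] to the provider and
        # return (or stream) the model's response; here we just echo.
        last = body.get("messages", [{}])[-1].get("content", "")
        return f"[{self.valves.MODEL}] echo: {last}"
```

Real pipelines in this repository add streaming, error handling, and encrypted valve storage on top of this basic structure.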
### 1. Azure AI

> [!TIP]
> **Azure OpenAI integration made easy**
>
> This pipeline provides seamless integration with Azure OpenAI and other Azure AI models, with advanced features such as Azure Search integration and multiple model support.
- Enables interaction with Azure OpenAI and other Azure AI models.
- Supports Azure Search / RAG integration for enhanced document retrieval (Azure OpenAI only).
- **Native Open WebUI citations support**: Rich citation cards, source previews, relevance scores, and automatic `[docX]` → clickable markdown link conversion (Azure OpenAI only).
- **Relevance scores**: BM25 keyword and semantic rerank scores from Azure AI Search are displayed as a relevance percentage on citation cards, with independently configurable normalization via `BM25_SCORE_MAX` and `RERANK_SCORE_MAX`.
- Supports multiple models via `AZURE_AI_MODEL` (semicolon- or comma-separated, for example `gpt-4o;gpt-4o-mini`) or automatic model extraction from the Azure OpenAI URL.
- Large predefined model catalogue (GPT-4o, GPT-5, o3, o4-mini, Phi-4, DeepSeek-R1/V3, Mistral, Llama 3.x, Cohere, Grok, and more) via `USE_PREDEFINED_AZURE_AI_MODELS`.
- Customizable pipeline display prefix via `AZURE_AI_PIPELINE_PREFIX`.
- Flexible authentication: `api-key` header (default) or `Authorization: Bearer` token via `AZURE_AI_USE_AUTHORIZATION_HEADER`.
- **Token usage tracking**: Requests `stream_options.include_usage` in streaming mode so token counts are saved to the Open WebUI database.
- Filters valid parameters to ensure clean requests.
- Handles both streaming and non-streaming responses.
- Provides configurable error handling and timeouts.
- Supports encryption of sensitive information such as API keys.
- Azure AI Pipeline in Open WebUI
- Learn more about Azure AI
- [Azure AI citations documentation](docs/azure-ai-citations.md)
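Since `AZURE_AI_MODEL` accepts both semicolon- and comma-separated lists, a parsing helper for such a value might look like the following sketch (the function name is mine, not taken from the pipeline source):

```python
# Hypothetical helper showing how a semicolon- or comma-separated
# AZURE_AI_MODEL value could be split into individual model names.
import re


def parse_model_list(raw: str) -> list[str]:
    # Accept both ";" and "," as separators and drop surrounding whitespace
    # and empty entries.
    return [m.strip() for m in re.split(r"[;,]", raw) if m.strip()]
```

For example, `parse_model_list("gpt-4o;gpt-4o-mini")` yields `["gpt-4o", "gpt-4o-mini"]`, and `parse_model_list("gpt-4o, o3")` yields `["gpt-4o", "o3"]`.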
### 2. N8N Pipeline
> [!TIP]
> **N8N workflow automation integration**
>
> Connect Open WebUI with N8N to leverage powerful workflow automation. It includes configurable AI agent tool usage display for better transparency into agent actions.
- Integrates Open WebUI with N8N, an automation and workflow platform.
- **AI agent tool usage display (v2.2.0)**: Shows tool calls from N8N AI Agent workflows with three verbosity levels (minimal, compact, detailed) and customizable length limits (non-streaming mode only).
- Streaming and non-streaming support for real-time and batch data processing.
- Sends messages from Open WebUI to an N8N webhook.
- Supports real-time message processing with dynamic field handling.
- Enables automation of AI-generated responses inside an N8N workflow.
- Supports encryption of sensitive information such as API keys.
- Includes an example N8N workflow for the N8N Pipeline.
> [!IMPORTANT]
> **Tool usage display limitation**: The AI agent tool call display currently works only in non-streaming mode due to N8N's current streaming implementation. The code is future-proof and will work automatically when N8N adds `intermediateSteps` to streaming responses.
- N8N Pipeline in Open WebUI
- Learn more about N8N
- [N8N tool usage display documentation](docs/n8n-tool-usage-display.md)
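Conceptually, the pipeline posts each chat message to an N8N webhook and returns the workflow's response. The sketch below illustrates that round trip with the standard library; the payload field names (`chatInput`, `sessionId`) are assumptions for illustration, not the pipeline's actual schema.

```python
# Illustrative sketch of posting a chat message to an N8N webhook.
# Payload field names are assumptions, not the pipeline's real schema.
import json
import urllib.request


def build_payload(message: str, session_id: str) -> dict:
    # Kept as a separate pure helper so the payload shape is easy to test.
    return {"chatInput": message, "sessionId": session_id}


def send_to_n8n(webhook_url: str, message: str, session_id: str) -> str:
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(build_payload(message, session_id)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # The real pipeline adds authentication headers, streaming handling,
    # and configurable timeouts on top of this basic request.
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.read().decode()
```

On the N8N side, a Webhook trigger node would receive this JSON body and pass it to an AI Agent node, whose output is returned as the HTTP response.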
### 3. Infomaniak
- Integrates Open WebUI with Infomaniak, a Swiss web hosting and cloud services provider.
- Sends messages from Open WebUI to an Infomaniak AI Tool.
- Supports encryption of sensitive information such as API keys.
- Infomaniak Pipeline in Open WebUI
- Learn more about Infomaniak
### 4. Google Gemini

- Integrates Open WebUI with Google Gemini, a generative AI model by Google.
- Supports integration with the Google Generative AI API or Vertex AI API for content generation.
- Sends messages from Open WebUI to Google Gemini.
- Supports encryption of sensitive information such as API keys.
- Supports both streaming and non-streaming responses (streaming is automatically disabled for image generation models).
- **Thinking & reasoning**: Configurable thinking levels (`low`/`high`) for Gemini 3 models and thinking budgets (0–32,768 tokens) for Gemini 2.5 models, with per-chat override support.
- Provides configurable error handling and timeouts.
- **Advanced image processing**: Optimized image handling with configurable compression, resizing, and quality settings.
- **Configurable parameters**: Environment variables for image optimization (quality, max dimensions, format conversion).
- **Multi-image history**: Configurable history image limit, hash-based deduplication, and automatic `[Image N]` labels so the model can reference earlier images.
- **Image generation (Gemini 3)**: Configurable aspect ratio (for example `16:9` or `1:1`) and resolution (`1K`, `2K`, or `4K`) for Gemini 3 image models, with per-user valve overrides.
- **Video generation (Veo)**: Generate videos with Google Veo models (3.1, 3, 2). Configurable aspect ratio, resolution, duration, negative prompt, and person generation controls. Supports text-to-video and image-to-video for all supported Veo models. Videos are automatically uploaded and embedded with playback controls.
- **Token usage tracking**: Returns prompt, completion, and total token counts to Open WebUI for automatic persistence in the database.
- **Model whitelist & additional models**: Restrict the visible model list via `GOOGLE_MODEL_WHITELIST` and add SDK-unsupported models via `GOOGLE_MODEL_ADDITIONAL`.
- Grounding with Google Search via the `google_search_tool.py` filter
- Grounding with Vertex AI Search via the `vertex_ai_search_tool.py` filter
- Native tool calling support
- Configurable API version support
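To illustrate how a `GOOGLE_MODEL_WHITELIST` value could be applied to the model list, here is a small sketch. The helper name and the comma-separated wildcard-pattern convention are my assumptions for illustration; the pipeline's actual matching rules may differ.

```python
# Hypothetical sketch of applying a GOOGLE_MODEL_WHITELIST value to the list
# of models returned by the API. Pattern convention is assumed, not verified.
import fnmatch


def apply_whitelist(models: list[str], whitelist: str) -> list[str]:
    # Empty whitelist means "show everything"; patterns are comma-separated
    # and may contain shell-style wildcards such as "gemini-2.5-*".
    patterns = [p.strip() for p in whitelist.split(",") if p.strip()]
    if not patterns:
        return models
    return [m for m in models
            if any(fnmatch.fnmatch(m, p) for p in patterns)]
```

For example, the whitelist `"gemini-2.5-*"` would keep `gemini-2.5-pro` and `gemini-2.5-flash` while hiding unrelated models.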
- Google Gemini Pipeline in Open WebUI
- Learn more about Google Gemini
> [!NOTE]
> **For LiteLLM users**: To use Google Gemini models through LiteLLM, configure LiteLLM directly in Open WebUI under **Admin Panel → Settings → Connections → OpenAI** instead of using this pipeline. For more information about LiteLLM, visit the official LiteLLM GitHub repository.
## Filters

> [!NOTE]
> Filters allow preprocessing and post-processing of data within Open WebUI.
### 1. Time Token Tracker

> [!NOTE]
> **Performance monitoring for AI interactions**
>
> Track response times, token usage, and optionally send analytics to Azure Log Analytics for more complete observability.
- Measures response time and token usage for AI interactions.
- Supports tracking of total token usage and per-message token counts.
- Can calculate token usage for all messages or only a subset.
- Uses OpenAI's `tiktoken` library for token counting (accurate only for OpenAI models).
- Optionally sends logs to an Azure Log Analytics Workspace.
- Time Token Tracker in Open WebUI
- [How to set up Azure Log Analytics](docs/setup-azure-log-analytics.md)
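The core idea of such a filter can be sketched with Open WebUI's `inlet`/`outlet` filter hooks. This is a simplified stand-in, not the real `time_token_tracker.py`: the actual filter uses `tiktoken` for token counts, while the naive whitespace tokenizer here keeps the example dependency-free.

```python
# Simplified sketch of a time-and-token tracking filter. The real filter uses
# tiktoken encodings; a whitespace split stands in here for illustration.
import time


def count_tokens(text: str) -> int:
    # Stand-in tokenizer: real counts come from tiktoken's model encodings.
    return len(text.split())


class TimeTokenTracker:
    def inlet(self, body: dict) -> dict:
        # inlet runs before the model call: record the start time.
        self.start = time.monotonic()
        return body

    def outlet(self, body: dict) -> dict:
        # outlet runs after the response: compute elapsed time and tokens.
        elapsed = time.monotonic() - self.start
        tokens = sum(count_tokens(m.get("content", ""))
                     for m in body.get("messages", []))
        body.setdefault("stats", {})["elapsed_s"] = round(elapsed, 2)
        body["stats"]["approx_tokens"] = tokens
        return body
```

The real filter additionally supports per-message counts, configurable message subsets, and an optional export of these metrics to an Azure Log Analytics Workspace.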
## Integrations

- See the [Azure AI integration guide](docs/azure-ai-integration.md).
- See the [N8N integration guide](docs/n8n-integration.md).
- See the [Infomaniak integration guide](docs/infomaniak-integration.md).
- See the [Google Gemini integration guide](docs/google-gemini-integration.md).
## Contributing

> [!TIP]
> We welcome contributions of all kinds. You do not need to write code to contribute.

For detailed onboarding and contribution guidance, see CONTRIBUTING.md.
## License

This project is licensed under the Apache License 2.0. See the LICENSE file for details.
## Support

> [!NOTE]
> If you have any questions, suggestions, or need assistance, please open an issue to connect with us.

Created by owndev. Let's make Open WebUI even more amazing together.