diff --git a/integrations/llms/ollama.mdx b/integrations/llms/ollama.mdx
index f9117eaf..aefd912e 100644
--- a/integrations/llms/ollama.mdx
+++ b/integrations/llms/ollama.mdx
@@ -22,11 +22,13 @@ docker run -d -p 8787:8787 portkeyai/gateway:latest
Then, connect to your local Ollama instance:
+
```python Python
from portkey_ai import Portkey
portkey = Portkey(
- base_url="http://localhost:8787", # Your local Gateway
+ base_url="http://localhost:8787/v1", # Your local Gateway
provider="ollama",
custom_host="http://localhost:11434" # Your Ollama instance
)
@@ -37,6 +39,23 @@ response = portkey.chat.completions.create(
)
```
+```javascript Node.js
+import Portkey from 'portkey-ai';
+
+const portkey = new Portkey({
+ baseURL: 'http://localhost:8787/v1', // Your local Gateway
+ provider: 'ollama',
+ customHost: 'http://localhost:11434' // Your Ollama instance
+});
+
+const response = await portkey.chat.completions.create({
+ model: 'llama3',
+ messages: [{ role: 'user', content: 'Hello!' }]
+});
+```
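+
+The same request can be made against the gateway over plain HTTP. A minimal cURL sketch (the `x-portkey-provider` and `x-portkey-custom-host` header names are assumed from Portkey's REST conventions):
+
+```sh cURL
+curl http://localhost:8787/v1/chat/completions \
+  -H "Content-Type: application/json" \
+  -H "x-portkey-provider: ollama" \
+  -H "x-portkey-custom-host: http://localhost:11434" \
+  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello!"}]}'
+```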
+
@@ -134,7 +153,7 @@ const portkey = new Portkey({
-**Important:** For Ollama integration, you only need to pass the base URL to `customHost` **without** the version identifier (such as `/v1`) - Portkey handles the rest!
+**Important:** Pass the `custom_host` / `customHost` parameter (your Ollama URL) **without** the `/v1` suffix; Portkey handles the provider routing automatically. The `base_url` / `baseURL` parameter for a local gateway, however, **must** include `/v1` (e.g. `http://localhost:8787/v1`).
---