Add Ollama AI support with local model detection and configuration #7
lkolebka wants to merge 1 commit into
Conversation
Pull request overview
This pull request adds comprehensive Ollama integration for local LLM-based translation and title generation. It introduces a new "Ollama (Local)" provider that doesn't require API keys and supports auto-detection of locally installed models.
Key Changes:
- Added Ollama as a new LLM provider with local model detection capabilities
- Implemented UI for server URL configuration and model selection via dropdown
- Modified API key validation logic across all features to accommodate local providers that don't require authentication
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 13 comments.
| File | Description |
|---|---|
| llm.js | Added Ollama provider configuration, getOllamaUrl(), and fetchOllamaModels() for API interaction |
| index.html | Added Ollama settings section with URL input, detect button, and model dropdown |
| app.js | Implemented detectOllamaModels() for UI population, translateWithOllama(), formatBytes() utility, and settings persistence |
| magical-titles.js | Added generateTitlesWithOllama() for vision-based title generation and updated API key checks for local providers |
```js
// Ollama uses the /api/chat endpoint with images array
const response = await fetch(`${baseUrl}/api/chat`, {
```
The URL concatenation doesn't handle trailing slashes properly. If the user enters a URL with a trailing slash (e.g., 'http://localhost:11434/'), the resulting URL will be 'http://localhost:11434//api/chat' with a double slash. While this typically works, it's not ideal. Consider normalizing the URL by removing trailing slashes before concatenation.
Suggested change:
```js
const normalizedBaseUrl = baseUrl.replace(/\/+$/, '');
// Ollama uses the /api/chat endpoint with images array
const response = await fetch(`${normalizedBaseUrl}/api/chat`, {
```
```js
const sizes = ['B', 'KB', 'MB', 'GB'];
const i = Math.floor(Math.log(bytes) / Math.log(k));
```
The formatBytes function doesn't handle the case when the calculated index i is greater than the sizes array length. For very large byte values (e.g., terabytes), Math.floor(Math.log(bytes) / Math.log(k)) could return 4 or higher, but the sizes array only has 4 elements (indices 0-3). This would result in "undefined" being appended to the formatted string. Consider adding 'TB', 'PB' to the sizes array or clamping the index to the array length.
Suggested change:
```js
const sizes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB'];
const i = Math.min(Math.floor(Math.log(bytes) / Math.log(k)), sizes.length - 1);
```
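Putting the extended unit list and the clamped index together, a complete helper might look like the sketch below. Note that the diff only shows two lines of the function, so the `k = 1024` base, the zero-byte guard, and the two-decimal rounding are assumptions, not code from this PR:

```javascript
// Hypothetical complete formatBytes — only two lines appear in the diff,
// so the base, guard, and rounding below are illustrative assumptions.
function formatBytes(bytes) {
  if (!bytes || bytes <= 0) return '0 B';
  const k = 1024;
  const sizes = ['B', 'KB', 'MB', 'GB', 'TB', 'PB'];
  // Clamp the index so absurdly large values fall back to the largest unit
  // instead of indexing past the end of the array.
  const i = Math.min(Math.floor(Math.log(bytes) / Math.log(k)), sizes.length - 1);
  return `${parseFloat((bytes / Math.pow(k, i)).toFixed(2))} ${sizes[i]}`;
}
```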
That's not realistic in this case.
```js
// Populate dropdown
select.innerHTML = models.map(model => {
  const name = model.name;
  const size = model.size ? ` (${formatBytes(model.size)})` : '';
  const selected = name === savedModel ? ' selected' : '';
  return `<option value="${name}"${selected}>${name}${size}</option>`;
}).join('');
```
There is a potential XSS vulnerability here. Model names returned from the Ollama API are directly interpolated into HTML without escaping. If a malicious model name contains HTML/JavaScript code, it could be executed. Consider escaping the model name before inserting it into the HTML, or use textContent instead of innerHTML to populate the options.
Suggested change:
```js
// Populate dropdown without using innerHTML to avoid XSS
// Clear existing options
select.innerHTML = '';
models.forEach(model => {
  const name = model.name;
  const size = model.size ? ` (${formatBytes(model.size)})` : '';
  const option = document.createElement('option');
  option.value = name;
  option.textContent = `${name}${size}`;
  if (name === savedModel) {
    option.selected = true;
  }
  select.appendChild(option);
});
```
Pretty low probability that this would ever happen. We should do this nonetheless.
```js
const baseUrl = urlInput.value.trim() || llmProviders.ollama.defaultUrl;
```
The server URL input lacks validation. Users could enter invalid URLs (e.g., 'not-a-url', 'javascript:alert(1)') which could cause unexpected behavior or errors when constructing API endpoints. Consider validating that the URL is properly formatted (starts with http:// or https://) before attempting to use it, or sanitizing the input.
Suggested change:
```js
const rawUrl = urlInput.value.trim();
let baseUrl = rawUrl || llmProviders.ollama.defaultUrl;
// Validate user-provided URL (if any) to ensure it is a proper http(s) URL
if (rawUrl) {
  try {
    const parsed = new URL(baseUrl);
    if (parsed.protocol !== 'http:' && parsed.protocol !== 'https:') {
      throw new Error('Invalid protocol');
    }
    // Normalize to origin so we always have a clean base URL
    baseUrl = parsed.origin;
  } catch (e) {
    status.textContent = 'Invalid Ollama URL. Please use a valid http:// or https:// address.';
    status.className = 'settings-key-status error';
    btn.disabled = false;
    btn.innerHTML = `
      <svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2">
        <path d="M23 4v6h-6M1 20v-6h6"/>
        <path d="M3.51 9a9 9 0 0114.85-3.36L23 10M1 14l4.64 4.36A9 9 0 0020.49 15"/>
      </svg>
      Detect
    `;
    return;
  }
}
```
```js
if (urlInput) {
  const url = urlInput.value.trim() || config.defaultUrl;
  localStorage.setItem('ollamaUrl', url);
}
if (modelInput) {
  const model = modelInput.value.trim() || config.defaultModel;
  localStorage.setItem(config.modelStorageKey, model);
}
if (status) {
```
The server URL input lacks validation. Invalid URLs are saved to localStorage without verification. Consider validating that the URL is properly formatted (starts with http:// or https://) before saving it to localStorage.
Suggested change:
```js
let localSettingsValid = true;
if (urlInput) {
  const url = urlInput.value.trim() || config.defaultUrl;
  const hasValidScheme = /^https?:\/\//i.test(url);
  if (!hasValidScheme) {
    if (status) {
      status.textContent = 'Invalid URL. Must start with http:// or https://';
      status.className = 'settings-key-status error';
    }
    if (provider === selectedProvider) {
      allValid = false;
    }
    localSettingsValid = false;
  } else {
    localStorage.setItem('ollamaUrl', url);
  }
}
if (modelInput) {
  const model = modelInput.value.trim() || config.defaultModel;
  localStorage.setItem(config.modelStorageKey, model);
}
if (status && localSettingsValid) {
```
Same thing: URLs should be validated to give the user feedback.
```js
// Auto-detect Ollama models when switching to Ollama
if (provider === 'ollama') {
  detectOllamaModels();
}
```
Auto-detecting models when switching to Ollama could create a race condition. If a user quickly switches between providers or closes the settings modal before detection completes, the async detectOllamaModels() call could update UI elements that no longer exist or are in an unexpected state. Consider checking if the modal is still open and the Ollama section is still visible before updating the UI in the detectOllamaModels function.
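One way to implement that guard is to extract a small predicate and re-check it after each `await`. The element structure below is an illustrative assumption — the PR's actual modal and section IDs aren't shown in this diff:

```javascript
// Returns true only when it is still safe to update the Ollama UI.
// `modal` and `section` stand for the settings modal and the Ollama
// settings section; the names are ours, not identifiers from this PR.
function canUpdateOllamaUi(modal, section) {
  return Boolean(modal) && !modal.hidden && Boolean(section) && !section.hidden;
}
```

Inside `detectOllamaModels()`, the async flow would then bail out after the fetch resolves if the user has already navigated away, e.g. `if (!canUpdateOllamaUi(modal, section)) return;` before touching the dropdown.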
```html
</h4>
<p class="settings-description" style="margin-bottom: 12px;">Run AI models locally with Ollama. No API key required.</p>
<div class="settings-model-group" style="margin-bottom: 12px;">
<label class="settings-model-label">Server URL</label>
```
The Server URL input field lacks an accessible label association. While there is a visible label element, it should be associated with the input using the 'for' attribute matching the input's id, or the input should have an 'aria-label' attribute. This would improve accessibility for screen reader users.
Suggested change:
```html
<label for="settings-ollama-url" class="settings-model-label">Server URL</label>
```
```html
</div>
</div>
<div class="settings-model-group">
<label class="settings-model-label">Model</label>
```
The Model select element lacks an accessible label association. While there is a visible label element, it should be associated with the select using the 'for' attribute matching the select's id, or the select should have an 'aria-label' attribute. This would improve accessibility for screen reader users.
Suggested change:
```html
<label for="settings-model-ollama" class="settings-model-label">Model</label>
```
```js
try {
  const response = await fetch(`${url}/api/tags`, {
```
The URL concatenation doesn't handle trailing slashes properly. If the user enters a URL with a trailing slash (e.g., 'http://localhost:11434/'), the resulting URL will be 'http://localhost:11434//api/tags' with a double slash. While this typically works, it's not ideal. Consider normalizing the URL by removing trailing slashes before concatenation.
Suggested change:
```js
const normalizedUrl = url.replace(/\/+$/, '');
try {
  const response = await fetch(`${normalizedUrl}/api/tags`, {
```
|
|
```js
const response = await fetch(`${baseUrl}/api/chat`, {
```
The URL concatenation doesn't handle trailing slashes properly. If the user enters a URL with a trailing slash (e.g., 'http://localhost:11434/'), the resulting URL will be 'http://localhost:11434//api/chat' with a double slash. While this typically works, it's not ideal. Consider normalizing the URL by removing trailing slashes before concatenation.
Suggested change:
```js
const normalizedBaseUrl = baseUrl.replace(/\/+$/, '');
const response = await fetch(`${normalizedBaseUrl}/api/chat`, {
```
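Since the same trailing-slash normalization is suggested at three call sites (`/api/chat` twice and `/api/tags` once), it may be cleaner to extract one shared helper. The name `normalizeOllamaUrl` is a suggestion of ours, not an identifier from this PR:

```javascript
// Suggested shared helper — the name is illustrative, not from the PR.
// Strips trailing slashes (and stray whitespace) so that templates like
// `${base}/api/chat` never produce a double slash.
function normalizeOllamaUrl(url) {
  return url.trim().replace(/\/+$/, '');
}
```

Each call site could then build its endpoint the same way, e.g. `` fetch(`${normalizeOllamaUrl(baseUrl)}/api/tags`) ``, instead of repeating the regex.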
|
@copilot open a new pull request to apply changes based on the comments in this thread
Added Ollama integration for local LLM translation:
- Auto-detection of models
- UI changes
- How it works