Add LM Studio as a local LLM provider#195
Merged
Conversation
Adds LM Studio support using the existing OpenAI SDK with a custom baseURL (http://localhost:1234/v1), since LM Studio exposes a fully OpenAI-compatible REST API. Uses the json_object response format (not Structured Outputs) for broad model compatibility, with a graceful tool-call fallback mirroring the Ollama provider.

Changes:
- packages/llm: LMStudioProvider class, updated ProviderType union, SettingsConfig, provider-factory, and index exports
- packages/database: add 'lmstudio' to the llm_provider enum, add lmstudio_base_url column, migration 0019
- apps/api: new /api/v1/lmstudio/models and /health routes
- apps/web: LM Studio provider button, dynamic model list, base URL input in Advanced Settings
- .env.example: LM_STUDIO_BASE_URL variable

https://claude.ai/code/session_014vPCcYQRHJpXauLWVzqVvG
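For reference, a minimal sketch of the provider approach this commit describes, assuming the openai npm package; the class name matches the commit summary, but the method and field names here are illustrative, not the actual packages/llm implementation:

```ts
import OpenAI from 'openai';

// LM Studio speaks the OpenAI REST API, so the existing SDK client can
// simply be pointed at it via a custom baseURL. Default per the commit.
const DEFAULT_BASE_URL = 'http://localhost:1234/v1';

export class LMStudioProvider {
  private client: OpenAI;

  constructor(baseURL: string = DEFAULT_BASE_URL) {
    // LM Studio ignores the API key, but the SDK requires a non-empty value.
    this.client = new OpenAI({ baseURL, apiKey: 'lm-studio' });
  }

  // Hypothetical helper: not the real provider interface.
  async complete(model: string, prompt: string): Promise<string> {
    const response = await this.client.chat.completions.create({
      model,
      messages: [{ role: 'user', content: prompt }],
      // json_object works across most local models; Structured Outputs
      // (json_schema) does not, which is why the commit avoids it.
      response_format: { type: 'json_object' },
    });
    return response.choices[0]?.message?.content ?? '';
  }
}
```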
Fix lmstudio route: use fetch instead of openai SDK

apps/api doesn't have openai as a direct dependency. Replace the OpenAI client with a direct fetch to the LM Studio /v1/models endpoint (OpenAI-compatible), fixing the TS2307 and TS7006 build errors.

https://claude.ai/code/session_014vPCcYQRHJpXauLWVzqVvG
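A minimal sketch of that fetch-based lookup, assuming Node 18+'s built-in fetch; the helper name and response typing are illustrative:

```ts
// LM Studio's /v1/models endpoint returns the OpenAI-style
// { data: [...] } envelope, so no SDK is needed to read it.
interface LMStudioModel {
  id: string;
}

export async function listLMStudioModels(baseURL: string): Promise<LMStudioModel[]> {
  const response = await fetch(`${baseURL}/models`);
  if (!response.ok) {
    throw new Error(`LM Studio responded with ${response.status}`);
  }
  // Explicit typing here is what resolves the implicit-any (TS7006) class of error.
  const body = (await response.json()) as { data: LMStudioModel[] };
  return body.data;
}
```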
SlayterDev (Owner, Author) commented on Mar 12, 2026:
@claude check these review comments and address them please
```ts
const baseURL = settings.lmstudioBaseUrl || DEFAULT_BASE_URL;

try {
  const response = await fetch(`${baseURL}/models`, {
```
SlayterDev (Owner, Author):
The user shouldn't have to specify "api/v1" in their base URL. They should only need to set host and port.
```tsx
<option value="">Use currently loaded model</option>
{lmstudioModels.map((model) => (
  <option key={model.id} value={model.id}>
    {model.id}
```
SlayterDev (Owner, Author):
The LM Studio API has a display_name property. Let's add that to the type and use it for the user-facing display.
Address PR review feedback for LM Studio support

- Base URL: users now enter host:port only (e.g. http://localhost:1234); the /v1 path is appended automatically by the provider and API route. Added a toBaseURL() helper that also handles the case where a user has already stored a /v1 URL (backwards compatible).
- Models: added display_name to the LMStudioModel type (API route, web client) and use it in the SettingsPage dropdown for user-facing labels.
- Updated the migration default and schema default to http://localhost:1234.
- Updated .env.example accordingly.

https://claude.ai/code/session_014vPCcYQRHJpXauLWVzqVvG
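The toBaseURL() name comes from this commit message, but its body is not shown in the thread; under the described behavior it could be as simple as this assumed sketch:

```ts
// Appends /v1 to a host:port base URL, but leaves values that already
// end in /v1 untouched (backwards compatibility with stored settings).
export function toBaseURL(stored: string): string {
  const trimmed = stored.replace(/\/+$/, ''); // drop trailing slashes
  return trimmed.endsWith('/v1') ? trimmed : `${trimmed}/v1`;
}

// toBaseURL('http://localhost:1234')    -> 'http://localhost:1234/v1'
// toBaseURL('http://localhost:1234/v1') -> 'http://localhost:1234/v1'
```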
SlayterDev added a commit that referenced this pull request on Mar 13, 2026:
* Add LM Studio as a local LLM provider
* Fix lmstudio route: use fetch instead of openai SDK
* Address PR review feedback for LM Studio support
* LM Studio fixes

Co-authored-by: Claude <noreply@anthropic.com>