
Add LM Studio as a local LLM provider#195

Merged
SlayterDev merged 4 commits into main from claude/add-lm-studio-support-mSsZ8 on Mar 13, 2026
Conversation

@SlayterDev
Owner

Adds LM Studio support using the existing OpenAI SDK with a custom
baseURL (http://localhost:1234/v1), since LM Studio exposes a
fully OpenAI-compatible REST API. Uses json_object response format
(not Structured Outputs) for broad model compatibility, with graceful
tool-call fallback mirroring the Ollama provider.

Changes:

  • packages/llm: LMStudioProvider class, updated ProviderType union,
    SettingsConfig, provider-factory, and index exports
  • packages/database: add 'lmstudio' to llm_provider enum, add
    lmstudio_base_url column, migration 0019
  • apps/api: new /api/v1/lmstudio/models and /health routes
  • apps/web: LM Studio provider button, dynamic model list, base URL
    input in Advanced Settings
  • .env.example: LM_STUDIO_BASE_URL variable

https://claude.ai/code/session_014vPCcYQRHJpXauLWVzqVvG
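Because LM Studio's REST API is OpenAI-compatible, the request the provider sends can be sketched on the wire without the SDK. This is a minimal illustration only; `buildChatRequest` and `chat` are hypothetical names, not the actual `LMStudioProvider` methods.

```typescript
// Illustrative sketch of the request shape described above; names are assumptions.
const DEFAULT_BASE_URL = "http://localhost:1234/v1";

interface ChatRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  response_format: { type: "json_object" };
}

function buildChatRequest(model: string, prompt: string): ChatRequest {
  return {
    model,
    messages: [{ role: "user", content: prompt }],
    // json_object (rather than Structured Outputs) keeps a wide range of
    // local models working, as noted in the PR description
    response_format: { type: "json_object" },
  };
}

async function chat(prompt: string, model = "", baseURL = DEFAULT_BASE_URL): Promise<string> {
  const res = await fetch(`${baseURL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest(model, prompt)),
  });
  const json = await res.json();
  // Fall back to an empty string when no content comes back
  return json.choices?.[0]?.message?.content ?? "";
}
```

An empty `model` field leans on LM Studio serving whichever model is currently loaded, matching the "Use currently loaded model" option in the settings UI.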

claude added 2 commits March 12, 2026 21:38
Add LM Studio as a local LLM provider
Fix lmstudio route: use fetch instead of openai SDK

apps/api doesn't have openai as a direct dependency.
Replace OpenAI client with a direct fetch to the LM Studio
/v1/models endpoint (OpenAI-compatible), fixing the TS2307
and TS7006 build errors.

https://claude.ai/code/session_014vPCcYQRHJpXauLWVzqVvG
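The dependency-free route fix described in this commit can be sketched as a plain `fetch` against the OpenAI-compatible `/v1/models` endpoint. `parseModels` and `listModels` are illustrative names under that assumption, not the actual route code.

```typescript
// Sketch of a fetch-based models route with no openai dependency.
interface LMStudioModel {
  id: string;
  display_name?: string;
}

// Pure mapping from the OpenAI-style { data: [...] } payload to our type
function parseModels(payload: { data?: { id: string; display_name?: string }[] }): LMStudioModel[] {
  return (payload.data ?? []).map((m) => ({ id: m.id, display_name: m.display_name }));
}

async function listModels(baseURL: string): Promise<LMStudioModel[]> {
  const res = await fetch(`${baseURL}/v1/models`);
  if (!res.ok) throw new Error(`LM Studio /v1/models returned ${res.status}`);
  return parseModels(await res.json());
}
```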
Owner Author

@SlayterDev SlayterDev left a comment


@claude check these review comments and address them please

const baseURL = settings.lmstudioBaseUrl || DEFAULT_BASE_URL;

try {
  const response = await fetch(`${baseURL}/models`, {
Owner Author


The user shouldn't have to specify "api/v1" in their base URL. They should only need to set the host and port.

Comment thread: apps/web/src/pages/SettingsPage.tsx (Outdated)
<option value="">Use currently loaded model</option>
{lmstudioModels.map((model) => (
<option key={model.id} value={model.id}>
{model.id}
Owner Author


The LM Studio API has a display_name property. Let's add that to the type and use it for the user-facing display.
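The suggested label choice can be sketched in a couple of lines; `label` is an illustrative helper and `display_name` is assumed optional on the model type, since not every payload may include it.

```typescript
// Prefer the human-readable name when LM Studio provides one, else the id.
interface LMStudioModel {
  id: string;
  display_name?: string;
}

const label = (m: LMStudioModel): string => m.display_name ?? m.id;
```

In the dropdown above this would render as `<option key={model.id} value={model.id}>{label(model)}</option>`, keeping the stable `id` as the submitted value.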

claude and others added 2 commits March 13, 2026 00:02
Address PR review feedback for LM Studio support

- Base URL: users now enter host:port only (e.g. http://localhost:1234);
  the /v1 path is appended automatically by the provider and API route.
  Added a toBaseURL() helper that also handles the case where a user
  has already stored a /v1 URL (backwards compatible).
- Models: added display_name to LMStudioModel type (API route, web
  client) and use it in the SettingsPage dropdown for user-facing labels.
- Updated migration default and schema default to http://localhost:1234.
- Updated .env.example accordingly.

https://claude.ai/code/session_014vPCcYQRHJpXauLWVzqVvG
@SlayterDev SlayterDev merged commit c830704 into main Mar 13, 2026
2 checks passed
@SlayterDev SlayterDev deleted the claude/add-lm-studio-support-mSsZ8 branch March 13, 2026 13:08
SlayterDev added a commit that referenced this pull request Mar 13, 2026
* Add LM Studio as a local LLM provider

* Fix lmstudio route: use fetch instead of openai SDK

* Address PR review feedback for LM Studio support

* LM Studio fixes

---------

Co-authored-by: Claude <noreply@anthropic.com>

2 participants