feat: Add OpenAI-compatible provider for local LLMs and custom APIs #1

Open
d33pstatetech-stack wants to merge 2 commits into main from feat/openai-compatible-provider

Conversation

@d33pstatetech-stack (Owner)

Summary

Adds a new "Custom (OpenAI-Compatible)" provider option that lets Siftly work with any service implementing the OpenAI chat completions API.

Supported Endpoints

Local LLMs:

  • Ollama (http://localhost:11434/v1)
  • llama.cpp (http://localhost:8080/v1)
  • vLLM (http://localhost:8000/v1)
  • LM Studio (http://localhost:1234/v1)
  • LocalAI, text-generation-webui

Cloud APIs:

  • Together AI, Groq, Fireworks, Deepseek, Mistral, and any OpenAI-compatible provider

Changes

| File | What |
| --- | --- |
| `lib/openai-compatible-auth.ts` | New auth module — flexible base URL, optional API key (local servers often need none) |
| `lib/ai-client.ts` | `OpenAICompatibleClient` class + updated `AIProviderType` union |
| `lib/settings.ts` | New provider type, free-form model name getter (no allowlist — models vary by endpoint) |
| `app/api/settings/route.ts` | CRUD for base URL, model name, and API key settings |
| `app/api/settings/test/route.ts` | Connection test with friendly error messages (ECONNREFUSED, 404, etc.) |
| `app/settings/page.tsx` | New purple "Custom" provider tab with endpoint URL, model name, optional API key, and test button |
| `.env.example` | Documented `OPENAI_COMPATIBLE_BASE_URL` and `OPENAI_COMPATIBLE_API_KEY` with common examples |
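The two documented variables could be set like this (values are illustrative; see `.env.example` on the branch for the exact comments):

```
# OpenAI-compatible provider (optional)
# Common base URLs: Ollama http://localhost:11434/v1, vLLM http://localhost:8000/v1
OPENAI_COMPATIBLE_BASE_URL=http://localhost:11434/v1
# Leave unset for local servers; cloud providers usually require a key
OPENAI_COMPATIBLE_API_KEY=
```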

How It Works

  1. Select Custom in Settings → AI Provider
  2. Enter the endpoint URL (e.g., http://localhost:11434/v1 for Ollama)
  3. Type the model name (e.g., llama3.2)
  4. Optionally add an API key (cloud providers need one, local servers usually don't)
  5. Click Test Connection to verify

All existing AI features (categorization, search, image analysis, mind maps) work with the new provider.
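Under the hood, every such endpoint accepts the same chat-completions POST, which is why one client can cover Ollama, Groq, Together, and the rest. A minimal sketch of how the request might be assembled — `buildChatRequest` is an illustrative name, not the actual `lib/ai-client.ts` code:

```typescript
// Sketch of assembling a request for any OpenAI-compatible endpoint.
// Assumption: the server exposes POST {baseUrl}/chat/completions.
interface ChatRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildChatRequest(
  baseUrl: string,
  model: string,
  messages: { role: string; content: string }[],
  apiKey?: string,
): ChatRequest {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  // Local servers (Ollama, llama.cpp, LM Studio) typically accept
  // requests without a key, so the Authorization header is optional.
  if (apiKey) headers["Authorization"] = `Bearer ${apiKey}`;
  return {
    url: `${baseUrl.replace(/\/+$/, "")}/chat/completions`,
    headers,
    body: JSON.stringify({ model, messages }),
  };
}
```

The same shape works for cloud providers by supplying the key and swapping the base URL.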

Add a new 'Custom (OpenAI-Compatible)' provider option that supports
any service implementing the OpenAI chat completions API:

- Local LLMs: Ollama, llama.cpp, vLLM, LM Studio, LocalAI
- Cloud APIs: Together AI, Groq, Fireworks, Deepseek, Mistral, etc.

Changes:
- lib/openai-compatible-auth.ts: New auth module (flexible base URL, optional API key)
- lib/ai-client.ts: OpenAICompatibleClient class, updated AIProviderType
- lib/settings.ts: New provider type, free-form model getter
- app/api/settings/route.ts: CRUD for base URL, model name, API key
- app/api/settings/test/route.ts: Connection test with friendly errors
- app/settings/page.tsx: New provider tab with config UI and test button
- .env.example: Documented new env vars with common endpoint examples

@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9b6736e999


Comment thread: `app/settings/page.tsx`

```diff
 }

-function ProviderToggle({ value, onChange }: { value: 'anthropic' | 'openai' | 'minimax'; onChange: (v: 'anthropic' | 'openai' | 'minimax') => void }) {
+function ProviderToggle({ value, onChange }: { value: 'anthropic' | 'openai' | 'minimax' | 'openai_compatible'; onChange: (v: 'anthropic' | 'openai' | 'minimax' | 'openai_compatible') => void }) {
```


P1: Render a Custom provider option in the provider toggle

The settings UI adds openai_compatible to provider state and save logic, but this toggle still renders only three buttons (Anthropic/OpenAI/MiniMax). As a result, users cannot switch to the new custom provider from the UI unless they manually set the DB/API value first, which makes the main feature path effectively inaccessible.
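One way to keep the toggle from drifting out of sync again is to drive it from a single provider list rather than hard-coding three buttons. A hypothetical sketch — ids and labels mirror the PR, but the component details are assumed, not the actual fix:

```typescript
// Data-driven provider list: the UI renders one button per entry,
// so a new provider cannot be added to the type without appearing here.
const PROVIDERS = [
  { id: "anthropic", label: "Anthropic" },
  { id: "openai", label: "OpenAI" },
  { id: "minimax", label: "MiniMax" },
  { id: "openai_compatible", label: "Custom" },
] as const;

// The union type is derived from the list instead of being written twice.
type AIProviderType = (typeof PROVIDERS)[number]["id"];

function providerLabel(id: AIProviderType): string {
  // Non-null assertion is safe: id is constrained to ids in PROVIDERS.
  return PROVIDERS.find((p) => p.id === id)!.label;
}
```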


Comment thread: `app/api/settings/route.ts` (lines +173 to +174)

```ts
if (openaiCompatibleModel !== undefined) {
  const trimmed = openaiCompatibleModel.trim()
```

P2: Validate OpenAI-compatible fields before calling trim

These new settings branches call .trim() without checking that the incoming JSON value is a string. If a client sends a malformed payload (for example {"openaiCompatibleModel":123}), the route throws a runtime TypeError and returns a 500 instead of a clean 400 validation error. The older key handlers already guard with typeof ... === 'string', so these branches should do the same to avoid avoidable server errors.


…ble input types

Address Codex review feedback:
- Render the 'Custom' button in ProviderToggle so users can actually
  switch to the openai_compatible provider from the UI
- Add typeof string checks on openaiCompatibleModel, openaiCompatibleBaseUrl,
  and openaiCompatibleApiKey before calling .trim(), matching the pattern
  used by the existing key handlers to return 400 instead of 500 on
  malformed payloads
