
Bug: Custom Gemini models fail with OpenAI 401 error due to strict GEMINI_MODELS list fallback #87

@hakstudio

Description


When attempting to use a newer or custom Gemini model (e.g., gemini-3.1-flash-lite-preview) by setting PREFERRED_PROVIDER="google" and defining SMALL_MODEL / BIG_MODEL in .env, the server fails and tries to route the request to OpenAI instead.

This results in a 401 Unauthorized Error (Incorrect API key provided) because the application uses the dummy OPENAI_API_KEY when it fails to recognize the model as a Gemini model.
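For reference, a minimal `.env` that reproduces the misrouting, assuming the variable names from this report (the API key values are placeholders):

```shell
# Route all requests through the Google/Gemini provider
PREFERRED_PROVIDER="google"

# A Gemini model not present in the hardcoded GEMINI_MODELS list
BIG_MODEL="gemini-3.1-flash-lite-preview"
SMALL_MODEL="gemini-3.1-flash-lite-preview"

# Placeholder keys: GEMINI_API_KEY is the real credential; OPENAI_API_KEY
# is a dummy value that the buggy fallback ends up sending to OpenAI,
# producing the 401 Unauthorized error.
GEMINI_API_KEY="your-gemini-key"
OPENAI_API_KEY="dummy"
```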

Root Cause: In server.py, the validators check strictly whether the model name appears in the hardcoded GEMINI_MODELS list. If it doesn't, the fallback logic incorrectly assumes the model belongs to OpenAI.

Suggested Fix: In server.py (MessagesRequest and TokenCountRequest validators), update the fallback checks to allow any model name starting with "gemini" alongside the list check.

For example, change:

```python
if clean_v in GEMINI_MODELS:
```

To:

```python
if clean_v in GEMINI_MODELS or clean_v.startswith("gemini"):
```

This tiny adjustment allows passing arbitrary valid Gemini model strings through the proxy without throwing OpenAI authentication errors.
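To illustrate, here is a sketch of the routing logic with the proposed fix applied. The list contents and the `route_model` helper are illustrative, not taken from server.py; the real code lives inside the MessagesRequest and TokenCountRequest validators:

```python
# Illustrative stand-in for the hardcoded list in server.py.
GEMINI_MODELS = {"gemini-1.5-pro", "gemini-1.5-flash"}

def route_model(clean_v: str) -> str:
    """Map a bare model name to a provider-prefixed one (hypothetical helper)."""
    # Proposed fix: accept any name starting with "gemini", not only the
    # hardcoded list, so newer or custom Gemini models are not misrouted
    # to OpenAI with the dummy OPENAI_API_KEY.
    if clean_v in GEMINI_MODELS or clean_v.startswith("gemini"):
        return f"gemini/{clean_v}"
    # Original fallback: anything unrecognized is treated as OpenAI.
    return f"openai/{clean_v}"
```

With this change, the model from the report routes correctly: `route_model("gemini-3.1-flash-lite-preview")` returns `"gemini/gemini-3.1-flash-lite-preview"` instead of falling through to the OpenAI branch.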
