Added glm-4p7 and qwen3-vl-30b-a3b-instruct #431
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 9122c8db10
      "fireworks"
    ]
  },
  "fireworks_ai/accounts/fireworks/models/qwen3-vl-30b-a3b-instruct": {
Register provider mapping for qwen3-vl model
This new model is added to model_list.json but not to AvailableEndpointTypes, so getModelEndpointTypes() falls back to DefaultEndpointTypes["openai"] (openai/azure) instead of fireworks. In the edge fallback path (when the /api/secret lookup fails), the auth token is then tagged with the wrong provider type, and requests for this model can be routed to the wrong endpoint/auth scheme. Please add fireworks_ai/accounts/fireworks/models/qwen3-vl-30b-a3b-instruct to AvailableEndpointTypes, as was done for glm-4p7.
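A minimal sketch of the fallback behavior described above, assuming the shapes of AvailableEndpointTypes, DefaultEndpointTypes, and getModelEndpointTypes() from the comment (the actual definitions in the repo may differ):

```typescript
// Hypothetical reconstruction of the lookup the review comment describes.
// Names and shapes are assumptions based on the comment, not the repo source.
const DefaultEndpointTypes: Record<string, string[]> = {
  openai: ["openai", "azure"],
};

const AvailableEndpointTypes: Record<string, string[]> = {
  "fireworks_ai/accounts/fireworks/models/glm-4p7": ["fireworks"],
  // Proposed fix: register the new model so it resolves to fireworks too.
  "fireworks_ai/accounts/fireworks/models/qwen3-vl-30b-a3b-instruct": [
    "fireworks",
  ],
};

function getModelEndpointTypes(model: string): string[] {
  // Without an entry above, the lookup misses and the model silently
  // falls back to the openai/azure endpoint types.
  return AvailableEndpointTypes[model] ?? DefaultEndpointTypes["openai"];
}
```

With the entry present, `getModelEndpointTypes("fireworks_ai/accounts/fireworks/models/qwen3-vl-30b-a3b-instruct")` returns `["fireworks"]`; without it, the edge fallback path would tag the auth token with openai/azure.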
aswink left a comment
Worth double-checking that running sync-models won't overwrite this change. Unfortunately, it seems this model isn't in https://raw.githubusercontent.com/BerriAI/litellm/refs/heads/main/litellm/model_prices_and_context_window_backup.json yet; I'm not sure how we typically handle missing values.
Got the details from:
https://fireworks.ai/models/fireworks/glm-4p7
https://fireworks.ai/models/fireworks/qwen3-vl-30b-a3b-instruct
https://docs.z.ai/guides/llm/glm-4.7