feat: add Baidu Qianfan as a first-class LLM provider#2354

Open
jimmyzhuu wants to merge 1 commit into arc53:main from jimmyzhuu:codex/qianfan-provider

Conversation

@jimmyzhuu

  • What kind of change does this PR introduce?
    Feature

  • Why was this change needed?
    DocsGPT already supports several first-class cloud LLM providers, but Baidu Qianfan users have so far had to rely on the generic OpenAI-compatible configuration path.

This change adds Qianfan as a dedicated provider so it can be configured more clearly in .env, registered in the model registry, and documented alongside the other supported providers.

To keep the initial integration easy to review and maintain, this first version is intentionally scoped to text chat with ernie-5.0. Attachments, tool calling, and structured output remain disabled until they are verified end-to-end against the Qianfan API.

  • Other information
    What changed:
  • Added a dedicated QianfanLLM provider under application/llm
  • Registered qianfan in the LLM creator, handler creator, settings, and model utilities
  • Added QIANFAN_API_KEY support for multi-provider deployments
  • Registered ernie-5.0 as the first built-in Qianfan model
  • Updated deployment and cloud provider docs with Qianfan configuration examples and scope notes
  • Added focused unit tests for provider wiring, model registry, API key resolution, and handler mapping
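To illustrate the wiring described above, here is a minimal sketch of the dedicated-provider pattern. All class and registry names besides `QianfanLLM` and `ernie-5.0` are assumptions for illustration, not DocsGPT's actual identifiers:

```python
class BaseLLM:
    """Stand-in for the shared LLM base class (hypothetical name)."""

    def __init__(self, api_key: str):
        self.api_key = api_key


class QianfanLLM(BaseLLM):
    """Text-chat-only provider; attachments, tool calling, and
    structured output stay disabled in this first version."""

    default_model = "ernie-5.0"

    def gen(self, model: str, messages: list[dict]) -> str:
        # A real implementation would call the Qianfan chat API here.
        raise NotImplementedError


# Illustrative creator registry mapping LLM_PROVIDER values to classes.
LLM_REGISTRY: dict[str, type[BaseLLM]] = {
    "qianfan": QianfanLLM,
    # "openai": OpenAILLM, ... other first-class providers
}


def create_llm(provider: str, api_key: str) -> BaseLLM:
    """Instantiate the provider selected via LLM_PROVIDER."""
    try:
        return LLM_REGISTRY[provider](api_key)
    except KeyError:
        raise ValueError(f"Unknown LLM provider: {provider}")
```

The point of the pattern is that adding a provider touches only the registry entry and the new class, which is why the change set above is mostly registration plumbing.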

User-facing configuration:

  • LLM_PROVIDER=qianfan
  • QIANFAN_API_KEY=YOUR_QIANFAN_API_KEY
  • LLM_NAME=ernie-5.0

The initial integration is intentionally scoped to text chat only.

Tests / validation:

  • Added unit tests for the new Qianfan provider
  • Added registry and API key resolution coverage for qianfan
  • Added handler mapping coverage
  • Verified the Qianfan default model path in utils

Focused test subset run locally:

  • tests/llm/test_qianfan_llm.py
  • tests/llm/handlers/test_handler_creator.py
  • tests/core/test_model_settings.py
  • tests/core/test_model_utils.py
  • tests/test_utils.py -k qianfan

Notes for reviewers:

  • This follows the same dedicated-provider pattern already used for other OpenAI-compatible providers such as Novita
  • The first version intentionally keeps advanced capabilities disabled until they are verified end-to-end
  • I skipped a full-suite run because the offline test environment hits unrelated tiktoken download failures in utility tests

@vercel

vercel Bot commented Apr 3, 2026

@jimmyzhuu is attempting to deploy a commit to the Arc53 Team on Vercel.

A member of the Team first needs to authorize it.

