
feat: OpenAI LLM integration #764

Open
sharifajahanshaik wants to merge 1 commit into juspay:release from sharifajahanshaik:openai-llm-integration

Conversation


@sharifajahanshaik sharifajahanshaik commented May 15, 2026

Summary by CodeRabbit

  • New Features
    • Added OpenAI as a fully supported LLM provider for voice agent operations, integrated with the existing provider framework. Includes API key configuration, model selection, temperature control, token-limit management, and an optional reasoning-effort setting. Temperature and token parameters can be configured dynamically through the Redis backend.


Copilot AI review requested due to automatic review settings May 15, 2026 13:05

coderabbitai Bot commented May 15, 2026

Important

Review skipped

Auto incremental reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: bcad21ef-104d-4362-a700-91503b6c7059

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


Walkthrough

This PR adds direct OpenAI support to the Breeze Buddy voice agent by introducing a new LLM provider option. New configuration schemas, factory methods, and runtime dispatch logic enable the system to resolve and construct OpenAI LLM services alongside existing Azure, Google Vertex, and Claude-on-Vertex providers.

Changes

OpenAI Direct LLM Integration

  • OpenAI Provider Type and Configuration (app/ai/voice/llm/types.py, app/ai/voice/llm/openai.py): The LLMProvider enum gains an OPENAI member. A new OpenAIConfig dataclass captures the API key, model, optional temperature/max_tokens/reasoning_effort, and the function-call timeout. The build_openai_llm factory conditionally applies reasoning effort and completion-token limits, then constructs OpenAILLMService.
  • LLM Module Re-exports (app/ai/voice/llm/__init__.py): The package now re-exports OpenAIConfig and build_openai_llm to integrate OpenAI into the shared LLM module interface.
  • Static and Dynamic Configuration (app/core/config/static.py, app/core/config/dynamic.py): Static config provides OPENAI_API_KEY (from env, no default) and OPENAI_MODEL (from env, defaults to gpt-4o). Dynamic config adds Redis-backed accessors BREEZE_BUDDY_OPENAI_MAX_COMPLETION_TOKENS (default 300) and BREEZE_BUDDY_OPENAI_TEMPERATURE (default 0.4).
  • Breeze Buddy Factory OpenAI Integration (app/ai/voice/agents/breeze_buddy/llm/__init__.py): A new _resolve_openai helper resolves the OpenAI API key from the template config or static defaults, applies template/dynamic overrides for model, temperature, and max tokens, extracts the optional reasoning effort, and constructs the service with the default function-call timeout. The get_llm_service return type now includes OpenAILLMService, and dispatch logic routes LLMProvider.OPENAI to the new resolver.
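Based on the description above, the new config and factory might look roughly like the sketch below. The field names beyond those listed in the summary, the kwarg names, and the timeout default are assumptions; in the real code the assembled values are passed to pipecat's OpenAILLMService rather than returned as a dict.

```python
from dataclasses import dataclass
from typing import Any, Optional


@dataclass
class OpenAIConfig:
    """Sketch of the config dataclass described in the walkthrough."""

    api_key: str
    model: str = "gpt-4o"
    temperature: Optional[float] = None
    max_tokens: Optional[int] = None
    reasoning_effort: Optional[str] = None
    function_call_timeout_secs: float = 30.0  # hypothetical default


def build_openai_llm_kwargs(config: OpenAIConfig) -> dict[str, Any]:
    """Assemble constructor kwargs, applying reasoning effort and the
    completion-token limit only when they are set, per the summary."""
    kwargs: dict[str, Any] = {"api_key": config.api_key, "model": config.model}
    if config.temperature is not None:
        kwargs["temperature"] = config.temperature
    if config.max_tokens is not None:
        kwargs["max_completion_tokens"] = config.max_tokens
    if config.reasoning_effort is not None:
        kwargs["reasoning_effort"] = config.reasoning_effort
    return kwargs
```

The conditional assembly matters because reasoning effort is only valid for reasoning-capable models, so the factory omits unset fields instead of passing None through.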

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Possibly related PRs

  • juspay/clairvoyance#652: Earlier PR introduced the provider-based resolver pattern in the Breeze Buddy LLM factory; this PR extends that same factory infrastructure to add the OPENAI provider branch and resolver.

Poem

🐰 A new provider hops into town,
OpenAI's models ready to renown,
Config and resolvers dance in the night,
Breeze Buddy breathes with fresh, flexible might! ✨

🚥 Pre-merge checks | ✅ 5 passed

  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title 'feat: OpenAI LLM integration' accurately reflects the main objective of the pull request, which introduces direct OpenAI LLM support across multiple modules including new configuration, factory functions, and provider routing.
  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, above the required 80.00% threshold.
  • Linked Issues Check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check (✅ Passed): Check skipped because no linked issues were found.




Comment @coderabbitai help to get the list of available commands and usage tips.


Copilot AI left a comment


Pull request overview

Adds a direct OpenAI LLM provider (in addition to the existing Azure OpenAI and Google Vertex providers) for the Breeze Buddy voice agent. Introduces a new OPENAI enum value, a config dataclass + builder for the stock pipecat OpenAILLMService, dynamic Redis config keys for temperature / max completion tokens, and wires a new _resolve_openai branch into get_llm_service.

Changes:

  • New LLMProvider.OPENAI enum, OpenAIConfig dataclass, and build_openai_llm builder over pipecat.services.openai.llm.OpenAILLMService.
  • New static env vars (OPENAI_API_KEY, OPENAI_MODEL) and dynamic config (BREEZE_BUDDY_OPENAI_TEMPERATURE, BREEZE_BUDDY_OPENAI_MAX_COMPLETION_TOKENS).
  • New _resolve_openai resolver and dispatch branch in the Breeze Buddy LLM factory, with widened return type union.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 1 comment.

Summary per file:

  • app/core/config/static.py: Adds the OPENAI_API_KEY and OPENAI_MODEL env vars (OPENAI_MODEL defaults to gpt-4o; OPENAI_API_KEY has no default).
  • app/core/config/dynamic.py: Adds Breeze Buddy OpenAI temperature and max-completion-tokens dynamic config getters.
  • app/ai/voice/llm/types.py: Adds the OPENAI member to LLMProvider.
  • app/ai/voice/llm/openai.py: New OpenAI builder and config dataclass wrapping pipecat's OpenAILLMService.
  • app/ai/voice/llm/__init__.py: Re-exports OpenAIConfig and build_openai_llm.
  • app/ai/voice/agents/breeze_buddy/llm/__init__.py: Adds _resolve_openai, dispatches LLMProvider.OPENAI, and widens the return type union.
Comments suppressed due to low confidence (2)

app/ai/voice/agents/breeze_buddy/llm/__init__.py:304

  • The module-level dispatch docstring (lines 6-9) and the get_llm_service dispatch docstring (lines 301-304) were not updated to mention the new OPENAI provider branch. This documentation now diverges from the actual dispatch logic added at lines 332-334.
    Dispatch:
      - No config / provider == AZURE  -> Azure (env defaults + template overrides)
      - provider == GOOGLE_VERTEX, sdk == ANTHROPIC -> Claude on Vertex (all from template)
      - provider == GOOGLE_VERTEX, sdk is None/GOOGLE -> Gemini on Vertex (all from template)

app/ai/voice/agents/breeze_buddy/llm/__init__.py:139

  • max_tokens is selected via a truthy check (if llm_config and llm_config.max_tokens), so a template explicitly setting max_tokens=0 would be treated as unset and silently overridden by the dynamic default. This same pattern exists in _resolve_azure, but it is being repeated here for the new OpenAI resolver. Prefer an explicit is not None check for numeric overrides to make zero an unambiguous (if unusual) override and to fail fast on bad config rather than masking it.
    max_tokens = (
        llm_config.max_tokens
        if llm_config and llm_config.max_tokens
        else await BREEZE_BUDDY_OPENAI_MAX_COMPLETION_TOKENS()
    )
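The behavioral difference the reviewer describes can be shown in a small sketch. The awaited dynamic-config call is replaced by a plain constant here for illustration; only the selection logic is the point.

```python
DYNAMIC_DEFAULT = 300  # stand-in for the BREEZE_BUDDY_OPENAI_MAX_COMPLETION_TOKENS() lookup


def pick_max_tokens_truthy(template_value):
    # Current pattern: a truthy check, so an explicit 0 is treated as unset
    # and silently replaced by the dynamic default.
    return template_value if template_value else DYNAMIC_DEFAULT


def pick_max_tokens_explicit(template_value):
    # Suggested pattern: only None means "unset", so 0 survives as an
    # unambiguous (if unusual) override.
    return template_value if template_value is not None else DYNAMIC_DEFAULT
```

With a template value of 0, the truthy version returns 300 while the explicit version returns 0; both behave identically for None and for positive values.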


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@app/ai/voice/agents/breeze_buddy/llm/__init__.py`:
- Around line 125-128: The code currently sets api_key = OPENAI_API_KEY and, if
llm_config.api_key_name is provided, overrides it via await get_config(...), but
does not validate the resolved api_key; change the logic in __init__.py to
validate that the resolved api_key (from OPENAI_API_KEY or get_config) is
non-empty and if empty raise ValueError (same behavior as Azure resolution).
Specifically, after the get_config call for llm_config.api_key_name, check the
variable api_key and raise ValueError with a clear message if it is falsy;
reference the symbols OPENAI_API_KEY, llm_config.api_key_name, get_config, and
the api_key variable to locate where to insert the validation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 2621460a-f829-41c3-9557-a5e155001776

📥 Commits

Reviewing files that changed from the base of the PR and between 8c19455 and fbbc62a.

📒 Files selected for processing (6)
  • app/ai/voice/agents/breeze_buddy/llm/__init__.py
  • app/ai/voice/llm/__init__.py
  • app/ai/voice/llm/openai.py
  • app/ai/voice/llm/types.py
  • app/core/config/dynamic.py
  • app/core/config/static.py

Comment on lines +125 to +128
api_key = OPENAI_API_KEY
if llm_config and llm_config.api_key_name:
    api_key = await get_config(llm_config.api_key_name, "", str)



⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Fail fast when OpenAI API key resolution is empty.

If the configured key name is missing in live config (or env default is empty), the service is created with an empty API key and fails later at runtime. Validate and raise ValueError here, same as Azure resolution behavior.

🔧 Suggested fix
 async def _resolve_openai(llm_config: LLMConfiguration | None) -> OpenAILLMService:
     """Build direct OpenAI LLM."""
     api_key = OPENAI_API_KEY
     if llm_config and llm_config.api_key_name:
         api_key = await get_config(llm_config.api_key_name, "", str)
+        if not api_key:
+            raise ValueError(
+                f"API key not found for config key: {llm_config.api_key_name}"
+            )
+    if not api_key:
+        raise ValueError("OPENAI_API_KEY is required for openai provider")
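The fail-fast behavior can be sketched synchronously as below. In the real resolver the get_config lookup is awaited; the helper name and its arguments here are illustrative stand-ins, not the project's API.

```python
def resolve_api_key(env_default, key_name, get_config):
    """Resolve an API key, raising immediately if it comes back empty."""
    # Start from the environment default (OPENAI_API_KEY in the real code).
    api_key = env_default
    if key_name:
        # A configured key name overrides the environment value.
        api_key = get_config(key_name, "", str)
        if not api_key:
            raise ValueError(f"API key not found for config key: {key_name}")
    if not api_key:
        raise ValueError("OPENAI_API_KEY is required for openai provider")
    return api_key
```

Raising here surfaces a misconfigured key at service-construction time, instead of letting the first LLM call fail with an opaque authentication error.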

@sharifajahanshaik sharifajahanshaik force-pushed the openai-llm-integration branch from fbbc62a to 649e1ad Compare May 15, 2026 13:22
