feat: OpenAI LLM integration #764
Walkthrough

This PR adds direct OpenAI support to the Breeze Buddy voice agent by introducing a new LLM provider option. New configuration schemas, factory methods, and runtime dispatch logic enable the system to resolve and construct OpenAI LLM services alongside the existing Azure, Google Vertex, and Claude-on-Vertex providers.

Changes

OpenAI Direct LLM Integration
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Pull request overview
Adds a direct OpenAI LLM provider (in addition to the existing Azure OpenAI and Google Vertex providers) for the Breeze Buddy voice agent. Introduces a new OPENAI enum value, a config dataclass + builder for the stock pipecat OpenAILLMService, dynamic Redis config keys for temperature / max completion tokens, and wires a new _resolve_openai branch into get_llm_service.
Changes:
- New `LLMProvider.OPENAI` enum member, `OpenAIConfig` dataclass, and `build_openai_llm` builder over `pipecat.services.openai.llm.OpenAILLMService` (see the sketch after this list).
- New static env vars (`OPENAI_API_KEY`, `OPENAI_MODEL`) and dynamic config (`BREEZE_BUDDY_OPENAI_TEMPERATURE`, `BREEZE_BUDDY_OPENAI_MAX_COMPLETION_TOKENS`).
- New `_resolve_openai` resolver and dispatch branch in the Breeze Buddy LLM factory, with a widened return type union.
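As a rough illustration of these pieces, here is a minimal sketch of what the config dataclass and builder could look like. The field names beyond those listed above, and the exact `OpenAILLMService` constructor/`InputParams` usage, are assumptions about pipecat's API, not the PR's actual code.

```python
# Hypothetical sketch -- field names and the InputParams usage are assumed,
# not taken from the PR diff.
from dataclasses import dataclass

from pipecat.services.openai.llm import OpenAILLMService


@dataclass
class OpenAIConfig:
    """Settings for the stock pipecat OpenAI LLM service."""

    api_key: str
    model: str
    temperature: float | None = None
    max_completion_tokens: int | None = None


def build_openai_llm(config: OpenAIConfig) -> OpenAILLMService:
    """Construct an OpenAILLMService from a resolved config."""
    return OpenAILLMService(
        api_key=config.api_key,
        model=config.model,
        params=OpenAILLMService.InputParams(
            temperature=config.temperature,
            max_completion_tokens=config.max_completion_tokens,
        ),
    )
```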
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| app/core/config/static.py | Adds OPENAI_API_KEY / OPENAI_MODEL env vars with defaults. |
| app/core/config/dynamic.py | Adds Breeze Buddy OpenAI temperature & max-completion-tokens dynamic config getters. |
| app/ai/voice/llm/types.py | Adds OPENAI member to LLMProvider. |
| app/ai/voice/llm/openai.py | New OpenAI builder + config dataclass wrapping pipecat's OpenAILLMService. |
| app/ai/voice/llm/__init__.py | Re-exports OpenAIConfig and build_openai_llm. |
| app/ai/voice/agents/breeze_buddy/llm/__init__.py | Adds _resolve_openai and dispatches LLMProvider.OPENAI; widens return type union (see the dispatch sketch below this table). |
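A hypothetical sketch of that dispatch branch; only `_resolve_openai`, `_resolve_azure`, and `LLMProvider.OPENAI` come from this review's summary, while the surrounding structure and the other resolver name are assumed.

```python
# Assumed shape of the factory's dispatch; the function body below is
# illustrative, not the PR's actual code.
async def get_llm_service(llm_config: LLMConfiguration | None):
    if llm_config and llm_config.provider == LLMProvider.OPENAI:
        # New branch: direct OpenAI (env defaults + template overrides).
        return await _resolve_openai(llm_config)
    if llm_config and llm_config.provider == LLMProvider.GOOGLE_VERTEX:
        return await _resolve_vertex(llm_config)  # assumed resolver name
    # Default: Azure OpenAI.
    return await _resolve_azure(llm_config)
```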
Comments suppressed due to low confidence (2)
app/ai/voice/agents/breeze_buddy/llm/__init__.py:304
- The module-level dispatch docstring (lines 6-9) and the `get_llm_service` dispatch docstring (lines 301-304) were not updated to mention the new `OPENAI` provider branch. This documentation now diverges from the actual dispatch logic added at lines 332-334.
Dispatch:
- No config / provider == AZURE -> Azure (env defaults + template overrides)
- provider == GOOGLE_VERTEX, sdk == ANTHROPIC -> Claude on Vertex (all from template)
- provider == GOOGLE_VERTEX, sdk is None/GOOGLE -> Gemini on Vertex (all from template)
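A minimal update, keeping the docstring's existing bullet style, could add one line for the new branch; the exact wording of the OPENAI line is assumed:

Dispatch:
- No config / provider == AZURE -> Azure (env defaults + template overrides)
- provider == OPENAI -> direct OpenAI (env defaults + template overrides)
- provider == GOOGLE_VERTEX, sdk == ANTHROPIC -> Claude on Vertex (all from template)
- provider == GOOGLE_VERTEX, sdk is None/GOOGLE -> Gemini on Vertex (all from template)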
app/ai/voice/agents/breeze_buddy/llm/__init__.py:139
`max_tokens` is selected via a truthy check (`if llm_config and llm_config.max_tokens`), so a template explicitly setting `max_tokens=0` would be treated as unset and silently overridden by the dynamic default. This same pattern exists in `_resolve_azure`, but it is being repeated here for the new OpenAI resolver. Prefer an explicit `is not None` check for numeric overrides to make zero an unambiguous (if unusual) override and to fail fast on bad config rather than masking it. A corrected sketch follows the snippet below.
```python
max_tokens = (
    llm_config.max_tokens
    if llm_config and llm_config.max_tokens
    else await BREEZE_BUDDY_OPENAI_MAX_COMPLETION_TOKENS()
)
```
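A minimal sketch of the suggested `is not None` variant, reusing the same names (not the PR's actual code):

```python
# Only a missing override falls back to the dynamic default; an explicit
# max_tokens=0 from a template is passed through rather than masked.
max_tokens = (
    llm_config.max_tokens
    if llm_config is not None and llm_config.max_tokens is not None
    else await BREEZE_BUDDY_OPENAI_MAX_COMPLETION_TOKENS()
)
```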
| """Build direct OpenAI LLM.""" | ||
| api_key = OPENAI_API_KEY | ||
| if llm_config and llm_config.api_key_name: | ||
| api_key = await get_config(llm_config.api_key_name, "", str) |
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@app/ai/voice/agents/breeze_buddy/llm/__init__.py`:
- Around line 125-128: The code currently sets api_key = OPENAI_API_KEY and, if
llm_config.api_key_name is provided, overrides it via await get_config(...), but
does not validate the resolved api_key; change the logic in __init__.py to
validate that the resolved api_key (from OPENAI_API_KEY or get_config) is
non-empty and if empty raise ValueError (same behavior as Azure resolution).
Specifically, after the get_config call for llm_config.api_key_name, check the
variable api_key and raise ValueError with a clear message if it is falsy;
reference the symbols OPENAI_API_KEY, llm_config.api_key_name, get_config, and
the api_key variable to locate where to insert the validation.
📒 Files selected for processing (6)
- app/ai/voice/agents/breeze_buddy/llm/__init__.py
- app/ai/voice/llm/__init__.py
- app/ai/voice/llm/openai.py
- app/ai/voice/llm/types.py
- app/core/config/dynamic.py
- app/core/config/static.py
Code under review:

```python
    api_key = OPENAI_API_KEY
    if llm_config and llm_config.api_key_name:
        api_key = await get_config(llm_config.api_key_name, "", str)
```
Fail fast when OpenAI API key resolution is empty.
If the configured key name is missing in live config (or the env default is empty), the service is created with an empty API key and fails later at runtime. Validate and raise ValueError here, matching the Azure resolution behavior.
🔧 Suggested fix
```diff
 async def _resolve_openai(llm_config: LLMConfiguration | None) -> OpenAILLMService:
     """Build direct OpenAI LLM."""
     api_key = OPENAI_API_KEY
     if llm_config and llm_config.api_key_name:
         api_key = await get_config(llm_config.api_key_name, "", str)
+        if not api_key:
+            raise ValueError(
+                f"API key not found for config key: {llm_config.api_key_name}"
+            )
+    if not api_key:
+        raise ValueError("OPENAI_API_KEY is required for openai provider")
```