feat: include OpenCode in AI provider auto-discovery #93

n00ki wants to merge 3 commits into erictli:main from
Conversation
📝 Walkthrough

The changes introduce support for a new OpenCode AI provider by implementing dynamic AI provider discovery. The CommandPalette component fetches available providers at runtime and generates AI commands dynamically, replacing hardcoded provider definitions. A new OpenCodeIcon component and AI service runtime checks support the expanded provider ecosystem.
Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant CommandPalette
    participant AI Service
    participant CLI/System
    User->>CommandPalette: Opens palette (open=true)
    activate CommandPalette
    CommandPalette->>CommandPalette: Effect triggered (open or currentNote change)
    CommandPalette->>AI Service: getAvailableAiProviders()
    activate AI Service
    AI Service->>CLI/System: checkClaude()
    AI Service->>CLI/System: checkCodex()
    AI Service->>CLI/System: checkOpenCodeCli()
    AI Service->>CLI/System: checkOllama()
    CLI/System-->>AI Service: availability results
    AI Service-->>CommandPalette: [available providers]
    deactivate AI Service
    CommandPalette->>CommandPalette: Generate aiCommands from providers
    CommandPalette->>CommandPalette: Merge into baseCommands
    CommandPalette->>User: Display dynamic AI commands
    deactivate CommandPalette
```
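The discovery flow above can be sketched in TypeScript. This is a minimal, self-contained sketch, not the PR's actual code: the checker stubs stand in for the Tauri `invoke()` calls, and the names `AI_PROVIDER_ORDER` and `providerCheckers` are taken from the review comments while their bodies here are assumptions.

```typescript
// Sketch of dynamic provider discovery: checks run in parallel, and a
// false result or a thrown error simply drops the provider from the list.
type AiProvider = "claude" | "codex" | "opencode" | "ollama";

const AI_PROVIDER_ORDER: AiProvider[] = ["claude", "codex", "opencode", "ollama"];

// Stub checkers; in the app these would call invoke("ai_check_*_cli").
const providerCheckers: Record<AiProvider, () => Promise<boolean>> = {
  claude: async () => true,
  codex: async () => false,
  opencode: async () => {
    // Simulates the not-yet-registered backend command rejecting.
    throw new Error("ai_check_opencode_cli not registered");
  },
  ollama: async () => true,
};

async function getAvailableAiProviders(): Promise<AiProvider[]> {
  const results = await Promise.all(
    AI_PROVIDER_ORDER.map(async (provider) => {
      try {
        const installed = await providerCheckers[provider]();
        return installed ? provider : null;
      } catch {
        return null; // an unavailable check command just filters the provider out
      }
    }),
  );
  return results.filter((p): p is AiProvider => p !== null);
}

getAvailableAiProviders().then((providers) => {
  console.log(providers); // claude and ollama remain; codex and opencode are filtered out
});
```

Because `Promise.all` preserves input order, the returned list keeps the `AI_PROVIDER_ORDER` ordering regardless of which check resolves first.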
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 1
🧹 Nitpick comments (2)
src/services/ai.ts (1)
54-67: Consider logging provider check failures for debugging.

The silent catch block at lines 60-61 prevents discovery failures from surfacing. While this is appropriate for the user experience (no error toasts needed), logging the error would aid debugging when a provider unexpectedly fails to be discovered.
🔧 Optional: Add error logging
```diff
   try {
     const installed = await providerCheckers[provider]();
     return installed ? provider : null;
-  } catch {
+  } catch (error) {
+    console.warn(`AI provider check failed for ${provider}:`, error);
     return null;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/ai.ts` around lines 54 - 67: The provider discovery in getAvailableAiProviders silences all errors in the try/catch, making failures hard to debug; update the catch block inside the AI_PROVIDER_ORDER map (where providerCheckers[provider]() is called) to log the caught error (including the provider identifier) to the application logger or console.error before returning null, so discovery still fails silently to the user but errors are recorded for debugging.

src/components/command-palette/CommandPalette.tsx (1)
161-204: Consider a data-driven approach for AI command definitions.

The if/else chain works correctly but could be simplified with a lookup object, making it easier to add new providers in the future.
♻️ Optional: Data-driven provider command mapping
```tsx
const AI_COMMAND_CONFIG: Record<AiProvider, { id: string; label: string; icon: ReactNode }> = {
  claude: {
    id: "ai-edit-claude",
    label: "Edit with Claude Code",
    icon: <ClaudeIcon className="w-4.5 h-4.5 fill-text-muted" />,
  },
  codex: {
    id: "ai-edit-codex",
    label: "Edit with OpenAI Codex",
    icon: <CodexIcon className="w-4.5 h-4.5 fill-text-muted" />,
  },
  opencode: {
    id: "ai-edit-opencode",
    label: "Edit with OpenCode",
    icon: <OpenCodeIcon className="w-4.5 h-4.5 fill-text-muted" />,
  },
  ollama: {
    id: "ai-edit-ollama",
    label: "Edit with Ollama",
    icon: <OllamaIcon className="w-4.5 h-4.5 fill-text-muted" />,
  },
};

// Then in useMemo:
const aiCommands: Command[] = onOpenAiModal
  ? availableAiProviders.map((provider) => ({
      ...AI_COMMAND_CONFIG[provider],
      action: () => {
        onOpenAiModal(provider);
        onClose();
      },
    }))
  : [];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/command-palette/CommandPalette.tsx` around lines 161 - 204, Replace the long provider if/else chain that builds aiCommands with a data-driven mapping: define an AI_COMMAND_CONFIG lookup (keyed by AiProvider values like "claude", "codex", "opencode", "ollama") that contains id, label and icon, then map availableAiProviders to Command objects by spreading AI_COMMAND_CONFIG[provider] and adding the action that calls onOpenAiModal(provider) and onClose(); update the aiCommands construction (the useMemo that creates aiCommands) to use this lookup so adding new providers only requires updating AI_COMMAND_CONFIG.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/services/ai.ts`:
- Around line 39-41: The frontend function checkOpenCodeCli invokes the Tauri
command "ai_check_opencode_cli" which doesn't exist in the backend; add a
corresponding Tauri command handler named ai_check_opencode_cli to the backend
command registry (same style as the existing ai_check_claude_cli,
ai_check_codex_cli, and ai_check_ollama_cli handlers) and implement its logic to
check the OpenCode CLI, then register that command so
invoke("ai_check_opencode_cli") succeeds at runtime.
---
Nitpick comments:
In `@src/components/command-palette/CommandPalette.tsx`:
- Around line 161-204: Replace the long provider if/else chain that builds
aiCommands with a data-driven mapping: define an AI_COMMAND_CONFIG lookup (keyed
by AiProvider values like "claude", "codex", "opencode", "ollama") that contains
id, label and icon, then map availableAiProviders to Command objects by
spreading AI_COMMAND_CONFIG[provider] and adding the action that calls
onOpenAiModal(provider) and onClose(); update the aiCommands construction (the
useMemo that creates aiCommands) to use this lookup so adding new providers only
requires updating AI_COMMAND_CONFIG.
In `@src/services/ai.ts`:
- Around line 54-67: The provider discovery in getAvailableAiProviders silences
all errors in the try/catch, making failures hard to debug; update the catch
block inside the AI_PROVIDER_ORDER map (where providerCheckers[provider]() is
called) to log the caught error (including the provider identifier) to the
application logger or console.error before returning null so discovery still
fails silently to the user but errors are recorded for debugging.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d0929d86-82af-4e21-ab3d-eb51f69ab5a2
📒 Files selected for processing (3)

- src/components/command-palette/CommandPalette.tsx
- src/components/icons/index.tsx
- src/services/ai.ts
```typescript
export async function checkOpenCodeCli(): Promise<boolean> {
  return invoke("ai_check_opencode_cli");
}
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Verify ai_check_opencode_cli command registration in Rust backend
echo "=== Searching for ai_check_opencode_cli function definition ==="
rg -n "fn ai_check_opencode_cli" --type rust
echo ""
echo "=== Searching for ai_check_opencode_cli in codebase ==="
rg -n "ai_check_opencode_cli" --type rust
echo ""
echo "=== Listing all ai_check_* commands in lib.rs ==="
rg -n "ai_check_" src-tauri/src/lib.rs | head -30
echo ""
echo "=== Checking invoke_handler registration block ==="
sed -n '2900,2970p' src-tauri/src/lib.rs 2>/dev/null || echo "File or range not found, searching for invoke_handler instead:"
rg -A 80 "fn invoke_handler" src-tauri/src/lib.rs | head -100
```

Repository: erictli/scratch
Length of output: 3059
Backend command ai_check_opencode_cli is not implemented.
The Tauri command ai_check_opencode_cli is invoked at lines 39-41, but this command is not defined in the Rust backend. Only ai_check_claude_cli, ai_check_codex_cli, and ai_check_ollama_cli are implemented and registered in src-tauri/src/lib.rs.
This violates the coding guideline: "Define all backend operations as Tauri commands in src-tauri/src/lib.rs and call them from frontend via invoke()". The function will fail at runtime until the backend implementation is added as part of PR #90.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/ai.ts` around lines 39 - 41, The frontend function
checkOpenCodeCli invokes the Tauri command "ai_check_opencode_cli" which doesn't
exist in the backend; add a corresponding Tauri command handler named
ai_check_opencode_cli to the backend command registry (same style as the
existing ai_check_claude_cli, ai_check_codex_cli, and ai_check_ollama_cli
handlers) and implement its logic to check the OpenCode CLI, then register that
command so invoke("ai_check_opencode_cli") succeeds at runtime.
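Until the backend handler is registered, the frontend can degrade gracefully by treating a rejected `invoke()` as "CLI not installed". The sketch below is an assumption, not code from the PR: `invoke` is stubbed locally (in the app it would come from `@tauri-apps/api`), and the stub simulates Tauri rejecting an unregistered command.

```typescript
// Stubbed invoke: simulates Tauri rejecting a command that is not registered
// in the backend's generate_handler! list.
const invoke = async (cmd: string): Promise<boolean> => {
  throw new Error(`command ${cmd} not found`);
};

async function checkOpenCodeCli(): Promise<boolean> {
  try {
    return await invoke("ai_check_opencode_cli");
  } catch {
    return false; // a missing backend handler reads as "unavailable", not a crash
  }
}

checkOpenCodeCli().then((installed) => {
  console.log(installed); // false until ai_check_opencode_cli is registered
});
```

This matches the PR's stated behavior that OpenCode is safely filtered out of discovery while the #90 backend wiring is pending.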
This PR is intentionally stacked and depends on #89 and #90.
Provider checks are wrapped in `try/catch`, so a provider whose check fails or is unavailable is silently filtered out of discovery.
- Added `opencode` to the AI provider auto-discovery order and checker map in `src/services/ai.ts`
- `src/components/command-palette/CommandPalette.tsx` generates AI commands dynamically from the discovered providers
- Added `OpenCodeIcon` in `src/components/icons/index.tsx` for command palette display

Dependency / Stacking

- Depends on #89 (`feat/ai-agent-discovery`)
- Depends on #90 (`feat/opencode-cli-integration`) for the full OpenCode execution path and backend command wiring

Note: before feat: add OpenCode as an AI editing provider #90 is merged, OpenCode is safely filtered out during discovery if the check command is unavailable.