The LLM was returning JSON responses like `{"owner":"starspacegroup","repo":"hermes"}` instead of actually calling the GitHub API tools.
- **Missing `tool_choice` configuration** - The OpenAI Realtime API wasn't explicitly configured to use function calling
- **Unclear repository context** - The system prompt didn't make it clear enough that the repository context was already set
- **Insufficient logging** - Hard to debug what was happening during tool calls
File: `src/routes/api/voice/+server.ts`

```typescript
session: {
  // ... other config
  tools,
  tool_choice: 'auto' // Enable automatic tool calling
}
```

This tells OpenAI to automatically decide when to use tools based on the conversation.
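For context, a minimal sketch of what a complete session config with a formatted tool entry might look like. The `list_issues` schema here is illustrative, not the actual definition in `+server.ts`; the flat `type`/`name`/`description`/`parameters` shape is how Realtime API tool entries are structured (unlike the Chat Completions API, which nests them under a `function` key).

```typescript
// Sketch only: the real tools array lives in src/routes/api/voice/+server.ts,
// and this list_issues schema is an assumption for illustration.
type RealtimeTool = {
  type: "function";
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema for the tool's arguments
};

const tools: RealtimeTool[] = [
  {
    type: "function",
    name: "list_issues",
    description: "List issues in the connected GitHub repository.",
    parameters: {
      type: "object",
      properties: {
        state: { type: "string", enum: ["open", "closed", "all"] },
      },
    },
  },
];

const session = {
  // ... other config
  tools,
  tool_choice: "auto" as const, // let the model decide when to call tools
};
```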
Made the repository context much more explicit:

```typescript
repoContext = `You are currently working with the GitHub repository: **${repository}**\n`;
repoContext += `Repository Owner: ${owner}\n`;
repoContext += `Repository Name: ${repo}\n\n`;
repoContext += `IMPORTANT: You have DIRECT ACCESS to GitHub API tools for the ${repository} repository.\n`;
repoContext += `Available actions:\n`;
repoContext += `- Get repository summary (use get_repository_summary tool)\n`;
repoContext += `- List issues (use list_issues tool)\n`;
// ... etc
repoContext += `When users ask you to perform these actions, you MUST USE THE TOOLS. Do NOT return JSON objects or describe what you would do - EXECUTE THE TOOL CALLS.\n\n`;
```

Removed the automatic initial message that was sent on connection. This could have been confusing the model's context.
Added more detailed logging for debugging:

```typescript
// Log more details for responses
if (message.type.includes("response")) {
  console.log(
    "Response details:",
    JSON.stringify(message, null, 2).substring(0, 500),
  );
}

console.log("Tool call executing:", functionName, "with args:", args);
```

- The dev server should auto-reload with the changes
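To show where that "Tool call executing" log sits in the flow, here is a hypothetical dispatcher sketch. The handler map, its return values, and the `executeToolCall` name are assumptions for illustration, not the actual server code.

```typescript
// Hypothetical sketch of routing a Realtime function call to a handler.
// Handler names and return shapes are illustrative assumptions.
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

const handlers: Record<string, ToolHandler> = {
  // Stub handler; the real one would call the GitHub API.
  list_issues: async (args) => ({ issues: [], state: args.state ?? "open" }),
};

async function executeToolCall(functionName: string, argsJson: string) {
  const args = JSON.parse(argsJson) as Record<string, unknown>;
  console.log("Tool call executing:", functionName, "with args:", args);
  const handler = handlers[functionName];
  if (!handler) throw new Error(`Unknown tool: ${functionName}`);
  return handler(args);
}
```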
- If not, stop and restart:
  ```shell
  npm run dev
  ```
Basic Tool Calls:

- "List the open issues" → Should call `list_issues` tool
- "Tell me about this repository" → Should call `get_repository_summary` tool
- "Search for authentication code" → Should call `search_code` tool
- "Show me the file structure" → Should call `get_repository_tree` tool
Expected Behavior:
- You should see console logs like: `Tool call executing: list_issues with args: { state: 'open' }`
- Apollo should respond with actual data, not JSON objects
- You should see actual issue lists, repository info, etc.
Good signs:

```
OpenAI message type: response.output_item.added
Function call item added: {...}
Tool call executing: list_issues with args: {...}
OpenAI message type: response.output_item.done
```

Bad signs (if still happening):

```
OpenAI message type: response.text.delta
// No function call messages
// Apollo returns JSON like {"owner":"...", "repo":"..."}
```
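A small guard like the following can automate that check by flagging tool-call items as they arrive. It keys off the same event names as the logs above; in the Realtime API, function calls arrive as output items whose `item.type` is `"function_call"`.

```typescript
// Debug-helper sketch: detect whether an incoming Realtime event carries
// a function call (vs. plain text deltas, the "bad sign" above).
interface RealtimeEvent {
  type: string;
  item?: { type?: string; name?: string };
}

function isFunctionCall(message: RealtimeEvent): boolean {
  return (
    message.type === "response.output_item.added" &&
    message.item?.type === "function_call"
  );
}
```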
- **Check console logs** - Look for "Tool call executing" messages
- **Verify repository is selected** - Check the UI shows connected repository
- **Check authentication** - Make sure you're signed in with GitHub
- **Try explicit commands** - "Use the `list_issues` tool to show me open issues"
- **Check OpenAI API** - Verify your API key has Realtime API access
To see WebSocket messages in the browser console, note that a plain `window.addEventListener("message", ...)` listener only receives `postMessage` events, not WebSocket traffic. Patch the `WebSocket` constructor instead, before the connection is opened:

```javascript
// In browser console, run BEFORE connecting
const NativeWS = window.WebSocket;
window.WebSocket = class extends NativeWS {
  constructor(...args) {
    super(...args);
    this.addEventListener("message", (e) => console.log("WS:", e.data));
  }
};
```

The `gpt-4o-mini-realtime-preview` model supports function calling but requires:
- ✅ Tools array properly formatted
- ✅ `tool_choice` configured (now set to `'auto'`)
- ✅ Clear instructions about when to use tools
- ✅ Repository context that makes it clear tools are available
If the model still doesn't call tools:

- Try setting `tool_choice: 'required'` to force tool usage (may be too aggressive)
- Try `gpt-4o-realtime-preview` instead (larger model, better at function calling)
- Add few-shot examples directly in the system prompt
- Check if OpenAI has updated the Realtime API function calling format
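The first two fallbacks above could be wired into the session config like this sketch (how they combine with the rest of the config is an assumption about the server code, not the actual implementation):

```typescript
// Fallback sketch: larger model plus forced tool usage.
// 'required' makes the model call a tool on every turn, which is why it
// may be too aggressive for open-ended conversation.
const fallbackSession = {
  model: "gpt-4o-realtime-preview",
  tool_choice: "required" as const,
  // ... tools and other config unchanged
};
```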
Updated: November 7, 2025 Status: Changes deployed, awaiting testing