Description
Current state
The Python tools in tools/ are standalone — they are not actually called during AI chat sessions. When the AI estimates a waterline job in the chat, it's making up numbers from training data, not running estimate_project_cost().
The OpenAI function-calling schemas for all tools are already defined in tools/registry.py.
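For reference, each entry in tools/registry.py should already follow OpenAI's function-tool schema shape. As an illustration (not the real schema — the parameter names here are invented), one entry for estimate_project_cost, written as the JS object api/chat.js would pass in the `tools` array:

```javascript
// Illustrative shape of one tool entry as passed to the OpenAI API.
// `estimate_project_cost` is a real tool name from tools/; the parameters
// (e.g. linear_feet) are made-up placeholders, not the actual schema.
const estimateProjectCostTool = {
  type: "function",
  function: {
    name: "estimate_project_cost",
    description: "Estimate the total cost of a waterline job.",
    parameters: {
      type: "object",
      properties: {
        linear_feet: {
          type: "number",
          description: "Length of the waterline run in feet (hypothetical)",
        },
      },
      required: ["linear_feet"],
    },
  },
};
```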
What needs to happen
Wire api/chat.js to use OpenAI function calling with the schemas from tools/registry.py. When the model decides to call a tool:
- Parse the function call from the OpenAI response
- Call the appropriate Python tool via `child_process.exec()` or a small Python HTTP server sidecar
- Feed the result back into the conversation as a tool result message
- Let the model produce a final response using real computed numbers
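The four steps above can be sketched as a loop. This assumes the official `openai` npm client's chat-completions interface; `runPythonTool` is a hypothetical helper whose implementation depends on which approach below is chosen:

```javascript
// Sketch of the function-calling loop for api/chat.js.
// `client` is an OpenAI client instance, `tools` the schemas from
// tools/registry.py, and `runPythonTool` a hypothetical executor.

// Pure helper: pull (id, name, parsed args) out of an assistant message.
function parseToolCalls(message) {
  return (message.tool_calls ?? []).map((call) => ({
    id: call.id,
    name: call.function.name,
    args: JSON.parse(call.function.arguments),
  }));
}

// Pure helper: wrap a tool result as the message the API expects back.
function toolResultMessage(id, result) {
  return { role: "tool", tool_call_id: id, content: JSON.stringify(result) };
}

async function chatWithTools(client, messages, tools, runPythonTool) {
  for (;;) {
    const response = await client.chat.completions.create({
      model: "gpt-4o", // assumed model name
      messages,
      tools,
    });
    const msg = response.choices[0].message;
    messages.push(msg);

    const calls = parseToolCalls(msg);
    if (calls.length === 0) return msg.content; // final answer, real numbers

    // Execute each requested tool and feed the result back.
    for (const { id, name, args } of calls) {
      const result = await runPythonTool(name, args);
      messages.push(toolResultMessage(id, result));
    }
  }
}
```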
Approach options
Option A — exec Python directly from Node
In api/chat.js, use child_process.execSync() to call a Python script with JSON args and capture stdout. Simple, no extra infrastructure.
Option B — Python FastAPI sidecar
Run a small Python server alongside Vercel dev that handles tool calls. More scalable but adds complexity.
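If Option B is chosen, the Node side of the call might look like the sketch below. The sidecar address and the `POST /tools/{name}` route are assumptions, and the FastAPI server itself is not shown:

```javascript
// Option B sketch (Node side only): forward a tool call to a hypothetical
// Python sidecar exposing POST /tools/{name}.
const SIDECAR_URL = "http://localhost:8000"; // assumed dev address

// Pure helper so the request shape is testable without a live server.
function toolRequest(name, args) {
  return {
    url: `${SIDECAR_URL}/tools/${encodeURIComponent(name)}`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(args),
    },
  };
}

async function runPythonTool(name, args) {
  const { url, options } = toolRequest(name, args);
  const res = await fetch(url, options); // global fetch, Node 18+
  if (!res.ok) throw new Error(`tool ${name} failed: ${res.status}`);
  return res.json();
}
```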
Option A is probably the right starting point for Vercel serverless.
Files to look at
- api/chat.js — where tool calling needs to be added
- tools/registry.py — OpenAI-compatible tool schemas, already defined
- tools/__init__.py — all tools exported here
Why this matters
This is the difference between an AI that talks about construction and one that computes construction estimates. It's the most impactful open engineering task in the repo.