AI-powered documentation chatbot API for OpenClaw, built by OpenKnot.
This powers the embedded docs agent that helps users navigate and understand OpenClaw's documentation through natural conversation.
This API serves as the backend for OpenClaw's docs chat widget. It uses RAG (Retrieval-Augmented Generation) to:
- Index OpenClaw documentation into a vector store
- Retrieve relevant docs based on user questions
- Stream AI-generated answers with context from the documentation
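As a rough sketch of how retrieved documentation can be turned into model context, here is a hypothetical `buildPrompt` helper. The chunk shape, score threshold, and wording are illustrative assumptions, not the actual implementation:

```typescript
// Hypothetical shape of a retrieved documentation chunk.
interface DocChunk {
  url: string;   // source page in the docs
  text: string;  // chunk content
  score: number; // similarity score from the vector store
}

// Assemble a system prompt from retrieved chunks so the model answers
// with documentation context. Names and the threshold are illustrative.
function buildPrompt(question: string, chunks: DocChunk[], minScore = 0.7): string {
  const context = chunks
    .filter((c) => c.score >= minScore)
    .map((c) => `Source: ${c.url}\n${c.text}`)
    .join("\n---\n");
  return [
    "You are a documentation assistant for OpenClaw.",
    "Answer only from the context below.",
    "",
    context,
    "",
    `Question: ${question}`,
  ].join("\n");
}
```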
- Framework: Next.js 16 with Edge Runtime
- Runtime: Bun
- Deployment: Vercel Edge Functions
- Vector Store: Upstash Vector
- Rate Limiting: Upstash Redis
- AI: OpenAI (gpt-4.1-mini for chat, text-embedding-3-large for embeddings)
- Language: TypeScript
| Endpoint | Method | Description |
|---|---|---|
| `/api/chat` | POST | Send a question, get a streaming response |
| `/api/health` | GET | Health check |
| `/api/webhook` | POST | GitHub docs webhook for re-indexing |
```json
{
  "message": "How do I get started with OpenClaw?"
}
```

Returns a streaming `text/plain` response with an AI-generated answer grounded in OpenClaw documentation.
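A client can consume the streamed answer incrementally. A minimal sketch, where `readStream` is a hypothetical helper rather than the widget's actual code:

```typescript
// Read a streaming text/plain body chunk by chunk, invoking onChunk as
// text arrives, and return the full answer. Hypothetical helper.
async function readStream(
  stream: ReadableStream<Uint8Array>,
  onChunk: (text: string) => void,
): Promise<string> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  let full = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    const text = decoder.decode(value, { stream: true });
    full += text;
    onChunk(text);
  }
  return full;
}

// Usage against the chat endpoint (not executed here):
// const res = await fetch("/api/chat", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify({ message: "How do I get started with OpenClaw?" }),
// });
// const answer = await readStream(res.body!, (t) => process.stdout.write(t));
```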
Rate Limit Headers:

- `X-RateLimit-Limit` - Maximum requests allowed
- `X-RateLimit-Remaining` - Requests remaining in window
- `X-RateLimit-Reset` - Timestamp when the limit resets
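Clients can use these headers to back off before hitting a 429. A sketch, assuming the reset value is an epoch-milliseconds timestamp (the actual unit may differ):

```typescript
// Parsed view of the rate-limit headers returned by the API.
interface RateLimitInfo {
  limit: number;     // X-RateLimit-Limit
  remaining: number; // X-RateLimit-Remaining
  resetMs: number;   // X-RateLimit-Reset, assumed epoch milliseconds
}

// Extract rate-limit info from a response's headers, or null if absent.
function parseRateLimit(headers: Headers): RateLimitInfo | null {
  const limit = headers.get("X-RateLimit-Limit");
  const remaining = headers.get("X-RateLimit-Remaining");
  const reset = headers.get("X-RateLimit-Reset");
  if (limit === null || remaining === null || reset === null) return null;
  return { limit: Number(limit), remaining: Number(remaining), resetMs: Number(reset) };
}

// Milliseconds to wait before the next request (0 if budget remains).
function retryDelayMs(info: RateLimitInfo, nowMs: number): number {
  if (info.remaining > 0) return 0;
  return Math.max(0, info.resetMs - nowMs);
}
```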
CORS: The API allows requests from configured origins. To add your domain, update the `ALLOWED_ORIGINS` array in `app/api/chat/route.ts`.
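A sketch of the kind of origin check such a route might perform; the origins listed and the helper name are illustrative, not the file's actual contents:

```typescript
// Illustrative allow-list; the real values live in app/api/chat/route.ts.
const ALLOWED_ORIGINS = ["https://docs.openclaw.ai", "http://localhost:3000"];

// Return CORS headers for an allowed origin, or none for anything else.
function corsHeadersFor(origin: string | null): Record<string, string> {
  if (origin !== null && ALLOWED_ORIGINS.includes(origin)) {
    return {
      "Access-Control-Allow-Origin": origin,
      "Access-Control-Allow-Methods": "POST, OPTIONS",
      "Access-Control-Allow-Headers": "Content-Type",
    };
  }
  return {}; // disallowed origins get no CORS headers
}
```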
- Install dependencies:

  ```bash
  bun install
  ```

- Copy `.env.example` to `.env` and fill in your credentials:

  ```bash
  cp .env.example .env
  ```

| Variable | Required | Description |
|---|---|---|
| `OPENAI_API_KEY` | Yes | OpenAI API key |
| `UPSTASH_VECTOR_REST_URL` | Yes | Upstash Vector endpoint |
| `UPSTASH_VECTOR_REST_TOKEN` | Yes | Upstash Vector auth token |
| `UPSTASH_REDIS_REST_URL` | Yes | Upstash Redis endpoint (rate limiting) |
| `UPSTASH_REDIS_REST_TOKEN` | Yes | Upstash Redis auth token |
| `GITHUB_WEBHOOK_SECRET` | No | Secret for GitHub webhook (required for automatic re-indexing) |
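Since five of these variables are required, it can help to fail fast at startup when one is missing. A minimal sketch (the helper name is illustrative, not part of the codebase):

```typescript
// The variables the API cannot run without, per the table above.
const REQUIRED_VARS = [
  "OPENAI_API_KEY",
  "UPSTASH_VECTOR_REST_URL",
  "UPSTASH_VECTOR_REST_TOKEN",
  "UPSTASH_REDIS_REST_URL",
  "UPSTASH_REDIS_REST_TOKEN",
] as const;

// Return the names of required variables that are unset or empty.
function missingVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_VARS.filter((name) => !env[name]);
}

// At startup: const missing = missingVars(process.env);
// if (missing.length > 0) throw new Error(`Missing env vars: ${missing.join(", ")}`);
```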
- Build the vector index (indexes documentation into Upstash):

  ```bash
  bun run build:index
  ```

- Start the development server:

  ```bash
  bun run dev
  ```

  Runs locally at http://localhost:3000.
| Script | Description |
|---|---|
| `bun run dev` | Start development server |
| `bun run build` | Build for production |
| `bun run start` | Start production server |
| `bun run lint` | Run ESLint |
| `bun run build:index` | Index documentation into vector store |
| `bun run deploy` | Deploy to Vercel |
```bash
bun run deploy
```

Deploys to Vercel.
The API supports automatic re-indexing when documentation changes are pushed to the main branch. This is powered by a GitHub webhook that triggers the `/api/webhook` endpoint.
- A push is made to the `main` (or `master`) branch of your docs repository
- GitHub sends a webhook payload to `/api/webhook`
- The API verifies the signature, fetches `https://docs.openclaw.ai/llms-full.txt`, chunks the content, generates embeddings, and replaces the vector store
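The chunking step in the flow above can be sketched as a sliding window with overlap, so context isn't lost at chunk boundaries. The sizes and helper name here are illustrative assumptions, not the repository's actual values:

```typescript
// Split the fetched docs text into overlapping chunks before embedding.
// chunkSize and overlap are measured in characters; values are illustrative.
function chunkText(text: string, chunkSize = 1000, overlap = 200): string[] {
  if (chunkSize <= overlap) throw new Error("chunkSize must exceed overlap");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```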
- Set the webhook secret in your environment variables:

  ```bash
  GITHUB_WEBHOOK_SECRET=your-secret-here
  ```

- Create a webhook in your docs repository:
  - Go to your docs repo → Settings → Webhooks → Add webhook
  - Payload URL: `https://your-api-domain.com/api/webhook`
  - Content type: `application/json`
  - Secret: Use the same value as `GITHUB_WEBHOOK_SECRET`
  - Events: Select "Just the push event"
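For reference, GitHub signs each delivery with an `X-Hub-Signature-256` header of the form `sha256=<hex HMAC of the raw body>`, keyed by the webhook secret. A sketch of the verification the endpoint presumably performs (the deployed Edge route would likely use Web Crypto; `node:crypto` is used here for brevity):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify a GitHub webhook delivery against GITHUB_WEBHOOK_SECRET.
// header is the X-Hub-Signature-256 value, e.g. "sha256=abc123...".
function verifySignature(secret: string, rawBody: string, header: string | null): boolean {
  if (!header || !header.startsWith("sha256=")) return false;
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const received = header.slice("sha256=".length);
  if (received.length !== expected.length) return false;
  // Constant-time comparison to avoid leaking signature prefixes.
  return timingSafeEqual(Buffer.from(received), Buffer.from(expected));
}
```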
- Verify it's working:
  - GitHub sends a `ping` event when the webhook is created
  - Check the webhook's "Recent Deliveries" tab for the response
  - Push a change to main and confirm re-indexing occurs
You can check the webhook status at any time:

```bash
curl https://your-api-domain.com/api/webhook
```

Returns the current indexing status, last indexed time, and result of the most recent indexing run.
MIT
