This service provides the intelligent backbone for AURA, handling document processing, Knowledge Base (RAG) search, and LLM reasoning.
The AI Service is built with FastAPI and uses a Retrieval-Augmented Generation (RAG) pattern to ground AURA's responses in specific documents.
- Framework: FastAPI (Python 3.10+)
- Vector Search: Qdrant (local or cloud)
- Embeddings: Sentence-Transformers
- ORMs/Tools: Pydantic, SQLAlchemy
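The retrieval step works by comparing embedding vectors. The toy sketch below uses hand-made 3-dimensional vectors and pure-Python cosine similarity to show how a query is matched against stored chunks; the real service uses Sentence-Transformers embeddings (hundreds of dimensions) stored in Qdrant, and the chunk names here are purely illustrative.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical tiny "embeddings" standing in for real model output.
chunks = {
    "invoice handling": [0.9, 0.1, 0.0],
    "vacation policy": [0.1, 0.9, 0.2],
}
query = [0.8, 0.2, 0.1]

# Rank stored chunks by similarity to the query vector.
best = max(chunks, key=lambda name: cosine_similarity(query, chunks[name]))
print(best)  # prints: invoice handling
```

Qdrant performs this same nearest-vector ranking at scale, with approximate indexes instead of a linear scan.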
- Document Ingestion: Upload `.txt`, `.pdf`, and `.pptx` documents to AURA's brain.
- RAG Search: Semantic search over uploaded documents to provide context to the Voice Agent.
- API Endpoints:
  - `GET /api/v1/rag/search`: Search the knowledge base.
  - `POST /api/v1/rag/upload`: Ingest new documents.
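Before ingestion, long documents are typically split into overlapping chunks so each piece fits the embedding model's input window and boundary sentences survive in a neighboring chunk. A minimal sketch of that step (the function name and sizes are illustrative, not the service's actual code):

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character windows that overlap,
    so content cut at a chunk boundary still appears whole in a neighbor."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[start:start + chunk_size] for start in range(0, len(text), step)]

doc = "AURA answers questions using context retrieved from uploaded documents. " * 10
pieces = chunk_text(doc)
print(len(pieces), len(pieces[0]))  # prints: 5 200
```

Each chunk would then be embedded and stored in Qdrant; at query time, the search endpoint returns the highest-scoring chunks as context.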
- Navigate to the directory:
  `cd ai-service`
- Set up the environment:
  `python -m venv venv`
- Activate the environment:
  - Windows: `venv\Scripts\activate`
  - Unix: `source venv/bin/activate`
- Install dependencies:
  `pip install -r requirements.txt`
- Configure `.env`:
  - `OPENAPI_KEY`: Your LLM provider key.
  - `QDRANT_URL`: URL to your vector database.
- Run:
  `uvicorn app.main:app --reload`
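A minimal `.env` might look like the fragment below; the variable names come from the configuration list above, and the values are placeholders to replace with your own.

```env
OPENAPI_KEY=replace-with-your-llm-provider-key
QDRANT_URL=http://localhost:6333
```

`http://localhost:6333` is Qdrant's default local port; point `QDRANT_URL` at your cloud instance instead if you are not running Qdrant locally.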