Backend service for building and managing AI-powered conversation threads. This project is written in Python 3.13+ using FastAPI with an asynchronous SQLAlchemy ORM for database interaction. It integrates with multiple large language model providers (Gemini, Claude, OpenAI) and includes utility support for Supabase storage.
- `app/` – Main application package
  - `app.py` – FastAPI application factory with async database engine and CORS middleware
  - `config.py` – Pydantic settings class for environment variables
  - `deps.py` – Dependency helpers (database session)
  - `routes.py` – Registers high-level API routers
  - `api/` – Endpoint implementations for threads, messages and the model registry
  - `database/` – SQLAlchemy models, Pydantic schemas and CRUD helpers
  - `utils/` – Helpers for logging, Supabase, the AI client registry and message formatting
- `main.py` – Uvicorn entry point for development
- `testing.py` – Quick script to dump LLM model metadata via litellm
- `migrations/` – Alembic migration scripts for PostgreSQL schema evolution
- Threaded Conversations
  - Parent threads initiate a conversation.
  - Child threads branch off for continuations/contexts.
  - Automatic title and system prompt generation via LLMs.
- Message Management
  - Add user/assistant messages to threads.
  - Fetch chat history formatted for both API and LLM consumption.
- Multi-Provider AI Support
  - Factory for Gemini, Claude and OpenAI clients using a shared `AsyncOpenAI` interface.
  - Model registry endpoint exposing capabilities of popular LLMs.
- Persistent Storage
  - Async PostgreSQL database through SQLAlchemy with full CRUD support.
  - Supabase utilities for file upload/download/public URLs.
- Robust Error Handling & Logging
  - Centralized logger with rotating file and console handlers.
  - Detailed exception handling in each endpoint.
- Automatic API Docs – Accessible at `/docs` (Swagger UI) and `/redoc`.
- Python 3.13+ (see `pyproject.toml`)
- PostgreSQL database (asyncpg driver)
- Environment variables (see below)
Run the following command to start a PostgreSQL 15 database instance:

```bash
docker run -d \
  --name fluxa_db \
  --restart always \
  -e POSTGRES_USER=fluxa_user \
  -e POSTGRES_PASSWORD=password123 \
  -e POSTGRES_DB=fluxa_db \
  -p 5432:5432 \
  -v postgres_data:/var/lib/postgresql/data \
  postgres:15
```

Create the virtual environment and install dependencies:

```bash
uv venv                   # create virtual environment
source .venv/bin/activate # activate
uv sync                   # install dependencies
```

Apply database migrations with Alembic (scripts included under `migrations/`):

```bash
alembic upgrade head  # ensure DATABASE_URL is set
```
Start the development server:

```bash
uv run main.py  # development
```

- There are no formal tests yet; run `testing.py` for a quick metadata dump:

  ```bash
  python testing.py
  ```

- Future tests can be added under a `tests/` directory with pytest.
All routes are mounted under `/api`.
- `GET /api/models/capabilities` – list supported LLMs and their features.
- `POST /api/parent-threads/create` – start a new parent thread.
- `GET /api/parent-threads/get/{user_id}` – list threads belonging to a user.
- `PATCH /api/parent-threads/update/title` – change a parent thread title.
- `DELETE /api/parent-threads/delete/{thread_id}` – remove a thread (and its children).
- `POST /api/child-threads/create` – create a continuation child thread.
- `PATCH /api/child-threads/update/title` – rename a child thread.
- `DELETE /api/child-threads/delete/{thread_id}` – delete a child thread.
- `POST /api/messages/add` – append a message to a parent/child thread.
- `POST /api/messages/get/{thread_id}` – fetch messages (use `?child_thread=true` if needed).
- `/add-users`, `/get-users`, `/delete-users/{user_id}` – simple user CRUD for testing.
The interactive docs show request/response models and examples.
Currently the API has no authentication layer; endpoints accept user IDs directly. Integrate OAuth/JWT or API key logic as needed before production.
| Variable | Description |
|---|---|
| GEMINI_API_KEY | Gemini LLM API key |
| ANTHROPIC_API_KEY | Claude API key |
| OPENAI_API_KEY | OpenAI API key |
| DEEPSEEK_API_KEY | Additional provider key (unused?) |
| DATABASE_URL | Async PG connection string |
| SUPABASE_URL | Supabase project URL |
| SUPABASE_API_KEY | Service role key for storage |
| BUCKET_NAME | Supabase storage bucket |
| LOG_LEVEL | Logging level (DEBUG/INFO/WARNING/ERROR) |
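`config.py` presumably maps the table above to a pydantic-settings class; a sketch follows. The field names mirror the table, but the class name, defaults, and `.env` handling are assumptions about the real module.

```python
# Sketch of a settings class mirroring the environment-variable table;
# the actual class in app/config.py may differ in names and defaults.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    gemini_api_key: str = ""
    anthropic_api_key: str = ""
    openai_api_key: str = ""
    deepseek_api_key: str = ""
    database_url: str = ""
    supabase_url: str = ""
    supabase_api_key: str = ""
    bucket_name: str = ""
    log_level: str = "INFO"
```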
- Code style: follows PEP 8 with type hints.
- Async SQLAlchemy sessions are passed via FastAPI dependencies.
- LLM prompts are defined in `app/api/prompts.py`; adjust them to change AI behaviour.
- `client_registry.py` centralizes LLM provider configuration.
- Logging uses rotating files under `logs/`.
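The rotating file-plus-console setup described above can be reproduced with the standard library alone; the file size, backup count, and format string below are illustrative values, not necessarily what the project's logger uses.

```python
# Illustrative rotating logger; the project's actual handler settings
# (file size, backup count, format) may differ.
import logging
import os
from logging.handlers import RotatingFileHandler


def build_logger(name: str = "fluxa", log_dir: str = "logs") -> logging.Logger:
    os.makedirs(log_dir, exist_ok=True)
    logger = logging.getLogger(name)
    logger.setLevel(os.environ.get("LOG_LEVEL", "INFO").upper())
    fmt = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")

    # Rotate after ~1 MB, keeping 5 backups (illustrative values).
    file_handler = RotatingFileHandler(
        os.path.join(log_dir, f"{name}.log"), maxBytes=1_000_000, backupCount=5
    )
    file_handler.setFormatter(fmt)

    console = logging.StreamHandler()
    console.setFormatter(fmt)

    if not logger.handlers:  # avoid duplicate handlers on re-import
        logger.addHandler(file_handler)
        logger.addHandler(console)
    return logger
```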
Consider adding:
- Authentication middleware
- Rate limiting
- Unit/integration tests
- Dockerfile / docker-compose for deployment (already present)
Add your license here (e.g., MIT) and contribution guidelines.