Agentic AI – A FastAPI backend for AI agents with memory, RAG, MCP servers, scheduled jobs, and a skills system.
- Python: 3.11+
- Version: 0.2.0
- MockEHR + Wise adapter path - Wise-side MockEHR adapter and S18-compatible tool stubs were integrated for cross-repo interoperability, with S18 consuming MockEHR data through MCP flows.
- CBC schema hardening - Added Pydantic clinical schema validation and follow-up fixes for CBC unit normalization and stable fast/full CDSS payload handling.
- MCP routing/tool-calling robustness - Improved MCP routing, timeout handling, retry/error behavior, and agent alias support for more reliable tool execution.
- Supabase integration touchpoints - Added/expanded Supabase-backed auth verification and optional request/result logging paths used by S18 integration flows.
- Closed as implemented: #69, #127, #128
- Progress-updated and intentionally open: #67, #73, #129, #130, #156, #202, #205, #206
- Kept open for future/compliance stage: #155, #210, #211, and #183+
- Detailed matrix and evidence links: docs/governance/WISE_S18_issue_reconciliation_2026-03-17.md
- Canonical (Mar 2026 sync): docs/architecture/WISE_AI_CDSS_Architecture_2026-03.md
- Previous conceptual baseline: docs/architecture/WISE_AI_CDSS_Architecture.md in wise-ai/TSAI-EAG-Capstone
Set WISE_MOCKEHR_BASE_URL to the base URL of the wise-ai FastAPI app (Mock EHR). Use whatever host and port actually serve that API—for example http://localhost:8000 when wise-ai runs on your machine, or a Compose service URL such as http://backend:8000 when both stacks share a Docker network. The integration is the same whether wise-ai is started with uvicorn, Docker, or another wrapper, as long as S18 can reach the URL.
For Docker Compose flows that run wise-ai together with S18 (local builds, images from GHCR, or the full-stack compose file), see the wise-ai repo: deployment/docker/README.md — use the Build and run locally, Run from GitHub Container Registry, and Full stack (wise-ai + S18Share) subsections as needed.
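For the Compose case, the variable can be set directly on the S18 service. A minimal fragment might look like this (the service names `s18` and `backend` are placeholders; substitute the actual service names from your compose file):

```yaml
services:
  s18:
    environment:
      # "backend" must match the wise-ai service name on the shared network
      - WISE_MOCKEHR_BASE_URL=http://backend:8000
```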
Run the API:

```bash
uv run python api.py
```

Run targeted integration tests:

```bash
uv run pytest tests/test_mockehr_mcp.py tests/test_clinical_schema.py test_e2e.py
```

Optional Supabase readiness check:

```bash
python scripts/check_supabase_integration.py
```

- Agent loop – Multi-step planning and execution with retries and circuit breakers
- REMME (Remember Me) – User memory and preferences: extraction, staging, normalizer, belief updates, and hubs (Preferences, Operating Context, Soft Identity). See remme/ARCHITECTURE.md.
- RAG – Document indexing and search (FAISS + optional BM25), chunking, and ingestion
- MCP servers – RAG, browser, sandbox, and configurable external servers
- Scheduler – Cron-style jobs with skill routing (e.g. Market Analyst, System Monitor, Web Clipper) and inbox integration
- Skills – Pluggable skills with intent matching and run/success hooks
- Streaming – SSE endpoint for real-time events from the event bus
- Config – Centralized settings in `config/` (Ollama, models, RAG, agent, REMME)
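The retry and circuit-breaker behavior mentioned for the agent loop can be illustrated with a minimal sketch. This is not the repo's `core/` implementation, just the general pattern it names:

```python
import time


class CircuitBreaker:
    """Illustrative circuit breaker: after max_failures consecutive
    failures, calls are rejected until reset_after seconds pass."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at: float | None = None

    def allow(self) -> bool:
        """Return True if a call may proceed."""
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            # Half-open: let one attempt through after the cooldown.
            self.opened_at = None
            self.failures = 0
            return True
        return False

    def record(self, success: bool) -> None:
        """Record the outcome of a call; open the breaker on repeated failure."""
        if success:
            self.failures = 0
            return
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic()
```

The agent loop wraps each tool call in `allow()`/`record()` style checks so a repeatedly failing dependency stops consuming retries.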
Using uv:

```bash
uv sync
```

Or with pip:

```bash
pip install -e .
```

| Variable | Purpose |
|---|---|
| `GEMINI_API_KEY` | Google Gemini API key (used for agents, apps, and some MCP tools when configured) |
| `AUTH_ENABLED` | Enable backend bearer-token verification (`true`/`false`) |
| `S18_AUTH_ENABLED` | Docker-only override mapped to `AUTH_ENABLED` for this service (prevents cross-repo env collisions) |
| `SUPABASE_URL` | Supabase project URL (used for auth verification and optional logging) |
| `SUPABASE_ANON_KEY` | Supabase anon key (optional, for frontend/public client flows) |
| `SUPABASE_JWT_AUDIENCE` | Expected access-token `aud` claim for backend verification (default `authenticated`) |
| `SUPABASE_LOGGING_ENABLED` | Enable request/result persistence to Supabase tables (`true`/`false`) |
| `SUPABASE_SERVICE_ROLE_KEY` | Service role key for backend writes to Supabase tables |
Optional:
- Ollama – Default config points to http://127.0.0.1:11434. Run Ollama locally for embedding, semantic chunking, and optional agent overrides.
- Git – Required for GitHub explorer features; the API will warn at startup if Git is not found.
- WISE_MOCKEHR_BASE_URL – Base URL of the wise-ai Mock EHR API. When set, the EHRDataMinerAgent's mockehr MCP fetches `/patients/{id}` and `/patients/{id}/labs` from wise-ai for end-to-end integration. Examples: http://localhost:8000 (wise-ai on host), http://backend:8000 (typical Compose service name on the shared network). Match the URL to how you actually run wise-ai, not only to Docker.
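The two fetch paths above can be derived from the base URL with a small helper (the function name is illustrative, not part of the repo):

```python
def mockehr_urls(base_url: str, patient_id: str) -> dict[str, str]:
    """Build the Mock EHR URLs the mockehr MCP fetches for one patient,
    tolerating a trailing slash on the configured base URL."""
    base = base_url.rstrip("/")
    return {
        "patient": f"{base}/patients/{patient_id}",
        "labs": f"{base}/patients/{patient_id}/labs",
    }
```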
- Frontend/S18 performs login with Supabase Auth and sends `Authorization: Bearer <access_token>`.
- Backend verifies the JWT on protected endpoints using the Supabase JWKS endpoint (`/auth/v1/.well-known/jwks.json`) with issuer and audience checks (no backend-managed Supabase session).
- If S18 is called through another backend/proxy, it also accepts `X-Forwarded-Authorization: Bearer <access_token>`.
- Optional persistence can write to two Supabase tables: `ehr_request_log` (inbound request/audit trail) and `ehr_clinical_result` (normalized RAC/CBC/ABDM/FHIR-aligned outcome).
- Reference SQL schema: docs/supabase_ehr_schema.sql
- Quick environment/table readiness check: `python scripts/check_supabase_integration.py`

Run the API:

```bash
uv run python api.py
```

Or:
```bash
uv run uvicorn api:app --host 0.0.0.0 --port 8000 --reload
```

- API: http://localhost:8000
- Health: http://localhost:8000/health
- Docs: http://localhost:8000/docs
The app expects a frontend at http://localhost:5173 (CORS is configured for it).
```bash
cp .env.example .env
```

PowerShell:

```powershell
Copy-Item .env.example .env
```

Set `GEMINI_API_KEY` in `.env`.
Set in `.env`:

```bash
OLLAMA_BASE_URL=http://host.docker.internal:11434
```

Then:

```bash
docker compose up --build -d api
```

Keep in `.env`:

```bash
OLLAMA_BASE_URL=http://ollama:11434
```

Then:

```bash
docker compose up --build -d
```

- API: http://localhost:8001
- Health: http://localhost:8001/health
- Docs: http://localhost:8001/docs
- Prometheus scrape: http://localhost:8001/metrics/prometheus
Persistent state is stored on host-mounted folders:
- `data/`
- `memory/`
- `config/`
- `mcp_servers/`
- `faiss_index/`
Monitoring assets are in monitoring/ and run as an additive stack:
- Prometheus config/rules: monitoring/prometheus/
- Alertmanager config: monitoring/alertmanager/
- Grafana provisioning/dashboard: monitoring/grafana/
```bash
docker compose up --build -d api
docker compose -f monitoring/docker-compose.monitoring.yml up -d
```

If you want local Ollama in Docker too:

```bash
docker compose up --build -d
docker compose -f monitoring/docker-compose.monitoring.yml up -d
```

- Prometheus targets page: http://localhost:9090/targets
- Alertmanager: http://localhost:9093
- Grafana: http://localhost:3000 (admin/admin)
Expected key metric families:
- `wiseai_api_requests_total`
- `wiseai_api_requests_success_total`
- `wiseai_api_request_latency_ms`
- `wiseai_orchestrator_runs_total`
- `wiseai_orchestrator_run_latency_ms`
- `wiseai_rag_requests_total`
- `wiseai_mcp_tool_calls_total`
- `wiseai_memory_operations_total`
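A plain-Python sketch of how the request-level families above accumulate per call; the real service exports these through its Prometheus endpoint, not like this, and latency is simplified to a running sum here:

```python
from collections import Counter

# In-memory stand-in for the counter families listed above.
metrics: Counter = Counter()


def record_request(success: bool, latency_ms: float) -> None:
    """Bump the request-level metric families for one handled API call."""
    metrics["wiseai_api_requests_total"] += 1
    if success:
        metrics["wiseai_api_requests_success_total"] += 1
    # Real latency metrics would use a histogram, not a sum.
    metrics["wiseai_api_request_latency_ms"] += latency_ms


record_request(True, 12.5)
record_request(False, 40.0)
```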
If local ports conflict, override host mappings in monitoring/docker-compose.monitoring.yml:
- Prometheus: 9090
- Alertmanager: 9093
- Grafana: 3000
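For example, moving Grafana off a busy port 3000 might look like this in the monitoring compose file (the `grafana` service name is an assumption; check the actual file):

```yaml
services:
  grafana:
    ports:
      - "3001:3000"  # host 3001 -> container 3000; update bookmarks accordingly
```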
This repo now includes a dedicated Docker build target for CI:
```bash
docker build --target ci -t s18share-ci .
docker run --rm s18share-ci
```

The CI target uses pinned dependencies from requirements-ci.txt (exported from uv.lock) and runs a quick compile sanity check.
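A multi-stage target of this shape could be sketched as follows; this is an illustration of the pattern (pinned CI deps plus a `compileall` smoke test), not the repo's actual Dockerfile:

```dockerfile
# Hypothetical "ci" stage: pinned deps, quick compile sanity check.
FROM python:3.11-slim AS ci
WORKDIR /app
COPY requirements-ci.txt .
RUN pip install --no-cache-dir -r requirements-ci.txt
COPY . .
CMD ["python", "-m", "compileall", "-q", "."]
```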
| Path | Description |
|---|---|
| `api.py` | FastAPI app, lifespan, CORS, router includes |
| `core/` | Agent loop, scheduler, event bus, circuit breaker, persistence, model manager, skills |
| `remme/` | Memory and preferences pipeline (extractor, store, hubs, normalizer) |
| `routers/` | API routes: RAG, remme, agent, chat, runs, stream, cron, skills, inbox, etc. |
| `mcp_servers/` | MCP server implementations (RAG, browser, sandbox, multi_mcp) |
| `config/` | Settings loader, settings.json, settings.defaults.json, agent config |
| `data/` | Inbox DB, system jobs/snapshot, RAG documents |
| `memory/` | Execution context, remme index, debug logs |
| `agents/` | Agent runner and config-driven agents |
| `scripts/` | Utility and test scripts |
| `tests/` | Verification and integration-style tests |
- Main settings: `config/settings.json` (created from `config/settings.defaults.json` if missing).
- Agent prompts and MCP: `config/agent_config.yaml`.
- REMME extraction prompt and options: under `remme` in settings.
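The defaults-fallback behavior for settings.json can be sketched like this (illustrative only; the repo's settings loader in `config/` may work differently):

```python
import json
import shutil
import tempfile
from pathlib import Path


def ensure_settings(config_dir: Path) -> dict:
    """Create settings.json from settings.defaults.json when missing,
    then load and return it."""
    settings = config_dir / "settings.json"
    defaults = config_dir / "settings.defaults.json"
    if not settings.exists():
        shutil.copyfile(defaults, settings)
    return json.loads(settings.read_text())


# Demo against a throwaway directory standing in for config/
demo_dir = Path(tempfile.mkdtemp())
(demo_dir / "settings.defaults.json").write_text('{"model": "default"}')
cfg = ensure_settings(demo_dir)
```

Because the copy happens only when settings.json is absent, local edits survive restarts while fresh checkouts still boot with sane defaults.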
See repository or project metadata for license information.