AURA: Autonomous Update Review Architecture for federated learning security.
AURA is a full-stack platform for inspecting, scoring, and auditing federated model updates before they are accepted into a training ecosystem. It combines a FastAPI backend, a React dashboard, federated learning simulation modules, SHAP-based explainability, anomaly detection, and ledger-style evidence tracking in one repository.
AURA is built around one practical question:
How do you trust a model update in federated learning when the raw data never leaves the hospital?
This repository answers that by combining three layers:
| Layer | Responsibility | Key Modules |
|---|---|---|
| Federated Learning | Simulate distributed training across hospital clients | fl_client/, fl_server/, scripts/simulate_fl.py |
| Security Analysis | Evaluate model behavior, generate SHAP fingerprints, detect anomalies | aura_backend/, detector/, sentinel/ |
| Audit and Operations | Persist evidence, expose APIs, surface results in a UI | aura_backend/, ledger_stub/, aura-frontend/ |
The current primary runtime is the unified backend in aura_backend/, which exposes the API consumed by the dashboard and orchestrates the end-to-end analysis pipeline.
- Accepts federated model submissions from hospital-style participants.
- Evaluates submitted models against a golden validation dataset.
- Produces explainable fingerprints using SHAP-derived feature importance signals.
- Scores behavior with anomaly-detection logic to approve or reject suspicious updates.
- Stores evidence artifacts and ledger records for auditability.
- Exposes analytics, dataset inspection, evidence review, and security telemetry through a frontend dashboard.
- Includes standalone research and legacy microservice modules for Sentinel and ledger simulation.
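The fingerprint-then-score idea above can be sketched with plain cosine similarity between feature-importance vectors — a deliberately simplified stand-in for the SHAP-derived signals; the function names and the 0.9 threshold are illustrative, not the backend's API:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature-importance vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def looks_anomalous(baseline: list[float], candidate: list[float],
                    threshold: float = 0.9) -> bool:
    """Flag an update whose behavioral fingerprint drifts from the baseline."""
    return cosine_similarity(baseline, candidate) < threshold

baseline = [0.40, 0.30, 0.20, 0.10]   # importance of 4 features on the golden set
benign   = [0.38, 0.32, 0.19, 0.11]   # small drift: accepted
poisoned = [0.05, 0.10, 0.15, 0.70]   # importance mass shifted to one feature
```

In the real pipeline the fingerprint feeds an Isolation Forest rather than a fixed threshold, but the intuition is the same: poisoned updates shift which features drive predictions.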
```
Hospital Client
      |
      v
FL Training / Generated Pipeline
      |
      v
AURA Backend (FastAPI)
      |
      +--> Model intake and hashing
      +--> Golden-set evaluation
      +--> SHAP-based fingerprint extraction
      +--> Isolation Forest anomaly scoring
      +--> Evidence report generation
      +--> SQLite ledger logging
      |
      v
React Dashboard
      |
      +--> Upload and monitoring
      +--> Attack evidence review
      +--> Ledger verification
      +--> Analytics and security views
```
aura_backend/ is the main application entry point and the center of gravity for the project.
It provides:
- Model submission and interrogation APIs
- Golden-set inspection and hospital dataset exploration
- Evidence retrieval by session or transaction
- Pipeline execution and generated-model download endpoints
- Ledger history, stats, and verification
- Analytics, security, and dashboard summary endpoints
- Optional API-key protection via `AURA_API_KEY`
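Model intake begins by fixing the identity of the submitted artifact. A minimal sketch of that hashing step, assuming models arrive as files on disk (the helper name is illustrative, not the backend's actual code):

```python
import hashlib
from pathlib import Path

def hash_model_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a model file and return its SHA-256 hex digest.

    Hashing the raw bytes gives a stable identity for the submitted
    update, so the same artifact can be referenced consistently from
    evidence reports and ledger entries later in the pipeline.
    """
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```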
Important note: the backend configuration currently enforces AURA_MODE=real. Older documentation that refers to a demo mode is outdated relative to the current code.
aura-frontend/ is a React + TypeScript + Vite application that acts as the operator console.
Implemented pages include:
- Dashboard
- Upload
- Sentinel Monitor
- Dataset Explorer
- Attack Evidence
- Reports
- Ledger
- Analytics
- Security
The FL runtime is implemented through Flower-based server and client modules:
- `fl_server/server.py` starts the coordinator
- `fl_client/client_1.py`, `client_2.py`, `client_3.py` simulate hospitals
- `fl_client/models/base_model.py` contains baseline models
- `scripts/simulate_fl.py` launches a multi-client simulation flow
Security analysis is spread across:
- `detector/` for Isolation Forest training and inference
- `sentinel/` for the standalone Sentinel API and SHAP utilities
- `aura_backend/real_handler.py` and `pipeline_runner.py` for the integrated backend pipeline
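A minimal sketch of the Isolation Forest scoring step, assuming fingerprints arrive as fixed-length vectors (the actual trainer lives in `detector/`; shapes and parameters here are illustrative):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Fingerprints from benign updates cluster tightly; train only on those.
benign_fingerprints = rng.normal(loc=0.0, scale=0.1, size=(200, 8))

detector = IsolationForest(n_estimators=100, random_state=0)
detector.fit(benign_fingerprints)

# A poisoned update produces a fingerprint far outside the benign cluster.
candidate = np.full((1, 8), 3.0)
verdict = detector.predict(candidate)            # -1 = anomaly, 1 = normal
score = detector.decision_function(candidate)    # lower = more anomalous
```

The accept/reject decision in the pipeline is then just a threshold on this score (or on the `-1`/`1` verdict).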
Auditability is handled in two ways:
- `aura_backend/ledger_manager.py` writes transaction data to SQLite for the unified backend
- `ledger_stub/` provides a standalone blockchain-style ledger simulation service for integration experiments and tests
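The ledger idea — append-only records whose integrity can be verified later — can be sketched as a hash chain over SQLite rows. The schema and function names below are illustrative, not `ledger_manager.py`'s actual schema:

```python
import hashlib
import json
import sqlite3

def open_ledger(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS ledger ("
        "  id INTEGER PRIMARY KEY AUTOINCREMENT,"
        "  payload TEXT NOT NULL,"
        "  prev_hash TEXT NOT NULL,"
        "  entry_hash TEXT NOT NULL)"
    )
    return conn

def append_entry(conn: sqlite3.Connection, payload: dict) -> str:
    """Append a record whose hash covers the previous entry's hash."""
    row = conn.execute(
        "SELECT entry_hash FROM ledger ORDER BY id DESC LIMIT 1"
    ).fetchone()
    prev_hash = row[0] if row else "GENESIS"
    body = json.dumps(payload, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    conn.execute(
        "INSERT INTO ledger (payload, prev_hash, entry_hash) VALUES (?, ?, ?)",
        (body, prev_hash, entry_hash),
    )
    conn.commit()
    return entry_hash

def verify_chain(conn: sqlite3.Connection) -> bool:
    """Recompute every hash; any tampered row breaks the chain."""
    prev_hash = "GENESIS"
    for payload, stored_prev, stored_hash in conn.execute(
        "SELECT payload, prev_hash, entry_hash FROM ledger ORDER BY id"
    ):
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if stored_prev != prev_hash or stored_hash != expected:
            return False
        prev_hash = stored_hash
    return True
```

Because each entry's hash covers its predecessor, editing any historical row invalidates every hash after it, which is what the `/ledger/verify` style of endpoint relies on.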
```
aura/
|-- aura_backend/       Main FastAPI backend and integrated analysis pipeline
|-- aura-frontend/      React dashboard and operator UI
|-- fl_client/          Flower clients and local training utilities
|-- fl_server/          Flower server and aggregation strategy
|-- detector/           Isolation Forest detector and trainer
|-- sentinel/           Standalone Sentinel API and SHAP tooling
|-- ledger_stub/        Standalone ledger simulation service
|-- attack_simulation/  Poisoning and attack-evaluation utilities
|-- scripts/            Run helpers for simulation, detector training, and services
|-- tests/              Unit and integration tests
|-- data/               Golden-set and training data assets
|-- docs/               Architecture notes
|-- fingerprints/       Fingerprint artifacts
|-- fl_logs/            FL update logs
|-- xai_reports/        Generated explainability outputs
```
| Area | Stack |
|---|---|
| Backend | FastAPI, Uvicorn, Pydantic, NumPy, Pandas |
| ML / FL | PyTorch, Flower |
| Explainability | SHAP |
| Detection | scikit-learn Isolation Forest |
| Frontend | React 19, TypeScript, Vite, Tailwind CSS, Axios, Recharts, Framer Motion |
| Persistence | SQLite, filesystem artifacts |
| Testing | Pytest |
- Python 3.10+
- Node.js 20+
- `pip` and `npm`
```powershell
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt
Copy-Item aura_backend\.env.example aura_backend\.env
```
```powershell
cd aura_backend
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```

Backend defaults worth knowing:

- `AURA_MODE=real` is required by the current backend config
- `AURA_CORS_ORIGINS` defaults to `http://localhost:5173`
- `AURA_API_KEY` is optional; if set, clients must send it as `x-api-key`
Open a second terminal:
```powershell
cd aura-frontend
npm install
Copy-Item .env.example .env
npm run dev
```

Frontend environment variables:

- `VITE_API_BASE_URL=http://localhost:8000`
- `VITE_AURA_API_KEY=` for optional backend auth
- `VITE_DASHBOARD_DEMO=false`
- Frontend: http://localhost:5173
- Backend API: http://localhost:8000
- Backend OpenAPI docs: http://localhost:8000/docs
Use this for the main product experience.
```powershell
# terminal 1
cd aura_backend
uvicorn main:app --reload --port 8000

# terminal 2
cd aura-frontend
npm run dev
```

To run the FL simulation:

```powershell
python -m scripts.simulate_fl
```

This starts the Flower server and three simulated clients.
To train the detector:

```powershell
python -m scripts.train_detector
```

To run a quick attack simulation:

```powershell
python -m scripts.run_attack_simulation --quick
```

or the full workflow:

```powershell
python -m scripts.run_attack_simulation
```

These are not required for the unified backend flow, but they remain useful for experiments and tests.
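To give a sense of what the poisoning utilities simulate, a basic label-flipping attack on a binary task can be sketched as follows (illustrative only, not `attack_simulation/`'s actual code):

```python
import random

def flip_labels(labels: list, fraction: float, seed: int = 0) -> list:
    """Flip a fraction of binary labels, simulating a poisoned client."""
    rng = random.Random(seed)
    poisoned = list(labels)
    n_flip = int(len(labels) * fraction)
    for idx in rng.sample(range(len(labels)), n_flip):
        poisoned[idx] = 1 - poisoned[idx]  # 0 <-> 1
    return poisoned

clean = [0, 1] * 50                        # 100 clean labels
poisoned = flip_labels(clean, fraction=0.3)
changed = sum(a != b for a, b in zip(clean, poisoned))
```

A client trained on `poisoned` rather than `clean` produces model updates whose golden-set behavior and fingerprints drift, which is what the detector is meant to catch.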
```powershell
python -m scripts.start_sentinel --reload
python -m ledger_stub.main
```

The unified backend exposes a broad surface area. The most important groups are:
| Area | Example Endpoints |
|---|---|
| System | /system/mode, /system/health, /status |
| Model Submission | /sentinel/submit_update, /sentinel/report/{submission_id}, /sentinel/reports |
| Dataset and Evidence | /dataset/inspection, /api/dataset/hospital/{hospital_id}, /api/evidence/{session_id} |
| Pipeline Runs | /sentinel/pipeline/start, /sentinel/pipeline/status/{run_id}, /sentinel/pipeline/download/{run_id} |
| Ledger | /ledger/transactions, /ledger/transaction/{tx_id}, /ledger/stats, /ledger/verify/{tx_id} |
| Operations | /analytics/daily, /analytics/attacks, /security/threats, /stats/summary |
The repository includes coverage across the main subsystems:
- FL core behavior
- detector integration
- SHAP integration
- backend setup
- ledger integration
- attack simulation
- real-model flows
Run the suite with:
```powershell
pytest tests -q
```

| Path | Purpose |
|---|---|
| `aura_backend/received_models/` | Uploaded and generated model files |
| `aura_backend/xai_reports/` | Backend evidence reports |
| `aura_backend/aura_ledger.db` | Unified backend ledger database |
| `fingerprints/` | Saved behavioral fingerprint vectors and metadata |
| `fl_logs/` | Local FL update logs |
| `attack_simulation/results/` | Attack evaluation reports and plots |
| `xai_reports/` | Top-level explainability artifacts used by other modules |
- The repo contains both the current integrated backend and older standalone services; prefer `aura_backend/` plus `aura-frontend/` for the main product path.
- Some architecture markdown files still describe removed demo-mode behavior; use the source code as the authority.
- The backend expects golden-set and detector artifacts, but parts of the pipeline include fallbacks when assets are missing.
- The project is research-oriented in places, so some modules are exploratory rather than hardened production services.
AURA is not just a federated learning demo and not just a dashboard. It is an end-to-end security workflow that connects:
- distributed training behavior
- model validation
- explainable evidence generation
- anomaly scoring
- audit logging
- operator-facing observability
That combination is the core value of the project.