A clinician-facing, AI-augmented platform for structured mental health assessments
SynapseCore is a clinician‑facing, AI‑augmented workbench for psychiatry. It addresses persistent problems in clinical practice: documentation burden, fragmented tooling across risk/safety workflows, and variable adoption of Measurement‑Based Care (MBC). The application consolidates validated symptom scales, structured clinical flows (e.g., safety reviews, capacity, agitation), and a session timer into a single workspace that favors clarity, auditability, and reproducibility.
From a psychiatric perspective, SynapseCore is designed as a "thinking scaffold" for the clinician. It foregrounds symptom quantification (PHQ‑9, GAD‑7, PCL‑5, Y‑BOCS, AUDIT‑C), explicit risk formulation, and capacity reasoning, while deliberately avoiding black‑box decision support. It is suitable for teaching trainees how to structure assessments, for consultation‑liaison work on busy medical wards, and for longitudinal follow‑up in outpatient or community clinics.
Technically, SynapseCore is a React/TypeScript single‑page application (Vite build) with a modular AI orchestration layer that can target multiple model providers (OpenAI, Anthropic, Gemini, and local Ollama). Sampling parameters are normalized across providers; streaming responses and telemetry are first‑class; and safety guardrails (PII‑like and secret redaction, risky command detection) are included. Observability hooks expose spans and metrics through OpenTelemetry‑compatible interfaces for methodical evaluation in research or quality‑improvement contexts.
Typical usage contexts include outpatient follow‑up, consultation‑liaison (CL) psychiatry, academic seminars, OSCE training, and quality‑improvement projects. In each scenario, SynapseCore aims to standardize inputs (MBC), structure assessments (flows), and accelerate documentation while explicitly keeping the clinician in the loop.
IMPORTANT — NOT A MEDICAL DEVICE: SynapseCore does not diagnose, predict outcomes, or make autonomous treatment decisions. It does not implement triage algorithms, risk calculators, or capacity determinations. Use under the supervision of licensed clinicians and according to local policies. The clinician remains responsible for the clinical record, risk formulation, and disposition.
- Node.js: v20.x LTS (or later compatible with Vite 6).
- Package manager: npm (bundled with Node).
- Environment: development use is intended on a local machine or secure institutional workstation.
Minimal environment variables (development examples):

```sh
set VITE_OPENAI_API_KEY=sk-...                    # if using OpenAI
set VITE_ANTHROPIC_API_KEY=...                    # if using Anthropic (optional)
set VITE_GEMINI_API_KEY=...                       # if using Gemini (optional)
set VITE_OLLAMA_BASE_URL=http://localhost:11434   # if using Ollama (optional)
```

In production, these should be set via your platform’s secret manager or environment configuration, not committed to source control.
```sh
npm run dev
```

`npm run dev` checks dependencies and installs them automatically on first run when needed.
By default, Vite serves the app on http://localhost:5173 (or http://localhost:3000 if you use the dev:safe script). The main SPA entry loads at the root path (/).
- Enter PHQ‑9 (and optionally GAD‑7):
  - Open the MBC/scale panel and enter responses.
  - Verify that totals, bands, and any item‑9 flags appear in the autoscore HTML.
- Complete the safety flow:
  - Open the structured safety flow (ideation, plan, means, protective factors, observation).
  - Ensure each section is completed with neutral, clinician‑authored text.
- Request an AI‑assisted summary (optional):
  - Open the AI panel and choose a provider/model configured for your environment.
  - Send a prompt that includes the safety flow outcome and a short summary of the PHQ‑9 results (e.g., “Generate a concise safety‑focused summary; do not make recommendations.”).
- Edit and export:
  - Review the streamed draft carefully; edit, shorten, or discard sections as needed.
  - Use the export/print panel to generate a note snippet suitable for pasting into your clinical record system (subject to local policy).
Throughout this workflow, all clinical decisions remain with the clinician; AI is used only as a drafting aid.
- Multi‑model AI orchestration (grounded in code): The provider registry and capabilities are defined in `src/ai/modelRegistry.ts` (e.g., static model lists, `getCaps`, and dynamic listing via provider clients in `src/ai/providerClients/*`). Provider‑specific request builders live in `src/ai/samplingMapper.ts` and encode differences across OpenAI (`/chat/completions`), Anthropic (`/messages`), Gemini (`:generateContent`), and Ollama (`/api/chat`). Normalization and safety metadata accompany each request (e.g., `jsonModeApplied`, `topPSupported`).
- Deterministic MBC calculators with autoscore HTML: `src/features/psychiatry/mbc/calculators.ts` implements PHQ‑9, GAD‑7, PCL‑5, Y‑BOCS, and AUDIT‑C with strict clamping and explicit severity bands. Functions return totals, band labels, and flags (e.g., PHQ‑9 item 9 > 0 → “discuss safety”). `renderAutoscoreHTML(...)` generates a compact HTML snippet that surfaces items, totals, severity, and anchors suitable for printing or export. Clinically, these instruments approximate latent constructs:
- PHQ‑9 (depressive symptom burden): frequency of core depressive symptoms over the last 2 weeks, focusing on anhedonia, mood, sleep, energy, appetite, self‑worth, concentration, psychomotor change, and thoughts of death.
- GAD‑7 (generalized anxiety): excessive worry, tension, restlessness, irritability, and somatic anxiety symptoms over 2 weeks.
- PCL‑5 (PTSD‑related symptoms): intrusions, avoidance, negative alterations in cognition and mood, and hyperarousal over the last month.
- Y‑BOCS (obsessive‑compulsive severity): time, interference, distress, resistance, and control for obsessions and compulsions.
- AUDIT‑C (alcohol use patterns): frequency and intensity of drinking and heavy episodic use.
Typical neutral clinical questions these tools support include:
- Is symptom burden improving or worsening over time?
- Do current scores and change trajectories support considering a treatment adjustment?
- Is there a signal suggesting trauma‑related symptoms, obsessive‑compulsive severity, or hazardous alcohol use that merits discussion?
- Structured clinical flows (builders → outcome prose): Flow builders under `src/centerpanel/Flows/builders/*` (e.g., `safetyOutcome.ts`, `capacityOutcome.ts`, `agitationOutcome.ts`) convert structured selections into concise, safety‑first narrative suitable for handoffs and notes. Builders centralize wording and include timestamping (see `src/centerpanel/Flows/time`). These flows are intentionally conservative: they support, but never replace, the clinician’s own risk formulation and capacity assessment. For example, the safety flow ensures the clinician explicitly records ideation, intent, plan, access to means, protective factors, and an observation/risk management plan in neutral language.
- Session timer and session analytics (TF.js): The timer subsystem (`src/centerpanel/timerHooks/*`) provides a metronome, calendar integration, and persistence. `useSessionML.ts` uses TensorFlow.js to learn on‑device patterns from local session histories (stored in localStorage/IndexedDB) and suggest the next segment type/duration. By default, no PHI leaves the browser.
- Developer‑facing IDE surface: An embedded IDE (`src/components/ide/*`) and file explorer (`src/components/file-explorer/*`) support content and flow authoring. `src/services/editorBridge.ts` exposes `insertIntoActive`, `openNewTab`, and `replaceSelection` with language inference for common fenced blocks. This is particularly helpful for prototyping flows or note templates alongside clinical modules.
- Observability and telemetry: `src/observability/otel.ts` provides tracer/meter shims that can be bound at runtime via `window.__otel_setup(AppConfig)`. `src/observability/spans.ts` exports `withSpan(...)` to wrap async work, and `src/observability/aiRouteTelemetry.ts` debounces and emits AI route changes (e.g., provider/model switches), optionally surfacing in‑app toasts.
- Guardrails and redaction: `src/services/ai/guardrails/redact.ts` applies pattern‑based redaction across secrets, PII‑like strings, risky commands, and potential exfiltration URLs. Guardrail warnings are emitted and can fail CI in hardened pipelines via `npm run guardrails:ci`.
- State management and persistence: Global state is managed with Zustand (see `package.json` and `src/store`/`src/stores`). Timers, drafts, and histories use localStorage/IndexedDB (e.g., `useSessionML.ts`). Canary and feature flags read from URL params, localStorage, and `import.meta.env` (`src/config/flags.ts`).
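To illustrate the guardrails item above, here is a minimal, pattern‑based redaction sketch. The rule names and patterns are illustrative only; the production rule set in `src/services/ai/guardrails/redact.ts` is broader and authoritative.

```typescript
// Minimal pattern-based redaction sketch; rules here are illustrative,
// not the actual rule set used by the app.
const REDACTION_RULES: Array<{ name: string; pattern: RegExp }> = [
  { name: "api-key", pattern: /\bsk-[A-Za-z0-9]{16,}\b/g },   // secret-like tokens
  { name: "email", pattern: /\b[\w.+-]+@[\w-]+\.[\w.]+\b/g }, // PII-like strings
  { name: "risky-cmd", pattern: /\brm\s+-rf\s+\S+/g },        // risky shell commands
];

function redact(text: string): { clean: string; warnings: string[] } {
  const warnings: string[] = [];
  let clean = text;
  for (const rule of REDACTION_RULES) {
    // match() with a /g pattern finds all occurrences without mutating state
    if (clean.match(rule.pattern)) {
      warnings.push(rule.name);
      clean = clean.replace(rule.pattern, `[REDACTED:${rule.name}]`);
    }
  }
  return { clean, warnings };
}
```

In a hardened pipeline, the returned `warnings` array is what a CI step (such as `npm run guardrails:ci`) would inspect to fail the build.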
- UI layer: clinical flows (`src/centerpanel/Flows/*`), timer (`src/centerpanel/timerHooks/*`), MBC outputs, and IDE/file explorer (`src/components/ide/*`, `src/components/file-explorer/*`).
- AI orchestration: registry, request builders, adapters, parameter normalization (`src/ai/*`, `src/services/ai/*`, `src/hooks/useAiStreaming.ts`).
- Clinical logic: calculators and flow builders (`src/features/psychiatry/*`, `src/centerpanel/Flows/builders/*`).
- Observability: tracer/meter initialization and span helpers (`src/observability/*`).
- Configuration and theming: app config, flags, tokens (`src/config/*`, `src/theme`, `src/ui/theme`).
```mermaid
graph LR
  subgraph UI["UI Layer - React"]
    IDE["IDE and File Explorer"]
    Flows["Structured Flows"]
    Timer["Session Timer"]
    MBC["Autoscore Reports"]
    Chat["AI Panels"]
  end
  subgraph AI["AI Orchestration"]
    Reg["Model Registry"]
    Map["Sampling Mapper"]
    Norm["Param Normalizer"]
    Adapt["Provider Adapters"]
  end
  subgraph Obs["Observability"]
    Otel["OpenTelemetry"]
    Spans["Spans"]
    Route["AI Route Telemetry"]
  end
  UI --> AI
  AI --> UI
  UI --> Obs
  AI --> Obs
```
```mermaid
sequenceDiagram
  autonumber
  participant Clin as Clinician
  participant View as Flow UI
  participant Builder as safetyOutcome.ts
  participant Hook as useAiStreaming
  participant Adapt as Adapter
  participant Prov as Model Provider
  participant Tele as Observability
  Clin->>View: Select answers in safety flow
  View->>Builder: buildSafetyOutcome(form, timestamp)
  Builder-->>View: Outcome prose (safety-focused)
  View->>Hook: startStreaming with provider & model
  Hook->>Adapt: stream with messages & options
  Adapt->>Prov: HTTP request (SSE)
  Prov-->>Adapt: token deltas + usage
  Adapt-->>Hook: onEvent(delta/usage/done)
  Hook-->>View: onDelta(text) and onComplete(full)
  View-->>Clin: Live AI narrative (clinician edits)
```
The core layout can be understood as three cooperating regions: a left rail for navigation and MBC, a central workspace for flows and notes, and right/secondary panels for AI, export, and utilities.
```mermaid
flowchart LR
  subgraph Left["Left Rail"]
    L1["Patient and Session<br/>Selector"]
    L2["MBC Scales<br/>PHQ-9, GAD-7"]
    L3["Structured Flows<br/>Safety, Capacity"]
  end
  subgraph Center["Center Panel"]
    C1["Session Timer<br/>and Segments"]
    C2["Outcome Text Editor<br/>Flow Outputs"]
  end
  subgraph Right["Right Panels"]
    R1["AI Assistant<br/>Multi-Provider"]
    R2["Export and<br/>Print Preview"]
    R3["Status Bar and<br/>Telemetry"]
  end
  L2 --> C2
  L3 --> C2
  C2 --> R1
  C2 --> R2
```
This diagram is intentionally abstract; concrete component trees live under src/centerpanel, src/components/ai, src/components/ide, and src/components/terminal.
```mermaid
graph TD
  A["MBC Calculators"] -->|exports scores| UI["UI Components"]
  B["Flow Builders"] -->|build text| UI
  C["AI Registry"] -->|build requests| Hook["useAiStreaming"]
  Hook --> Ad["Adapters"]
  Ad --> Prov["Providers<br/>OpenAI, Anthropic, Gemini"]
  C --> Obs["Observability<br/>OpenTelemetry"]
  UI --> Obs
  C --> Cfg["Config<br/>env and flags"]
  UI --> Theme["Theme and Styling"]
```
Most runtime behaviour is controlled via a small number of environment variables and config objects in src/config/env.ts and src/config/flags.ts. This section summarises the main toggles.
CONFIG is derived from VITE_PROFILE (dev | staging | prod) and includes tracing/metrics controls:
| Field | Source | What it controls | Typical dev value | Typical prod value |
|---|---|---|---|---|
| `CONFIG.profile` | `VITE_PROFILE` | Global profile; affects defaults | `dev` | `prod` |
| `CONFIG.flags.enableTracing` | hard‑coded `true` | Whether OpenTelemetry spans are produced in the browser | `true` (OK; send to local collector) | `true`, but route to institutional collector |
| `CONFIG.flags.enableMetrics` | hard‑coded `true` | Whether basic metrics are recorded | `true` | `true`, subject to governance |
| `CONFIG.otel.otlpEndpoint` | `VITE_OTLP_HTTP` | OTLP HTTP endpoint for spans/metrics | `http://localhost:4318/v1/traces` | Institutional OTEL/collector endpoint |
| `CONFIG.otel.samplingRatio` | derived from profile | Fraction of spans sampled | `1.0` | e.g. `0.15` |
The flags object combines environment variables, query‑string parameters, and localStorage keys:
The flags object combines environment variables, query‑string parameters, and localStorage keys:

| Flag | Env / storage | Meaning | Recommended dev | Recommended prod |
|---|---|---|---|---|
| `flags.aiTrace` | `VITE_AI_TRACE`, `?trace=1`, `localStorage['synapse.flags.aiTrace']` | Extra AI tracing / debugging output | Often `true` for debugging | Usually `false` except on internal test tenants |
| `flags.a11yEnabled` | `?a11y=1`, `localStorage['synapse.flags.a11y']` | Enables additional accessibility affordances in the UI | Encourage `true` | Encourage `true` |
| `flags.simpleStream` | `VITE_SIMPLE_STREAM`, query/localStorage | Chooses simpler streaming path for AI outputs | `true` by default | `true` unless advanced streaming is needed |
| `flags.synapseCoreAI` | `VITE_SYN_CORE_AI`, query/localStorage | Master switch for SynapseCore AI panel | `true` | `true` or institution‑specific |
| `flags.consultonAI` | `VITE_CONSULTON_AI` or `VITE_FEATURE_CONSULTON_AI` | Enables experimental Consulton AI flows | `true` to test | `false` or tightly gated canary |
| `flags.consultonAICanaryPercent` | `VITE_CONSULTON_CANARY_PERCENT` | Percentage of clients included in canary rollout | e.g. `50` | Low values (e.g. `5`–`10`) |
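The layering of these sources can be illustrated with a hedged sketch. The precedence shown (query string over localStorage over build‑time env) is an assumption for illustration; the authoritative logic lives in `src/config/flags.ts` and may differ.

```typescript
// Illustrative flag resolution: query string overrides localStorage, which
// overrides a build-time env default. Actual precedence may differ in flags.ts.
type FlagSources = {
  query?: string | null;   // e.g. value parsed from ?trace=1
  storage?: string | null; // e.g. localStorage['synapse.flags.aiTrace']
  env?: string;            // e.g. import.meta.env.VITE_AI_TRACE
};

function resolveFlag(sources: FlagSources, fallback = false): boolean {
  const truthy = (v: string) => v === "1" || v.toLowerCase() === "true";
  if (sources.query != null) return truthy(sources.query);
  if (sources.storage != null) return truthy(sources.storage);
  if (sources.env !== undefined) return truthy(sources.env);
  return fallback;
}

// Example: ?trace=1 wins over a stored "false".
resolveFlag({ query: "1", storage: "false", env: "false" }); // → true
```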
LLM provider credentials are injected via import.meta.env (e.g., VITE_OPENAI_API_KEY, VITE_ANTHROPIC_API_KEY, VITE_GEMINI_API_KEY, VITE_OLLAMA_BASE_URL) and read in the AI adapter layer. They must never be hard‑coded in the repository.
The following checklist is intended for teams deploying SynapseCore‑like tooling in institutional environments. It does not replace institutional security reviews.
- Environment / secrets management
  - Store `VITE_*_API_KEY` values and OTLP endpoints in your platform’s secret manager.
  - Ensure build pipelines do not echo secrets in logs.
- Transport security
  - Serve the app only over HTTPS in production.
  - Ensure all AI provider calls are made over HTTPS and pinned to official endpoints.
- Telemetry routing and sampling
  - Set `VITE_OTLP_HTTP` to a trusted, institution‑controlled collector.
  - Tune `CONFIG.otel.samplingRatio` to balance observability and data minimisation.
  - Verify that no PHI is placed into span attributes, logs, or metric labels.
- Guardrails and redaction
  - Treat `src/services/ai/guardrails/redact.ts` as mandatory; do not bypass it in production.
  - Run `npm run guardrails:ci` in CI and fail builds when redaction warnings are emitted for representative samples.
- Network and perimeter controls
  - Place the app behind an institutional reverse‑proxy or gateway that terminates TLS and enforces authentication/authorisation.
  - Consider a WAF / API gateway with rate‑limiting and egress controls for AI provider endpoints.
- Data residency and logging
  - Confirm where AI providers store or process data; configure “no log” / “zero retention” modes when available.
  - Ensure local logs/metrics do not contain identifiers and are retained only as long as necessary for QI/monitoring.
- Policy and governance alignment
  - Obtain sign‑off from clinical governance, digital safety, and (where applicable) IRB/QI boards before using with real patient data.
  - Document which AI features are enabled for which user groups (e.g., trainees vs. consultants) and under what supervision.
These points are conceptual guidance and must be adapted to local policies, threat models, and regulatory requirements.
This section summarizes the clinical concepts embedded in the codebase and clarifies how they are used as scaffolding rather than as automated decision makers.
MBC refers to the routine use of validated instruments to measure symptom burden and treatment response over time. In SynapseCore, calculators are deterministic and transparent (src/features/psychiatry/mbc/calculators.ts). Each function:
- Coerces and clamps item inputs to valid ranges.
- Computes a total score.
- Maps that score to a severity band.
- Emits structured flags that prompt clinician review (e.g., PHQ‑9 item 9 > 0 → consider discussing safety and supports).
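The steps above can be sketched in a hedged way. Function name, band labels, and result shape here are illustrative; the actual implementation in `calculators.ts` is authoritative, though the PHQ‑9 bands shown follow the published instrument.

```typescript
// Sketch of a PHQ-9-style deterministic scorer; names and result shape are
// illustrative and may differ from the actual calculators.ts implementation.
interface Phq9Result {
  total: number;
  band: string;
  flags: { item9Positive: boolean };
}

function phq9ScoreSketch(items: number[]): Phq9Result {
  // 1. Coerce and clamp each of the 9 items to the valid 0–3 range.
  const clamped = items.slice(0, 9).map((v) =>
    Math.min(3, Math.max(0, Math.round(Number(v) || 0)))
  );
  // 2. Total is a plain sum (unit weights).
  const total = clamped.reduce((a, b) => a + b, 0);
  // 3. Standard published PHQ-9 severity bands.
  const band =
    total >= 20 ? "severe" :
    total >= 15 ? "moderately severe" :
    total >= 10 ? "moderate" :
    total >= 5 ? "mild" : "minimal";
  // 4. Item 9 > 0 prompts the clinician to discuss safety; nothing is automated.
  return { total, band, flags: { item9Positive: clamped[8] > 0 } };
}
```

Determinism is the point: the same item vector always yields the same total, band, and flags, with no hidden model in the loop.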
Brief conceptual summaries of the included scales:
| Scale | Construct (conceptual) | Example neutral question it supports |
|---|---|---|
| PHQ‑9 | Depressive symptom burden over 2 weeks | Is the patient’s depressive symptom burden stable, improving, or worsening relative to prior visits? |
| GAD‑7 | Generalized anxiety severity | Are anxiety symptoms remaining functionally impairing despite current treatment? |
| PCL‑5 | Trauma‑related symptoms (PTSD clusters) | Are trauma‑related intrusions/avoidance/hyperarousal persisting at a level that warrants targeted intervention? |
| Y‑BOCS | Obsessive‑compulsive severity | Has the severity of obsessions/compulsions changed in a way that may affect functioning or risk? |
| AUDIT‑C | Hazardous/harmful alcohol use | Is there a pattern of alcohol use that merits discussion of safety, health, or support options? |
All scoring and banding are deterministic; there is no machine learning in the calculators themselves.
To add a new validated scale in a way that is consistent with existing calculators:
- Add a calculator in `calculators.ts`:
  - Define a typed function `newScaleScore(items: number[]): { total: number; band: string; flags: {...} }`.
  - Clamp item responses to the instrument’s allowed range and compute the total using a transparent formula.
- Define severity bands and anchors:
  - Encode band thresholds (e.g., `none`, `mild`, `moderate`, `severe`) and document anchor text based on the published instrument.
  - Return both the numeric band index and a human‑readable label.
- Wire into the MBC UI:
  - Add the new scale to the MBC configuration (item labels, response options, scoring function) so it appears in the scale picker.
  - Ensure autoscore HTML includes items, total, band, and any relevant flags.
- Update documentation tables:
  - Add a new row to the MBC tables in this `README` describing the construct, score range, and severity anchors.
- Integrate with export and flows (optional):
  - Where clinically appropriate, make the new scale’s summary available to flows and AI prompts (e.g., brief insert into safety or longitudinal summary prompts).
For any new instrument, clinical content and anchors must be derived from the original validation literature or institutional guidance; the repository should remain transparent and deterministic about how scores are computed and interpreted.
For clarity, we can index each instrument with its own response vector and scoring function. Let
- $x^{(phq)} = (x^{(phq)}_1, \dots, x^{(phq)}_9)$ with $x^{(phq)}_i \in \{0,1,2,3\}$,
- $x^{(gad)} = (x^{(gad)}_1, \dots, x^{(gad)}_7)$ with $x^{(gad)}_i \in \{0,1,2,3\}$,
- $x^{(pcl)} = (x^{(pcl)}_1, \dots, x^{(pcl)}_{20})$ with $x^{(pcl)}_i \in \{0,1,2,3,4\}$,
- $x^{(ybocs)} = (x^{(ybocs)}_1, \dots, x^{(ybocs)}_{10})$ with $x^{(ybocs)}_i \in \{0,1,2,3,4\}$,
- $x^{(audit)} = (x^{(audit)}_1, x^{(audit)}_2, x^{(audit)}_3)$ with $x^{(audit)}_i$ in the AUDIT‑C item ranges.
Each measure $m$ maps its item vector to a total score via a weighted sum,

$$s_m\big(x^{(m)}\big) = \sum_i w^{(m)}_i\, x^{(m)}_i,$$

where `calculators.ts` uses fixed unit weights $w^{(m)}_i = 1$ for every instrument included here.
Severity bands for each measure are intervals in the one‑dimensional score space of $s_m$; for PHQ‑9, for example, the standard partition is $[0,4]$ (minimal), $[5,9]$ (mild), $[10,14]$ (moderate), $[15,19]$ (moderately severe), and $[20,27]$ (severe). The calculator returns both the numeric score $s_m(x^{(m)})$ and the label of the band interval that contains it.
Red‑flag conditions are Boolean‑valued indicator functions over the item vectors. For example, for PHQ‑9 item 9:

$$\text{flag}_{\text{item9}}\big(x^{(phq)}\big) = \mathbf{1}\big[x^{(phq)}_9 > 0\big],$$

and for an illustrative high‑severity Y‑BOCS band (e.g., total $\geq 24$):

$$\text{flag}_{\text{severe}}\big(x^{(ybocs)}\big) = \mathbf{1}\Big[\sum_i x^{(ybocs)}_i \geq 24\Big].$$
In all cases, these indicator functions are implemented as flags in the result objects (e.g., hasItem9Flag, hasHeavyUseFlag) and are used only to suggest topics for discussion, not to drive automated decisions.
From a psychometric perspective, it is often useful to distinguish a (hypothetical) latent true score $s^{true}_m$ from the observed score,

$$s^{obs}_m = s^{true}_m + \varepsilon_m,$$

where $\varepsilon_m$ is measurement error.
When clinicians interpret change across visits, this conceptual decomposition reminds us that small fluctuations may reflect noise, whereas sustained, clinically coherent changes across multiple timepoints are more likely to reflect genuine change in the underlying construct.
When instruments are administered repeatedly, each measure yields a time series of observed scores

$$s^{obs}_m(t_1),\; s^{obs}_m(t_2),\; \dots,\; s^{obs}_m(t_k),$$

with $t_1 < t_2 < \dots < t_k$ the visit times. Clinically relevant features of such a series include:
- Level: the typical magnitude of $s^{obs}_m(t)$ over a window;
- Trend: whether scores are increasing, decreasing, or stable (e.g., a slope estimate from a simple linear fit);
- Variability: how much scores fluctuate around a trend line.
SynapseCore does not compute statistical trend tests or slopes; it simply provides transparent scores over time that can be plotted or inspected. Any formal time‑series modelling (e.g., estimating a slope via a least‑squares fit) is deliberately left to external tools and institutional analytics.
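For illustration only (SynapseCore itself does not do this), a least‑squares slope over exported visit scores could be computed externally as:

```typescript
// Illustrative only: SynapseCore does not fit trends; this is what an
// external analysis might do with exported scores.
// Ordinary least-squares slope of scores s(t) against visit times t.
function olsSlope(times: number[], scores: number[]): number {
  const n = times.length;
  if (n < 2 || scores.length !== n) throw new Error("need >= 2 paired points");
  const meanT = times.reduce((a, b) => a + b, 0) / n;
  const meanS = scores.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (times[i] - meanT) * (scores[i] - meanS);
    den += (times[i] - meanT) ** 2;
  }
  return num / den; // score change per unit time
}

// Example: PHQ-9 totals falling by 2 points per visit.
console.log(olsSlope([0, 1, 2, 3], [18, 16, 14, 12])); // → -2
```

Even here, a fitted slope is a descriptive summary, not a decision rule; interpretation stays with the clinician.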
```mermaid
flowchart LR
  V1["Visit 1<br/>Enter scales"] --> S1["Scores at t1"]
  S1 --> B1["Severity bands"]
  B1 --> D1["Clinical discussion"]
  V2["Visit 2"] --> S2["Scores at t2"]
  S2 --> B2["Severity bands"]
  B2 --> D2["Clinical discussion"]
  V3["Visit 3"] --> S3["Scores at t3"]
  S3 --> B3["Severity bands"]
  B3 --> D3["Clinical discussion"]
  S1 -.-> T["Trends over time"]
  S2 -.-> T
  S3 -.-> T
  T --> Dsum["Discuss trajectories"]
```
```mermaid
sequenceDiagram
  autonumber
  participant Pt as Patient/Staff
  participant MBC as MBC UI
  participant Calc as calculators
  participant Clin as Clinician
  participant AI as AI Panel
  Pt->>MBC: Enter PHQ-9, GAD-7, etc.
  MBC->>Calc: scoreAll function
  Calc-->>MBC: totals + bands + flags
  MBC-->>Clin: Display autoscore HTML
  Clin->>Clin: Interpret in context
  Clin->>AI: Send summary
  AI-->>Clin: Draft narrative
  Clin->>MBC: Save/export note
```
Flow builders (e.g., safety, capacity, agitation, catatonia, observation) encode clinically neutral phrasing that can be edited by the clinician. For example, `safetyOutcome.ts` constructs sentences about ideation, intent/plan, access to means, protective factors, and observation strategy, with explicit disclaimers regarding scope and local policy.
These flows are intentionally designed to support:
- Risk formulation: making explicit the presence/absence of suicidal ideation, plans, means, protective factors, recent stressors, and agreed observation strategies.
- Capacity assessments: clarifying understanding, appreciation, reasoning, and expression of a choice in a particular decision context.
- Agitation management: documenting early warning signs, triggers, de‑escalation strategies, and thresholds for increased observation.
At every step, flows help the clinician say "what I actually assessed" rather than "what the model decided." They never output a disposition recommendation.
SynapseCore is designed to scaffold, not replace, clinical formulation. In many services, formulation is structured around predisposing, precipitating, perpetuating, and protective factors. We can think of a (highly simplified) formulation map as

$$F = f(S, R, C, P),$$

where:
- $S$ aggregates symptom information (e.g., MBC scores and salient symptoms described in prose).
- $R$ aggregates risk‑relevant information (e.g., ideation, plans, means, past attempts, substance use, and protective factors documented in flows).
- $C$ aggregates capacity‑relevant information (e.g., understanding, appreciation, reasoning, choice) for a specific decision.
- $P$ aggregates contextual and psychosocial factors (e.g., recent stressors, supports, housing, medical comorbidity) captured in free text.
In practice, SynapseCore provides structured inputs to each of these components:
- MBC calculators give a transparent, reproducible $S$.
- Safety and substance‑related flows provide structured building blocks for $R$.
- Capacity flows provide a scaffold for documenting the elements of $C$.
- Free‑text fields and IDE/AI tooling support narrative capture of $P$.
Any AI‑generated text can be viewed as a draft $\hat{F}$ over parts of this formulation: a starting point that the clinician must verify, edit, and take ownership of before it enters the record.
This subsection describes abstracted use cases to illustrate how MBC, flows, timer, and AI orchestration interact. Examples are intentionally generic and not patient‑specific.
- Goal: Track depressive symptom burden over time and streamline narrative documentation.
- Path through the app:
- Enter PHQ‑9 and GAD‑7 responses → `phq9Score`, `gad7Score` → `renderAutoscoreHTML`.
- Use the safety flow if item 9 is non‑zero, documenting ideation context and supports.
- Use AI summarisation on the autoscore HTML plus safety outcome to generate draft prose for the progress note, then edit.
This supports neutral questions such as "Is symptom burden improving over time?" and "Do scores and narrative support considering a treatment adjustment?" without making that decision.
- Goal: Structure a complex assessment (e.g., delirium vs. depression vs. adjustment) and clearly document risk and capacity elements.
- Path through the app:
- Complete relevant scales (e.g., PHQ‑9, GAD‑7) if appropriate.
- Use the safety flow to document ideation, intent, and observation decisions.
- Use the capacity flow to structure documentation of understanding, appreciation, reasoning, and choice for a specific treatment decision.
- Use the session timer to track assessment/liaison time.
The result is a structured narrative that supports multidisciplinary handoff and care planning.
- Goal: Ensure that key elements of a safety assessment are documented, even under time pressure.
- Path through the app:
- Optionally capture PHQ‑9 item 9 or a brief ideation screen.
- Use the safety flow to document ideation, plan, means, protective factors, and collaborative safety steps.
- Avoid AI use if local policies restrict it; or, if allowed, use AI summarisation only on de‑identified text and edit rigorously before export.
This supports the clinician in making a clear risk formulation and describing immediate safety steps, but does not calculate risk scores or dispositions.
- Goal: Track trajectories across multiple visits and support stepped‑care decisions.
- Path through the app:
- Repeatedly enter PHQ‑9, GAD‑7, and other relevant scales at each visit.
- Export autoscore HTML or summaries to an external dashboard or EHR.
- Optionally use AI summarisation to provide a brief, structured “since‑last‑visit” narrative.
This enables visualization of trends (e.g., persistently high PTSD symptoms, improving depression) without algorithmic triage.

- Pre‑visit: patients or staff enter scale responses; autoscore HTML is generated. Outliers (e.g., PHQ‑9 item 9 > 0, very high Y‑BOCS) are flagged deterministically to prompt clinician review.
- During visit: clinician uses flows (e.g., safety, capacity) and optionally requests an AI summary of the outcome text to accelerate note‑writing.
- Post‑visit: edit, export, and archive in accordance with local policy; telemetry aids QI and research without PHI.
WARNING — NOT A MEDICAL DEVICE: This software aids documentation and standardization only. It does not replace clinical judgement, consultation, or local policy requirements and does not provide risk scores, diagnoses, or recommended dispositions.
- Human oversight: clinicians control edits and final text; generated content is always editable.
- Transparency: provider/model selection and prompts are visible; route changes are logged.
- Reversibility: flow inputs stay editable; generated text can be discarded at any time.
- Auditability: autoscore anchors and flow outputs are explicit, time‑stamped, and linked to the underlying inputs.
```mermaid
sequenceDiagram
  participant Clin
  participant MBC as calculators.ts
  participant UI
  participant Hook as useAiStreaming
  participant Prov as Model Provider
  Clin->>UI: Enter PHQ-9 item scores
  UI->>MBC: phq9Score(items)
  MBC-->>UI: total + severity + flags
  Clin->>UI: Request AI summary
  UI->>Hook: startStreaming({ prompt: autoscoreHTML })
  Hook->>Prov: SSE request
  Prov-->>Hook: tokens + usage
  Hook-->>UI: onDelta / onComplete
  UI-->>Clin: Editable AI summary
  Clin->>UI: Export note [policy-conformant path]
```
- `src/ai/modelRegistry.ts` defines static models and capabilities (streaming, JSON mode, token limits) for providers and offers `listModelsDynamic(...)` via `src/ai/providerClients/*`. IDs are normalized (e.g., Gemini `models/` prefix stripping).
- `src/ai/samplingMapper.ts` provides provider‑specific request builders (`buildOpenAI`, `buildAnthropic`, `buildGemini`, `buildOllama`, `buildCustom`) and returns a `BuiltProviderRequest` with sanitized headers and meta.
- `src/services/ai/param-normalizer.ts` clamps `temperature`, `topP`, and `maxOutput`, and maps normalized parameters to each provider’s JSON schema.
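The clamping step can be sketched as follows. The bounds and defaults below are illustrative assumptions; the authoritative ranges live in `src/services/ai/param-normalizer.ts`.

```typescript
// Illustrative canonical sampling params and clamping; the real bounds in
// param-normalizer.ts may differ.
interface CanonicalSampling {
  temperature: number; // clamped here to [0, 2]
  topP: number;        // clamped here to [0.01, 1]
  maxOutput: number;   // clamped here to [1, providerMax]
}

function normalizeSampling(
  raw: Partial<CanonicalSampling>,
  providerMax = 4096 // illustrative provider-specific ceiling
): CanonicalSampling {
  const clamp = (v: number, lo: number, hi: number) => Math.min(hi, Math.max(lo, v));
  return {
    temperature: clamp(raw.temperature ?? 0.7, 0, 2),
    topP: clamp(raw.topP ?? 1, 0.01, 1),
    maxOutput: Math.floor(clamp(raw.maxOutput ?? 1024, 1, providerMax)),
  };
}

// Example: out-of-range values are pulled back into bounds.
normalizeSampling({ temperature: 3.5, topP: 1.4, maxOutput: 999999 });
// → { temperature: 2, topP: 1, maxOutput: 4096 }
```

Normalizing once into a canonical shape, then mapping per provider, keeps provider quirks out of the UI layer.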
We can describe the AI orchestration layer in terms of abstract spaces and maps:
- Let $P$ denote the prompt space, consisting of prompt text plus structured metadata (e.g., role, clinical vs. non‑clinical context, safety notes).
- Let $\Theta$ denote a canonical sampling parameter space, e.g., $\theta = (T, p, M, J)$ for temperature $T$, top‑p $p$, max tokens $M$, JSON‑mode flag $J$, etc.
- For each provider $p$ (OpenAI, Anthropic, Gemini, Ollama, ...), let $\Theta_p$ be the provider‑specific parameter space (JSON schema) and let $N_p : \Theta \to \Theta_p$ be the parameter normalisation map implemented by `param-normalizer.ts` plus provider‑specific helpers in `samplingMapper.ts`.
Given a context object $c$ (scores, flow outcomes, clinician instructions, and settings), the orchestration layer applies:
- A prompt builder: $\text{buildPrompt} : C \to P,\quad p = \text{buildPrompt}(c)$.
- A provider call map: $\Phi_p : P \times \Theta_p \to Y$, where $Y$ is the space of model outputs (streaming tokens, final text, usage metadata).
The overall (non‑streaming) call can then be expressed as the composition

$$y = \Phi_p\big(\text{buildPrompt}(c),\, N_p(\theta)\big),$$

which matches the structure of `buildProviderRequest` followed by an adapter’s HTTP call.
```mermaid
flowchart LR
  C["UI context"] --> BP["buildPrompt in<br/>psychiatry module"]
  C --> TH["Canonical params"]
  TH --> NP["Provider-specific<br/>params"]
  BP --> Phi["Adapter HTTP call"]
  NP --> Phi
  Phi --> Y["Streaming tokens"]
```
In practice, providers return streaming outputs that we can model as finite sequences

$$y = (\delta_1, \delta_2, \dots, \delta_n),$$

where each $\delta_k$ is a token delta. `useAiStreaming.ts` exposes these as `onDelta` callbacks; the UI folds the partial outputs to maintain a running concatenation

$$\hat{y}_k = \delta_1 \Vert \delta_2 \Vert \cdots \Vert \delta_k.$$

Here $\Vert$ denotes string concatenation, and $\hat{y}_n$ is the final text delivered to `onComplete`.
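The fold itself is simple; a hedged sketch (function names here are illustrative, mirroring how a UI might consume `onDelta`):

```typescript
// Sketch: folding streamed token deltas into a running concatenation,
// the way a UI consumes onDelta callbacks. Names are illustrative.
function foldDeltas(
  deltas: string[],
  onDelta: (partial: string) => void
): string {
  let acc = "";
  for (const d of deltas) {
    acc += d;     // running concatenation of deltas seen so far
    onDelta(acc); // incremental UI render with each partial
  }
  return acc;     // final text, as handed to onComplete
}

const partials: string[] = [];
const full = foldDeltas(["Sum", "mary ", "draft"], (p) => partials.push(p));
// full === "Summary draft"; partials records each intermediate state
```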
Many providers publish approximate per‑token prices for prompt and completion tokens. Conceptually, one can define a per‑interaction cost

$$\text{cost} = c_{in}\, n_{in} + c_{out}\, n_{out},$$

where $c_{in}$ and $c_{out}$ are the per‑token prices and $n_{in}$ and $n_{out}$ are the prompt and completion token counts reported in usage metadata.
Telemetry hooks (see below) can, in principle, aggregate such costs to support budgeting and QI, again without emitting PHI.
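As a sketch with made‑up prices (real rates must come from each provider’s current pricing documentation):

```typescript
// Conceptual per-interaction cost. The prices used below are placeholders,
// not real provider rates.
interface TokenUsage { promptTokens: number; completionTokens: number; }
interface PerTokenPrice { prompt: number; completion: number; } // currency per token

function interactionCost(usage: TokenUsage, price: PerTokenPrice): number {
  return usage.promptTokens * price.prompt + usage.completionTokens * price.completion;
}

// Example with illustrative prices of 0.000002 / 0.000006 per token:
interactionCost(
  { promptTokens: 1000, completionTokens: 500 },
  { prompt: 0.000002, completion: 0.000006 }
); // ≈ 0.005 in the example currency
```

Aggregating such values per provider or per clinic is a telemetry concern; no PHI is needed, only token counts.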
`src/hooks/useAiStreaming.ts` coordinates provider selection and failover using adapters in `src/services/ai/adapters`. It queues jobs, handles abort signals, emits window events during failover (`ai:providerSwitch`), and reports usage deltas for downstream cost/metrics. The hook exposes `onDelta` and `onComplete` callbacks designed for incremental UI rendering.
| Provider | Streaming | JSON mode | Top‑p supported | Token limit (approx.) |
|---|---|---|---|---|
| OpenAI | yes | yes | yes | 128,000 |
| Anthropic | yes | no | yes | 200,000 |
| Gemini | yes | no | yes | 1,000,000 |
| Ollama | yes | no | yes | 8,192 |
Note: tool‑calling interfaces exist in `services/ai/adapters/types.ts`, but tool adapters may be deployment‑specific.
Conceptually, one can also think in terms of a simple model capability matrix that guides which family to use for a given task. This is not a routing algorithm and does not encode any clinical logic; routing is configured explicitly in code.
| Family (illustrative) | Typical context window | Strengths (non‑clinical) | Example uses in this project |
|---|---|---|---|
| OpenAI GPT‑style | up to O(10^5) tokens | General summarisation, rewriting, light JSON extraction | Drafting visit summaries, re‑phrasing safety narratives, generating neutral export text |
| Anthropic Claude‑style | up to O(10^5) tokens | Long‑context reading, cautious text generation | Summarising long notes or registry views into short, clinician‑editable bullets |
| Gemini‑style | up to O(10^6) tokens | Very long context, multimodal in some tiers | Experimental long‑document synthesis, research/teaching examples |
| Ollama (local) | model‑dependent (smaller) | Local experimentation, offline scenarios | Testing prompts and flows without sending content to external providers |
All numbers above are approximate and conceptual; any real deployment must consult, and keep in sync with, the actual provider documentation and organisational policies.
```mermaid
sequenceDiagram
    participant UI
    participant Hook as useAiStreaming
    participant Build as samplingMapper
    participant Adapt as adapters
    participant Prov as Provider
    UI->>Hook: startStreaming
    Hook->>Build: buildProviderRequest
    Build-->>Hook: BuiltProviderRequest
    Hook->>Adapt: stream function
    Adapt->>Prov: HTTP/SSE
    Prov-->>Adapt: token delta/usage
    Adapt-->>Hook: StreamEvent
    Hook-->>UI: onDelta/onError/onComplete
```
```typescript
import { buildProviderRequest } from '@/ai/samplingMapper';

const built = buildProviderRequest({
  provider: 'openai',
  model: 'gpt-4o',
  sampling: {
    temperature: 0.6,
    top_p: 0.9,
    max_tokens: 600,
    json_mode: false,
    system_prompt: 'You are a careful clinical scribe. Keep safety language neutral.'
  },
  apiKey: '[INJECT AT RUNTIME OR VIA SETTINGS]',
  prompt: 'Generate 4 bullet points summarizing the safety outcome.',
});
// built.request: { url, method, headers, body }
```

The MBC module (`src/features/psychiatry/mbc/calculators.ts`) contains typed scoring functions:
- `phq9Score(items)`: 9 items, 0–3 each; flags on item 9 > 0. Used for depression screening and severity tracking.
- `gad7Score(items)`: 7 items, 0–3 each; general anxiety severity tracking.
- `pcl5Score(items)`: 20 items, 0–4 each; totals ≥ 33 suggest probable PTSD (screen). Cluster checks (B/C/D/E) are reported.
- `ybocsScore(items)`: 10 items, 0–4 each; severity banding for OCD symptom burden.
- `auditCScore(items, sex)`: 3 items; sex‑dependent thresholds; flags heavy episodic use.
`renderAutoscoreHTML(measure, answers, opts)` outputs a small printable section that includes the item table, totals, severity, and anchors.
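As a hedged illustration of the calculator pattern (not the repository's exact implementation), an AUDIT-C-style scorer with sex-dependent screen thresholds might look like this; the thresholds follow the commonly used ≥4 (male) / ≥3 (female) screening convention, and the heavy-episodic flag on Q3 ≥ 4 mirrors the anchors table below:

```typescript
// Illustrative AUDIT-C scorer following the coerce/sum/flag pattern.
// Not the repository's exact implementation.
function auditCScoreSketch(items: number[], sex: 'male' | 'female') {
  if (items.length !== 3) throw new Error('AUDIT-C requires 3 items');
  // Coerce: clamp each response to the valid 0-4 range.
  const clamped = items.map(v => Math.min(4, Math.max(0, Math.round(v))));
  // Sum: total in [0, 12].
  const total = clamped.reduce((a, b) => a + b, 0);
  // Commonly used sex-specific screen thresholds (>=4 male, >=3 female).
  const threshold = sex === 'male' ? 4 : 3;
  return {
    total,
    positiveScreen: total >= threshold,
    heavyEpisodicFlag: clamped[2] >= 4, // Q3 >= 4 per the anchors table
  };
}
```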
For a given instrument with $n$ items, a response vector is $x = (x_1, \ldots, x_n)$, where the domain of each item is a small bounded integer range (e.g., $x_i \in \{0, 1, 2, 3\}$ for PHQ‑9 and GAD‑7). The scoring function for most scales in this repository is a simple weighted sum:

$$s(x) = \sum_{i=1}^{n} w_i x_i,$$

where $w_i = 1$ for the unweighted scales shipped here.
Severity banding is represented as a partition of the score space into disjoint intervals $B_1, \ldots, B_K$ whose union covers the full range $[0, s_{\max}]$:
For example, for PHQ‑9:
- $B_1$ (None/Minimal): $s(x) \in [0, 4]$
- $B_2$ (Mild): $s(x) \in [5, 9]$
- $B_3$ (Moderate): $s(x) \in [10, 14]$
- $B_4$ (Moderately severe): $s(x) \in [15, 19]$
- $B_5$ (Severe): $s(x) \in [20, 27]$
The calculators return both the numeric total $s(x)$ and the corresponding severity band label.
Red‑flag conditions are encoded as simple Boolean indicator functions. For PHQ‑9 item 9 (thoughts of death or self‑harm), the indicator is $f_9(x) = \mathbb{1}[x_9 > 0]$.
In code, this appears as a flag field on the result object rather than a direct recommendation. Similar indicator functions exist for AUDIT‑C heavy episodic use, high PCL‑5 totals, and severe Y‑BOCS bands.
These flags are designed to say "this pattern often warrants a conversation" rather than "do X".
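A minimal sketch of such a calculator together with its red-flag indicator, assuming the published PHQ-9 anchors (this mirrors, but is not, the repository's `phq9Score`):

```typescript
// Illustrative PHQ-9 scorer following the coerce/sum/bands pattern.
// Not the repository's exact implementation.
interface ScaleResult {
  total: number;
  severity: string;
  flags: { item9Positive: boolean };
}

function phq9ScoreSketch(items: number[]): ScaleResult {
  if (items.length !== 9) throw new Error('PHQ-9 requires 9 items');
  // Coerce: clamp each response to the valid 0-3 range.
  const clamped = items.map(v => Math.min(3, Math.max(0, Math.round(v))));
  // Sum: unweighted total in [0, 27].
  const total = clamped.reduce((a, b) => a + b, 0);
  // Bands: published PHQ-9 severity anchors.
  const severity =
    total <= 4 ? 'None/Minimal' :
    total <= 9 ? 'Mild' :
    total <= 14 ? 'Moderate' :
    total <= 19 ? 'Moderately severe' : 'Severe';
  // Red flag: any endorsement of item 9 (thoughts of death or self-harm).
  return { total, severity, flags: { item9Positive: clamped[8] > 0 } };
}
```

Note that the flag is returned as data, not as advice; rendering it neutrally is left to the UI, as described below.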
Consider a single (illustrative) PHQ‑9 response vector $x = (2, 2, 1, 1, 2, 1, 1, 0, 1)$, where each coordinate is on the usual 0–3 scale. The total score is $s(x) = \sum_{i=1}^{9} x_i = 11$. Comparing $s(x) = 11$ against the band partition places it in $B_3$ (Moderate, $[10, 14]$). For the red‑flag indicator tied to item 9 (thoughts of death or self‑harm), the same vector gives $f_9(x) = \mathbb{1}[x_9 > 0] = \mathbb{1}[1 > 0] = 1$.
In the implementation, this is returned as a Boolean field; UI components may render it as a visual flag or neutral text (e.g., "Item 9 > 0 — please review safety together"). The library does not suggest a particular action.
A conceptual example of what a downstream UI card might show:
| Quantity | Value | Interpretation |
|---|---|---|
| PHQ‑9 total | 11 | Falls in Moderate band (10–14) |
| Severity band | Label: "Moderate" | |
| Item 9 response | 1 | Any value > 0 triggers the red‑flag |
| Red‑flag | 1 (true) | "Consider focused safety discussion" (text, not advice) |
This example is illustrative only; actual numbers and labels are taken directly from the underlying calculator functions and published PHQ‑9 anchors.
```mermaid
flowchart LR
    A["Raw item<br/>responses"] --> B["Clamp & sum<br/>calculators"]
    B --> C["Severity band<br/>+ flags"]
    C --> D["renderAutoscoreHTML"]
    D --> E["Print/export"]
```
| Scale | Items | Range | Selected anchors |
|---|---|---|---|
| PHQ‑9 | 9 | 0–27 | None (0–4), Mild (5–9), Moderate (10–14), Moderately severe (15–19), Severe (20–27) |
| GAD‑7 | 7 | 0–21 | None (0–4), Mild (5–9), Moderate (10–14), Severe (15–21) |
| PCL‑5 | 20 | 0–80 | Subthreshold (0–32), Probable PTSD (≥33), with B/C/D/E cluster checks |
| Y‑BOCS | 10 | 0–40 | Subclinical (0–7), Mild (8–15), Moderate (16–23), Severe (24–31), Extreme (32–40) |
| AUDIT‑C | 3 | 0–12 | Sex‑specific screen thresholds; heavy episodic flag on Q3 ≥ 4 |
FHIR/EHR integration: out of scope in this repository. Scoring outputs are intentionally transparent and could be mapped to FHIR resources in downstream systems. [ADD LINK OR POLICY IF APPLICABLE]
The following schematic summarises how autoscores, structured flows, AI summarisation, and export tooling can be composed in a single visit, while keeping the clinician in control at each stage:
- Measurement‑Based Care (MBC): Item responses → Score calculators → Totals & bands
- Structured Flows: Safety UI → Flow builders → Baseline narrative
- AI Orchestration (optional): Baseline narrative + MBC scores → Prompt → Streaming → AI summary
- Export: AI summary → Clinician review/edit → Export panel / print
All arrows represent clinician‑controlled steps; AI is used only as an editable drafting aid on text the clinician has already authored or selected.
This diagram is conceptual and describes a typical composition pattern; concrete wiring is visible in feature modules under src/features and src/centerpanel/Flows.
Flow builders in src/centerpanel/Flows/builders/* assemble outcome text from normalized inputs. For example, safetyOutcome.ts derives sentences for ideation, intent/plan, access to means, protective factors, and observation, then appends a policy reminder.
Formally, we can model a flow as a finite state machine $M = (S, s_0, A, T, O)$, where:

- $S$ is the set of states (e.g., `Intake`, `Ideation`, `Plan`, `Means`, `Protective`, `Observation`, `Outcome`).
- $s_0 \in S$ is the initial state (e.g., `Intake`).
- $A$ is the set of actions/inputs (clinician selections, checkboxes, free‑text snippets).
- $T: S \times A \to S$ is the transition function.
- $O: S \times A \to \text{Text}$ is the output function producing intermediate sentences.
`buildSafetyOutcome(config)` can be viewed as a deterministic mapping $F: \text{Config} \to \text{Text}$, where $\text{Config}$ is the space of normalized inputs and the output text is the concatenation of the $O(s, a)$ snippets produced along the traversed path.
For the safety flow, one illustrative (non‑exhaustive) formalization is:

- $S = \{s_{\mathrm{intake}}, s_{\mathrm{ideation}}, s_{\mathrm{plan}}, s_{\mathrm{means}}, s_{\mathrm{protective}}, s_{\mathrm{observation}}, s_{\mathrm{outcome}}\}$,
- $A$ includes actions such as `setIdeationStatus`, `setPlanDetail`, `setMeansAccess`, `setProtectiveFactors`, `setObservationPlan`,
- $T$ advances the state in response to actions (e.g., $T(s_{\mathrm{ideation}}, \mathrm{setPlanDetail}) = s_{\mathrm{plan}}$),
- $O$ appends neutral prose snippets at each step (e.g., "Ideation was described as ...").
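This formalization can be sketched as data; the transition table below is hypothetical, since the repository encodes transitions implicitly in builder logic and UI wiring rather than as an explicit structure:

```typescript
// Hypothetical explicit encoding of the safety-flow FSM (S, s0, A, T).
// The repository wires transitions implicitly; this is a conceptual sketch.
type State =
  | 'intake' | 'ideation' | 'plan' | 'means'
  | 'protective' | 'observation' | 'outcome';

type Action =
  | 'setIdeationStatus' | 'setPlanDetail' | 'setMeansAccess'
  | 'setProtectiveFactors' | 'setObservationPlan' | 'finalize';

// T: S x A -> S, encoded as a partial lookup table.
const transitions: Partial<Record<State, Partial<Record<Action, State>>>> = {
  intake: { setIdeationStatus: 'ideation' },
  ideation: { setPlanDetail: 'plan' },
  plan: { setMeansAccess: 'means' },
  means: { setProtectiveFactors: 'protective' },
  protective: { setObservationPlan: 'observation' },
  observation: { finalize: 'outcome' },
};

function step(s: State, a: Action): State {
  const next = transitions[s]?.[a];
  if (!next) throw new Error(`no transition from ${s} via ${a}`);
  return next;
}
```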
The overall flow can also be visualised as a directed acyclic graph (DAG) over this finite state space: edges represent allowed transitions, and the outcome node has no outgoing edges. In this implementation, transitions are encoded implicitly in the builder logic and UI wiring rather than as a separate graph structure.
The safety flow guides clinicians through seven structured stages:
- Intake — Entry point where clinician begins the assessment.
- Ideation — Assess presence or absence of suicidal thoughts; document explicitly.
- Plan — Assess specificity and detail of any stated plan; record findings neutrally.
- Means — Assess access to methods and lethality; document availability and context.
- Protective Factors — Identify and document patient strengths, supports, and protective elements.
- Observation & Management — Determine and document recommended observation level and safety plan details.
- Outcome — Generate a neutral, clinician-editable narrative summarizing the assessment.
Each stage builds on prior information, and the clinician remains in control at every step. The outcome is a prose summary suitable for the clinical record.
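A simplified, hypothetical analogue of a safety-flow builder is sketched below; the field names are invented for illustration (the real builder is `safetyOutcome.ts` with its own config shape), but the pattern of folding normalized inputs into neutral prose is the same:

```typescript
// Hypothetical simplified analogue of a buildSafetyOutcome-style builder:
// normalized inputs in, neutral prose out. Field names are illustrative.
interface SafetyConfig {
  ideation: string;              // e.g., 'denied', 'passive', 'active'
  planDetail: string;
  meansAccess: string;
  protectiveFactors: string[];
  observationLevel: string;
}

function buildSafetyNarrative(cfg: SafetyConfig): string {
  const sentences = [
    `Suicidal ideation was described as ${cfg.ideation}.`,
    `Plan specificity: ${cfg.planDetail}.`,
    `Access to means: ${cfg.meansAccess}.`,
    `Protective factors noted: ${cfg.protectiveFactors.join(', ') || 'none documented'}.`,
    `Recommended observation level: ${cfg.observationLevel}.`,
    'This narrative is documentation only; disposition per local policy.',
  ];
  return sentences.join(' ');
}
```

The output is deliberately neutral, clinician-editable prose with a trailing policy reminder, matching the description of `safetyOutcome.ts` above.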
Capacity assessment follows a structured framework based on established clinical principles:
- Clinical Context — Establish the decision to be made and the clinical circumstances.
- Understanding — Assess whether the patient can comprehend relevant information presented in clear language.
- Appreciation — Assess whether the patient can apply that information to their own situation.
- Reasoning — Assess whether the patient can logically deliberate among options and consequences.
- Expression of Choice — Assess whether the patient can communicate a clear, consistent choice.
- Outcome Documentation — Record findings neutrally without binary judgment; the clinician makes the final determination.
The capacity builder (capacityOutcome.ts) supports documenting each of these elements in text and generating a structured narrative. It does not output a binary "capacity" or "incapacity" decision; that judgement remains with the clinician based on local standards and policy.
Ignoring clinical semantics and focusing only on how often clinicians move between screens, one can (conceptually) treat the safety or capacity flow as a Markov chain over the state set $S$. For a given flow, define an $|S| \times |S|$ empirical transition matrix $P$, where $P_{ij}$ estimates the probability of moving from state $i$ to state $j$. This matrix could surface common navigation patterns (e.g., how often clinicians return from Observation to Plan to adjust wording). This Markov‑chain analysis is conceptual only and is not implemented in this repository. Any such analysis would have to be done on anonymised aggregated telemetry under appropriate governance.
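For illustration only, such an empirical matrix could be estimated from an anonymised state sequence as follows (this is not implemented in the repository):

```typescript
// Conceptual estimation of an empirical transition matrix P from an
// anonymised sequence of visited states. Illustrative only.
function transitionMatrix(states: string[], sequence: string[]): number[][] {
  const idx = new Map(states.map((s, i): [string, number] => [s, i]));
  const counts = states.map(() => states.map(() => 0));
  // Count observed transitions between consecutive states.
  for (let k = 0; k + 1 < sequence.length; k++) {
    counts[idx.get(sequence[k])!][idx.get(sequence[k + 1])!] += 1;
  }
  // Row-normalise counts into probabilities; empty rows stay all-zero.
  return counts.map(row => {
    const n = row.reduce((a, b) => a + b, 0);
    return n === 0 ? row : row.map(c => c / n);
  });
}
```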
To reiterate and make the boundaries explicit, SynapseCore tooling is designed to assist documentation and structured note‑taking. It does not perform clinical reasoning or make treatment decisions.
| This system does | This system does not |
|---|---|
| Compute transparent symptom scale totals and bands (e.g., PHQ‑9, GAD‑7) | Diagnose depression, anxiety, PTSD, OCD, SUD, or any other condition |
| Help structure risk and capacity narratives into neutral prose | Compute or output suicide risk scores, probabilities, or recommendations |
| Generate draft summaries from explicitly provided text/configuration (via AI orchestration) | Decide on involuntary holds, level of care, medication changes, or legal actions |
| Provide timers and segment labels to organise sessions | Track or infer identity, demographics, or longitudinal outcomes outside the local browser context |
| Support exports (e.g., print‑ready notes) controlled and edited by the clinician | Replace clinical judgement, supervision, or institutional policy |
Any deployment in a clinical environment must treat this as documentation tooling only and ensure local policies, consent, and governance are followed.
The timer stack (`src/centerpanel/timerHooks/*`) tracks segments, laps, and pauses and persists to localStorage. Conceptually, a session is a finite sequence of labeled time segments $\sigma = ((\ell_1, t_1), \ldots, (\ell_N, t_N))$, where each label $\ell_i$ is drawn from a label set $L$ (e.g., `Assessment`, `Psychoeducation`, `MedicationReview`, `Supervision`) and $t_i$ is the segment duration.

`useSessionML.ts` defines a lightweight neural model (TensorFlow.js) that, if enabled, learns a mapping from recent history to a suggested next segment type/duration, $f: \text{History} \to (\hat{\ell}_{N+1}, \hat{t}_{N+1})$, where History is a pseudonymized sequence of past session segments and the output is a suggestion the clinician may accept or ignore.
```mermaid
flowchart LR
    H["Past sessions<br/>local data"] --> M["useSessionML<br/>TF.js model"]
    M --> Sugg["Suggested next<br/>segment"]
    Sugg --> Clin["Clinician may accept/ignore"]
```
From the raw segments, one can derive simple feature representations suitable for `useSessionML`-style models:
- Total duration: $T_{\mathrm{total}} = \sum_{i=1}^{N} t_i$.
- Per‑label time allocation: for each label $\ell$ in the label set $L$, define $T_\ell = \sum_{i:\,\ell_i=\ell} t_i$ and $p(\ell) = \frac{T_\ell}{T_{\mathrm{total}}}$ (defined when $T_{\mathrm{total}} > 0$). The vector $v = \big(T_{\mathrm{total}}, \{T_\ell\}_{\ell \in L}, N, \{p(\ell)\}_{\ell \in L}\big)$ is a simple session feature representation capturing both absolute and relative time allocation.
- Segment‑count features: counts of segments per label $n_\ell = |\{i : \ell_i = \ell\}|$ can be added to the feature vector to characterize how fragmented a session is.
These features are conceptual/illustrative only and are not reported to any external service by default. In principle, they could be computed locally and used as inputs to on‑device models (e.g., to predict likely next segments), but any such analytics must respect privacy constraints and institutional governance.
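A sketch of computing these features locally from raw segments, assuming a simple `Segment` shape (conceptual; the repository's timer types may differ):

```typescript
// Local, privacy-preserving session features from labeled time segments.
// Conceptual only; nothing here is reported to external services.
interface Segment { label: string; seconds: number }

function sessionFeatures(segments: Segment[]) {
  const totals = new Map<string, number>();   // T_label per label
  let totalSeconds = 0;                       // T_total
  for (const { label, seconds } of segments) {
    totals.set(label, (totals.get(label) ?? 0) + seconds);
    totalSeconds += seconds;
  }
  // p(label) = T_label / T_total, defined only when T_total > 0.
  const proportions = new Map<string, number>();
  if (totalSeconds > 0) {
    for (const [label, t] of totals) proportions.set(label, t / totalSeconds);
  }
  return { totalSeconds, segmentCount: segments.length, totals, proportions };
}
```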
```mermaid
flowchart LR
    Seg["Raw segments<br/>time & labels"] --> Agg["Aggregate<br/>totals & counts"]
    Agg --> Feat["Session feature<br/>vector"]
    Feat -->|Conceptual| ML["useSessionML or<br/>offline analytics"]
```
The in‑app IDE (src/components/ide/*) and file explorer (src/components/file-explorer/*) enable content authoring, flow editing, and quick prototyping. src/services/editorBridge.ts exposes a simple API:
- `insertIntoActive({ code, language? })`: append to the active editor or open a new tab.
- `openNewTab({ filename, code, language? })`: create a new file and tab, with language detection.
- `replaceSelection({ code, language? })`: replace active tab content (with undo stack support).
```mermaid
flowchart TD
    A["Open IDE panel"] --> B["Search for flow files"]
    B --> C["Edit builder wording"]
    C --> D["Preview in UI"]
    D --> E["Call AI for summaries"]
    E --> F["Export/commit changes"]
```
- Node.js 20+ (ESM, Vite 6, React 19).
- Modern Chromium/Firefox/Safari browsers.
- Optional: local Ollama (`http://localhost:11434`) for on‑device models.
- Cloud provider API keys as applicable (entered via your deployment’s settings pattern or env wiring).
```bash
# clone
git clone [ADD REPO URL]
cd coder-app

# dev server
npm run dev

# typecheck and build
npm run type-check
npm run build

# preview (static server)
npm run preview
```

Local‑only quickstart:

- Install Node.js 20+.
- Run `npm run dev`.
- In the UI, use local features: MBC calculators, flows, and the session timer.
- Optionally enable local models with Ollama; leave cloud providers unset.

Developer quickstart:

- Configure environment (see next section) for your providers and telemetry.
- Start dev server: `npm run dev`.
- Explore AI orchestration hooks in `src/hooks/useAiStreaming.ts` and provider mappings in `src/ai/samplingMapper.ts`.
- Add new flows under `src/centerpanel/Flows/builders/*` and new scales in `src/features/psychiatry/mbc/calculators.ts`.
| Key | Source | Effect |
|---|---|---|
| `VITE_PROFILE` | env | App profile: dev/staging/prod (`src/config/env.ts`). Influences tracing sampling ratios and rate limits. |
| `VITE_OTLP_HTTP` | env | OTLP HTTP endpoint for telemetry export (`src/config/env.ts`). Optional. |
| `VITE_E2E` | env | Enables test‑mode timeouts for streaming (`src/config/flags.ts`). |
| `VITE_SIMPLE_STREAM` | env/URL/localStorage | Toggles simplified streaming mode (`src/config/flags.ts`). |
| `VITE_CONSULTON_AI` / `VITE_FEATURE_CONSULTON_AI` | env | Primary AI feature flag gate (`src/config/flags.ts`). |
| `VITE_CONSULTON_CANARY` / `VITE_CONSULTON_CANARY_PERCENT` | env | Canary rollout control with stable bucketing (`src/config/flags.ts`). |
| `VITE_CONSULTON_DISABLE` | env | Global kill switch (disables Consulton AI features) (`src/config/flags.ts`). |

Flags also accept URL query and localStorage overrides; see `src/config/flags.ts`.
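For illustration, stable canary bucketing can be sketched as hashing a persistent client id into $[0, 100)$ and comparing against the rollout percentage; the repository's actual bucketing lives in `src/config/flags.ts` and may differ:

```typescript
// Illustrative stable canary bucketing. The hash and scheme here are
// invented for explanation; consult src/config/flags.ts for the real one.
function canaryBucket(clientId: string): number {
  let h = 0;
  for (let i = 0; i < clientId.length; i++) {
    h = (h * 31 + clientId.charCodeAt(i)) >>> 0; // simple 32-bit rolling hash
  }
  return h % 100; // stable bucket in [0, 100)
}

function inCanary(clientId: string, percent: number): boolean {
  return canaryBucket(clientId) < percent;
}
```

Because the bucket depends only on the client id, a given client stays consistently in or out of the canary as the percentage is ramped.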
src/theme/synapse.ts defines a synapseTheme with color tokens, radii, spacing, shadows, and a focus ring helper.
```typescript
import { synapseTheme } from '@/theme/synapse';

const ring = synapseTheme.focusRing(2); // CSS helper for accessibility focus
```

Additional tokens live in `src/ui/theme/*` (typography scales, spacing). Override `--syn-*` CSS variables or extend the theme object to brand deployments.
- Initialization: `src/observability/otel.ts` binds tracer/meter from `window.__otel_setup(AppConfig)` if present; otherwise no‑ops.
- Span taxonomy: UI events and AI calls should wrap in `withSpan(name, attrs, fn)` (`src/observability/spans.ts`).
- Metrics: histograms/counters include `req_latency_ms`, `tokens_prompt`, `tokens_completion`, `cost_usd`, `errors_total`, `rate_limit_hits`, `cache_hits`.
- Route changes: `src/observability/aiRouteTelemetry.ts` debounces provider/model switches and emits events, optionally surfacing toasts for operator awareness.
Telemetry metrics allow researchers and QI teams (in appropriately governed deployments) to study system behaviour without accessing PHI. Conceptually, one can define session‑ or interaction‑level evaluation functions from the available counters.
Let:

- $L$ = average request latency in milliseconds (`req_latency_ms`).
- $T_p$ = number of prompt tokens (`tokens_prompt`).
- $T_c$ = number of completion tokens (`tokens_completion`).
- $E$ = error rate over some window (`errors_total` / `requests_total`).
In a fully anonymised research context, one might define a conceptual utility function such as

$$U = Q - \lambda_L L - \lambda_T (T_p + T_c) - \lambda_E E,$$

where $Q$ is a notesSimplificationScore, i.e., a human‑rated or automatically scored outcome (such as readability or length reduction), and the $\lambda$ coefficients are nonnegative weights; the exact functional form is illustrative. This is not implemented directly in the codebase but illustrates how telemetry could be combined with external ratings.
We can also treat telemetry fields as random variables over repeated interactions:

- $L$ as a latency random variable with empirical mean $\mathbb{E}[L]$ and variance $\mathrm{Var}(L)$ estimated from `req_latency_ms`.
- $T_p$ and $T_c$ as token‑count random variables with means $\mathbb{E}[T_p]$, $\mathbb{E}[T_c]$ and associated variances.
- $E$ as a Bernoulli error indicator (1 if an error occurred, 0 otherwise) with error probability $P(E = 1) = \mathbb{E}[E] \approx \frac{\mathrm{errors\_total}}{\mathrm{requests\_total}}$.
Using these, a conceptual system reliability metric over a sliding window of recent calls can be written as

$$R_{\mathrm{sys}} = \big(1 - \hat{P}(E = 1)\big) \cdot \mathbb{1}\!\big[\hat{\mathbb{E}}[L] \le L_{\max}\big],$$

where $\hat{P}$ and $\hat{\mathbb{E}}$ denote empirical estimates over the window and $L_{\max}$ is a deployment‑specific latency budget; this particular form is illustrative rather than prescribed by the codebase.
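One way to compute such a window-level estimate (illustrative only, not part of the codebase; the latency budget is an assumed deployment parameter):

```typescript
// Illustrative window-level reliability estimate combining empirical error
// probability with a latency budget. Conceptual; not in the codebase.
function reliability(
  latenciesMs: number[],
  errors: boolean[],
  latencyBudgetMs: number,
): number {
  if (errors.length === 0 || latenciesMs.length === 0) return 0;
  const pError = errors.filter(Boolean).length / errors.length;
  const meanLatency =
    latenciesMs.reduce((a, b) => a + b, 0) / latenciesMs.length;
  // R_sys = (1 - P(E=1)) * 1[E[L] <= L_max]
  return meanLatency <= latencyBudgetMs ? 1 - pError : 0;
}
```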
Any such analysis must:
- Exclude PHI from spans, logs, and metrics.
- Use pseudonymized or aggregate identifiers when linking events across time.
- Be approved by relevant governance bodies (e.g., IRB, QI boards) when used for research.
Under appropriate governance and with fully anonymised data, telemetry could be used to study questions such as:
- Readability and concision of notes: combine `tokens_completion`, approximate readability scores on exported text, and human ratings to estimate how well AI‑assisted drafts reduce verbosity without losing key content.
- Latency vs. error trade‑offs: relate $L$ and $E$ across different providers or configurations to understand where timeouts, retries, or model choices materially affect technical reliability $R_{\mathrm{sys}}$.
- Adoption of MBC and flows: track how often autoscore panels and structured flows are opened/completed (counts only, no PHI) to evaluate whether the tooling actually increases structured documentation.
These examples are conceptual and do not imply that such analyses are implemented in this repository. Any real QI or research use must undergo local review (e.g., QI committees, IRB) and adhere to institutional and regulatory requirements.
- Do not emit PHI in spans or logs by default.
- Use `src/services/ai/guardrails/redact.ts` to scrub secrets and PII‑like patterns before emitting telemetry.
- You may route telemetry to a local collector via `VITE_OTLP_HTTP` or disable via `CONFIG.flags.enableTracing=false` and `enableMetrics=false`.
```mermaid
flowchart LR
    App["SynapseCore<br/>Browser"] -->|spans/metrics| Shims["otel.ts<br/>tracer/meter"]
    Shims -->|OTLP HTTP| Collector["OTel<br/>Collector"]
    Collector --> APM["APM/<br/>Data Lake"]
```
```mermaid
flowchart LR
    subgraph Metrics["Evaluation Metrics"]
        m1["req_latency_ms"]
        m2["tokens_prompt"]
        m3["tokens_completion"]
        m4["errors_total"]
    end
    m1 --> U["Utility<br/>Analysis"]
    m2 --> U
    m3 --> U
    m4 --> U
```
Active research/workbench suitable for pilots, education, and QI projects. Production hardening (auth, EHR integration) is deployment‑specific.
- EHR/FHIR integration and CDS hooks: enable structured export and clinical decision support pathways aligned with institutional governance.
- Longitudinal MBC dashboards and cohorts: track symptom trajectories across visits; support panel management and stepped care.
- Expanded scale library: include additional validated measures (sleep, mania, clinician‑rated PTSD) with explicit anchors and flags.
- Prompt templates and provenance: standardize AI prompts with versioned templates and display provenance in UI.
- Offline‑first and encrypted local persistence: allow constrained environments to use MBC/flows without network connectivity.
- Formal test harness for flows/guardrails: unit and snapshot tests for calculator correctness and redaction stability.
- Open an issue describing scope, clinical rationale, and technical plan.
- Coding style: strict TypeScript, ESLint/Prettier; run `npm run lint` and `npm run type-check`.
- Testing: add unit tests for calculators and deterministic flow builders where feasible.
- Privacy/safety: use non‑PHI sample data and maintain neutral, policy‑conformant wording.
- New scales: follow the `coerce/sum/bands` pattern in `calculators.ts`; document anchors and flags.
License: see LICENSE ([ADD LICENSE TYPE IF MISSING]).
Suggested citation text:
SynapseCore: A Digital Psychiatry Workbench for Measurement‑Based Care and Multi‑Model AI Orchestration. Version [ADD VERSION], commit [ADD SHORT SHA], [YEAR]. URL: [ADD REPO URL].
BibTeX example:
```bibtex
@software{synapsecore_workbench,
  title = {SynapseCore: Digital Psychiatry Workbench},
  author = {[ADD AUTHORS]},
  year = {[YEAR]},
  version = {[ADD VERSION]},
  note = {Commit [ADD SHORT SHA]; Measurement-based care; AI orchestration; React/TypeScript},
  url = {[ADD REPO URL]}
}
```

Teaching and research contexts: suitable for OSCE training, digital psychiatry seminars, and methods papers on AI‑assisted documentation and structured flows when used with appropriate safety governance.
- MBC: Measurement‑Based Care — routine use of validated scales to guide treatment.
- PHQ‑9 / GAD‑7 / PCL‑5 / Y‑BOCS / AUDIT‑C: Common validated scales for depression, anxiety, PTSD symptoms, OCD severity, and alcohol use.
- LLM: Large Language Model.
- JSON mode: Provider feature to bias outputs toward strict JSON (supported by OpenAI in this repo).
- Streaming: Server‑sent token deltas enabling low‑latency UI updates.
- OTEL/OTLP: OpenTelemetry instrumentation and its HTTP/gRPC export protocol.
- FHIR: Fast Healthcare Interoperability Resources (EHR data standard).
- **How can I plug in a new AI provider?**
  1. Implement `Adapter` in `src/services/ai/adapters` (see `types.ts`).
  2. Add a provider client in `src/ai/providerClients/*` if you need dynamic model listing.
  3. Register capability in `src/ai/modelRegistry.ts` (caps + default models).
  4. Extend `buildProviderRequest` in `src/ai/samplingMapper.ts`.
  5. Add parameter mapping in `src/services/ai/param-normalizer.ts` as needed.
- **How can I add a new psychometric scale?** Follow the pattern in `calculators.ts`: coerce/clamp inputs, compute the total, define `bands`, return `{ total, severity, bands, flags }`, and extend `MeasureId` plus `renderAutoscoreHTML` to support HTML rendering and anchors.
- **How do I completely disable network calls for AI?** Use local Ollama only, or disable AI features via the kill switch: set `VITE_CONSULTON_DISABLE=1`. Ensure no cloud provider keys are present; `useAiStreaming` will skip providers without keys and favor local runtime options.
- **Where are provider keys stored?** This repo does not hard‑code key storage. Keys can be injected via environment in some deployments or entered in a settings store. [DESCRIBE LOCAL DEPLOYMENT POLICY HERE]
- **Does SynapseCore give treatment recommendations or risk scores?** No. The MBC engine provides transparent scores and severity bands; flows provide structured wording. Neither produces treatment plans, medication recommendations, or numerical risk scores.
- **Can I adapt the wording of flows to local policies?** Yes. Flow builders under `src/centerpanel/Flows/builders/*` can be edited to align with institutional language, provided that changes are reviewed for clarity and neutrality.
- **How should I interpret a PHQ‑9 or GAD‑7 score in this app?** Interpretation should follow published guidance and local policy. The app’s role is to compute scores and label severity bands; it does not interpret scores for an individual patient.
- **Is the session ML model a predictor of clinical outcome or risk?** No. The `useSessionML` hook models time‑allocation patterns (e.g., typical segment sequences) for convenience only; it does not encode or predict clinical risk, response, or outcomes.
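To make the new-scale pattern concrete, a hypothetical three-item scale might follow the coerce/sum/bands shape like this ("DEMO-3" is invented for illustration and is not a validated instrument):

```typescript
// Hypothetical new scale following the coerce/sum/bands pattern from the FAQ.
// "DEMO-3" is invented for illustration; it is not a validated instrument.
function demo3Score(items: number[]) {
  if (items.length !== 3) throw new Error('DEMO-3 requires 3 items');
  const clamped = items.map(v => Math.min(3, Math.max(0, Math.round(v)))); // coerce
  const total = clamped.reduce((a, b) => a + b, 0);                        // sum
  const severity = total <= 3 ? 'Low' : total <= 6 ? 'Moderate' : 'High';  // bands
  return { total, severity, bands: ['Low', 'Moderate', 'High'], flags: {} };
}
```

A real scale would also register its `MeasureId` and extend `renderAutoscoreHTML` with published anchors, as the FAQ entry describes.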
