Fireline-Science/Wilderness-Medical-Simulation-Training
Wilderness Medicine Training Simulator

MedGemma + Gemini Powered Multi-Agent Clinical Training Platform

An interactive web-based wilderness medicine training simulator that uses Google's MedGemma-27B as the primary clinical scene-simulation agent, Gemini 2.5 Flash for patient and companion roleplay, and a second MedGemma-27B instance for real-time medical coaching and SOAP report generation. Built to train wilderness first responders, following the WMS 2024 Clinical Practice Guidelines and NOLS Wilderness Medicine, 7th Edition.


System Architecture

┌──────────────────────────────────────────────────────────────────────────────┐
│                              USER'S MAC (Local)                              │
│                                                                              │
│  ┌───────────────┐     ┌──────────────────┐     ┌────────────────────────┐   │
│  │   Browser     │────▶│  React Frontend  │────▶│   Flask Backend        │   │
│  │  :3000 (dev)  │◀────│  (port 3000/7860)│◀────│   (port 7860)          │   │
│  │  :7860 (prod) │     │                  │     │                        │   │
│  └───────────────┘     └──────────────────┘     │  ┌──────────────────┐  │   │
│                                                 │  │  heat_simulator  │  │   │
│                                                 │  │  coach.py        │  │   │
│                                                 │  │  soap_generator  │  │   │
│                                                 │  │  evaluation.py   │  │   │
│                                                 │  │  database.py     │  │   │
│                                                 │  └──────────────────┘  │   │
│                                                 └───┬──────┬──────┬──────┘   │
│                                                     │      │      │          │
│                    SSH Tunnel :11434 ◀──────────────┘      │      │          │
│                    SSH Tunnel :11435 ◀─────────────────────┘      │          │
│                    HTTPS (Google API) ◀───────────────────────────┘          │
└─────────────────────┬──────────────────────────┬─────────────────────────────┘
                      │                          │
          ┌───────────▼──────────┐   ┌───────────▼──────────┐
          │  AWS EC2 Instance #1 │   │  AWS EC2 Instance #2 │
          │  (g5.xlarge)         │   │  (g5.xlarge)         │
          │                      │   │                      │
          │  Ollama Server       │   │  Ollama Server       │
          │  MedGemma-27B        │   │  MedGemma-27B        │
          │  (Q4_K_M quant)      │   │  (Q4_K_M quant)      │
          │  Port 11434          │   │  Port 11434          │
          │                      │   │                      │
          │  Role: Scene Sim     │   │  Role: Coach + SOAP  │
          └──────────────────────┘   └──────────────────────┘

                                              ┌───────────────────┐
                                              │  Google Cloud     │
                                              │  Gemini 2.5 Flash │
                                              │  (API Key)        │
                                              │                   │
                                              │  Role: Patient    │
                                              │  Roleplay + TTS   │
                                              └───────────────────┘

Three-Agent Design

┌───────────────────────────────────────────────────────────────────┐
│                        AGENT ARCHITECTURE                         │
├───────────────────┬───────────────────┬───────────────────────────┤
│                   │                   │                           │
│   MEDGEMMA 1      │    GEMINI         │    MEDGEMMA 2             │
│   (Scene Engine)  │    (Roleplay)     │    (Coach & Reporter)     │
│                   │                   │                           │
│   medgemma.py     │    gemini.py      │    medgemma2.py           │
│                   │                   │                           │
│   ┌─────────────┐ │ ┌───────────────┐ │ ┌───────────────────────┐ │
│   │ Narrate the │ │ │ Patient voice │ │ │ Real-time medical     │ │
│   │ scene &     │ │ │ (Sarah, Joey, │ │ │ coaching guidance     │ │
│   │ environment │ │ │ Hiker, Runner)│ │ │ (WMS/NOLS protocols)  │ │
│   ├─────────────┤ │ ├───────────────┤ │ ├───────────────────────┤ │
│   │ Classify    │ │ │ Companion     │ │ │ SOAP/PAS report       │ │
│   │ treatment   │ │ │ voice (Buddy, │ │ │ generation from       │ │
│   │ state       │ │ │ Bill)         │ │ │ simulation data       │ │
│   ├─────────────┤ │ ├───────────────┤ │ ├───────────────────────┤ │
│   │ Trigger     │ │ │ Text-to-      │ │ │ 86-point rubric       │ │
│   │ vitals via  │ │ │ Speech (TTS)  │ │ │ evaluation support    │ │
│   │ function    │ │ │ audio gen     │ │ │                       │ │
│   │ calls       │ │ │ (optional)    │ │ │                       │ │
│   ├─────────────┤ │ └───────────────┘ │ └───────────────────────┘ │
│   │ Clinical    │ │                   │                           │
│   │ assessment  │ │                   │                           │
│   │ reports     │ │                   │                           │
│   ├─────────────┤ │                   │                           │
│   │ Performance │ │                   │                           │
│   │ evaluation  │ │                   │                           │
│   └─────────────┘ │                   │                           │
│                   │                   │                           │
│  AWS EC2 #1       │  Google Cloud     │  AWS EC2 #2               │
│  :11434 (tunnel)  │  (HTTPS API)      │  :11435 (tunnel)          │
└───────────────────┴───────────────────┴───────────────────────────┘

Cross-Model Interaction

   Student types action
          │
          ▼
   ┌───────────────┐   scene narration   ┌───────────────┐
   │  MEDGEMMA 1   │────────────────────▶│  Student sees │
   │  (Scene Sim)  │                     │  scene update │
   └──────┬────────┘                     └───────────────┘
          │
          │ context: "student did X"
          ▼
   ┌───────────────┐   patient reaction  ┌───────────────┐
   │  GEMINI       │────────────────────▶│  Student sees │
   │  (Roleplay)   │                     │  dialog       │
   └──────┬────────┘                     └───────────────┘
          │
          │ reaction fed back to MedGemma1
          ▼
   ┌───────────────┐   scene reaction    ┌───────────────┐
   │  MEDGEMMA 1   │────────────────────▶│  Student sees │
   │  (narrates    │                     │  updated scene│
   │   patient     │                     └───────────────┘
   │   behavior)   │
   └───────────────┘

   ───── Meanwhile (on demand) ─────

   Student clicks "Coach"
          │
          ▼
   ┌───────────────┐   reads session     ┌───────────────┐
   │  MEDGEMMA 2   │◀────────────────────│  Conversation │
   │  (Coach)      │   history + vitals  │  History      │
   └──────┬────────┘                     └───────────────┘
          │
          │ medical guidance
          ▼
   ┌───────────────┐
   │  Coach Panel  │  (DO THIS NOW, ASSESSMENT GAPS,
   │  (sidebar)    │   I SUSPECT, TREATMENT PROTOCOL,
   └───────────────┘   RED FLAGS)

Data Flow

Simulation Turn Flow

┌──────────┐   POST /api/send_action   ┌──────────────┐
│ Browser  │──────────────────────────▶│  Flask App   │
│          │   {session_id, action,    │              │
│          │    speaker_tag}           │  Spawns      │
│          │                           │  background  │
│          │◀──────────────────────────│  thread      │
│          │   {action_id}             └──────┬───────┘
│          │                                  │
│          │   GET /api/action_status         │
│          │──────────────────────────▶       │
│          │   (polls every 2 sec)            ▼
│          │                           ┌───────────────┐
│          │                           │ heat_simulator│
│          │                           │ .py           │
│          │                           │               │
│          │                           │ 1. Classify   │
│          │                           │    treatment  │─── MedGemma1
│          │                           │    state      │    (LLM call)
│          │                           │               │
│          │                           │ 2. Generate   │
│          │                           │    scene      │─── MedGemma1
│          │                           │    response   │    (LLM call)
│          │                           │               │
│          │                           │ 3. Check for  │
│          │                           │    function   │
│          │                           │    calls      │
│          │                           │    (vitals)   │─── Deterministic
│          │                           │               │    JSON data
│          │                           │ 4. Patient    │
│          │                           │    reaction   │─── Gemini
│          │                           │               │    (LLM call)
│          │                           │ 5. Companion  │
│          │                           │    reaction   │─── Gemini
│          │                           │               │    (LLM call)
│          │                           │ 6. Update     │
│          │                           │    assessment │─── MedGemma1
│          │                           │    report     │    (LLM call)
│          │                           └──────┬────────┘
│          │                                  │
│          │◀─────────────────────────────────┘
│          │   {status: "done", messages: [
│          │     {speaker: "state", data: {...}},
│          │     {speaker: "vitals", data: {...}},
│          │     {speaker: "simulation", text: "..."},
│          │     {speaker: "patient", text: "..."},
│          │     {speaker: "companion", text: "..."},
│          │     {speaker: "report", text: "..."}
│          │   ]}
└──────────┘
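The spawn-then-poll pattern above can be sketched in a few lines. This is an illustrative simplification, not the project's actual code: the names `run_turn` and `ACTIONS` are hypothetical, and the real worker performs the six LLM/JSON steps shown in the diagram.

```python
import threading
import uuid

# Hypothetical in-memory action registry: action_id -> status record.
ACTIONS = {}

def run_turn(action_id, session_id, action):
    """Background worker: in the real app this runs the six pipeline steps
    (classify, narrate, vitals, patient, companion, report)."""
    messages = [
        {"speaker": "state", "data": {"treatment": "TACO"}},
        {"speaker": "simulation", "text": "You place ice packs on Sarah's neck..."},
    ]
    ACTIONS[action_id].update(status="done", messages=messages)

def send_action(session_id, action):
    """Equivalent of POST /api/send_action: spawn a thread, return an id."""
    action_id = str(uuid.uuid4())
    ACTIONS[action_id] = {"status": "running", "messages": []}
    worker = threading.Thread(target=run_turn, args=(action_id, session_id, action))
    worker.start()
    worker.join()  # the real server returns immediately and lets the client poll
    return action_id

def action_status(action_id):
    """Equivalent of GET /api/action_status."""
    return ACTIONS[action_id]
```

Returning an `action_id` instead of blocking keeps each HTTP request fast even though a full turn can take 30+ seconds of LLM calls.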

Vitals System (Deterministic)

   Treatment State Classification (MedGemma1 LLM)
          │
          │  e.g. "TACO" / "AIC" / "COOLING_AND_FLUIDS" / "NONE"
          ▼
   ┌───────────────────┐
   │  Scenario JSON    │  (e.g. vitals.json, out_of_town_scouts_vitals.json)
   │                   │
   │  treatment_methods│
   │   ├── TACO        │
   │   │   └── vitals[]│  ◀── Array of vitals at each assessment
   │   │       ├── [0] │      (HR, RR, mental_state, skin, BP,
   │   │       ├── [1] │       llm_patient_direction)
   │   │       └── [2] │
   │   ├── AIC         │
   │   │   └── vitals[]│
   │   └── NONE        │
   │       └── vitals[]│
   └───────────────────┘
          │
          │  Lookup by treatment_state + turns_in_current_method
          ▼
   ┌───────────────────┐
   │  Vitals Output    │
   │                   │
   │  heart_rate: 78   │
   │  respiratory: 16  │
   │  mental: A&Ox4    │
   │  skin: warm...    │
   │  bp: strong...    │
   │  treatment: TACO  │
   └───────────────────┘

Simulation Lifecycle

 ┌───────────┐   ┌──────────────┐   ┌───────────────┐   ┌──────────────┐
 │  Welcome  │──▶│  Scenario    │──▶│  Simulation   │──▶│  PAS Sheet   │
 │  Page     │   │  Brief       │   │  (main loop)  │   │  (SOAP)      │
 │           │   │              │   │               │   │              │
 │ • Mode    │   │ • Dispatch   │   │ • Chat        │   │ • Auto-fill  │
 │   toggle  │   │   info       │   │ • Vitals      │   │   from sim   │
 │   (Stu/   │   │ • Scene      │   │ • Treatment   │   │ • Edit fields│
 │   Teach)  │   │   image      │   │   state       │   │ • Save PDF   │
 │ • Random  │   │ • Characters │   │ • Assessment  │   │ • Submit for │
 │   scenario│   │ • Equipment  │   │   report      │   │   review     │
 │   assign  │   │ • Begin btn  │   │ • Coach       │   └──────┬───────┘
 └───────────┘   └──────────────┘   │   (guided     │          │
                                    │   mode toggle)│          │ (Teacher mode)
                                    │ • Evaluation  │          ▼
                                    │ • End sim btn │   ┌──────────────┐
                                    └───────────────┘   │  Teacher     │
                                                        │  Dashboard   │
                                                        │              │
                                                        │ • Submissions│
                                                        │   list       │
                                                        │ • Filter by  │
                                                        │   status     │
                                                        └──────┬───────┘
                                                               │
                                                               ▼
                                                        ┌──────────────┐
                                                        │  Grading     │
                                                        │  View        │
                                                        │              │
                                                        │  Left: PAS   │
                                                        │  (read-only) │
                                                        │              │
                                                        │  Right: 86pt │
                                                        │  rubric      │
                                                        │  (M/P/C)     │
                                                        └──────────────┘

Project Structure

Medgemma_Heatsim/
│
├── app.py                          # Flask server — all API routes
├── heat_simulator.py               # Core simulation engine — sessions, turns, vitals
├── medgemma.py                     # MedGemma1 LLM interface (Ollama / Vertex AI)
├── medgemma2.py                    # MedGemma2 LLM interface (Coach Ollama instance)
├── gemini.py                       # Gemini LLM interface (patient/companion roleplay)
├── gemini_tts.py                   # Gemini TTS audio generation (optional)
├── coach.py                        # Coach guidance logic (WMS/NOLS protocols)
├── soap_generator.py               # SOAP/PAS report generator
├── evaluation.py                   # Performance evaluation against protocols
├── database.py                     # SQLite — student submissions & teacher grades
├── auth.py                         # GCP Vertex AI authentication (optional)
├── cache.py                        # DiskCache wrapper for LLM response caching
├── assessment_template.txt         # Clinical assessment report template
├── requirements.txt                # Python dependencies
├── Dockerfile                      # Multi-stage Docker build
├── run_local.sh                    # Docker run script
├── env.example                     # Environment variable template
├── prompts.md                      # First responder prompt guide (WFR-aligned)
├── pm.md                           # Project manager summary
│
├── data/                           # Scenario & medical data (JSON)
│   ├── scenarios_registry.json     # Master registry of all scenarios
│   ├── new_heat.json               # Heat Illness scenario definition
│   ├── vitals.json                 # Heat Illness vitals progression
│   ├── desert_mirage.json          # Diabetic Hypoglycemia scenario
│   ├── desert_mirage_vitals.json   # Desert Mirage vitals progression
│   ├── running_on_water.json       # Hyponatremia scenario
│   ├── running_on_water_vitals.json # Running on Water vitals progression
│   ├── out_of_town_scouts.json     # Pediatric Heat Exhaustion scenario
│   ├── out_of_town_scouts_vitals.json # Out of Town Scouts vitals progression
│   ├── heat_background.json        # Medical knowledge base
│   ├── heat_protocol.json          # WMS protocol steps
│   └── pas_rubric.json             # 86-point PAS grading rubric
│
├── frontend/                       # React 18 SPA
│   ├── package.json
│   ├── public/
│   │   ├── index.html
│   │   └── assets/                 # Scenario images
│   │       ├── heat_illness.jpg
│   │       ├── desert_mirage.jpg
│   │       ├── running_on_water.jpg
│   │       └── out_of_town_scouts.jpg
│   └── src/
│       ├── App.js                  # Root component — routing, mode, state
│       ├── index.js                # React entry point
│       ├── shared/Style.css        # Global styles
│       └── components/
│           ├── WelcomePage/        # Landing — mode toggle, random scenario
│           ├── ScenarioBrief/      # Dispatch briefing with scene image
│           ├── Simulation/         # Main sim — chat, vitals, guided toggle
│           ├── CoachPanel/         # MedGemma2 coach sidebar
│           ├── PASSheet/           # Interactive PAS form + PDF export
│           ├── DetailsPopup/       # Session details overlay
│           ├── TeacherDashboard/   # Submission list for teachers
│           └── GradingView/        # Split-panel rubric grading
│
├── logs/                           # Saved SOAP reports (JSON)
├── .cache/                         # DiskCache SQLite store
└── medgemma_heatsim.db             # SQLite DB (submissions + grades)

Technology Stack

| Layer | Technology | Purpose |
|---|---|---|
| Frontend | React 18, CSS3 | Single-page application with dark theme |
| Backend | Python 3.11+, Flask, Flask-CORS | REST API server |
| Primary AI | MedGemma-27B (Q4_K_M via Ollama) | Scene simulation, vitals, assessment |
| Secondary AI | Gemini 2.5 Flash (Google API) | Patient/companion roleplay |
| Coach AI | MedGemma-27B (Q4_K_M via Ollama) | Medical coaching, SOAP reports |
| Infrastructure | AWS EC2 g5.xlarge (x2) | GPU compute for MedGemma |
| Tunneling | SSH port forwarding | Secure local access to EC2 Ollama |
| Database | SQLite | Student submissions, teacher grades |
| Caching | DiskCache (SQLite-backed) | LLM response memoization |
| PDF Export | jsPDF + browser print | PAS report download |
| TTS | Gemini 2.5 Flash Preview TTS | Optional voice audio |
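The caching row deserves a note: memoizing LLM responses makes repeated classification calls free. A minimal sketch of the idea behind cache.py, with a plain dict standing in for DiskCache and an assumed key-derivation scheme (the real module's details may differ):

```python
import hashlib
import json

# Dict stands in for diskcache.Cache(".cache") in this sketch.
_CACHE = {}

def cache_key(model, messages, temperature=0.0):
    """Derive a stable key from everything that affects the LLM output.
    sort_keys makes the JSON canonical so equal inputs hash equally."""
    payload = json.dumps(
        {"model": model, "messages": messages, "temperature": temperature},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def cached_chat(model, messages, call_llm):
    """Return a cached response, invoking the LLM only on a miss."""
    key = cache_key(model, messages)
    if key not in _CACHE:
        _CACHE[key] = call_llm(model, messages)
    return _CACHE[key]
```

Swapping the dict for DiskCache gives the same behavior persisted across server restarts.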

Features

Multi-Scenario Simulation

  • 4 wilderness medicine scenarios randomly assigned (hidden from student)
  • Deterministic vitals progression — research-based, not LLM-generated
  • Dynamic treatment classification — each scenario defines its own methods
  • Function calling for vitals — automated JSON function calls trigger vitals checks
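The function-calling bullet can be illustrated with a small parser. The exact tag format heat_simulator.py expects is internal; this sketch assumes the scene model embeds a JSON object such as `{"function": "check_vitals"}` in its narration, which is one common convention, not the project's confirmed one.

```python
import json
import re

# Matches a flat JSON object containing a "function" key (assumed format).
_CALL_RE = re.compile(r'\{[^{}]*"function"[^{}]*\}')

def extract_function_call(model_output):
    """Return the parsed call dict if the scene model requested one,
    else None. Malformed JSON is treated as no call."""
    match = _CALL_RE.search(model_output)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            return None
    return None
```

When a call is detected, the engine serves the deterministic vitals for the current treatment state instead of letting the LLM invent numbers.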

Guided Mode (Coach Copilot)

  • Toggle ON/OFF during simulation via header switch
  • MedGemma2 analyzes conversation history and vitals
  • Returns structured guidance: DO THIS NOW, ASSESSMENT GAPS, I SUSPECT, TREATMENT PROTOCOL, RED FLAGS
  • Independent from hidden scenario data — acts as remote medical director
  • Based on WMS 2024 and NOLS protocols

SOAP/PAS Report

  • MedGemma2 auto-fills complete PAS sheet from simulation data
  • Interactive form — every field editable (human-in-the-loop)
  • Pyramid diagram with ABCDE framework
  • Save as PDF via browser print dialog
  • Submit for teacher review

Student/Teacher Modes

  • Student Mode: Simulation + PAS sheet + Submit for Review
  • Teacher Mode: Canvas-style grading dashboard
    • View all submissions (filterable by status)
    • Split-panel grading: read-only PAS on left, 86-point rubric on right
    • M (Missed) / P (Partially Demonstrated) / C (Completed) toggles
    • Teacher notes and score summary

Scenarios

All symptoms are based on NOLS Wilderness Medicine, 7th Edition, by Tod Schimelpfenig (2021).

| Scenario | Primary Condition | Location | Characters | Treatment Methods | Difficulty |
|---|---|---|---|---|---|
| Heat Illness Sim | Heat Stroke | Piestewa Peak Trail | Sarah + Buddy (dog) | AIC, TACO, NONE | Advanced |
| Desert Mirage | Diabetic Hypoglycemia | National Trail, South Mountain | The Hiker | SUGAR_TREATMENT, FLUIDS_ONLY, NONE | Intermediate |
| Running on Water | Exercise-Associated Hyponatremia | Fat Man's Pass | The Runner | SALTY_SNACKS, FLUIDS_ONLY, NONE | Advanced |
| Out of Town Scouts | Pediatric Heat Exhaustion | Desert Classic Trail | Joey + Bill (scout leader) | COOLING_AND_FLUIDS, FLUIDS_ONLY, NONE | Intermediate |

Setup & Installation

Prerequisites

| Requirement | Details |
|---|---|
| macOS | Apple Silicon or Intel Mac |
| Python | 3.11 or higher |
| Node.js | 18 or higher |
| npm | Comes with Node.js |
| AWS EC2 | 2 instances, g5.xlarge minimum (NVIDIA A10G, 24GB VRAM) |
| SSH Key | ~/.ssh/medgemma-impact-challenge.pem |
| Gemini API Key | From aistudio.google.com/apikey |

Step 1: Clone / Copy the Project

# Copy the project folder to your machine
cp -r Medgemma_Heatsim ~/Desktop/Medgemma/Medgemma_Heatsim
cp -r aws ~/Desktop/Medgemma/aws

Ensure the SSH key is in place:

cp ~/Desktop/Medgemma/aws/medgemma-impact-challenge.pem ~/.ssh/
chmod 400 ~/.ssh/medgemma-impact-challenge.pem

Step 2: Install Python Dependencies

cd ~/Desktop/Medgemma/Medgemma_Heatsim
pip3 install flask flask-cors requests diskcache google-generativeai

Step 3: Install Frontend Dependencies & Build

cd ~/Desktop/Medgemma/Medgemma_Heatsim/frontend
npm install
npm run build
cd ..

Step 4: Verify AWS EC2 Instances

Ensure both EC2 instances are running and Ollama is active with MedGemma loaded:

# Test MedGemma1
ssh -i ~/.ssh/medgemma-impact-challenge.pem ec2-user@44.213.196.166 "curl -s http://localhost:11434/api/tags | head -5"

# Test MedGemma2
ssh -i ~/.ssh/medgemma-impact-challenge.pem ec2-user@52.0.203.245 "curl -s http://localhost:11434/api/tags | head -5"

You should see JSON output listing the model hf.co/unsloth/medgemma-27b-text-it-GGUF:Q4_K_M.


Running the Application

You need 4 terminal windows. Open them all, then run each command.

Terminal 1 — SSH Tunnel to MedGemma1 (Scene Simulator)

ssh -i ~/.ssh/medgemma-impact-challenge.pem \
    -N -L 11434:127.0.0.1:11434 \
    ec2-user@44.213.196.166

This maps localhost:11434 on your Mac to the Ollama server on EC2 instance #1. The terminal will appear to hang — that means it's working. Leave it open.

Verify (in another terminal):

curl http://localhost:11434/api/tags

Terminal 2 — SSH Tunnel to MedGemma2 (Coach & SOAP)

ssh -i ~/.ssh/medgemma-impact-challenge.pem \
    -N -L 11435:127.0.0.1:11434 \
    ec2-user@52.0.203.245

This maps localhost:11435 on your Mac to the Ollama server on EC2 instance #2. Leave it open.

Verify (in another terminal):

curl http://localhost:11435/api/tags

Terminal 3 — Start the Backend

cd ~/Desktop/Medgemma/Medgemma_Heatsim

LLM_BACKEND=ollama \
OLLAMA_URL=http://localhost:11434/api/chat \
OLLAMA_MODEL="hf.co/unsloth/medgemma-27b-text-it-GGUF:Q4_K_M" \
SECONDARY_LLM_BACKEND=gemini_api \
GEMINI_API_KEY="YOUR_GEMINI_API_KEY_HERE" \
COACH_OLLAMA_URL=http://localhost:11435/api/chat \
COACH_OLLAMA_MODEL="hf.co/unsloth/medgemma-27b-text-it-GGUF:Q4_K_M" \
OLLAMA_KEEP_ALIVE=3600 \
COACH_OLLAMA_KEEP_ALIVE=3600 \
GENERATE_SPEECH=false \
FRONTEND_BUILD=frontend/build \
python3 app.py

You should see:

 * Serving Flask app 'app'
 * Running on http://0.0.0.0:7860

Terminal 4 — (Optional) Frontend Dev Server

Only needed if you're making frontend changes. Otherwise, the Flask server serves the built frontend directly.

cd ~/Desktop/Medgemma/Medgemma_Heatsim/frontend
npm start

Dev server runs on http://localhost:3000 and proxies API calls to :7860.

Open the Application

http://localhost:7860

First request takes 1-3 minutes as MedGemma loads into GPU memory. Subsequent requests take 10-30 seconds.


API Reference

Session Management

| Method | Endpoint | Body / Query | Returns |
|---|---|---|---|
| GET | /api/list_scenarios | — | {scenarios: [{id, title, difficulty, ...}]} |
| POST | /api/create_session | {scenario_id} | Session metadata, characters, scenario data |
| GET | /api/get_session_state | ?session_id= | Turn count, treatment state, assessments, report |

Simulation

| Method | Endpoint | Body / Query | Returns |
|---|---|---|---|
| POST | /api/start_simulation | {session_id} | {status: "running"} |
| GET | /api/simulation_status | ?session_id= | {status, messages[], error} |
| POST | /api/send_action | {session_id, action, speaker_tag} | {action_id, status} |
| GET | /api/action_status | ?action_id= | {status, messages[], error} |
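A minimal client for the two simulation endpoints, using only the standard library. The paths and response fields come from the table above; the 2-second poll interval matches the data-flow diagram, while the timeout and the default `speaker_tag` value are illustrative choices, not documented defaults.

```python
import json
import time
import urllib.request

def build_action_payload(session_id, action, speaker_tag="responder"):
    # "responder" is an assumed placeholder tag, not a documented default.
    return {"session_id": session_id, "action": action, "speaker_tag": speaker_tag}

def post_json(url, payload):
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def send_action_and_wait(base, session_id, action, timeout=120):
    """POST the action, then poll /api/action_status until done."""
    action_id = post_json(
        f"{base}/api/send_action", build_action_payload(session_id, action)
    )["action_id"]
    deadline = time.time() + timeout
    while time.time() < deadline:
        with urllib.request.urlopen(
            f"{base}/api/action_status?action_id={action_id}"
        ) as resp:
            status = json.load(resp)
        if status["status"] == "done":
            return status["messages"]
        time.sleep(2)  # matches the 2-second poll in the data-flow diagram
    raise TimeoutError("action did not complete")
```

Example call: `send_action_and_wait("http://localhost:7860", session_id, "check radial pulse")`.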

Coach (MedGemma2)

| Method | Endpoint | Body / Query | Returns |
|---|---|---|---|
| POST | /api/coach_hint | {session_id} | {hint_id, status} |
| GET | /api/coach_status | ?hint_id= | {status, guidance, error} |

SOAP/PAS Report (MedGemma2)

| Method | Endpoint | Body / Query | Returns |
|---|---|---|---|
| POST | /api/generate_soap | {session_id} | {status: "running"} |
| GET | /api/soap_status | ?session_id= | {status, soap_data, error} |
| POST | /api/save_soap | {session_id, soap_data} | {status, filepath} |

Evaluation & Logging

| Method | Endpoint | Body / Query | Returns |
|---|---|---|---|
| POST | /api/evaluate_performance | {session_id, report} | {evaluation} |
| POST | /api/save_log | {session_id} | {status, filepath} |

Student/Teacher

| Method | Endpoint | Body / Query | Returns |
|---|---|---|---|
| POST | /api/submit_for_review | {session_id, student_name, scenario_id, scenario_title, soap_data} | {status, submission_id} |
| GET | /api/submissions | ?status=pending\|reviewed | {submissions: [...]} |
| GET | /api/submission/&lt;id&gt; | — | Full submission with SOAP data |
| GET | /api/rubric | — | 86-point PAS rubric template |
| POST | /api/save_grade | {submission_id, rubric_data, teacher_notes} | {status} |
| GET | /api/grade/&lt;id&gt; | — | Saved grade with rubric data |

Database Schema

SQLite database at medgemma_heatsim.db:

-- Student submissions
CREATE TABLE submissions (
    id              INTEGER PRIMARY KEY AUTOINCREMENT,
    session_id      TEXT NOT NULL,
    student_name    TEXT NOT NULL DEFAULT '',
    scenario_id     TEXT,
    scenario_title  TEXT,
    soap_data       TEXT NOT NULL,       -- JSON string of PAS form
    submitted_at    TEXT NOT NULL,       -- ISO 8601 timestamp
    status          TEXT NOT NULL DEFAULT 'pending'  -- 'pending' | 'reviewed'
);

-- Teacher grades
CREATE TABLE grades (
    id              INTEGER PRIMARY KEY AUTOINCREMENT,
    submission_id   INTEGER NOT NULL UNIQUE REFERENCES submissions(id),
    rubric_data     TEXT NOT NULL,       -- JSON array of 86 rubric items
    total_completed INTEGER NOT NULL DEFAULT 0,
    total_partial   INTEGER NOT NULL DEFAULT 0,
    total_missed    INTEGER NOT NULL DEFAULT 0,
    teacher_notes   TEXT NOT NULL DEFAULT '',
    graded_at       TEXT NOT NULL        -- ISO 8601 timestamp
);
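A round-trip against the submissions table looks like this. database.py presumably exposes similar helpers; the function names here are illustrative, and an in-memory database stands in for medgemma_heatsim.db.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE submissions (
    id              INTEGER PRIMARY KEY AUTOINCREMENT,
    session_id      TEXT NOT NULL,
    student_name    TEXT NOT NULL DEFAULT '',
    scenario_id     TEXT,
    scenario_title  TEXT,
    soap_data       TEXT NOT NULL,
    submitted_at    TEXT NOT NULL,
    status          TEXT NOT NULL DEFAULT 'pending'
);
""")

def submit_for_review(session_id, student_name, soap_data):
    """Store the PAS form as a JSON string, per the schema comment."""
    cur = conn.execute(
        "INSERT INTO submissions (session_id, student_name, soap_data, submitted_at)"
        " VALUES (?, ?, ?, datetime('now'))",
        (session_id, student_name, json.dumps(soap_data)),
    )
    conn.commit()
    return cur.lastrowid

def pending_submissions():
    """What the teacher dashboard's ?status=pending filter would query."""
    return conn.execute(
        "SELECT id, student_name FROM submissions WHERE status = 'pending'"
    ).fetchall()
```

The `status` default of 'pending' means new submissions show up on the teacher dashboard without any extra bookkeeping.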

Environment Variables

| Variable | Default | Description |
|---|---|---|
| LLM_BACKEND | ollama | MedGemma1 backend: ollama or vertex_ai |
| OLLAMA_URL | http://localhost:11434/api/chat | Ollama API URL for MedGemma1 |
| OLLAMA_MODEL | medgemma | Ollama model name for MedGemma1 |
| OLLAMA_KEEP_ALIVE | 1h | Keep model loaded (duration string or seconds integer) |
| SECONDARY_LLM_BACKEND | ollama | Roleplay backend: ollama or gemini_api |
| OLLAMA_SECONDARY_MODEL | (same as OLLAMA_MODEL) | Ollama model for the roleplay agent |
| GEMINI_API_KEY | — | Google Gemini API key (for roleplay and optional TTS) |
| COACH_OLLAMA_URL | http://localhost:11435/api/chat | Ollama API URL for MedGemma2 |
| COACH_OLLAMA_MODEL | (same as OLLAMA_MODEL) | Ollama model name for MedGemma2 |
| COACH_OLLAMA_KEEP_ALIVE | 1h | Keep coach model loaded |
| GENERATE_SPEECH | false | Enable Gemini TTS audio generation |
| FRONTEND_BUILD | frontend/build | Path to built React frontend |
| GCP_MEDGEMMA_ENDPOINT | — | Vertex AI endpoint (if using vertex_ai backend) |
| GCP_MEDGEMMA_SERVICE_ACCOUNT_KEY | — | GCP service account JSON (if using vertex_ai) |
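The fallback behavior in the table (coach model defaulting to the primary model) can be sketched as plain environment lookups; the actual parsing in app.py may differ, and `load_config` is an illustrative name.

```python
import os

def load_config(env=os.environ):
    """Resolve a subset of the settings using the defaults from the table."""
    ollama_model = env.get("OLLAMA_MODEL", "medgemma")
    return {
        "llm_backend": env.get("LLM_BACKEND", "ollama"),
        "ollama_url": env.get("OLLAMA_URL", "http://localhost:11434/api/chat"),
        "ollama_model": ollama_model,
        "coach_ollama_url": env.get(
            "COACH_OLLAMA_URL", "http://localhost:11435/api/chat"
        ),
        # COACH_OLLAMA_MODEL falls back to the primary model, per the table.
        "coach_ollama_model": env.get("COACH_OLLAMA_MODEL", ollama_model),
        # Env vars are strings, so the boolean needs explicit parsing.
        "generate_speech": env.get("GENERATE_SPEECH", "false").lower() == "true",
    }
```

Centralizing the defaults this way keeps the long `LLM_BACKEND=... python3 app.py` invocation optional for local runs.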

Performance Notes

| Metric | Value |
|---|---|
| Model load time | 30–120 seconds (cold start, first request) |
| MedGemma response | 10–30 seconds per turn |
| Gemini API response | 2–4 seconds |
| Coach guidance | 15–30 seconds |
| SOAP generation | 30–60 seconds |
| Recommended GPU | NVIDIA A10G (24GB VRAM) — fits Q4_K_M quantization |
| Minimum instance | AWS g5.xlarge (1x A10G, 24GB VRAM, 16 vCPU, 64GB RAM) |
| Keep-alive | Set OLLAMA_KEEP_ALIVE=3600 to keep the model in memory for 1 hour |

Troubleshooting

| Symptom | Cause | Fix |
|---|---|---|
| Stuck on "Connecting to MedGemma..." | SSH tunnel not running | Open Terminals 1 & 2 and run the SSH commands |
| 400 Bad Request from Ollama | Wrong model name | Run curl localhost:11434/api/tags to get the exact model name |
| 400 Bad Request with keep_alive | String vs. integer type | Use OLLAMA_KEEP_ALIVE=3600 (integer) or OLLAMA_KEEP_ALIVE=1h (string) |
| Coach not responding | MedGemma2 tunnel down | Check that the Terminal 2 SSH tunnel is alive |
| Blank PDF export | Browser popup blocked | Allow popups for localhost in browser settings |
| Model response very slow | Cold start | Wait 1–3 min for the first request; the model stays loaded per keep_alive |
| Connection refused :11434 | EC2 instance stopped | Start the EC2 instance in the AWS console |

Setting Up on a New Machine

1. Install Python deps

cd Medgemma_Heatsim
pip3 install flask flask-cors requests diskcache google-generativeai

2. Reinstall node_modules (platform-specific binaries won't transfer)

cd frontend && rm -rf node_modules && npm install && npm run build && cd ..

3. Copy the SSH key

cp /path/to/medgemma-impact-challenge.pem ~/.ssh/
chmod 400 ~/.ssh/medgemma-impact-challenge.pem

Medical Disclaimer

This is an AI training simulation for educational purposes only. It is not a finished or approved medical product and must not be used for actual clinical decisions. Symptoms are based on NOLS Wilderness Medicine 7th Edition by Tod Schimelpfenig (2021). Protocols reference the Wilderness Medical Society 2024 Clinical Practice Guidelines for heat illness prevention and treatment.
