Implementation Plan: Machine-Hosted MeticAI PWA
Branch: `feat/machine-hosted-pwa` (from `main` after v2.3.0 merges)
Milestone: 2.4
Related: #305 (Feasibility Study), #253 (Capacitor App)
Overview
Port MeticAI's React frontend to run directly on the Meticulous machine as a static PWA served by Tornado's StaticFileHandler. The frontend communicates with the machine API directly via @meticulous-home/espresso-api and with Google Gemini directly via @google/genai. No Python backend required for the machine-hosted deployment.
Resource impact: ~6.5 MB disk (~3% of free space), zero additional RAM, negligible CPU.
Code sharing with Capacitor app (#253): ~85%. Phases 1–6 are fully reusable by both deployment targets. Only Phase 7 (install scripts) and Phase 9 (service worker) are PWA-specific.
Architecture
Current (Docker)
```
┌────────────┐     ┌─────────────────────────────────┐     ┌──────────────┐
│  Browser   │────▶│  MeticAI Docker Container       │────▶│  Meticulous  │
│            │     │  nginx → FastAPI → pyMeticulous │     │  Machine     │
│            │◀────│  → MQTT Bridge → Socket.IO      │◀────│  REST + SIO  │
└────────────┘     └─────────────────────────────────┘     │  :8080       │
  Port 3550                                                └──────────────┘
```
Target (Machine-Hosted PWA)
```
┌────────────┐     ┌─────────────────────┐
│  Browser   │────▶│  Meticulous         │
│            │     │  Machine            │  ← Same origin, no CORS/PNA issues
│  React App │◀────│  Tornado :8080      │
│  + SIO     │     │  /meticai/* (static)│
│  + Gemini  │     │  /api/v1/* (REST)   │
│            │     │  /socket.io (SIO)   │
└────────────┘     └─────────────────────┘
       │
       │ HTTPS (cross-origin, allowed)
       ▼
┌──────────────┐
│ Google Gemini│
│  Cloud API   │
└──────────────┘
```
Phase 1: MachineService Abstraction Layer
Create an adapter pattern that both the machine-hosted PWA and future Capacitor app share.
1.1 — MachineService Interface
File: src/services/machine/MachineService.ts
```typescript
interface MachineService {
  // Connection
  connect(url: string): Promise<void>
  disconnect(): void
  isConnected(): boolean
  onConnectionChange(cb: (connected: boolean) => void): void

  // Profiles
  listProfiles(): Promise<ProfileIdent[]>
  fetchAllProfiles(): Promise<Profile[]>
  getProfile(id: string): Promise<Profile>
  saveProfile(profile: Profile): Promise<void>
  deleteProfile(id: string): Promise<void>
  loadProfileById(id: string): Promise<void>
  loadProfileFromJSON(profile: object): Promise<void>

  // Actions
  executeAction(action: ActionType): Promise<void>

  // Telemetry (real-time); each subscription returns an unsubscribe function
  onStatus(cb: (data: StatusData) => void): () => void
  onActuators(cb: (data: Actuators) => void): () => void
  onNotification(cb: (data: Notification) => void): () => void

  // History
  getHistoryListing(): Promise<HistoryEntry[]>
  getShotData(date: string, filename: string): Promise<ShotData>
  getLastShot(): Promise<HistoryEntry | null>

  // Settings
  getSettings(): Promise<Settings>
  updateSetting(key: string, value: unknown): Promise<void>
  getDeviceInfo(): Promise<DeviceInfo>
}
```

1.2 — ProxyAdapter
File: src/services/machine/ProxyAdapter.ts
- Wraps the current `fetch()` calls to the MeticAI FastAPI backend
- Used by the Docker deployment (existing architecture, zero behavior change)
- Acts as a drop-in replacement for the current scattered fetch calls
1.3 — DirectAdapter
File: src/services/machine/DirectAdapter.ts
- Wraps `@meticulous-home/espresso-api` (v0.10.11)
- HTTP via axios (profiles, actions, history, settings)
- Socket.IO via socket.io-client (telemetry: status, actuators, notifications)
- Used by the machine-hosted PWA and the Capacitor app
- Handles .zst decompression via the `fzstd` library for shot data
1.4 — Provider & Selection
File: src/services/machine/MachineServiceProvider.tsx
- React context provider: `<MachineServiceProvider>`
- Hook: `useMachineService(): MachineService`
- Selection logic:
  - Build-time: `VITE_MACHINE_MODE=direct|proxy`
  - Runtime fallback: if `window.location.port === '8080'` → direct mode
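As a minimal sketch of this selection logic (the function name and shape are assumptions, not part of the plan — only `VITE_MACHINE_MODE` and the `:8080` fallback come from it):

```typescript
// Sketch of adapter selection: a build-time flag wins, otherwise fall back
// to a runtime port check (the machine serves everything on :8080).
type MachineMode = "direct" | "proxy"

function resolveMachineMode(
  buildMode: string | undefined, // e.g. import.meta.env.VITE_MACHINE_MODE
  port: string                   // e.g. window.location.port
): MachineMode {
  // Build-time selection wins when set explicitly.
  if (buildMode === "direct" || buildMode === "proxy") return buildMode
  // Runtime fallback: served from the machine itself (Tornado on :8080).
  return port === "8080" ? "direct" : "proxy"
}
```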
1.5 — Type Definitions
- Import `Profile`, `ProfileIdent`, `StatusData`, `Actuators`, etc. from `@meticulous-home/espresso-profile`
- Create a mapping layer between espresso-api types and our existing `MachineProfile`/`MachineState` types
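A minimal sketch of that mapping layer, using the field correspondences from Phase 4.3 (the interface shapes here are illustrative, not the real espresso-api types):

```typescript
// Illustrative shapes only; the actual espresso-api types may differ.
interface RawStatus {
  temperature: number
  pressure: number
  weight: number
  flow_rate: number
  shot_time: number
  state: string
}

interface MachineState {
  temperature: number
  pressure: number
  weight: number
  flow: number
  timer: number
  machineState: string
}

// Map the machine's snake_case status fields onto MeticAI's MachineState.
function toMachineState(s: RawStatus): MachineState {
  return {
    temperature: s.temperature,
    pressure: s.pressure,
    weight: s.weight,
    flow: s.flow_rate,
    timer: s.shot_time,
    machineState: s.state,
  }
}
```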
Phase 2: AI Service Abstraction
2.1 — AIService Interface
File: src/services/ai/AIService.ts
```typescript
interface AIService {
  isConfigured(): boolean

  // Profile generation
  generateProfile(
    image: Blob | null,
    preferences: ProfilePreferences,
    onProgress?: (event: ProgressEvent) => void
  ): Promise<ProfileResult>

  // Shot analysis
  analyzeShot(
    shotData: ShotData,
    profileName: string,
    profileDescription?: string
  ): Promise<string>

  // Image generation
  generateImage(
    profileName: string,
    style: string,
    profileData?: object
  ): Promise<Blob>

  // Recommendations
  getRecommendations(context: RecommendationContext): Promise<Recommendation[]>
}
```

2.2 — ProxyAIService
File: src/services/ai/ProxyAIService.ts
- Wraps the current backend endpoints (`/api/analyze_and_profile`, `/api/shots/analyze-llm`, etc.)
- Used by the Docker deployment
2.3 — BrowserAIService
File: src/services/ai/BrowserAIService.ts
- Uses the `@google/genai` JavaScript SDK directly in the browser
- User provides their own Gemini API key (stored in localStorage with clear disclosure)
- Streaming support via `generateContentStream()` for progress events
- Image generation via the Imagen model endpoint
- API key security: Gemini API keys can be restricted to specific HTTP referrers via Google Cloud Console, mitigating exposure risk
2.4 — Prompt Builders (TypeScript Port)
Port the Python prompt_builder.py (732 lines) to TypeScript:
| Python Module | TypeScript File | Purpose |
|---|---|---|
prompt_builder.py build() |
src/services/ai/prompts/ProfilePromptBuilder.ts |
Profile generation system prompt |
prompt_builder.py CORE_SAFETY_CONSTRAINTS |
src/services/ai/prompts/ImagePromptBuilder.ts |
Imagen prompt construction |
| Shot analysis prompt (in routes) | src/services/ai/prompts/AnalysisPromptBuilder.ts |
Shot analysis system prompt |
| Recommendation prompt (in routes) | src/services/ai/prompts/RecommendationPromptBuilder.ts |
Profile recommendations |
Each prompt builder mirrors the Python version exactly to ensure identical AI output quality.
2.5 — API Key Management
- Settings page: text input for Gemini API key (already exists)
- Storage: localStorage (consistent with current UX where user enters key)
- Warning banner: "Your API key is stored locally in this browser"
- Optional "session-only" toggle: key stored in memory only, cleared on tab close
- Validation: test call to Gemini on save, show error if invalid
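The storage policy above could look roughly like this (the storage key name and function names are assumptions; `KeyStore` abstracts localStorage so the logic is testable):

```typescript
// Sketch of API key storage: localStorage by default, in-memory only when
// the "session-only" toggle is on (cleared when the tab closes).
const STORAGE_KEY = "meticai.gemini_api_key" // hypothetical key name

interface KeyStore {
  getItem(k: string): string | null
  setItem(k: string, v: string): void
  removeItem(k: string): void
}

let sessionKey: string | null = null

function saveApiKey(key: string, sessionOnly: boolean, store: KeyStore): void {
  if (sessionOnly) {
    sessionKey = key               // memory only; gone on tab close
    store.removeItem(STORAGE_KEY)  // make sure no persistent copy remains
  } else {
    store.setItem(STORAGE_KEY, key)
  }
}

function loadApiKey(store: KeyStore): string | null {
  return sessionKey ?? store.getItem(STORAGE_KEY)
}
```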
Phase 3: IndexedDB Persistence Layer
3.1 — Database Schema
File: src/services/storage/AppDatabase.ts
Using the `idb` library (lightweight IndexedDB wrapper, ~1.2 KB gzipped):

```typescript
interface AppDB extends DBSchema {
  settings: {
    key: string
    value: { key: string; value: unknown; updatedAt: number }
  }
  'shot-annotations': {
    key: string // "{date}/{filename}"
    value: {
      shotKey: string
      rating: number | null
      notes: string
      tags: string[]
      updatedAt: number
    }
    indexes: { 'by-rating': number }
  }
  'ai-cache': {
    key: string // hash of shot data
    value: {
      cacheKey: string
      analysis: string
      createdAt: number
      expiresAt: number
    }
    indexes: { 'by-expiry': number }
  }
  'pour-over-state': {
    key: string // "default"
    value: {
      coffeeWeight: number
      brewRatio: number
      bloomAmount: number
      bloomTime: number
      updatedAt: number
    }
  }
  'dial-in-sessions': {
    key: string // session ID
    value: {
      id: string
      coffee: CoffeeDetails
      steps: DialInStep[]
      createdAt: number
    }
    indexes: { 'by-date': number }
  }
  'profile-images': {
    key: string // profile ID
    value: {
      profileId: string
      imageBlob: Blob
      updatedAt: number
    }
  }
}
```

3.2 — Cache Management
- AI cache: TTL-based expiry (7 days default), auto-cleanup on app start
- Profile images: LRU eviction when total exceeds 50 MB
- Estimate total IndexedDB usage: 10–100 MB depending on cached images
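The TTL logic for the AI cache can be sketched as follows (entry shape matches the `ai-cache` store in 3.1; the helper names are assumptions):

```typescript
// AI cache entries carry an expiresAt timestamp; cleanup on app start
// drops anything past it. 7-day default TTL per the plan.
const AI_CACHE_TTL_MS = 7 * 24 * 60 * 60 * 1000

interface AiCacheEntry {
  cacheKey: string
  analysis: string
  createdAt: number
  expiresAt: number
}

function makeEntry(cacheKey: string, analysis: string, now: number): AiCacheEntry {
  return { cacheKey, analysis, createdAt: now, expiresAt: now + AI_CACHE_TTL_MS }
}

// Run on app start (in IndexedDB this would iterate the 'by-expiry' index).
function pruneExpired(entries: AiCacheEntry[], now: number): AiCacheEntry[] {
  return entries.filter((e) => e.expiresAt > now)
}
```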
3.3 — Migration Hook
useStorageMigration()hook runs on app mount- Detects first-run (no IndexedDB) → initializes defaults
- Future: version-based schema migrations
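The first-run decision inside `useStorageMigration()` might look like this (the version constant and return values are illustrative assumptions):

```typescript
// Sketch of the migration decision: no stored version means first run;
// an older version triggers a (future) schema migration.
const SCHEMA_VERSION = 1

type MigrationAction =
  | { kind: "init-defaults" }          // first run: no IndexedDB yet
  | { kind: "migrate"; from: number }  // future: version-based migrations
  | { kind: "none" }

function planMigration(storedVersion: number | null): MigrationAction {
  if (storedVersion === null) return { kind: "init-defaults" }
  if (storedVersion < SCHEMA_VERSION) return { kind: "migrate", from: storedVersion }
  return { kind: "none" }
}
```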
Phase 4: Direct Socket.IO Telemetry
4.1 — Replace WebSocket Chain
Current: Machine(SIO :8080) → Bridge(MQTT) → Server(WS) → Browser
New: Machine(SIO :8080) → Browser (direct via espresso-api)
4.2 — Hook Adaptation
File: src/hooks/useWebSocket.ts → refactor to src/hooks/useMachineTelemetry.ts
```typescript
function useMachineTelemetry(): MachineState {
  const machine = useMachineService()
  const [state, setState] = useState<MachineState>(initialState)

  useEffect(() => {
    const unsub1 = machine.onStatus((data) => {
      setState(prev => ({
        ...prev,
        temperature: data.temperature,
        pressure: data.pressure,
        weight: data.weight,
        flow: data.flow_rate,
        // ... field mapping
      }))
    })
    const unsub2 = machine.onActuators((data) => {
      setState(prev => ({ ...prev, power: data.power }))
    })
    return () => { unsub1(); unsub2() }
  }, [machine])

  return state
}
```

4.3 — Field Mapping
The machine's Socket.IO status event fields may differ slightly from our MachineState type. Create a mapping layer:
| Machine (StatusData) | MeticAI (MachineState) |
|---|---|
| `temperature` | `temperature` |
| `pressure` | `pressure` |
| `weight` | `weight` |
| `flow_rate` | `flow` |
| `shot_time` | `timer` |
| `state` | `machineState` |
4.4 — Connection Status
- Expose connection state: `connected | disconnected | reconnecting`
- Auto-reconnect with exponential backoff (built into socket.io-client)
- Visual indicator in the header/footer bar
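The tri-state status can be modeled as a small reducer over socket.io-client's lifecycle events (`connect`, `disconnect`, and the manager-level `reconnect_attempt`); the reducer itself is a sketch, not part of the plan:

```typescript
// Map socket.io-client lifecycle events onto the tri-state status above.
type ConnectionStatus = "connected" | "disconnected" | "reconnecting"

function nextStatus(current: ConnectionStatus, event: string): ConnectionStatus {
  switch (event) {
    case "connect":           return "connected"
    case "reconnect_attempt": return "reconnecting"
    case "disconnect":        return "disconnected"
    default:                  return current // ignore unrelated events
  }
}
```

In the app, this would be wired to `socket.on("connect", …)`, `socket.on("disconnect", …)`, and `socket.io.on("reconnect_attempt", …)`, feeding the header/footer indicator.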
Phase 5: Feature Triage
✅ KEEP (works in browser, no server needed)
| Feature | How | Notes |
|---|---|---|
| Profile CRUD | espresso-api REST | Direct to machine |
| Start/Stop/Preheat | `espresso-api.executeAction()` | Direct to machine |
| Real-time telemetry | espresso-api Socket.IO | Direct to machine |
| Shot history browsing | `espresso-api.getHistoryShortListing()` | + fzstd for .zst |
| Shot data visualization | Client-side Recharts | Already browser-only |
| Profile generation (AI) | `@google/genai` direct | User's own API key |
| Shot analysis (AI) | `@google/genai` direct | Cached in IndexedDB |
| Image generation (AI) | Imagen via `@google/genai` | User's own API key |
| Profile recommendations | Token-free engine + Gemini | Local scoring + optional AI |
| Pour-over mode | Timer + espresso-api commands | State in IndexedDB |
| Espresso compass | Client-side + Gemini | Direct AI calls |
| Variable adjustments | `espresso-api.loadProfileFromJSON()` | Ephemeral loading |
| Profile validation | `@meticulous-home/espresso-profile` | Pure TS, zero deps |
| Shot annotations | IndexedDB | Local storage |
| Profile image cache | IndexedDB blob store | LRU eviction |
| i18n (6 languages) | Static JSON bundles | Already client-side |
| Dark/light theme | CSS/localStorage | Already client-side |
| PWA install prompt | manifest.json (exists) + service worker | Add SW |
❌ REMOVE (requires server infrastructure)
| Feature | Reason | Impact |
|---|---|---|
| mDNS auto-discovery | Browsers can't do mDNS | Low — user enters IP or uses meticulous.local |
| Scheduled/recurring shots | No persistent scheduler in browser | Low — manual shot start |
| Profile cloud sync | No server-side storage | None — machine is source of truth |
| System restart/update | Requires OS access | None — use machine's own UI |
| Tailscale configuration | CLI tool | None — not core feature |
| MCP server | Server-side tool integration | None — not user-facing |
🟡 DEGRADE (partial functionality change)
| Feature | Change | Solution |
|---|---|---|
| .zst shot decompression | Python zstd → JS | fzstd library (2.4 KB gzipped) |
| Generation progress | SSE from backend → Gemini streaming | generateContentStream() native |
| AI analysis caching | Server DB → browser | IndexedDB with 7-day TTL |
| Machine connectivity check | Server-side health check | Browser fetch with timeout |
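The .zst path can be sketched with the decompressor injected as a parameter (in production this would be `fzstd`'s `decompress`; the helper names and the pass-through behavior for uncompressed payloads are assumptions):

```typescript
// Zstandard frames start with the magic bytes 0x28 0xB5 0x2F 0xFD, so we
// can detect .zst payloads and only decompress when needed.
const ZSTD_MAGIC = [0x28, 0xb5, 0x2f, 0xfd]

function isZstd(bytes: Uint8Array): boolean {
  return bytes.length >= 4 && ZSTD_MAGIC.every((b, i) => bytes[i] === b)
}

function maybeDecompress(
  bytes: Uint8Array,
  inflate: (b: Uint8Array) => Uint8Array // e.g. fzstd's decompress()
): Uint8Array {
  return isZstd(bytes) ? inflate(bytes) : bytes
}
```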
Phase 6: Build Configuration
6.1 — package.json Scripts
```json
{
  "scripts": {
    "build": "tsc -b --noCheck && vite build",
    "build:machine": "VITE_MACHINE_MODE=direct tsc -b --noCheck && vite build",
    "build:docker": "VITE_MACHINE_MODE=proxy tsc -b --noCheck && vite build",
    "test:run": "vitest run",
    "test:direct": "VITE_MACHINE_MODE=direct vitest run"
  }
}
```

6.2 — New Dependencies
| Package | Version | Size (gzipped) | Purpose |
|---|---|---|---|
| `@meticulous-home/espresso-api` | ^0.10.11 | ~15 KB | Machine API client |
| `@meticulous-home/espresso-profile` | ^0.4.2 | ~5 KB | Profile types & validation |
| `@google/genai` | ^1.46.0 | ~25 KB | Gemini JS SDK |
| `idb` | ^8.x | ~1.2 KB | IndexedDB wrapper |
| `fzstd` | ^0.1.x | ~2.4 KB | Zstandard decompression |
| `vite-plugin-pwa` | ^0.21.x | dev only | Service worker generation |
Build size impact: ~50–80 KB additional gzipped. Total: ~800 KB gzipped, ~6.5 MB on disk.
6.3 — Vite Environment Variables
```
VITE_MACHINE_MODE=direct|proxy                    # Adapter selection
VITE_DEFAULT_MACHINE_URL=http://meticulous.local  # Machine URL default
```
6.4 — Tree Shaking
- `proxy` build excludes: `espresso-api`, `espresso-profile`, `@google/genai`, `fzstd`
- `direct` build excludes: `ProxyAdapter`, `ProxyAIService`
- Conditional imports via dynamic `import()` keyed on `VITE_MACHINE_MODE`
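A sketch of the mode-keyed conditional import (the helper name is an assumption; module paths follow the plan's file layout):

```typescript
// Pure helper: which adapter module a given mode selects.
type Mode = "direct" | "proxy"

function adapterModule(mode: Mode): string {
  return mode === "direct" ? "./DirectAdapter" : "./ProxyAdapter"
}

// In app code, keep the env check and import() literals static so Vite
// can fold the env value at build time and tree-shake the dead branch:
//
//   if (import.meta.env.VITE_MACHINE_MODE === "direct") {
//     const { DirectAdapter } = await import("./DirectAdapter")
//     // ... DirectAdapter pulls in espresso-api, espresso-profile, fzstd
//   } else {
//     const { ProxyAdapter } = await import("./ProxyAdapter")
//   }
```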
6.5 — Service Worker
File: src/sw.ts (generated via vite-plugin-pwa)
Strategies:
- Static assets (JS, CSS, images, locale JSON): CacheFirst with versioned cache names
- Machine API responses (profiles, settings): NetworkFirst with 5s timeout → fall back to cache
- AI responses: NetworkOnly (no caching of Gemini calls in SW)
- Offline mode: Show cached profiles and history, disable machine commands with "offline" badge
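The strategies above map onto a `vite-plugin-pwa` config roughly like this (a sketch: the cache names and URL patterns are assumptions, and static assets are handled by the plugin's default precache manifest):

```typescript
// vite.config.ts fragment — generates the service worker via Workbox.
import { VitePWA } from "vite-plugin-pwa"

export const pwaPlugin = VitePWA({
  registerType: "autoUpdate",
  workbox: {
    runtimeCaching: [
      {
        // Machine API: network first with a 5 s timeout, then fall back
        // to the cached copy (profiles, settings).
        urlPattern: /\/api\/v1\/(profile|settings)/,
        handler: "NetworkFirst",
        options: { cacheName: "machine-api", networkTimeoutSeconds: 5 },
      },
      {
        // Gemini calls: never cached by the service worker.
        urlPattern: /generativelanguage\.googleapis\.com/,
        handler: "NetworkOnly",
      },
    ],
  },
})
```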
Phase 7: Machine-Side Installation
7.1 — Install Script (install-meticai.sh)
```bash
#!/bin/bash
set -euo pipefail

METICAI_VERSION="${1:-latest}"
INSTALL_DIR="/opt/meticai-web"

echo "=== MeticAI PWA Installer v${METICAI_VERSION} ==="

# Pre-flight resource checks
FREE_DISK_MB=$(df -m / | awk 'NR==2{print $4}')
FREE_RAM_MB=$(free -m | awk '/Mem:/{print $7}')
echo "Free disk: ${FREE_DISK_MB} MB | Free RAM: ${FREE_RAM_MB} MB"
if [ "$FREE_DISK_MB" -lt 20 ]; then
  echo "ERROR: Need ≥20 MB free disk (have ${FREE_DISK_MB} MB)"; exit 1
fi

# Download release artifact
if [ "$METICAI_VERSION" = "latest" ]; then
  RELEASE_URL=$(curl -fsSL https://api.github.com/repos/hessius/MeticAI/releases/latest \
    | grep "browser_download_url.*meticai-web.tar.gz" | cut -d'"' -f4)
else
  RELEASE_URL="https://github.com/hessius/MeticAI/releases/download/v${METICAI_VERSION}/meticai-web.tar.gz"
fi
echo "Downloading from: ${RELEASE_URL}"
curl -fsSL "$RELEASE_URL" -o /tmp/meticai-web.tar.gz
DOWNLOAD_SIZE=$(du -m /tmp/meticai-web.tar.gz | cut -f1)
echo "Download: ${DOWNLOAD_SIZE} MB"

# Back up any existing install (use an if-block: a bare `[ -d ] && mv`
# would abort the script under `set -e` when no install exists)
if [ -d "$INSTALL_DIR" ]; then
  mv "$INSTALL_DIR" "${INSTALL_DIR}.bak.$(date +%s)"
fi

# Extract
mkdir -p "$INSTALL_DIR"
tar -xzf /tmp/meticai-web.tar.gz -C "$INSTALL_DIR"
rm /tmp/meticai-web.tar.gz

# Report
FILE_COUNT=$(find "$INSTALL_DIR" -type f | wc -l)
INSTALL_SIZE=$(du -sm "$INSTALL_DIR" | cut -f1)
FREE_AFTER=$(df -m / | awk 'NR==2{print $4}')
echo ""
echo "Installed: ${FILE_COUNT} files, ${INSTALL_SIZE} MB"
echo "Free disk after: ${FREE_AFTER} MB"
echo ""
echo "=== Next: Add Tornado route (see README) ==="
echo "Access at: http://meticulous.local:8080/meticai/"
```

7.2 — Tornado Route Configuration
Add to `meticulous-backend/api/web_ui.py`:

```python
METICAI_HANDLER = [
    (r"/meticai", tornado.web.RedirectHandler, {"url": "/meticai/"}),
    (r"/meticai/(.*)", tornado.web.StaticFileHandler, {
        "default_filename": "index.html",
        "path": "/opt/meticai-web",
    }),
]

# In the WEB_UI_HANDLER list, add:
# WEB_UI_HANDLER.extend(METICAI_HANDLER)
```

This follows the exact same pattern as the existing `/debug/*` static file handler.
7.3 — Validation Script (validate-meticai.sh)
```bash
#!/bin/bash
echo "=== MeticAI Installation Validation ==="
echo ""

# Disk
echo "--- Disk ---"
df -h / | awk 'NR==2{printf "Total: %s | Used: %s | Free: %s | Use: %s\n",$2,$3,$4,$5}'
du -sh /opt/meticai-web 2>/dev/null && echo "" || echo "NOT INSTALLED"

# Memory
echo "--- Memory ---"
free -m | awk '/Mem:/{printf "Total: %s MB | Used: %s MB | Available: %s MB\n",$2,$3,$7}'
echo ""

# Files
echo "--- Files ---"
if [ -f "/opt/meticai-web/index.html" ]; then
  echo "index.html: OK"
  echo "Total files: $(find /opt/meticai-web -type f | wc -l)"
  echo "JS chunks: $(find /opt/meticai-web -name '*.js' | wc -l)"
  echo "CSS files: $(find /opt/meticai-web -name '*.css' | wc -l)"
else
  echo "ERROR: index.html not found!"
fi
echo ""

# Routes
echo "--- Routes ---"
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/meticai/ 2>/dev/null)
echo "GET /meticai/: HTTP $HTTP_CODE"
[ "$HTTP_CODE" = "200" ] && echo "  ✅ Route working" || echo "  ❌ Route not configured"

# Machine API
echo ""
echo "--- Machine API ---"
API_CODE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080/api/v1/profile 2>/dev/null)
echo "GET /api/v1/profile: HTTP $API_CODE"
echo ""
echo "=== Done ==="
```

7.4 — Update Script (update-meticai.sh)
Same as install but with automatic backup + cleanup of old backups (keep last 2).
7.5 — Uninstall Script (uninstall-meticai.sh)
```bash
#!/bin/bash
echo "Removing MeticAI PWA..."
rm -rf /opt/meticai-web /opt/meticai-web.bak.*
echo "Removed. Remember to remove the Tornado route from web_ui.py."
```

Phase 8: CI/CD Pipeline
8.1 — Release Workflow Addition
```yaml
# In .github/workflows/release.yml (or a new workflow)
build-machine-pwa:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - uses: oven-sh/setup-bun@v2
    - run: cd apps/web && bun install
    - run: cd apps/web && VITE_MACHINE_MODE=direct bun run build
    - run: tar -czf meticai-web.tar.gz -C apps/web/dist .
    - uses: softprops/action-gh-release@v2
      with:
        files: meticai-web.tar.gz
```

8.2 — Test Matrix
- Existing test suite validates proxy mode (Docker) — unchanged
- New test file: `DirectAdapter.test.ts` — mocks `espresso-api` responses
- New test file: `BrowserAIService.test.ts` — mocks `@google/genai`
- New test file: `AppDatabase.test.ts` — uses `fake-indexeddb` for IndexedDB tests
- CI runs both `test:run` and `test:direct`
Development Sequence & Dependencies
```
Phase 1.1 (Interface) ──▶ Phase 1.2 (Proxy) ──────────┐
        │                                             │
        └──▶ Phase 1.3 (Direct) ──▶ Phase 4 (SIO)     │
                                                      │
Phase 2.1 (AI Interface) ──▶ Phase 2.2 (ProxyAI) ─────┤
        │                                             │
        └──▶ Phase 2.3 (BrowserAI)                    │
        └──▶ Phase 2.4 (Prompts TS)                   │
                                                      │
Phase 3 (IndexedDB) ──────────────────────────────────┤
                                                      │
Phase 5 (Triage) ◀────────────────────────────────────┘
        │
        ▼
Phase 6 (Build Config) ──▶ Phase 7 (Install Scripts)
        │                  Phase 8 (CI/CD)
        ▼
Phase 9 (Service Worker)
```
Parallelizable: Phases 1, 2, 3 can all start simultaneously.
Shared Code with Capacitor App (#253)
| Phase | Shared? | Notes |
|---|---|---|
| 1. MachineService + DirectAdapter | ✅ 100% | Same interface, same adapter |
| 2. AIService + BrowserAI | ✅ 100% | Same prompts, same SDK |
| 3. IndexedDB persistence | ✅ 100% | Same schema, same hooks |
| 4. Socket.IO telemetry | ✅ 100% | Same direct connection |
| 5. Feature triage | ✅ 100% | Same feature set |
| 6. Build config | ✅ Partially | Capacitor adds native shell |
| 7. Install scripts | ❌ PWA only | Capacitor uses app stores |
| 8. CI/CD | ✅ Partially | Capacitor adds iOS/Android builds |
| 9. Service worker | ❌ PWA only | Capacitor doesn't need SW |
The Capacitor app (#253) becomes: DirectAdapter + Capacitor shell + mDNS plugin + app store packaging. All core logic is shared.
Open Questions / MeticulousHome Coordination
- Tornado route: Can MeticulousHome add a `/meticai/*` static handler to their backend? Or do we need users to manually patch `web_ui.py`?
- PNA header: Should we request adding `Access-Control-Allow-Private-Network: true` to `base_handler.py` for future-proofing?
- App marketplace: Is MeticulousHome planning an "apps" or "plugins" system? If so, MeticAI could be distributed through it.
- OTA updates: Could the machine's OTA system include MeticAI frontend updates?
- Port conflict: The machine uses port 8080 for both REST and Socket.IO. Confirm this is stable and won't change.