- Domain: late.sh - Terminal Clubhouse for Developers
- Primary audience: LLM agents working on this codebase, human contributors
- Last updated: 2026-05-05 (CLI details in `late-cli/CONTEXT.md`; Web details in `late-web/CONTEXT.md`; Rooms details in `late-ssh/src/app/rooms/CONTEXT.md`; Chat details in `late-ssh/src/app/chat/CONTEXT.md`; Artboard details in `late-ssh/src/app/artboard/CONTEXT.md`)
- Status: Active
- Stability note: Sections marked `[STABLE]` should change rarely. Sections marked `[VOLATILE]` are expected to change often.
This file is the primary working context for the entire late.sh project.
- LLM agents should treat this as a living document and update it whenever meaningful behavior changes.
- If code and this file diverge, prefer updating this file quickly so future work stays reliable.
- Temporary or branch-specific behavior should be documented here with clear cleanup notes.
- Refresh `Last updated` date
- Review `Current Work` and `Future Work`
- Validate `Critical Invariants`
- Update telemetry references if operation/event names changed
- Remove obsolete notes
- Re-review this file regularly (every 2 weeks) to prevent context drift.
A cozy terminal clubhouse for developers. Lofi beats, casual games, chat, and tech news - all via SSH.
ssh late.sh and you're in. Zero friction, terminal-first, always-on vibes.
The system is a Rust workspace with four crates (late-cli, late-core, late-ssh, late-web) backed by PostgreSQL, Icecast audio streaming, and Liquidsoap playlist management.
- Primary entry points: SSH server (russh on port 2222), HTTP API (axum on port 4000), Web server (axum on port 3000)
- Main responsibilities: Multi-screen TUI over SSH (Dashboard, Chat, The Arcade, Rooms, Artboard), public web frontend, genre voting, paired browser/CLI audio control plus visualizer, real-time chat and chat-adjacent feeds, private per-user RSS/Atom inboxes that can be shared into News, link/YouTube sharing with AI summaries/ASCII thumbnails, interactive terminal games, persistent game-backed Rooms, and a shared multi-user ASCII Artboard. Detailed CLI behavior lives in `late-cli/CONTEXT.md`; detailed Web behavior lives in `late-web/CONTEXT.md`; detailed Rooms/Blackjack behavior lives in `late-ssh/src/app/rooms/CONTEXT.md`; detailed Chat behavior lives in `late-ssh/src/app/chat/CONTEXT.md`; detailed Artboard/dartboard behavior lives in `late-ssh/src/app/artboard/CONTEXT.md`. Configurable right-side panels: the global app sidebar (now playing, activity, visualizer, bonsai) plus the arcade lobby leaderboard sidebar, both default-on. Global `q` opens the quit confirm; pressing `q` again exits and `Esc` dismisses it.
- Highest-risk areas: SSH render loop backpressure, connection limiting, chat sync consistency, paired-client WS routing/state drift
- Cover both runtime apps: `late-ssh` and `late-web`.
- Keep most tests close to code under change (small, deterministic, focused).
- Use integration/smoke tests for boundary behavior across crates/services.
Unit tests (`#[cfg(test)] mod tests` inside `src/` files):
- MUST be pure logic only: no database, no services, no network, no async runtime required.
- Test input/output transformations, state transitions, parsing, formatting, validation math.
- If you need a `Db`, `Service`, `State`, or any I/O — it is NOT a unit test. Move it to `tests/`.
- Good examples: `rate_limit.rs` (in-memory limiter logic), `state.rs` (enum transitions), `input.rs` (key → action mapping).
- Preferred source layout for a domain is `src/.../<domain>/mod.rs` plus adjacent `state.rs`, `input.rs`, `ui.rs`, `svc.rs` as needed. `mod.rs` files must only contain `pub mod` declarations — never `pub use` re-exports.
- Keep pure unit tests inline in those source files. Do NOT create `src/.../<domain>/tests/` folders just to split unit tests.
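An illustrative sketch of the inline convention, using the global quit-confirm flow described earlier (`Action` and `map_key` are hypothetical names, not real project items): pure key → action mapping with its test in the same file, no I/O anywhere.

```rust
// Hypothetical sketch of the inline unit-test convention: pure key -> action
// routing, no Db/Service/State, tested in a `#[cfg(test)]` module in-file.
#[derive(Debug, PartialEq)]
pub enum Action {
    Quit,        // `q` opens the quit confirm
    ConfirmQuit, // second `q` exits
    Dismiss,     // `Esc` dismisses the confirm
    Char(char),  // everything else passes through
}

/// Pure input routing: a function of (key, mode) only.
pub fn map_key(c: char, quit_confirm_open: bool) -> Action {
    match (c, quit_confirm_open) {
        ('q', false) => Action::Quit,
        ('q', true) => Action::ConfirmQuit,
        ('\x1b', true) => Action::Dismiss,
        (other, _) => Action::Char(other),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn quit_flow() {
        assert_eq!(map_key('q', false), Action::Quit);
        assert_eq!(map_key('q', true), Action::ConfirmQuit);
        assert_eq!(map_key('\x1b', true), Action::Dismiss);
    }
}
```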
Integration tests (`late-ssh/tests/`, `late-web/tests/`, `late-core/tests/`):
- MUST use testcontainers for database access — always go through `late_core::test_utils::test_db()` (or the `helpers::new_test_db()` wrapper in `late-ssh`).
- NEVER use `Db::new(&DbConfig::default())` or hardcoded connection strings as a substitute for real DB access in integration tests.
- Exception: `late-web` route smoke tests that instantiate `AppState` but do not exercise DB-backed routes may use an inert `Db::new(&DbConfig::default())`; the moment a test hits `/gallery`, `/profiles`, or any DB code path, use `late_core::test_utils::test_db()`.
- `late-core::test_utils` owns shared test infrastructure: `test_db()`, `create_test_user()`. Use these everywhere instead of rolling per-test user creation — except in `late-core` model tests that are testing `User::create` itself.
- `late-ssh/tests/helpers/mod.rs` re-exports `create_test_user` from `late-core` and adds ssh-specific helpers (`test_config`, `test_app_state`, `make_app`, etc.). Domain test directories access these via `#[path = "../helpers/mod.rs"] mod helpers;` in their `main.rs`.
- Any test that touches DB, services, network, or cross-module orchestration belongs here.
- Preferred integration layout is domain-oriented under crate `tests/`, mirroring the source structure: `tests/<domain>/main.rs` with sibling `svc.rs`, `state.rs`, etc. as needed. `late-core` tests are named after their domain (`user.rs`, `vote.rs`, `chat/`).
LLM enforcement:
- On every code change, check: does this need a test? If yes, classify it strictly as unit or integration per the rules above.
- LLM agents must NOT run `cargo test`, `cargo nextest`, or `cargo clippy` in this repo. The human owner runs verification manually because those commands are too blocking in normal agent workflows.
- Do NOT put integration-flavored tests (DB calls, service interactions, spawning tasks) inside `#[cfg(test)]` module blocks in `src/` files.
- Do NOT invent extra source-side test directory structure when inline `#[cfg(test)] mod tests` is sufficient; reserve directory splits for crate-level integration tests under `tests/`.
- If a test is intentionally deferred (WIP/incomplete dependency), document the gap and cleanup plan in PR/context notes.
- Unit tests in module files — pure logic only, no I/O (`state.rs`, `input.rs`, `ui.rs`, `rate_limit.rs`).
- Integration tests in `late-ssh/tests/` and `late-web/tests/` — real DB via testcontainers, shared helpers.
- Workspace-wide checks before merge (`fmt`, `clippy`, `nextest`).
For `late-ssh`:
- `app/*/state.rs`: unit tests for transition rules, event drains, selection/filter logic (includes profile field navigation).
- `app/*/input.rs`: unit tests for key routing and mode guards.
- `app/*/ui.rs`: unit tests for pure formatting/layout helpers only; avoid brittle pixel snapshots.
- `app/*/{mod,state,input,ui,svc,model}.rs`: keep the domain module flat and predictable; add pure unit tests inline in the relevant file instead of under `src/app/*/tests/`.
- `app/render.rs` / `app/tick.rs`: integration tests for orchestration (needs services/DB → goes in `tests/`).
- `app/*/svc.rs`: integration tests in `tests/<domain>/svc.rs` (needs real DB).
- Integration test directories mirror the source domain structure: `tests/<domain>/main.rs` with split files like `svc.rs`, `state.rs` as needed. Game tests live under `tests/games/<game>.rs`.
- `ssh.rs` / `api.rs`: smoke tests in `tests/ssh_smoke.rs` / `tests/ws_smoke.rs`.
For `late-web`:
- Handler/route behavior in `late-web/tests/*` with request/response assertions.
- Page/model transformations as unit tests under `src/pages/*` (pure logic only).
- Error mapping tests in `src/error.rs` for stable status/body behavior (pure logic only).
- LLM agents must not run tests or lint gates locally. Do not run `cargo test`, `cargo nextest`, or `cargo clippy`; leave all verification to the human owner.
- If code changes would normally merit verification, note the expected command(s) in handoff instead of running them.
- The human owner may still use the full CI-equivalent gate locally:

```sh
cargo fmt --all -- --check
cargo clippy --workspace --all-targets -- -D warnings
cargo nextest run --workspace --all-targets
```

- Some integration/smoke tests require Docker/testcontainers and may fail in restricted sandboxes.
- Temporary russh crypto dependency caveat: `russh 0.60.1` is currently the latest crates.io release and fixes the tracked advisory, but its dependency stack pulls `pkcs8 0.11.0-rc.11`, which does not compile against final `pkcs5 0.8.0` because the PBES2 method was renamed. The lockfile pins `pkcs5` to `0.8.0-rc.13`, matching the prerelease API expected by `pkcs8`. Recheck this after the next `russh`/`pkcs8` release and remove the pin once upstream resolves cleanly.
- If a feature area is intentionally WIP, temporary lint/test gaps are acceptable only when explicitly documented and tracked for cleanup.
- Tool bootstrap: The repo now includes `.mise.toml` with `rust`, `mold`, and `cargo-nextest`. Prefer `mise install` before local development so the expected toolchain and test runner are available.
- Cargo environment setup: For local host development, use Cargo's normal defaults, including the standard repo-local `target/` directory. Docker/dev containers still use `/app/target` via container configuration. `CARGO_HOME=$HOME/.cargo` remains a valid override when an environment needs it, but it is not a repo-wide requirement.
- `LATE_FORCE_ADMIN=1` — dev-only escape hatch: OR'd with `users.is_admin` at session init (`late-ssh/src/ssh.rs`), so every SSH session lands as admin. Must stay `0` in prod — enforced by `required_bool` and hardcoded to `"0"` in `infra/service-ssh.tf`.
```mermaid
flowchart LR
    subgraph Server["late.sh Server"]
        SSH["SSH Server<br/>(russh)"]
        API["HTTP API<br/>(axum)"]
        WEB["Web Server<br/>(axum)"]
        IC["Icecast<br/>audio stream"]
        LS["Liquidsoap<br/>playlist mgr"]
        PG[(PostgreSQL)]
        SR["SessionRegistry<br/>token → mpsc"]
        PCR["PairedClientRegistry<br/>token → WS sender + state"]
    end
    SSH --> App["TUI App<br/>(ratatui)"]
    API --> SR
    API --> PCR
    SSH --> PG
    WEB --> API
    LS --> IC
    App --> SR
    App --> PCR
    Browser["Browser<br/>/connect/{token}"] <-->|"WS viz + control + state"| API
    Browser -->|"audio stream"| IC
    CLI["late CLI<br/>local audio"] <-->|"WS viz + control + state"| API
    Terminal["User Terminal<br/>(SSH client)"] <-->|"SSH channel"| SSH
    Terminal <-->|"opens URL"| Browser
```
```mermaid
sequenceDiagram
    participant T as Terminal
    participant S as SSH Server
    participant A as App (TUI)
    participant R as SessionRegistry
    participant B as Paired Client
    participant DB as PostgreSQL
    T->>S: SSH connect
    S->>S: Check conn limits (global + per-IP)
    S->>DB: Find/create user by fingerprint
    S->>S: Subscribe activity_feed (broadcast)
    S->>S: Publish login ActivityEvent
    S->>A: Create App with SessionConfig (is_new_user, activity_feed_rx)
    S->>R: Register(token, mpsc::tx)
    S->>T: Alt screen + render loop (15fps, splash screen + welcome overlay shown for every session)
    T->>A: Keyboard input
    A->>DB: Service calls (vote/chat/news)
    B->>R: WS /api/ws/pair?token=...
    B->>R: Viz frames + client_state
    R->>A: mpsc → VizFrame
    A->>B: mute / volume control
    A->>T: Rendered frame bytes
```
```mermaid
flowchart TD
    B["Browser / CLI paired client"] -->|"viz + client_state"| WS["WebSocket<br/>/api/ws/pair"]
    WS -->|"SessionMessage::Viz"| SR["SessionRegistry"]
    WS -->|"client state"| PCR["PairedClientRegistry"]
    SR -->|"mpsc channel"| APP["App.tick()"]
    APP --> VIZ["Visualizer.update()"]
    APP -->|"m / +/-"| PCR
    PCR -->|"toggle_mute / volume_up / volume_down"| WS
    VIZ --> RENDER["Sidebar render<br/>thin cyan bars"]
```
```mermaid
flowchart LR
    VS["VoteService"] -->|"watch"| VSS["VoteSnapshot"]
    VS -->|"broadcast"| VSE["VoteEvent"]
    CS["ChatService"] -->|"watch"| CSS["ChatSnapshot"]
    CS -->|"broadcast"| CSE["ChatEvent"]
    AS["ArticleService"] -->|"watch"| ASS["ArticleSnapshot"]
    AS -->|"broadcast"| ASE["ArticleEvent"]
    NS["NotificationService"] -->|"watch"| NSS["NotificationSnapshot"]
    NS -->|"broadcast"| NSE["NotificationEvent"]
    CS -->|"holds"| NS
    PS["ProfileService"] -->|"watch"| PSS["ProfileSnapshot"]
    PS -->|"broadcast"| PSE["ProfileEvent"]
    RS["RoomsService"] -->|"watch"| RSS["RoomsSnapshot"]
    RS -->|"broadcast"| RSE["RoomsEvent"]
    BJM["BlackjackTableManager"] -->|"room id"| BJS["BlackjackService<br/>per table"]
    BJS -->|"watch"| BJSS["BlackjackSnapshot"]
    BJS -->|"broadcast"| BJSE["BlackjackEvent"]
    AF["Activity Feed"] -->|"broadcast"| AFE["ActivityEvent"]
    LB["LeaderboardService"] -->|"watch"| LBS["Arc<LeaderboardData>"]
    VSS --> APP["App TUI<br/>mixed: global + per-user subscriptions"]
    VSE --> APP
    CSS --> APP
    CSE --> APP
    ASS --> APP
    ASE --> APP
    NSS --> APP
    NSE --> APP
    PSS --> APP
    PSE --> APP
    RSS --> APP
    RSE --> APP
    BJSS --> APP
    BJSE --> APP
    AFE --> APP
    LBS --> APP
```
- `VoteService` (in `app/vote/svc.rs`), `ChatService` (in `app/chat/svc.rs`), `ArticleService` (in `app/chat/news/svc.rs`), and `NotificationService` (in `app/chat/notification_svc.rs`) expose shared `watch` snapshots (`subscribe_state()` / `subscribe_snapshot()`).
- `ProfileService` (in `app/profile/svc.rs`) exposes per-user `watch` snapshots backed by service-owned maps (`subscribe_snapshot(user_id)`).
- `LeaderboardService` exposes a shared `watch::Receiver<Arc<LeaderboardData>>` refreshed from DB every 30s. Contains today's champions, streak leaders, per-user streak map (used for chat badges and profile achievements), all-time high scores (Tetris + 2048), and chip leaders (top balances).
- `ChipService` (in `app/games/chips/svc.rs`) manages the Late Chips economy: `ensure_chips(user_id)` grants the daily 500-chip stipend on login, `grant_daily_bonus_task(user_id, difficulty_key)` awards 50/100/150 chips on daily puzzle completion. All 4 daily game services hold a `ChipService` clone and call it in `record_win_task()`.
- `RoomsService` (in `app/rooms/svc.rs`) owns persistent game-room creation/listing/deletion over `game_rooms` + associated `chat_rooms`, publishes `RoomsSnapshot` via `watch`, and emits `RoomsEvent` success/failure banners.
- `BlackjackTableManager` / `BlackjackService` own process-local per-room Blackjack runtime state. Detailed Rooms/Blackjack contracts live in `late-ssh/src/app/rooms/CONTEXT.md`.
- Events remain `broadcast` for all subscribers; targeted variants carry `user_id` and are filtered in UI state.
To maintain a buttery-smooth 15-60 FPS over SSH, the architecture strictly separates synchronous UI rendering from asynchronous business logic:
- The Setup (`ssh.rs` / `main.rs`): When a new SSH client connects, a `SessionConfig` is built containing global Services (like `VoteService` and `ArticleService`, which hold DB pools and API keys).
- The Initialization (`app/state.rs`): Inside `App::new()`, these services are used to create the UI States (e.g., `ChatState`, which owns the `news::State` and `notifications::State`). Each UI State stores its `user_id`, subscribes to service channels, and spawns a per-user background refresh task (aborted on `Drop`).
- The Sync Loop (`app/tick.rs`): Every 66ms, `App::tick()` runs. It calls `tick()` on all UI states, which drains the channels to instantly update local memory state (e.g., `Vec<Article>`). User-targeted events are filtered by `self.user_id`.
- The Paint Job (`app/render.rs` -> `ui.rs`): Immediately after the tick, `App::render()` runs. It passes the purely synchronous UI state directly to the draw functions. The UI just reads local memory and draws boxes. No `.await`, no freezing.
- The User Action (`app/input.rs`): SSH keystrokes now first land in a per-session unbounded queue owned by the render task (`late-ssh/src/ssh.rs`). Right before each render, the task drains queued bytes into `App::handle_input()`, then runs `tick()` / `render()`. That keeps the input handler off the app mutex entirely for ordinary keystrokes while preserving the same synchronous UI state model. When an action requires I/O (like hitting `Enter` to save), the input handler fires a fire-and-forget method on the Service. The Service spawns a Tokio task to do the DB/API work, pushes the result to the channel, and the UI catches it on the next 66ms tick.
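The drain-on-tick step above can be sketched with a std-only stand-in (the real code uses tokio `watch`/`broadcast` channels; `NewsState` and its fields here are illustrative names, not real project items):

```rust
use std::sync::mpsc;

// Simplified std-only sketch of the "drain on tick" pattern: the synchronous
// tick pulls everything queued by background tasks without ever blocking,
// so render can read plain local memory with no `.await`.
pub struct NewsState {
    pub articles: Vec<String>,
    rx: mpsc::Receiver<String>,
}

impl NewsState {
    pub fn new(rx: mpsc::Receiver<String>) -> Self {
        Self { articles: Vec::new(), rx }
    }

    /// Called from the 66ms tick: drain all pending items, never block.
    pub fn tick(&mut self) {
        while let Ok(article) = self.rx.try_recv() {
            self.articles.push(article);
        }
    }
}
```

A service task would hold the sending half, push results as DB/API work completes, and the UI picks them up on the next tick.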
Each SSH session spawns one render task (late-ssh/src/ssh.rs) with two independent trigger sources:
- World tick — fires every `WORLD_TICK_INTERVAL` (66ms). Advances animations (`app.tick()`), renders, ships the frame. Floor cadence ≈ 15 FPS regardless of input.
- Input-driven render — fires within `MIN_RENDER_GAP` (15ms) of any keystroke or terminal resize. Renders without advancing world time, so typed characters echo at near-native latency instead of waiting up to 66ms for the next world tick.
The select loop picks which branch to act on:
```mermaid
flowchart TD
    INPUT["data() / window_change_request()<br/>(keystroke, resize)"] -->|"queue keystrokes or apply resize / set dirty=true"| SIGNAL
    SIGNAL["RenderSignal<br/>dirty: AtomicBool<br/>notify: tokio::Notify"] -->|"notify_one()<br/>(after mutex released)"| LOOP
    WT["world_tick.tick()<br/>every 66ms"] --> LOOP
    LOOP{"biased select!"}
    LOOP -->|"world tick fired"| ADVANCE["advance_world=true<br/>render"]
    LOOP -->|"input_pending &&<br/>gap elapsed"| RENDER["advance_world=false<br/>render"]
    LOOP -->|"notify && dirty"| ARM["input_pending=true<br/>loop"]
    LOOP -->|"notify && !dirty"| DROP["eat stale permit<br/>loop"]
    ADVANCE --> CLEAR["clear dirty under mutex,<br/>app.tick() + app.render()"]
    RENDER --> CLEAR
    CLEAR --> LOOP
```
`biased` ordering ensures the world tick wins on ties so animations aren't starved under a keystroke flood. `next_render_action` is extracted as a standalone async fn so the decision logic is unit-testable without a full session.
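The branch priority can be modeled as pure decision logic, in the spirit of the extracted `next_render_action` (the real function is async over tokio primitives; the enum, signature, and function name `decide` here are illustrative):

```rust
// Hypothetical synchronous model of the select loop's branch priority.
// Mirrors `biased select!`: world tick first, then throttled input render,
// then notify handling split on the dirty flag.
#[derive(Debug, PartialEq)]
pub enum RenderAction {
    AdvanceAndRender, // world tick fired: advance animations + render
    RenderOnly,       // input pending and MIN_RENDER_GAP elapsed
    ArmInput,         // notified with dirty set: arm input_pending
    DropStalePermit,  // notified but clean: eat the stale permit
}

pub fn decide(
    world_tick_fired: bool,
    input_pending_gap_elapsed: bool,
    notified: bool,
    dirty: bool,
) -> Option<RenderAction> {
    if world_tick_fired {
        // `biased` ordering: world tick wins ties so animations aren't starved.
        Some(RenderAction::AdvanceAndRender)
    } else if input_pending_gap_elapsed {
        Some(RenderAction::RenderOnly)
    } else if notified && dirty {
        Some(RenderAction::ArmInput)
    } else if notified {
        Some(RenderAction::DropStalePermit)
    } else {
        None // nothing to do this iteration
    }
}
```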
```text
t=0   world tick fires → render, previous_render=0, dirty=false
t=3   keystroke → dirty=true, notify_one (permit stored)
t=3+  select: notify branch → dirty=true → input_pending=true, continue
t=3+  select: sleep_until(0+15ms) armed, notify disabled
t=8   keystroke → dirty=true (already), notify_one (permit stored, branch disabled)
t=15  sleep_until fires → render covers BOTH keystrokes, dirty cleared
t=15+ select: notify branch eats leftover permit → dirty=false → nothing
t=66  world tick → render, animations advance
```
Two keystrokes → one render at t=15. No spurious trailing frame.
`tokio::sync::Notify::notify_one()` stores one permit when no waiter is active. If `Notify` alone gated renders, permits left over from input already batched into an earlier render would fire an identical repeat frame one throttle window later. Two primitives, two jobs:

- `Notify` — alarm clock. Wakes the task.
- `dirty` — sticky note. Source of truth for "there is unrendered state".

The input path now sets `dirty` immediately after enqueueing bytes for the render task, without taking the app mutex. The render task clears `dirty` immediately before draining that queue under the mutex. Invariant: input that lands during a render flips `dirty` back to true, so the current frame may miss it, but the next loop iteration must pick it up.
The stored-permit regression is locked down by `ssh::tests::stale_permit_does_not_arm_throttle`; the surrounding tests cover throttle timing, biased wins, and the idle/active paths.
- Throttle is per-session — one session's flood can't affect another's cadence.
- Ceiling: ~67 renders/sec per session (`1000 / MIN_RENDER_GAP_MS`) — above smoothness threshold, below CPU-DoS territory.
- Does not address lock contention — the app mutex is still shared between `data()` and the render task; see §8.5 A. This change only closes the input-to-frame cadence gap, not the lock-held-across-tick stall.
```mermaid
flowchart LR
    LOCAL["Local .m3u<br/>CC0/CC-BY music"] -->|"playlist"| LS
    LS["Liquidsoap<br/>port 1234 telnet"] -->|"MP3 128kbps"| IC["Icecast<br/>port 8000"]
    IC -->|"/stream"| WEBSTREAM["late-web<br/>/stream proxy"]
    WEBSTREAM -->|"stable MP3 stream"| B["Browser / CLI audio"]
    IC -->|"/status-json.xsl"| FETCH["NowPlaying fetcher<br/>(10s poll)"]
    FETCH -->|"watch channel"| APP["App sidebar"]
    VS["VoteService"] -->|"vibe.set genre"| LS
```
The audio stack is local-playlist-only. Liquidsoap reads curated local `.m3u` playlists backed by files in `/music`, then streams the result through Icecast. There are no third-party live radio upstreams in the current design.
Genres now use `mksafe(local_playlist)` only. Each playlist uses `mode="randomize"` + `loop=true` to shuffle all tracks and play through before re-shuffling, with `check_next` guards against back-to-back repeats at loop boundaries.
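A minimal sketch of that genre-source shape (illustrative path and variable names only; the real definitions live in the Liquidsoap config under `infra/liquidsoap/`):

```liquidsoap
# Hedged sketch, not the real radio.liq: one genre source built from a local
# playlist, shuffled per loop with mode="randomize", made infallible with
# mksafe so Icecast never goes silent if the playlist briefly fails.
# (The back-to-back repeat guard attaches via playlist's check_next argument.)
lofi = mksafe(playlist(mode="randomize", loop=true, "/music/lofi.m3u"))
```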
Migration status (April 2026):
- Lofi: DONE — 50 tracks, all CC0/CC-BY
- Ambient: DONE — 20 curated CC-BY 4.0 tracks
- Classical: DONE — 40 curated public-domain Musopen tracks
- Jazz: local-only for now; still the thinnest genre and a likely removal candidate
There are no live upstream radio sources in radio.liq.
Music binaries live in Cloudflare R2 (bucket configured via MUSIC_BUCKET GitHub var), synced to the Liquidsoap PVC at /music/ during infra deploys by the sync_music job in deploy_infra.yml. Playlists are .m3u files in infra/liquidsoap/ using Liquidsoap annotate: format and remain in git.
All music is CC0 or CC-BY licensed. CC-BY tracks require attribution — handled automatically via annotate: metadata in .m3u files flowing through ICY metadata to the sidebar "now playing" display.
Detailed track lists and source URLs live in MUSIC.md.
- Lofi: done, 50 tracks, mixed `CC0` and `CC-BY 4.0`
- Ambient: done, 20 curated `CC-BY 4.0` tracks from Amarent, Ketsa, and The Imperfectionist
- Classical: done, 40 curated public-domain tracks from Musopen / Internet Archive
- Jazz: planned, source targets are HoliznaCC0, Kevin MacLeod, and Ketsa
Playlist generation uses curated manifests in scripts/fetch_cc_music.py, preserves duration in annotate: metadata, and can intentionally limit a playlist to the curated set even if older files still exist on disk.
High-potential (verified CC0/CC-BY, not yet downloaded):
- HoliznaCC0: 571 total tracks across ~50+ albums, all CC0. Full discography: https://freemusicarchive.org/music/holiznacc0/discography
- Ketsa: large catalog (lofi, jazz, soul, ambient, downtempo), CC-BY. Album "CC BY: FREE TO USE FOR ANYTHING" has 70 tracks: https://freemusicarchive.org/music/Ketsa/cc-by-free-to-use-for-anything
- John Bartmann: "Public Domain Soundtrack Music: Album One" (CC0) on Bandcamp
- Kevin MacLeod: 359 tracks (CC-BY): https://kevinmacleod.bandcamp.com/album/complete-collection-creative-commons
- FMA public domain search (9,000+ tracks): https://freemusicarchive.org/search?adv=1&music-filter-public-domain=1
Not selected for the local library:
- Pixabay: custom license, not ideal for a standalone music stream
- Chad Crouch: CC BY-NC + commercial licensing split
- Blue Dot Sessions: CC BY-NC only
- Kai Engel: mixed CC-BY/CC-BY-NC catalog, licensing instability after July 2025
- Classicals.de: license terms unclear
Music binaries live in Cloudflare R2, synced to the Liquidsoap PVC during infra deploys (sync_music job in deploy_infra.yml). Git is the source of truth for playlists, licenses, and source URLs — not for binaries. ConfigMap changes (playlists, radio.liq, icecast.xml) trigger automatic rollouts via config_hash annotations on deployment templates — no explicit restart job needed.
- `scripts/fetch_cc_music.py` — Downloads from Bandcamp (via yt-dlp) and Internet Archive (via urllib), generates `.m3u` playlists with ffprobe metadata. Supports `--genre` and `--m3u-only` flags.
- Ambient uses a curated FMA manifest inside `scripts/fetch_cc_music.py` instead of the older broad-source ambient target.
- FMA CDN scrape pattern: FMA pages embed `fileUrl` in HTML as `https://files.freemusicarchive.org/storage-freemusicarchive-org/tracks/{hash}.mp3`. These are direct-downloadable without authentication. Extract with regex on the page source (see `/tmp/fetch_fma_tracks.py` for reference).
- Dependencies: `yt-dlp` (installed via pipx), `ffmpeg`, `ffprobe`, `python3`.
Local playlist files retain full annotated metadata including duration (when present in ID3 tags). The `rewrite_np_metadata` function in `radio.liq` formats "now playing" as `Artist - Title | Duration` for the sidebar. Internet streams provided ICY metadata with no duration; local files may or may not have duration depending on the source.
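An illustrative Rust equivalent of that formatting rule (the real logic is the `rewrite_np_metadata` function in `radio.liq`; this helper and its name are hypothetical), showing how the duration segment is dropped when absent:

```rust
// Hypothetical sketch of the sidebar "now playing" format described above:
// "Artist - Title | Duration", with the duration segment omitted when the
// source file carried no duration metadata.
pub fn format_now_playing(artist: &str, title: &str, duration: Option<&str>) -> String {
    match duration {
        Some(d) => format!("{artist} - {title} | {d}"),
        None => format!("{artist} - {title}"),
    }
}
```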
Nonograms intentionally use an offline generation pipeline instead of generating puzzles during SSH sessions.
- Offline generation (`late-core`): `late-core/src/bin/gen_nonograms.rs` generates puzzle banks by size (`10x10`, `15x15`, `20x20`), applies per-size difficulty profiles (`10x10` easy, `15x15` medium, `20x20` hard), validates every accepted candidate with `number-loom`, regenerates until each pack reaches the requested count, and writes only the final JSON assets (validation scratch files are cleaned up automatically).
- Shared schema (`late-core`): `late-core/src/nonogram.rs` owns the portable JSON contract (`NonogramPuzzle`, `NonogramPack`, `NonogramPackIndex`), clue derivation, pack validation, and deterministic daily puzzle selection by date.
- Static assets (`late-ssh/assets/nonograms/`): Generated packs live under `late-ssh/assets/nonograms/` with one `index.json` plus one pack file per size (`10x10.json`, `15x15.json`, `20x20.json`).
- Runtime loading (`late-ssh`): `late-ssh/src/app/games/nonogram/state.rs` loads packs at server startup. SSH sessions only read the already-generated bank; they do not invoke `number-loom` or generate puzzles on demand.
- Daily selection: The runtime picks one puzzle per size deterministically from the prebuilt bank using the UTC date and the pack `size_key`. This keeps the "daily" experience stable without storing generator state in Postgres.
- Runtime persistence: `late-ssh` now persists one `daily` and one `personal` slot per user and `size_key` in `nonogram_games`. `d` restores the date-based daily puzzle for the selected size, `p` restores that size's saved personal board, and `n` regenerates a fresh personal puzzle from the current pack.
- Daily completion tracking: `late-ssh` also records a binary daily completion fact per user, size, and UTC date in `nonogram_daily_wins`. This is intentionally separate from board state and does not track score or time.
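The daily-selection idea above can be sketched as hashing `(UTC date, size_key)` into an index over the prebuilt pack. This is illustrative only: the real selection lives in `late-core/src/nonogram.rs`, and a production version must use a hash that is stable across builds, which std's `DefaultHasher` is not guaranteed to be.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch: same (date, size_key) always yields the same puzzle
// index, so every session sees the same "daily" without generator state in
// Postgres. DefaultHasher is deterministic within one process but not across
// Rust versions — the real code needs a stable hash.
pub fn daily_index(utc_date: &str, size_key: &str, pack_len: usize) -> usize {
    if pack_len == 0 {
        return 0; // empty pack: nothing to select
    }
    let mut h = DefaultHasher::new();
    utc_date.hash(&mut h);
    size_key.hash(&mut h);
    (h.finish() as usize) % pack_len
}
```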
Current invariant: `late-ssh` is runtime-only for nonograms: read JSON assets, select a puzzle, render/play it, and persist per-user progress. Generation belongs in `late-core/src/bin/gen_nonograms.rs`, not in the SSH hot path.
`late-cli` builds the `late` companion binary. It launches the SSH TUI, plays the audio stream locally, sends visualizer frames over `/api/ws/pair`, and receives paired mute/volume controls from the TUI.
Root-level contracts:
- `late-cli` is a standalone crate with no `late-core` dependency.
- Browser and CLI share the paired-client WebSocket schema, so the TUI can show client kind plus live mute/volume state.
- Native SSH is the default launcher path. `--ssh-mode old` remains the legacy OpenSSH-through-PTY compatibility path, and `--ssh-mode openssh` is the OpenSSH-managed path for hardware-backed keys.
- Native and OpenSSH modes require server support for the `late-cli-token-v1` SSH exec handshake.
- Detailed CLI architecture, flags/env vars, audio pipeline, installer behavior, SSH modes, and fragile invariants live in `late-cli/CONTEXT.md`.
The Artboard is a shared, persistent, multiplayer ASCII canvas on its own top-level screen (5, or cycle with `Tab` / `Shift+Tab`). User-facing docs say Artboard; code and upstream crates still use dartboard heavily, so search both terms.
Detailed Artboard/dartboard behavior lives in `late-ssh/src/app/artboard/CONTEXT.md`, including lifecycle, `late-ssh/src/dartboard.rs` persistence, provenance, keybindings, archive snapshots, tests, and fragile invariants.
Root-level facts:
- The server owns one in-process `dartboard_local::ServerHandle` for the whole `late-ssh` process.
- The canonical canvas size is `384 x 192`.
- Users connect to the shared board only after opening Artboard; leaving drops that session's `LocalClient` and frees the slot.
- Artboard opens in `view` mode; `i` / `Enter` switches into active edit mode.
- Canvas and provenance are saved together in `artboard_snapshots`; daily/monthly archives are exposed by the read-only web gallery at `/gallery`.
- The gallery reads saved DB snapshots, not live server memory, so `main` can lag active drawing by the persistence interval.
```text
late-sh/
├── Cargo.toml               # Workspace: late-cli, late-core, late-ssh, late-web
├── CONTEXT.md               # This file
├── OPEN_README.md           # README for the public mirror repo
├── docker-compose.yml       # Dev stack: ssh, web, postgres, icecast, liquidsoap
├── Makefile / Dockerfile    # Local dev + image build entry points
├── scripts/                 # Seed helpers, local CLI runner, CLI artifact builder
├── late-core/
│   └── src/
│       ├── db.rs            # DB pool + migrations
│       ├── model.rs         # model! + user_scoped_model! macros
│       ├── models/          # Core DB-backed domain entities
│       ├── nonogram.rs      # Shared pack schema, clue derivation, daily selection
│       ├── rate_limit.rs    # Sliding-window per-IP limiter
│       └── test_utils.rs    # testcontainers DB helpers
├── late-ssh/
│   ├── src/
│   │   ├── main.rs          # Starts SSH + API + background loops
│   │   ├── ssh.rs           # russh server + render loop
│   │   ├── api.rs           # /api/* + /api/ws/pair
│   │   ├── dartboard.rs     # Shared Artboard server/persistence wrapper; see app/artboard/CONTEXT.md
│   │   ├── session.rs       # SessionRegistry + PairedClientRegistry
│   │   ├── state.rs         # Shared app state, activity, presence
│   │   └── app/
│   │       ├── ai/          # AI services: bot/graybeard + summarization
│   │       ├── artboard/    # Shared ASCII Artboard; see app/artboard/CONTEXT.md
│   │       ├── bonsai/      # Persistent bonsai tree state, service, and UI
│   │       ├── chat/        # Chat implementation; see app/chat/CONTEXT.md
│   │       ├── dashboard/   # Landing screen layout + shortcuts
│   │       ├── games/       # Arcade hub, leaderboards, and game subdomains
│   │       ├── icon_picker/ # Ctrl+] emoji + nerd font overlay (chat composer only)
│   │       ├── profile/     # Username/profile settings and stats
│   │       ├── rooms/       # Persistent game-room directory; see app/rooms/CONTEXT.md
│   │       └── vote/        # Genre vote state, service, and Liquidsoap control
│   ├── assets/nonograms/    # Prebuilt puzzle packs
│   └── tests/               # Integration/smoke tests grouped by feature
├── late-cli/
│   ├── CONTEXT.md           # Companion CLI details: SSH modes, pairing, audio, installers
│   └── src/                 # Standalone CLI: main + config, identity, raw_mode, pty, ssh, ws, audio/{decoder,resampler,output,decoder_thread,analyzer}
├── late-web/
│   ├── CONTEXT.md           # Web routes, browser protocols, stream proxy, profiles/gallery, tests
│   ├── src/
│   │   ├── main.rs / lib.rs # Web entrypoint + router
│   │   ├── config.rs        # Web config
│   │   ├── error.rs         # App error mapping
│   │   └── pages/           # Connect/landing, chat, gallery, play, profiles, stream, dashboard
│   └── static/              # Tailwind output/source
└── infra/
    ├── icecast/icecast.xml  # Icecast config
    └── liquidsoap/          # Radio config + local fallback playlists
```
SSH API (`late-ssh`, port 4000):
- `GET /api/health` - DB health check
- `GET /api/now-playing` → `NowPlayingResponse { current_track, listeners_count, started_at_ts }`
- `GET /api/status` → `StatusResponse { online, message, version }`
- `GET /api/ws/pair?token={token}` - WebSocket upgrade for paired browser/CLI control + viz
WS payloads (client → server):
{ "event": "heartbeat" }{ "event": "viz", "position_ms": u64, "bands": [f32; 8], "rms": f32 }{ "event": "client_state", "client_kind": "browser" | "cli", "ssh_mode"?: "native" | "openssh" | "old", "platform"?: "android" | "linux" | "macos" | "windows", "muted": bool, "volume_percent": u8 }
WS payloads (server → client):
{ "event": "toggle_mute" }{ "event": "volume_up" }{ "event": "volume_down" }
Web routes (`late-web`, port 3000):
- `GET /` - Landing page: late.sh branding, `ssh late.sh` CTA, CLI install/build copy actions, and links to gallery/play/profiles
- `GET /{token}` - Audio pairing page: WS connection to terminal session, local audio playback, paired mute/volume control, Web Audio analyzer for TUI visualizer
- `GET /status?pairing={bool}` - HTMX fragment: now-playing track + listener count (fetched from SSH API internally). `pairing=false` for landing footer, `pairing=true` for pairing detail view. Polled every 5s.
- `GET /chat/{token}` - Browser chat page; connects to `late-ssh` `/api/ws/chat`
- `GET /dashboard`, `/dashboard/now-playing`, `/dashboard/status` - Internal/demo dashboard and HTMX partials
- `GET /gallery?key=...` - Read-only Artboard snapshot gallery backed by saved DB snapshots
- `GET /play`, `/play/listeners` - Browser xterm.js TUI demo through `late-ssh` `/api/ws/tunnel`
- `GET /profiles`, `/profiles/{slug}` - Public work profile index/detail pages
- `GET /stream` - `audio/mpeg` stream proxy to Icecast with bundled silence fallback
- `GET /test` - Error simulation endpoint
- All other routes → redirect to `/`
- Detailed web route, template, runtime config, browser protocol, and stream-proxy notes live in `late-web/CONTEXT.md`.
Service stream contracts (internal):
- `VoteService::subscribe_state()` (in `app::vote::svc`) → shared `watch::Receiver<VoteSnapshot>` (durable latest state)
- Chat service/news/notifications/showcase/work stream contracts live in `late-ssh/src/app/chat/CONTEXT.md`.
- `ProfileService::subscribe_snapshot(user_id)` → per-user `watch::Receiver<...Snapshot>` (durable latest state)
- `ProfileService::prune_user_snapshot_channel(user_id)` → explicit cleanup hook called from UI state `Drop`; removes idle per-user snapshot senders
- `LeaderboardService::subscribe()` → `watch::Receiver<Arc<LeaderboardData>>` (shared, refreshed every 30s from DB; contains today's champions, streak leaders, per-user streak map for badge computation)
- `subscribe_events()` → `broadcast::Receiver<...Event>` - transient events/notices
- Identity: SSH key fingerprint → `users` table (`User::find_by_fingerprint`)
- Open access: `LATE_SSH_OPEN=true` enables auth, but only public-key auth is accepted; password and keyboard-interactive are always rejected
- User scoping: Votes are scoped to `user_id` (FK to `users.id`)
- Chat scoping: Rooms visible via membership (`ChatRoom::list_for_user`, `ChatRoomMember`)
- Auto-join: Public rooms with `auto_join=true` are seeded for a user only when the user record is first created; reconnecting does not re-add rooms the user already left. The regular/public `#room` user command creates/opens an opt-in room only for the caller (`auto_join=false`, no bulk member add). Permanent/admin room creation still bulk-adds all existing users when the room is created/promoted.
- Multi-tenant isolation: All user data queries filter by `user_id`; no cross-user reads
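The per-user scoping convention above can be sketched as a query shape. This is a minimal illustration only — the function name and table are hypothetical, not the real `user_scoped_model!` expansion:

```rust
// Hypothetical sketch of the `_by_user` query convention: every list/read
// query carries a `user_id` predicate so one user's rows can never leak
// into another user's view. Names here are illustrative only.
fn list_by_user_sql(table: &str) -> String {
    // `$1` binds the caller's user_id; ORDER BY created DESC mirrors the
    // documented default list ordering for model! entities.
    format!("SELECT * FROM {table} WHERE user_id = $1 ORDER BY created DESC")
}

fn main() {
    let sql = list_by_user_sql("articles");
    // The scoping predicate must always be present.
    assert!(sql.contains("WHERE user_id = $1"));
    println!("{sql}");
}
```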
Entities (all use UUID v7 PKs; `id`/`created`/`updated` are built into the `model!` macro; lists default to `ORDER BY created DESC`):
| Entity | Table | Key constraints |
|---|---|---|
| User | `users` | `fingerprint` UNIQUE; `is_admin` and `is_moderator` role flags; `username` trimmed length 1-32, case-insensitive UNIQUE via `idx_users_username_lower`, format `^[A-Za-z0-9._-]+$` and no `@` (canonical public handle); `settings` JSONB holds `ignored_user_ids: [uuid]` (keyed by id, not username, so renames don't drop ignores), `theme_id` (string), `enable_background_color` (bool), `show_right_sidebar` (bool, default-on when absent), `show_games_sidebar` (bool, default-on when absent), `notify_kinds: [text]` (desktop-notification opt-ins: dms, mentions, game_events), `notify_cooldown_mins` (int ≥ 0; 0 = no throttle) |
| Vote | `votes` | `user_id` UNIQUE (one vote per user per round) |
| ChatRoom | `chat_rooms` | `kind IN (general, language, dm, topic)`, complex constraints |
| ChatRoomMember | `chat_room_members` | PK `(room_id, user_id)`, `last_read_at` |
| ChatMessage | `chat_messages` | `body` 1-2000 chars, nullable `reply_to_message_id` self-FK for reply jumps |
| Article | `articles` | `url` UNIQUE, `user_id` FK |
| ArticleFeedRead | `article_feed_reads` | `user_id` PK/FK, per-user news read checkpoint |
| Notification | `notifications` | `user_id`+`actor_id` FK to users, `message_id` FK to chat_messages, `room_id` FK to chat_rooms, `read_at` nullable, `CHECK(user_id<>actor_id)` |
| SudokuDailyWin | `sudoku_daily_wins` | `UNIQUE(user_id, difficulty_key, puzzle_date)`, score tracked |
| NonogramDailyWin | `nonogram_daily_wins` | `UNIQUE(user_id, size_key, puzzle_date)`, binary completion |
| MinesweeperGame | `minesweeper_games` | `UNIQUE(user_id, difficulty_key, mode)`, stores seeded `mine_map` + `player_grid` + `lives` (3-life system) |
| MinesweeperDailyWin | `minesweeper_daily_wins` | `UNIQUE(user_id, difficulty_key, puzzle_date)`, best score (lives remaining) retained |
| SolitaireGame | `solitaire_games` | `UNIQUE(user_id, difficulty_key, mode)`, stores seeded stock/waste/foundations/tableau |
| SolitaireDailyWin | `solitaire_daily_wins` | `UNIQUE(user_id, difficulty_key, puzzle_date)`, best score retained |
| BonsaiTree | `bonsai_trees` | `user_id` UNIQUE, `growth_points`, `last_watered` DATE, `seed` BIGINT, `is_alive` BOOLEAN |
| BonsaiGrave | `bonsai_graveyard` | `user_id` FK (not unique — multiple deaths), `survived_days`, `died_at` |
| BonsaiDailyCare | `bonsai_daily_care` | `UNIQUE(user_id, care_date)`, UTC daily care row with watered flag, generated branch goal, cut branch ids, and one-shot water/prune penalty flags |
| UserChips | `user_chips` | `user_id` PK/FK, `balance` BIGINT (floor=100), `last_stipend_date` DATE |
| Showcase | `showcases` | `user_id` FK; `title` 1-120, `url` 1-2000, `description` 1-800, `tags` TEXT[] (lowercased, ≤8). Listed newest-first, edit/delete restricted to author or admin |
| ShowcaseFeedRead | `showcase_feed_reads` | `user_id` PK/FK, `last_read_at` timestamp cursor for per-user Showcase unread counts |
| WorkProfile | `work_profiles` | `user_id` UNIQUE FK; `slug` UNIQUE (`w_` + 12 lowercase alnum), headline, status (open, casual, not-looking), type/location, links, skills, summary. Listed latest-update-first, edit/delete restricted to author or admin |
| WorkFeedRead | `work_feed_reads` | `user_id` PK/FK, `last_read_at` timestamp cursor for per-user Work unread counts |
| GameRoom | `game_rooms` | Generic game-room registry. `id` UUIDv7, `chat_room_id` UNIQUE FK to chat_rooms, `game_kind` TEXT, `slug` UNIQUE, `display_name` non-empty, `status IN (open, in_round, paused, closed)`, `settings` JSONB, optional `created_by`. GameKind is a Rust enum over text, not a Postgres enum. |
| ArtboardSnapshot | `artboard_snapshots` | `board_key` UNIQUE (`main`, `daily:YYYY-MM-DD`, `monthly:YYYY-MM`), `canvas` JSONB, `provenance` JSONB. Runtime contracts live in `late-ssh/src/app/artboard/CONTEXT.md`. |
Key enums:
- `Genre`: `Lofi`, `Classic`, `Ambient`, `Jazz` (vote/service/liquidsoap)
- `Screen`: `Dashboard`, `Chat`, `Games`, `Rooms`, `Artboard` (cycle: Dashboard -> Chat -> Games -> Rooms -> Artboard -> Dashboard; News, Mentions, Discover, Showcase, and Work are synthetic room-like entries within Chat, not separate screens. News, Mentions, Showcase, and Work each carry persisted unread state; Showcase is backed by `showcases`, and Work is one public work profile per user backed by `work_profiles`.)
- `ChatRoom.kind`: `general` (slug=general), `language` (slug=lang-{code}), `topic` (user/admin created), `dm` (canonical user pair), `game` (Rooms-backed embedded chat)
- `ChatRoom.visibility`: `public`, `private`, `dm`
- `GameKind`: Rust enum in `late-core::models::game_room`; currently `Blackjack`. Persisted as `TEXT` in Postgres to keep future game-kind changes/migrations simple.
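The screen cycle above can be sketched as a tiny enum. This mirrors only the documented ordering; the real `Screen` type in `late-ssh` may carry extra data:

```rust
// Sketch of the documented Screen cycle:
// Dashboard -> Chat -> Games -> Rooms -> Artboard -> Dashboard.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Screen {
    Dashboard,
    Chat,
    Games,
    Rooms,
    Artboard,
}

impl Screen {
    fn next(self) -> Screen {
        match self {
            Screen::Dashboard => Screen::Chat,
            Screen::Chat => Screen::Games,
            Screen::Games => Screen::Rooms,
            Screen::Rooms => Screen::Artboard,
            Screen::Artboard => Screen::Dashboard, // wraps around
        }
    }
}

fn main() {
    // Cycling through all five screens returns to the start.
    let mut s = Screen::Dashboard;
    for _ in 0..5 {
        s = s.next();
    }
    assert_eq!(s, Screen::Dashboard);
}
```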
- Service errors: Propagated via `anyhow::Result`, surfaced as `VoteEvent`/`ChatEvent` error variants
- Chat: `SendSucceeded`/`SendFailed` with `request_id` for composer feedback
- Votes: `VoteEvent::Error { user_id, message }` for unknown user
- SSH: Connection rejected on limit exceeded; render frame drops logged
- Web: `AppError::Internal`/`AppError::Render` → HTTP 500 with template fallback
- Architecture: 100% native OpenTelemetry (OTLP) pipeline powered by the `opentelemetry` and `tracing` crates, routed through an OpenTelemetry Collector into a pure VictoriaMetrics backend.
- Traces (VictoriaTraces): Distributed tracing spans generated via `#[tracing::instrument]`. The Collector automatically generates RED metrics (Rate, Errors, Duration) from these spans using the `spanmetrics` connector.
- Service graph requirement: VictoriaTraces must run with `--servicegraph.enableTask=true` for the Grafana service graph / dependencies view to populate from trace relationships.
- Logs (VictoriaLogs): Structured JSON logs bypass stdout completely via `opentelemetry-appender-tracing`. Trace IDs and Span IDs are natively embedded for full cross-correlation in Grafana.
- Metrics (VictoriaMetrics): Custom metrics (e.g., counters) pushed directly via OTLP `PeriodicReader`, alongside the RED metrics generated by the Collector.
- HTTP server spans: `late-web` wraps the router with request middleware that emits `otel.kind=server` spans and records `http.request.method`, `http.route`, `url.path`, and `http.response.status_code`; 5xx responses set `otel.status_code=ERROR`.
- Trace propagation: `late-core::telemetry::init_telemetry()` installs the W3C Trace Context propagator. `late-web` injects trace headers on outbound `/api/now-playing` requests, and `late-ssh` extracts incoming headers on API requests so cross-service traces can form real parent/child relationships.
- Web metrics: `late_web_page_views_total{page,has_token}` and `late_web_now_playing_fetch_total{result}` are emitted when `late-web` is built with the optional `otel` feature; metrics are no-ops without it.
- Grafana provisioning invariant: The metrics datasource uses the stable UID `victoriametrics`; provisioned dashboards must reference that UID instead of Grafana-generated datasource IDs.
- Console output: Local dev uses `tracing_subscriber::fmt` with `RUST_LOG=info,late_web=debug,late_ssh=debug,late_core=debug`.
GET /api/healthendpoint,Db::health()method - Connection counts: Per-IP tracking in
State.conn_counts, global via semaphore. WhenLATE_SSH_PROXY_PROTOCOL=true, SSH per-IP limits use the client IP from PROXY protocol. - Presence/listener count source: TUI sidebar online/users and
/api/now-playing.listeners_countboth useState.active_users.
In progress:
- Rooms/Blackjack: Active multiplayer table-game work is documented in `late-ssh/src/app/rooms/CONTEXT.md`. Root context keeps only project-wide contracts; local context owns directory, service, Blackjack runtime, rendering, dashboard slot, and known-gap details.
Future:
- Nonograms (v2): Replace random generation with pixel-art-to-nonogram pipeline or bulk-curate from webpbn.com.
- Chat upgrades: better backlog pagination, moderation polish, and richer matchmaking hooks
Known gaps/risks:
- Online/listener metrics are app-level presence (`active_users`, includes @bot and @graybeard), not true Icecast listener analytics
- Time remaining is approximate (up to 5s polling delay on track change)
- No external metrics or alerting system
- Single-replica assumption: Several structures are purely in-memory and not shared across processes (see multi-replica notes below)
- SSH pod drain window: `infra/service-ssh.tf` sets `termination_grace_period_seconds = 21600` (6h) so rolling updates can stop new connections while allowing existing SSH sessions to drain for a long window before Kubernetes sends SIGKILL.
- SSH ingress reload risk: `ssh late.sh` currently reaches `late-ssh` through RKE2 ingress-nginx TCP passthrough (`infra/ssh-tcp.tf`, port `22 -> service-ssh-sv:2222::PROXY`). Long-lived SSH sessions can be dropped after any ingress-nginx config reload because old workers are terminated after `worker_shutdown_timeout` (observed 2026-04-29 after cert-manager renewed `service-web-tls`: reload at 19:56:37Z, mass SSH/WS disconnect at 20:00:38Z, matching the 240s timeout). Future infra improvement: stop routing SSH through ingress-nginx; use a dedicated TCP LoadBalancer/NodePort/host proxy for SSH so HTTP/TLS reloads cannot kill SSH sessions. Short-term mitigation: increase ingress-nginx `worker-shutdown-timeout`, but that only delays the disconnect.
- IPv6 ingress status: RKE2/CNI `hostPort` exposes the current ingress-nginx path for IPv4 only; do not switch the main ingress controller to `hostNetwork` without a rollout plan. Public IPv6 is handled by the separate `kube-system/ipv6-proxy` HAProxy DaemonSet in `infra/ipv6-proxy.tf`, binding `2a01:4f9:c013:2ae1::1` on 80, 443, and 22; HTTP(S) forwards to localhost ingress hostPorts, while SSH forwards to `service-ssh-sv:2222` with PROXY protocol. Verified working externally on 2026-05-03; `Network is unreachable` during `ssh -6 late.sh` means the client lacks IPv6 egress.
- Stateful VT parsing in `late-ssh/src/app/input.rs`: SSH input now runs through a persistent `vte::Parser`, so CSI/SS3 sequences and bracketed paste survive split russh reads instead of assuming the whole escape sequence lands in one chunk. That removes the old split-paste failure where `[200~`/`[201~` residue or embedded newlines could leak through as live keystrokes. The app still keeps two pragmatic layers on top: `is_likely_paste` heuristically treats large printable unmarked chunks as paste for terminals without bracketed paste, and `sanitize_paste_markers`/`strip_paste_markers` still scrub stored residue defensively when copying URLs from older polluted state. Standalone `Esc` is resolved on a short tick delay so split escape sequences are not mistaken for cancel keys.
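The fallback paste heuristic mentioned above can be sketched as follows. The 16-byte threshold and exact printability rules are assumptions for illustration; the real `is_likely_paste` in `late-ssh/src/app/input.rs` may use different limits:

```rust
// Hedged sketch of an `is_likely_paste`-style heuristic for terminals
// without bracketed paste: a single read that is "large" and entirely
// printable is more likely a paste than typed input. MIN_PASTE_LEN is
// an illustrative assumption, not the real constant.
fn is_likely_paste(chunk: &[u8]) -> bool {
    const MIN_PASTE_LEN: usize = 16; // assumption
    chunk.len() >= MIN_PASTE_LEN
        && chunk
            .iter()
            .all(|&b| b == b'\n' || b == b'\t' || (0x20..0x7f).contains(&b))
}

fn main() {
    assert!(is_likely_paste(b"https://example.com/some/long/pasted/url"));
    assert!(!is_likely_paste(b"k")); // a lone keystroke is not a paste
    assert!(!is_likely_paste(b"\x1b[A")); // escape bytes disqualify the chunk
}
```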
Roadmap ideas:
- Nail one addictive loop: join -> listen -> chat -> vote -> return tomorrow.
- Pick a clear ICP first: solo devs at night vs remote teams during work hours.
- Add one "reason to come back" mechanic ✓ Daily streaks + badge tiers + leaderboard. Next: daily room rituals, timed events.
- Keep friction near zero: ssh late.sh + optional browser pairing only when wanted.
- Measure retention early: D1/D7 return, session length, messages/user, votes/session.
Shipped:
- Tetris (Ascii Drop) ✓ Endless falling-block arcade, 15fps gravity, persisted runs, per-user high scores.
- Minesweeper ✓ Classic logic puzzle with daily seeded boards and personal infinite play.
- 2048 ✓
- Sudoku ✓
- Nonograms ✓
- Solitaire ✓
Table Games (active buildout):
- Blackjack: Persistent rooms, per-room runtime services, embedded room chat, and chip settlement are live in the Rooms screen. Detailed runtime behavior lives in `late-ssh/src/app/rooms/CONTEXT.md`. Still missing AFK/disconnect handling.
- Texas Hold'em Poker (PvP): The ultimate late-night clubhouse game. Table-scoped chat, robust turn state.
Async 1v1:
- Chess: Correspondence style — make moves at your own pace over hours/days.
- Battleship: Fire a shot and check back tomorrow.
Real-time Multiplayer:
- Tron (Lightbikes): 15fps grid-based survival arena.
Card Games:
- Cribbage / Bridge / Thousand (Tysiąc): Cozy trick-taking games, deep strategy.
- Archive monthly chip leaders (top 3 get a permanent badge?)
- Reset balances to baseline at month end
- "Hall of Fame" display somewhere
- No chips needed — W/L record + rating
- Async: make a move, come back later
- Game completion counts toward daily streaks
- `/challenge @user chess` in chat for matchmaking
- Texas Hold'em: PvP, uses chip betting
- Needs turn management, pot logic, hand evaluation
- Higher complexity — build after Blackjack validates the chip system
- Activity feed broadcast when someone sits at an empty table
- `/play <game>` and `/challenge @user <game>` commands
- Accept/decline prompts
| Category | Games | Win condition | Leaderboard section | Streaks | Chips |
|---|---|---|---|---|---|
| Daily puzzles | Sudoku, Nonograms, Minesweeper, Solitaire | Solve the daily | Today's Champions | Yes | +50 bonus per completion |
| High-score | Tetris, 2048 | Personal best | All-Time High Scores | No | No |
| Casino | Blackjack, Poker (future) | Grow your chip balance | Chip Leaders | Optional | Bet and win/lose |
| Strategy | Chess, Battleship (future) | Beat opponent | W/L + Rating | Yes (game completed) | No |
An always-running game where every connected SSH session is automatically a participant. The world ticks forward whether you're watching or not — drop in, make moves, drop out, come back tomorrow.
Direction: 4X / trading / economy game. Think simplified space traders or terminal-scale Civilization — explore, expand, exploit, trade. Every connected user is a player in the same persistent world.
Why it fits late.sh:
- Always-on matches the clubhouse vibe — the world is alive when you SSH in
- Scales naturally with player count (more players = richer economy/politics)
- Gives a strong "check back tomorrow" retention loop
- Integrates with Late Chips economy
- Chat becomes strategic (alliances, trade negotiation, trash talk)
Open design questions:
- Turn-based (ticks every N minutes) vs real-time with rate-limited actions?
- How much can happen while you're offline? (auto-trade, passive income, vulnerability to raids?)
- Map topology: shared grid, star map, abstract network?
- Win conditions or endless sandbox?
- Seasonal color shifts (real-world date), profile display for visitors, graveyard rendering on profile.
- Fancier renderer — possibly port/adapt `cbonsai` (https://github.com/mhzawadi/homebrew-cbonsai) for richer growth animation and branching.
- Read-only dashboard widget showing PR reviews, mentions, issue updates via PAT.
- Gives solo devs a productivity reason to keep the terminal open.
- Daily/weekly rituals (lo-fi standup, shipped rollup, weekend recap)
- Ambient presence (quiet hours, listening since, typing indicator)
- Micro-collab tools (shared scratchpad, snippet paste, pairing ping)
- Cozy utilities (pomodoro, focus playlists, now-playing shoutouts)
- Community texture (rotating shoutout board, wall of thanks)
- Events (coffee breaks, AMAs, mini coding jams)
- Personalization (accent color, favorite vibe, custom tagline)
Chat-specific refresh/tail loading, commands, rendering, keybindings, synthetic entries, performance notes, and gotchas live in late-ssh/src/app/chat/CONTEXT.md.
Currently the SSH app assumes a single process. These in-memory structures would need to be externalized (Redis / Postgres) for multiple replicas:
| Structure | Location | Current | To externalize |
|---|---|---|---|
| `current_genre` / `round_id` | `VoteService::ServiceState` | In-memory, resets to Lofi on restart | Persist to DB; only one replica runs the switch timer (leader election or DB lock). During pod drain today, the old pod cancels the vote loop immediately so only the new pod keeps mutating rounds/Liquidsoap. |
| `active_users` / `conn_counts` | `State` | In-memory counters | Shared store (Redis or DB) |
| `SessionRegistry` | `session.rs` | In-memory token → mpsc | Stays local — sticky sessions route SSH + WS to same replica |
| Vote/Chat/Article events + snapshots, Profile per-user snapshots | broadcast / watch channels | In-process only | Postgres LISTEN/NOTIFY or Redis pub/sub for cross-replica fan-out |
| @bot + @graybeard chat | `GhostService` | Always-on presence + AI chat tasks; both are dedicated DB users with fixed fingerprints | Single-leader to avoid duplicate chat responses. During pod drain today, the old pod cancels bot tasks immediately. |
| Leaderboard data | `LeaderboardService` | DB-backed watch channel, 30s refresh | Already DB-backed; each replica runs its own refresh loop — duplicate work but no write conflict |
Approach: Sticky sessions (LB routes by source IP) so each SSH connection lives on one replica. Shared data via DB/Redis. Not needed yet — single replica handles thousands of concurrent SSH sessions.
- All user-data queries MUST filter by `user_id` - enforced by the `user_scoped_model!` macro and explicit `_by_user` method variants
- The `model!` macro hardcodes `id: Uuid`, `created: DateTime<Utc>`, `updated: DateTime<Utc>` — do NOT duplicate these in `@generated`; use `@generated` only for extra fields (e.g., `last_seen` on User)
- Chat room visibility enforced via `ChatRoom::list_for_user` (membership join) - never expose rooms the user hasn't joined
- `#announcements` is read-joinable like other permanent public rooms, but only admins may post there; enforce this in the chat service send path, not only in the UI
- DM rooms canonicalize user IDs (`dm_user_a < dm_user_b` text order) to prevent duplicate DM pairs
- DM room endpoints (`dm_user_a`, `dm_user_b`) are durable even when `chat_room_members` changes: if one participant leaves a DM, the next message from the other participant re-adds both endpoints before targeted delivery. Private topic rooms do not have durable endpoints and still require explicit invites/rejoins.
- `users.username` is the canonical public handle for chat/DM lookup; SSH login seeds it from the SSH username via `User::next_available_username` (sanitizes to `[A-Za-z0-9._-]`, adds `-N` suffixes to stay unique on `LOWER(username)`)
- @bot and @graybeard bootstrap on app startup: ensure a DB user with a fixed `username`, join public rooms, and insert into `active_users` (always online). Both are dedicated users with fixed fingerprints (`bot-fp-000`, `graybeard-fp-000`)
- Connection limits (global semaphore + per-IP counter) plus SSH attempt rate limit (sliding window) MUST be enforced before any auth (effective client IP is resolved from PROXY protocol when enabled)
- Chat message deletes are hard deletes; any moderation/delete path must remove rows directly rather than relying on tombstones
- UUID v7 PKs (`uuidv7()` default) for time-ordered IDs across all tables
- All foreign keys use `ON DELETE CASCADE` - deleting a user cascades to all their data
- Vote table has `UNIQUE(user_id)` - one vote per user, upsert on conflict
- Chat room constraints: general must have `slug='general'`, language must have `language_code`, DM must have both user IDs with correct ordering
- `auto_join` can only be `true` for public rooms
Paired client control + visualizer:
- Trigger: SSH PTY request creates a session token plus the inbound `SessionRegistry` route.
- Processing: Browser or CLI connects `GET /api/ws/pair?token=...`; API registers an outbound paired-client sender/state slot in `PairedClientRegistry`.
- Side effects: Paired client sends viz frames (66ms-ish) plus `client_state`; viz frames route through `SessionRegistry` to `App.tick()`, while `client_state` updates paired kind/mute/volume metadata in `PairedClientRegistry`.
- Side effects: TUI `m`, `+`, and `-` send `toggle_mute`, `volume_up`, and `volume_down` back over the same WS to only the paired client for that token.
- Failure: If the paired client disconnects, the visualizer decays (`rms * 0.96` per tick) and paired state disappears. If SSH disconnects, the session token unregisters on drop.
Chat flows:
Chat send/edit/delete, ignore, roster/help overlays, replies, dashboard favorites, autocomplete, synthetic entries, and chat rendering flows live in late-ssh/src/app/chat/CONTEXT.md.
Vote round switch:
- Trigger: VoteService background tick (5s) detects switch interval (default 60 min) elapsed since last switch
- Processing: `switch_to_winner()` → pick genre with most votes (or keep current) → clear all votes → increment `round_id` → send `vibe.set <genre>` to Liquidsoap
- Side effects: All clients detect `round_id` change → clear `my_vote`. Liquidsoap switches playlist.
- Failure: Liquidsoap TCP failure logged but round still switches locally.
- Rooms/Blackjack invariants live locally: directory filters/placeholders, Blackjack render tiers, service-owned stake chips, seat player hydration, dashboard Blackjack slots, and active-room chat routing are documented in `late-ssh/src/app/rooms/CONTEXT.md`.
- Chat invariants live locally: room ordering, composer targets, replies, reactions, pins, ignores, snapshots/tails, row caches, synthetic entries, and chat keybindings are documented in `late-ssh/src/app/chat/CONTEXT.md`.
- Artboard invariants live locally: artboard lifecycle, persistence/archives, provenance, active-vs-view input routing, swatches, glyph picker, and gallery lag caveats are documented in `late-ssh/src/app/artboard/CONTEXT.md`.
- Render loop missed ticks: 66ms interval with `MissedTickBehavior::Skip` - if a frame takes too long, subsequent ticks are skipped rather than queued (prevents snowball lag)
- SSH data timeout: `handle.data` has a 50ms timeout to avoid blocking the render loop on backpressure
- SSH send failure is terminal for the render task: if `handle.data` returns `Err` (closed/broken channel), `render_once` now returns an error so the render loop stops and closes the channel once, instead of logging warnings every 66ms forever
- All services are singletons shared across SSH sessions. `ProfileService` snapshots are per-user channels keyed by `user_id`; events still require `user_id` filtering in UI state. Profile snapshots include the `Profile` projection plus a read-only `bonsai_trees` row when one exists, so viewing a profile can render bonsai without creating/mutating another user's tree. Per-user background refresh tasks are spawned on session init and aborted on `Drop`, and profile snapshot channels are pruned when receivers go away.
- Web Audio `createMediaElementSource` is one-shot: it can only be called once per `<audio>` element. AudioContext + source node must be created once and reused across play/pause cycles. Disconnect suspends the context (`audioCtx.suspend()`), replay resumes it — never close and recreate.
- Browser audio pairing status must not be stomped by WS: WS `onclose`/`onerror` must check `status !== 'playing'` before setting `'disconnected'`, otherwise a WS drop kills the "streaming" UI while audio is still playing fine
- Paired-client control routing is latest-wins per token: `PairedClientRegistry` stores one outbound sender/state entry per session token. If multiple browser/CLI clients pair against the same token, the most recent registration owns control/state until it disconnects.
- Web/CLI audio and WS resiliency: Both paired clients use bounded retry loops for WebSocket disconnections and audio stream failures. Web Audio reconstructs elements with cache-busting `?t=` URLs, and CLI stream/audio specifics live in `late-cli/CONTEXT.md`.
- Browser and CLI viz payloads share schema, not implementation: Both paired clients send `{ event: "viz", position_ms, bands, rms }`, but the browser uses a Web Audio `AnalyserNode` while the CLI uses an in-process Rust FFT over playback samples. Expect similar behavior, not identical numbers.
- CLI invariants live locally: SSH modes, token handshakes, identity generation, local audio pipeline, terminal resize forwarding, and pre-token input gating are documented in `late-cli/CONTEXT.md`.
- Activity feed broadcast timing: `broadcast::Receiver` only sees messages sent AFTER subscription. The receiver must be created in `auth_publickey` (before the login event is sent), stored on `ClientHandler`, then `.take()`'d into `SessionConfig` in `pty_request`. Creating the receiver later misses the user's own login event.
- Leaderboard refresh is async, badges are eventually consistent: `LeaderboardService` refreshes every 30s. A new daily win won't appear in the leaderboard or chat badges until the next refresh cycle. Activity feed callouts are immediate (fire-and-forget from `record_win_task`).
- Streak SQL uses gaps-and-islands: A streak is "current" if its last day is today or yesterday. This means a user who hasn't played today still keeps their streak visible until midnight UTC tomorrow. The `UNION` across `sudoku_daily_wins` and `nonogram_daily_wins` deduplicates dates so playing both games on the same day counts as one streak day.
- Game services hold the `activity_feed` sender: `SudokuService` and `NonogramService` both hold a clone of the `broadcast::Sender<ActivityEvent>` for win callouts. The username is looked up from `users` inside the fire-and-forget task (via `late_core::models::profile::fetch_username`), not passed from the caller.
- Bonsai death check runs on login: `BonsaiService::ensure_tree()` checks `last_watered` against UTC today on every SSH session start. If 7+ days have passed, the tree is killed and a graveyard record is created. This means death is only detected when the user reconnects, not while offline.
- Bonsai daily care is UTC-based: session startup ensures today's `bonsai_daily_care` row and applies unapplied penalties from prior care rows once. Missing water does not directly reduce growth, but 7+ dry days kills the tree. Missing the generated daily wrong-branch cuts costs 10 growth. The global `w` opens the care modal; watering now happens inside that modal.
- Bonsai passive growth is per-session: The tick counter in `BonsaiState` grants 1 growth point every ~9000 ticks (~10 min at 15fps). If a user has multiple sessions, each grants growth independently. This is acceptable — it rewards being connected, not gaming the system.
- Bonsai chat glyph is current-user only: The bonsai stage glyph is only shown next to the current user's own messages: Seed `·`, Sprout `⚘`, Sapling `🌱`, Young `🌲`, Mature `🌳`, Ancient `🌸`, Blossom `🌼`; Dead renders no glyph. Other users' bonsai stages are not queried or displayed in chat (would require a new cross-user lookup).
- Bonsai growth stages: living stages use a simple 100-point ladder capped at 700 growth points: Seed 0-99, Sprout 100-199, Sapling 200-299, Young 300-399, Mature 400-499, Ancient 500-599, Blossom 600-700.
- Bonsai care modal owns pruning: global `w` opens the care modal (`w care` is rendered on the Bonsai sidebar border). Inside the modal, `w` waters/replants, `p` hard-prunes the whole tree (-100 growth, rerolls seed, resets today's wrong-branch cuts), `hjkl`/arrows move a spatial pruning cursor, `x` cuts only when the cursor is on a generated wrong branch, `s` copies the ASCII snippet, and `?` opens the Bonsai help section. A wrong cut costs -10 growth immediately. Completing all daily wrong-branch cuts preserves the current shape; it no longer rerolls seed.
- Bonsai seed math is stable, order-sensitive: `seed % style_count` picks the Japanese style, `(seed / style_count) % shape_count` picks the hand-tuned silhouette within that style, `(seed / (style_count * shape_count)) % 3` picks the texture form (default / airy / dense). Reordering match arms in `tree_ascii` or inserting a new style mid-list silently remaps every existing user's tree to a different silhouette. Append new styles at the end and bump the stage's `high_stage_style_count`/`high_stage_shape_count`.
- Bonsai music sway works in tight cards: `render_tree_art_lines()` applies beat-driven horizontal sway through a small viewport helper, so the 24-column right sidebar can crop shifted canopy lines instead of clamping the motion away. The care modal and sidebar share this renderer.
- Help modal (`?`) intercepts all input: When `show_help` is true, the input handler dismisses the modal on any keypress before any other input processing. This includes `?` itself (toggle off) and `Esc`.
- Desktop notifications bypass the frame diff: OSC 777 (kitty/Ghostty/rxvt-unicode/foot/wezterm/konsole/mlterm) and OSC 9 (iTerm2) payloads are written to `App::pending_terminal_commands`, not into the ratatui frame. `late-ssh::ssh::render_once` drains that buffer after pushing the frame diff and sends each payload as a separate `handle.data` call. Writing them inline with `write!(self.shared, …)` would slip them into the diff and get re-emitted on every redraw. The same rule applies to OSC 52 clipboard copies. The session emits an XTVERSION probe (`CSI > q`) alongside the other alt-screen setup bytes and narrows `App::notification_mode` (`Both` → `Osc777` | `Osc9`) from the DCS reply (`ESC P > | <name>(<version>) ST`) — kitty/wezterm/ghostty/foot/konsole/rxvt-unicode/mlterm land on `Osc777`, iTerm2 on `Osc9`, and unknown/non-responding terminals stay on `Both` (prior behavior). Replies are spliced out of the raw byte stream before the splash short-circuit so the leading `ESC` doesn't dismiss the splash (`input::extract_xtversion_replies`); the `vte::Parser` DCS path (hook/put/unhook) catches the same reply again after splash and `App::set_terminal_version` is idempotent, so the double-path is intentional.
- Notification pipeline is kind-tagged and throttled server-side: `ChatState::pending_notifications` holds `PendingNotification { kind: &'static str, title, body }` entries drained each render. `render.rs` picks the first pending whose `kind` is in `users.settings.notify_kinds` and honors the shared `notify_cooldown_mins` via `App::last_notify_at`. Adding a new kind means: (1) add a matching toggle row in the settings modal UI/state, (2) enqueue it from the relevant event handler, and (3) update the render-side matcher/tests that assume the current `"dms" | "mentions" | "game_events"` set. No tmux DCS wrapping — tmux is explicitly unsupported.
- Profile notifications default to all-off: Migration 026 merges profile fields into `users.settings` with `notify_kinds = []` and `notify_cooldown_mins = 0`. `render.rs` only fires if the kind string is present in the user's array, so a brand-new account is silent until they opt in through the settings modal. A focus-tracking `"unfocused"` policy used to exist (DEC mode 1004) but was removed — `notify_kinds` is the whole model now.
- `Profile` is a view, not a table: Migration 026 dropped the `profiles` table — username + notify settings + theme now live on `users` (columns + `settings` JSONB). `late_core::models::profile::Profile` is a projection loaded via `Profile::load(client, user_id)` and saved via `Profile::update(client, user_id, params)`, which merges into `settings` with `settings || jsonb_build_object(...)` to preserve unrelated keys (`theme_id`, `ignored_user_ids`) under concurrent writes. Profile also exposes JSON-backed system fields (`ide`, `terminal`, `os`) plus language tags (`langs`, normalized to up to eight `#tag` values) and `users.created` as `created_at`; the read-only profile modal renders right-side bonsai and `late.fetch` boxes when the modal is wide enough.
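The bonsai seed math described above can be sketched in a few lines. `style_count` and `shape_count` are placeholders here; the real counts live next to `tree_ascii`:

```rust
// Sketch of the documented seed → silhouette mapping. The key property:
// the mapping is a pure function of `seed`, so appending (never inserting)
// styles keeps every existing user's tree stable.
fn pick_tree(seed: u64, style_count: u64, shape_count: u64) -> (u64, u64, u64) {
    let style = seed % style_count;
    let shape = (seed / style_count) % shape_count;
    // Texture form: default / airy / dense.
    let texture = (seed / (style_count * shape_count)) % 3;
    (style, shape, texture)
}

fn main() {
    // Same seed and counts -> the same tree on every render.
    assert_eq!(pick_tree(12345, 5, 4), pick_tree(12345, 5, 4));
    let (style, shape, texture) = pick_tree(12345, 5, 4);
    assert!(style < 5 && shape < 4 && texture < 3);
}
```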
Repo-level finding: input now lands in a per-session queue and the render loop wakes on input, so ordinary keystrokes no longer wait on the app mutex before being queued. The remaining broad risk is render cost under high fan-out, because `render_once` still holds the app lock across synchronous `app.tick()` + `app.render()`.
Chat-specific row-cache, snapshot, unread-count, and scoped-loading performance notes live in late-ssh/src/app/chat/CONTEXT.md.
// === Database ===
let db = Db::from_env().await?;
let client = db.get().await?;
db.migrate().await?;
// === User identity ===
let user = User::find_by_fingerprint(&client, &fingerprint).await?;
user.update_last_seen(&client).await?;
// === Vote ===
Vote::upsert(&client, user_id, "lofi").await?;
let (lofi, classic, ambient, jazz) = Vote::tally(&client).await?;
// === Chat ===
// See late-ssh/src/app/chat/CONTEXT.md for ChatService and model examples.
// === Services (subscribe pattern) ===
let vote_rx = vote_service.subscribe_state(); // watch::Receiver<VoteSnapshot>
let vote_ev = vote_service.subscribe_events(); // broadcast::Receiver<VoteEvent>
vote_service.cast_vote_task(user_id, Genre::Lofi);
// === Profile (view over users.username + users.settings) ===
let profile = Profile::load(&client, user_id).await?;
Profile::update(&client, user_id, ProfileParams { username, notify_kinds, notify_cooldown_mins }).await?;
User::set_theme_id(&client, user_id, "purple").await?;
// === Leaderboard ===
let lb_rx = leaderboard_service.subscribe(); // watch::Receiver<Arc<LeaderboardData>>
let data = lb_rx.borrow(); // today_champions, streak_leaders, user_streaks
let badge = BadgeTier::from_streak(streak); // None | Bronze(3+) | Silver(7+) | Gold(14+)
// === Icecast ===
let track = late_core::icecast::fetch_track(&icecast_url)?; // blocking
// === Liquidsoap ===
late_ssh::app::vote::liquidsoap::send_command(&addr, "vibe.set lofi").await?;

# Start full dev stack
docker compose up -d
# Or run services individually:
# Postgres + Icecast + Liquidsoap via docker, Rust services via cargo
docker compose up -d postgres icecast liquidsoap
cargo run -p late-ssh # Needs LATE_* env vars
cargo run -p late-web # Needs LATE_WEB_* env vars

# Quick connectivity check
PGPASSWORD=postgres psql -h localhost -p 5432 -U postgres -d postgres -c "select 1;"
# Seed data
sh scripts/seed_chat_rooms.sh
sh scripts/seed_chat_messages.sh
sh scripts/seed_notes.sh
Production Postgres runs as a CloudNativePG cluster in Kubernetes.
Keep this public doc generic: discover the current service name, secret name, DB name, and DB user from the live cluster or Terraform instead of hardcoding them here.
The fastest working path is to run `psql` from inside a Postgres pod and connect over TCP to the read-write service, using credentials from the generated CNPG secret.
# 1. Find a Postgres pod
kubectl get pods -n default
# 2. Inspect the app deployment / infra to discover:
# - read-write DB service host
# - secret name holding DB credentials
# - secret keys for user/password/dbname
# 3. Decode generated credentials from the discovered secret
kubectl get secret -n default <db-secret> -o jsonpath='{.data.user}' | base64 -d; echo
kubectl get secret -n default <db-secret> -o jsonpath='{.data.password}' | base64 -d; echo
kubectl get secret -n default <db-secret> -o jsonpath='{.data.dbname}' | base64 -d; echo
# 4. Run a query from inside the pod (replace placeholders)
kubectl exec -n default <postgres-pod> -- \
env PGPASSWORD='<password>' \
  psql -h <rw-service> -U <db-user> -d <db-name> -c "select 1;"

Notes:
- Do not use `psql -U <db-user>` over the pod-local socket without `-h <rw-service>`; peer auth inside the container can fail even when TCP auth works.
- For ad hoc prod inspection, prefer read-only `SELECT` queries.
- If the obvious pod name is unavailable, use any live CNPG Postgres pod.
# Human-only verification commands. LLM agents should not run these.
cargo fmt --all -- --check
cargo clippy --workspace --all-targets -- -D warnings
cargo nextest run --workspace --all-targets

Use narrower crate-specific `cargo test` / `cargo nextest run` commands ad hoc while iterating, but keep the workspace gate above as the canonical repo-level check.
- SSH won't connect → Check `LATE_SSH_OPEN`, connection limits/rate limits, SSH key path
- No audio → Check Icecast container, Liquidsoap container, `LATE_AUDIO_URL`. If streams are down, verify fallback music exists on the PVC (see below)
- Visualizer not updating → Check browser WS connection, token mismatch, SessionRegistry
- Votes not switching → Check Liquidsoap telnet reachability (`LATE_LIQUIDSOAP_ADDR`), background tick running
- Chat not syncing → Check DB connectivity, 10s refresh cadence, snapshot/event channels
- Now-playing shows "Unknown" → Check Icecast `/status-json.xsl`, metadata format: `"Artist - Title | Duration"` (duration is absent for internet streams — this is expected)
- Liquidsoap debugging → `docker run --rm savonet/liquidsoap:v2.4.0 liquidsoap -h <topic>`
- Music missing from PVC → Re-run infra deploy to trigger the `sync_music` job (syncs from R2). For manual recovery: `aws s3 sync s3://$MUSIC_BUCKET/ ./music/ --endpoint-url $S3_ENDPOINT`, then `kubectl cp` each genre dir individually into the pod.
- Repeated Postgres `role "root" does not exist` lines in GitHub Actions are often service-log noise, not the failure. They are misleading because Actions prints service container logs after a job fails; check for other errors before trying to fix this likely red herring.
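The `"Artist - Title | Duration"` shape above can be split with plain string ops. This is a hedged sketch — `parse_now_playing` is a hypothetical helper for illustration, not `late_core::icecast`'s actual parser — including the stream case where the duration segment is absent:

```rust
/// Hypothetical helper splitting Icecast now-playing metadata of the form
/// "Artist - Title | Duration". Duration is absent for internet streams.
fn parse_now_playing(raw: &str) -> (String, String, Option<String>) {
    // Peel off the optional " | Duration" suffix first.
    let (track, duration) = match raw.rsplit_once(" | ") {
        Some((t, d)) => (t, Some(d.to_string())),
        None => (raw, None),
    };
    // Then split "Artist - Title"; fall back to an "Unknown" artist,
    // mirroring the "Unknown" symptom listed in the troubleshooting entry.
    match track.split_once(" - ") {
        Some((artist, title)) => (artist.to_string(), title.to_string(), duration),
        None => ("Unknown".to_string(), track.to_string(), duration),
    }
}
```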
| Screen | Key | Status | Description |
|---|---|---|---|
| Dashboard | 1 | Active | Now playing + vibe voting + /music hint + dashboard chat (The Lounge Hub) |
| Chat | 2 | Active | Full room-list chat screen with DMs, public/private rooms, mentions, News, Showcase, Work, and Discover synthetic entries. Detailed commands, keybindings, service flow, and gotchas live in late-ssh/src/app/chat/CONTEXT.md. |
| Games | 3 | Active | The Arcade Lobby + leaderboard sidebar (champions, streaks, all-time high scores, chip leaders, info): persisted high-score games (2048, Tetris) and daily games (Sudoku, Nonograms, Minesweeper, Solitaire). Blackjack lives in Rooms. Game list auto-scrolls (top-third anchor); ASCII header hides on small screens |
| Rooms | 4 | Active | Persistent game-room directory plus active Blackjack table/chat view. Detailed behavior is documented in late-ssh/src/app/rooms/CONTEXT.md. |
| Artboard | 5 | Active | Dedicated shared ASCII canvas screen. Opens in view mode for navigation and screen switching; i / Enter enters active edit mode; Esc returns to view mode. |
┌─ late.sh ──────────────────────────────────────────────────────────┐
│ │ ┌─ Visualizer ──────┐ │
│ Main Content Area │ │ █ █ █ █ █ █ █ █ │ │
│ (screen-dependent) │ └───────────────────┘ │
│ │ ┌─ Now Playing ─────┐ │
│ │ │ Artist - Title │ │
│ │ │ 0:57 ──●──── 3:15 │ │
│ │ └───────────────────┘ │
│ │ ┌─ Activity ────────┐ │
│ │ │ ● 12 online ? keys│ │
│ │ │ @user 2m │ │
│ ┌──────────────────────────────────────┐ │ │ joined chat │ │
│ │ ✓ Voted for Lofi │ │ └───────────────────┘ │
│ └──────────────────────────────────────┘ │ ┌─ Bonsai (42d) ───┐ │
│ │ │ .@@@. │ │
│ │ │ .@@@@@@@. │ │
│ │ │ / \ │ │
│ │ │ | | │ │
│ │ │ .|. │ │
│ │ │ [===] │ │
│ │ │ w care │ │
│ │ └───────────────────┘ │
└────────────────────────────────────────────────────────────────────┘
Toast notification is hidden by default (0 rows). When active, it appears as a 3-row bordered block (green for success, red for error) at the top-right of the content area. The settings overlay renders on top of the toast.
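A minimal sketch of that sizing rule, assuming an illustrative `Toast` type — neither it nor `toast_height` is the real late-ssh layout code:

```rust
/// Illustrative toast state; the real type lives elsewhere in late-ssh.
enum Toast {
    Success(String), // rendered with a green border
    Error(String),   // rendered with a red border
}

/// Rows reserved for the toast block: hidden toasts take no space, and an
/// active toast renders as a 3-row bordered block at the top-right.
fn toast_height(toast: &Option<Toast>) -> u16 {
    if toast.is_some() { 3 } else { 0 }
}
```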
| Key | Context | Action |
|---|---|---|
| `q` / `Q` | Global | Open quit confirm; pressing `q` again exits |
| `?` | Global (not composing) | Open help modal (multi-slide guide). Also works inside the settings modal, which renders help on top while keeping the draft intact. |
| `h` / `l` / `←` / `→` | Help modal | Switch slides (Overview / Chat / Music / News / Arcade / Bonsai / Settings / Architecture) |
| `j` / `k` / `↑` / `↓` | Help modal | Scroll current slide (uncapped — past the last line is blank space) |
| `Esc` / `q` / `?` | Help modal | Close (returns to the underlying screen, including the settings modal if it was open) |
| `Tab` | Global | Cycle screens |
| `1` | Global | Jump to Dashboard |
| `2` | Global | Jump to Chat |
| `3` | Global | Jump to Games |
| `4` | Global | Jump to Rooms |
| `5` | Global | Jump to Artboard |
| `m` | Global | Toggle mute on paired client |
| `+` / `=` | Global | Volume up on paired client |
| `-` / `_` | Global | Volume down on paired client |
| `w` | Global (not composing, active games override) | Open the Bonsai care modal |
| `w` | Bonsai modal | Water bonsai / replant dead tree, with a short watering animation |
| `p` | Bonsai modal | Hard-prune: -100 growth, reroll shape, reset today's wrong-branch cuts |
| `h` / `j` / `k` / `l` / arrows | Bonsai modal prune mode | Move spatial branch cursor |
| `x` | Bonsai modal prune mode | Cut branch under cursor; wrong cuts cost -10 growth, all daily cuts preserve current shape |
| `s` | Bonsai modal | Copy bonsai ASCII snippet to clipboard |
| `?` | Bonsai modal | Open help modal on the Bonsai section |
| `L` / `C` / `A` / `Z` | Dashboard | Vote genre |
| `b` then `1` / `2` / `3` / `4` | Dashboard | Activate a dashboard chord: Blackjack room, current daily game, current News wire article, or #announcements |
| `P` | Dashboard / Chat | Show browser-pairing QR (copies pairing URL) |
| `B` | Dashboard / Chat | Open CLI install/build-source modal |
| Dashboard chat keys | Dashboard | See late-ssh/src/app/chat/CONTEXT.md. |
| `Enter` | Games lobby | Launch selected game |
| `Esc` | Active game | Exit back to Arcade lobby |
| `h` / `j` / `k` / `l` / arrows | 2048 | Move tiles |
| `r` | 2048 game over | Start a fresh 2048 board |
| `h` / `l` / arrows | Tetris | Move active piece left / right |
| `j` / down arrow | Tetris | Soft drop |
| `k` / up arrow | Tetris | Rotate clockwise |
| `Space` | Tetris | Hard drop |
| `p` | Tetris | Pause / resume |
| `r` | Tetris | Start a fresh run |
| `r` | Sudoku (unsolved) | Reset board (clears non-fixed cells) |
| `r` | Nonograms (unsolved) | Reset board (clears all cells) |
| `h` / `j` / `k` / `l` / arrows | Sudoku | Move cursor |
| `1`-`9` | Sudoku | Fill selected cell |
| `0` / `Backspace` | Sudoku | Clear selected cell |
| `d` | Sudoku | Restore today's daily board |
| `p` | Sudoku | Open saved personal board |
| `n` | Sudoku | Generate a fresh personal board |
| `[` / `]` | Sudoku | Switch difficulty (easy / medium / hard) |
| `h` / `j` / `k` / `l` / arrows | Nonograms | Move cursor |
| `Space` / `x` | Nonograms | Toggle selected cell |
| `0` / `Backspace` / `c` | Nonograms | Clear selected cell |
| `d` | Nonograms | Restore today's daily puzzle for the current size |
| `p` | Nonograms | Open saved personal puzzle for the current size |
| `n` | Nonograms | Generate a fresh personal puzzle for the current size |
| `[` / `]` | Nonograms | Switch puzzle size pack |
| `Esc` | Nonograms | Exit back to Arcade lobby |
| Chat keys | Chat / Dashboard chat | See late-ssh/src/app/chat/CONTEXT.md for room navigation, composer commands, message actions, synthetic entries, and icon picker behavior. |
| `Ctrl+O` | Global | Open the settings modal from anywhere, including active games |
| `↑` / `↓` / `j` / `k` | Settings modal | Move between rows (Username, IDE, Terminal, OS, Langs, Theme, Background, Right sidebar, Games sidebar, Country, Timezone, DMs, @mentions, Game events, Bell, Cooldown, Format) |
| `←` / `→` | Settings modal | Cycle the current row's setting (theme, toggles, cooldown, notification format) |
| `Space` / `Enter` / `e` | Settings modal | Activate row — edit username/system fields/bio, cycle a setting, or open the country/timezone picker |
| `Alt+Enter` / `Ctrl+J` | Settings modal (bio editing) | Insert newline |
| `?` | Settings modal | Open help modal on top |
| `j` / `k` / `↑` / `↓` | Read-only profile modal | Scroll |
| `Esc` / `q` | Read-only profile modal | Close |
| `Esc` | Any modal | Close/cancel |
When modifying any keybinding, update all of the following:
- Input handler — the actual `match byte` in the relevant `input.rs` (screen-specific or `app/input.rs` for globals)
- Help modal — `app/help_modal/data.rs` (slide copy, e.g. Overview "This modal" section) and the `app/help_modal/ui.rs` `draw_footer()` keybind line
- Settings modal — the `app/settings_modal/ui.rs` `draw_footer()` keybind line and the bordered help callout in `draw_help_callout()`
- Sidebar hints — `app/common/sidebar.rs`, e.g. the volume/mute hint line in Now Playing
- Game guard — `app/input.rs` `handle_global_key()`, where active games suppress global byte shortcuts before screen-specific game routing
- This table — the keyboard shortcuts table above in CONTEXT.md
- Game info panels — per-game UI panels that show controls (check each game's `ui.rs`)
- russh: https://github.com/Eugeny/russh
- ratatui: https://ratatui.rs/
- Icecast: https://icecast.org/
- Alpine.js: https://alpinejs.dev/
- HTMX: https://htmx.org/
- Liquidsoap: https://www.liquidsoap.info/