Original prompt: cant get this app to start locally
- Investigated `npm run build` failure on macOS.
- Root cause: `src/engine/components/Unit.ts` re-exported from `./unit`, which can resolve ambiguously on case-insensitive filesystems because the facade itself is `Unit.ts`.
- Applied fix: changed the facade to re-export from `./unit/index` explicitly.
- Verified `npm run build` succeeds.
- Verified `npm run type-check` succeeds.
- Verified `npm run dev` starts successfully; Next selected `http://localhost:3001` because port `3000` was already in use locally.
- The `develop-web-game` Playwright client could not be used because the `playwright` package is not installed in this environment.
- No further action required for the original startup/build issue.
- Investigated pathfinding regression on elevated maps.
- Reproduced the bug below the game loop: Recast paths on elevated bundled maps were truncating partway up/down ramps, while the flat test map still reached its destination.
- Replaced the flat-per-cell navmesh geometry path with a shared ramp-aware geometry builder and wired editor validation to the same logic.
- Removed the terrain-grid fallback after confirming the root issue was navmesh geometry, not movement execution.
- Added ramp metadata normalization so both bundled ramps and editor-inferred flat ramps derive their direction and endpoint elevations from surrounding walkable terrain before Recast heightfields are built.
- Replaced the fallback-specific tests with Recast connectivity regressions for `contested_frontier`, `crystal_caverns`, `titans_colosseum`, and a synthetic flat-ramp editor map.
- Verified `npm run type-check`, targeted `recastRampConnectivity` and `pathfindingSystem` tests, full `npm test` (72 files / 2547 tests), and `npm run lint` with only pre-existing warnings.
- TODO: Verify full long-haul spawn-to-spawn routes on `scorched_basin` and `void_assault` if we want cross-map regression coverage beyond the local elevated-move cases.
- Updated the PWA install UI so the global bottom-right install prompt no longer renders from the app layout.
- Reworked `src/components/pwa/InstallPrompt.tsx` into a compact `InstallAppButton` that reuses the existing install flow but renders as an icon-only control.
- Added the compact install button beside the existing mute/fullscreen controls on the home page, game setup page, and editor header.
- Verified `npm run type-check` and `npm run build` pass after the UI change.
- Verified targeted ESLint on the touched files reports only two pre-existing warnings: the unused `eslint-disable` in `src/app/game/setup/page.tsx` and the existing custom-font warning in `src/app/layout.tsx`.
- Browser-level visual verification of the install button placement is still blocked here because the repo does not include a usable `playwright` runtime, and the install prompt itself depends on a browser-only `beforeinstallprompt` event.
- Continued investigating the pathfinding stop-after-an-inch bug after gameplay reports showed it also happened on multiple bundled maps near starting bases, not just on ramps.
- Root cause: the nested pathfinding worker could finish loading its navmesh after startup buildings and decoration collisions were already registered on the authoritative main-thread `RecastNavigationTileCache`. In that case the worker never received those existing obstacles, so it planned straight through the starting HQ/decor while movement/collision stopped units almost immediately.
- Applied fix in `src/engine/systems/PathfindingSystem.ts`: retain registered decoration collisions, and whenever the worker reports `navMeshLoaded`, replay all current building and decoration obstacles into the worker so worker-side path queries match the authoritative obstacle state.
- Added a regression test in `tests/engine/systems/pathfindingSystem.test.ts` that simulates late worker readiness and asserts existing building plus large decoration obstacles are replayed into the worker while tiny decorative clutter is ignored.
- Updated `docs/architecture/OVERVIEW.md` to document that worker navmesh loads/reloads now trigger a replay of dynamic obstacles.
- Verified `npm test -- tests/engine/systems/pathfindingSystem.test.ts` passes.
- Verified `npm run type-check` is still blocked by a pre-existing unrelated issue in `tests/scripts/launch-voidstrike.test.ts` where the Vitest globals (`describe`, `it`, `expect`, `afterEach`) are not declared.
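The retain-and-replay pattern above can be sketched as follows. `ObstacleReplayer`, `WorkerLike`, and the message shape are hypothetical stand-ins for the real `PathfindingSystem` wiring, and buildings/decorations are collapsed into one obstacle list for brevity:

```typescript
// Hypothetical sketch of the "replay obstacles on navMeshLoaded" fix.
type Obstacle = { id: number; x: number; y: number; radius: number };

interface WorkerLike {
  postMessage(msg: { type: string; obstacle?: Obstacle }): void;
}

class ObstacleReplayer {
  private obstacles = new Map<number, Obstacle>();

  constructor(private worker: WorkerLike) {}

  register(obstacle: Obstacle): void {
    // Retain authoritative state so a late-loading worker can be caught up.
    this.obstacles.set(obstacle.id, obstacle);
    this.worker.postMessage({ type: 'addObstacle', obstacle });
  }

  // Called when the worker reports `navMeshLoaded`: replay everything that
  // was registered before (or while) the worker was still loading.
  onNavMeshLoaded(): void {
    for (const obstacle of this.obstacles.values()) {
      this.worker.postMessage({ type: 'addObstacle', obstacle });
    }
  }
}
```

In the real fix, filtering of tiny decorative clutter happens before registration; this sketch only shows the retain-and-replay mechanics.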
- Switched the elevated-map movement investigation to the production build path and isolated the real root cause in input projection, not Recast corridor generation.
- Built a direct terrain probe comparing `RTSCamera.screenToWorld()` against a real Three.js raycast into the rendered terrain mesh. On elevated maps such as `contested_frontier`, the old heightfield iteration could snap a click from the visible upper plateau onto the lower cliff layer instead.
- Applied the fix in `src/rendering/Camera.ts`: when a terrain object is registered, `screenToWorld()` now raycasts against the actual terrain mesh first and only falls back to heightfield iteration if no terrain object is available.
- Wired the production game camera to the terrain mesh in `src/components/game/hooks/useWebGPURenderer.ts`, and also corrected the editor terrain raycast path in `src/editor/core/Editor3DCanvas.tsx` to recurse into terrain chunks.
- Added `tests/rendering/Camera.test.ts` as a regression using `contested_frontier`; it demonstrates the old plateau-edge miss and verifies the camera now matches the rendered terrain hit.
- Updated `docs/architecture/rendering.md` to document that click projection uses the terrain render mesh on multi-elevation maps.
- Verified `npm test -- tests/rendering/Camera.test.ts tests/engine/systems/pathfindingSystem.test.ts` passes.
- Verified `npm run build` passes with the production build path.
- Verified targeted ESLint on the touched files reports only pre-existing warnings in `src/components/game/hooks/useWebGPURenderer.ts` and `src/editor/core/Editor3DCanvas.tsx`; no new lint errors were introduced.
Continued the elevated-map stop-after-an-inch investigation after gameplay reports ruled out both ramps-only and camera-click projection.
-
Proved the actual root cause with direct TileCache experiments on
crystal_caverns: building obstacles were being inserted aty=0, so on elevated maps they only affected the ground layer while the real HQ/platform navmesh sat aroundy≈8.8. That is whytest_6p_flatworked and elevated maps did not. -
Applied the root fix in
src/engine/pathfinding/RecastNavigation.ts,src/workers/pathfinding.worker.ts, andsrc/editor/services/EditorNavigation.ts: dynamic obstacles now sample the terrain/navmesh height at their footprint before being inserted into TileCache, and obstacle updates now loop until TileCache reportsupToDate. -
Added
tests/engine/pathfinding/recastDynamicObstacleElevation.test.tsto verify an elevatedcrystal_cavernsHQ obstacle forces a reroute instead of leaving the path straight through the base footprint. -
Reverted the temporary non-root collision/camera hypothesis changes:
- removed the hard-collision margin experiment in
src/engine/systems/movement/PathfindingMovement.ts - removed the temporary camera raycast changes and deleted
tests/rendering/Camera.test.ts
- removed the hard-collision margin experiment in
-
Updated
docs/architecture/OVERVIEW.mdto document that elevated dynamic obstacles are inserted on the sampled terrain/navmesh layer rather than hard-codedy=0. -
Verified
npm test -- tests/engine/pathfinding/recastDynamicObstacleElevation.test.ts tests/engine/pathfinding/recastRampConnectivity.test.ts tests/engine/systems/pathfindingSystem.test.tspasses (23tests). -
Verified
npm run buildpasses in production mode. -
Verified targeted ESLint on the touched source/test files reports only pre-existing
EditorNavigation.tsconsole warnings; no new lint errors. -
Verified
npm run type-checkis still blocked by the pre-existing Vitest-global issue intests/launch/launch-voidstrike.test.ts. -
Installed
playwrightunder$HOME/.codex/skills/develop-web-gameso the required Playwright client can run without changing repo dependencies. -
Ran the required Playwright client twice against the production server (
output/web-game-prod/shot-0.pngandoutput/web-game-prod-2/shot-0.png). The client renders/game/setupcorrectly, but automatedStart Gamebutton clicks still do not transition into gameplay in this environment, so browser smoke verification remains blocked by the same UI automation limitation rather than the pathfinding code. -
TODO: Have the user manually retest the production build on an elevated map near the starting HQ/platform now that dynamic obstacles are on the correct nav layer.
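The shape of that root fix can be sketched as below, assuming a simplified `TileCacheLike` facade; the method names are illustrative, not the actual recast-navigation API:

```typescript
// Hypothetical sketch: insert a dynamic obstacle on the correct nav layer.
interface TileCacheLike {
  addCylinderObstacle(pos: { x: number; y: number; z: number }, radius: number, height: number): void;
  update(): { upToDate: boolean };
}

function insertElevatedObstacle(
  tileCache: TileCacheLike,
  heightAt: (x: number, z: number) => number,
  x: number,
  z: number,
  radius: number,
  height: number,
  maxUpdates = 64,
): number {
  // Sample the terrain height at the footprint instead of hard-coding y=0,
  // so the obstacle lands on the elevated layer the navmesh actually uses.
  const y = heightAt(x, z);
  tileCache.addCylinderObstacle({ x, y, z }, radius, height);

  // TileCache applies obstacle changes incrementally; keep updating until it
  // reports upToDate (bounded, so a misbehaving cache cannot spin forever).
  let updates = 0;
  while (updates < maxUpdates && !tileCache.update().upToDate) {
    updates += 1;
  }
  return y;
}
```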
-
Changed direction from speculative fixes to live reproduction telemetry in the actual browser production path.
-
Added a local telemetry client in
src/engine/debug/pathTelemetry.tsplus a Node route atsrc/app/api/debug/pathfinding/route.tsthat appends JSONL events tooutput/live-pathfinding.jsonl. -
Instrumented the browser input path in
src/engine/input/handlers/GameplayInputHandler.tsto log right-click screen/world targets and the exactMOVEcommands issued from the live game. -
Instrumented the authoritative worker path in
src/engine/workers/GameWorker.ts,src/engine/workers/WorkerBridge.ts,src/engine/workers/types.ts, andsrc/engine/systems/PathfindingSystem.tsso live reproductions capture command receipt, path requests/results, tracked unit snapshots, and explicit movement-stalled events from the real simulation. -
Updated
docs/architecture/OVERVIEW.mdto document the local live path telemetry flow and output file. -
Verified
npm run buildpasses with the telemetry changes. -
Verified targeted ESLint on the touched telemetry files passes.
-
Verified the telemetry sink end-to-end with a synthetic POST to
http://127.0.0.1:3001/api/debug/pathfinding, which wrote tooutput/live-pathfinding.jsonl. -
Restarted the production server on port
3001and left a live tail running onoutput/live-pathfinding.jsonlso the next manual reproduction can be inspected immediately. -
- The first telemetry build regressed gameplay input because worker-side telemetry forwarding was too broad: `PathfindingSystem` traces for all background gatherer repaths were being bridged to the main thread, creating unnecessary message volume on startup.
- Fixed that in `src/engine/workers/GameWorker.ts` by only forwarding system-originated path telemetry when it belongs to an actively tracked user-command trace.
- Rebuilt production, cleared the live trace file, and restarted the production server on port `3001` with the reduced telemetry scope.
- After user testing still showed missing move input, trimmed `GameplayInputHandler` telemetry again so the right-click path no longer walks selected entities/components before issuing the command; the UI trace now records only screen/world click position plus selected count.
- Rebuilt production and restarted port `3001` again on that simplified input-path build.
- Live production telemetry finally isolated the elevated-map stop path precisely:
  - the nested worker path query returns `found: false` immediately for elevated worker move orders
  - `MovementOrchestrator` re-requests a path 10 ticks later
  - `PathfindingSystem.queuePathRequest()` then hits the failed-path cache for the same destination cell and clears `targetX`/`targetY`, which is why units go idle after moving only a short distance
- Root fix applied in `src/engine/systems/PathfindingSystem.ts`: worker `findPath` requests now resolve `startHeight`/`endHeight` from the same terrain source used by navmesh generation, falling back to `GameCore.getTerrainHeightAt()` when no custom terrain height provider was injected. This fixes the authoritative worker case, where the terrain grid exists but `terrainHeightFunction` was null, so elevated queries were being sent to the nested path worker at height `0`.
- Added a regression in `tests/engine/systems/pathfindingSystem.test.ts` that verifies elevated worker path requests send nonzero terrain-derived heights even without a custom height callback.
- Updated `docs/architecture/OVERVIEW.md` to document the terrain-grid fallback for worker query heights.
- Verified `npm test -- tests/engine/systems/pathfindingSystem.test.ts` passes.
- Verified `npm run build` passes on the production path after the height fix.
- TODO: Restart the production server on `3001`, clear `output/live-pathfinding.jsonl`, and have the user rerun the same elevated worker move to confirm worker `path_result` events switch from `found: false` to real Recast paths.
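Reduced to its essentials, the height-resolution fallback looks roughly like this; `resolveQueryHeight` and `TerrainGridLike` are hypothetical names, not the project's actual identifiers:

```typescript
// Hypothetical sketch of the worker-query height fallback: prefer an injected
// height provider, otherwise fall back to the terrain grid instead of
// silently sending height 0 to the nested path worker.
type HeightProvider = ((x: number, y: number) => number) | null;

interface TerrainGridLike {
  getHeightAt(x: number, y: number): number;
}

function resolveQueryHeight(
  provider: HeightProvider,
  terrain: TerrainGridLike | null,
  x: number,
  y: number,
): number {
  if (provider) return provider(x, y);
  // Authoritative-worker case from the log: the terrain grid exists but no
  // custom terrainHeightFunction was injected, so elevated queries must not
  // default to 0.
  if (terrain) return terrain.getHeightAt(x, y);
  return 0;
}
```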
- Investigated the economy/UI desync where workers visibly mine and return cargo but the mineral counter never increases.
- Root cause: in worker mode, `ResourceSystem` was crediting minerals into the worker's authoritative `playerResources` map and `GameWorker.sendRenderState()` was serializing that updated resource state, but `useWorkerBridge` only copied `gameTime` out of each render snapshot. The HUD reads from the main-thread Zustand store, so gathered minerals and worker-side supply changes never reached the UI.
- Applied fix:
  - added `syncPlayerResources()` to `src/store/gameStore.ts` so the main thread can atomically mirror minerals, plasma, supply, and max supply from worker authority
  - added `src/components/game/hooks/syncWorkerPlayerResources.ts` and wired `useWorkerBridge` to copy the local player's `renderState.playerResources` into Zustand on every worker render update
  - updated `docs/architecture/OVERVIEW.md` to document that worker snapshots now drive the local HUD resource state
- Added regression coverage in `tests/components/game/hooks/syncWorkerPlayerResources.test.ts` for both successful local-player sync and no-op behavior when the player is absent/spectating.
- Verified `npm test -- tests/components/game/hooks/syncWorkerPlayerResources.test.ts` passes.
- Verified `npm test -- tests/engine/systems/resourceSystem.test.ts tests/components/game/hooks/syncWorkerPlayerResources.test.ts` passes (62 tests).
- Verified targeted ESLint on the touched files passes cleanly.
- `npm run type-check` is still blocked by the pre-existing Vitest globals issue in `tests/launch/launch-voidstrike.test.ts`.
- Ran the required Playwright smoke script against `http://localhost:3001/game/setup` and inspected `output/web-game-resource-sync/shot-0.png` plus `shot-1.png`; automation still stayed on the setup screen after the `Start Game` click, so live gameplay verification of mining remains blocked by the existing setup-flow automation limitation rather than this resource-sync fix.
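The sync step can be sketched as a small pure function; `StoreLike` stands in for the Zustand slice and the field names are assumptions about the snapshot shape:

```typescript
// Hypothetical sketch of mirroring worker-authoritative resources into the
// main-thread store on every worker render update.
type PlayerResources = { minerals: number; plasma: number; supply: number; maxSupply: number };

interface StoreLike {
  syncPlayerResources(playerId: string, resources: PlayerResources): void;
}

function syncWorkerPlayerResources(
  store: StoreLike,
  localPlayerId: string | null,
  workerResources: Record<string, PlayerResources>,
): boolean {
  // No-op when spectating or when the worker snapshot lacks the local player.
  if (!localPlayerId) return false;
  const resources = workerResources[localPlayerId];
  if (!resources) return false;
  store.syncPlayerResources(localPlayerId, resources);
  return true;
}
```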
- Completed validation for the deterministic-math cutover and legacy fixed-point removal.
- Verified `npm run type-check`, `npm run build`, and `npm test` pass after the cutover. `npm run lint` also passes with warnings only; the warnings are pre-existing and outside the cutover scope.
- Browser validation against the production server on `127.0.0.1:3100` succeeded in the existing Playwright session:
  - `/game/setup` renders correctly
  - `Start Game` transitions into `/game`
  - the loading screen advances into the live HUD
  - the in-game `Idle` selector and command card still respond
  - no new browser errors appeared; console output stayed at the existing `favicon.ico` 404, the `audio/alert/not_enough_plasma.mp3` 404, and the `[GPUTimestampProfiler] Already initialized` warning
- Fresh headless Playwright launches in this environment fail before gameplay with `THREE.WebGLRenderer: Error creating WebGL context.` and the generic Next.js client error screen. That reproduces even without touching game state and is environment/WebGL related, not evidence of a regression from the deterministic-math refactor.
- While validating command flow, confirmed an existing UX/code mismatch in `src/engine/input/handlers/CommandInputHandler.ts`: command-target mode executes `move` on left-click and cancels on right-click, while the UI tooltip still says `Move to location (right-click)`.
- Investigated the report that build-menu clicks do nothing and no scaffold/blueprint appears.
- Headed Playwright repro on the live dev server showed the actual behavior split:
  - wall commands still enter placement mode immediately (`Placing wall_segment...`)
  - unaffordable structure commands stay in the submenu with no visible explanation because the command card only emitted audio alerts on disabled clicks
- Added `getDisabledCommandFeedback()` in the command-card layer so disabled clicks now emit the same audio cue plus a visible `ui:error` reason such as `Not enough minerals`, `Not enough plasma`, `Supply blocked`, or `Requires <building>`.
- Added regression coverage in `tests/components/game/getDisabledCommandFeedback.test.ts` for resource, supply, and requirements feedback selection.
- TODO: If players are still confused about building starts, consider adding a more explicit resource-state hint in the setup/HUD because normal starts currently begin at `50` minerals while most structures cost more.
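A minimal sketch of the feedback selection: the message strings match the ones listed above, but the cost/player field names are assumptions, not the project's real command types:

```typescript
// Hypothetical sketch of choosing the disabled-click reason.
type CommandCost = { minerals: number; plasma: number; supply: number; requires?: string };
type PlayerState = { minerals: number; plasma: number; supply: number; maxSupply: number; built: Set<string> };

function getDisabledCommandFeedback(cost: CommandCost, player: PlayerState): string | null {
  if (cost.requires && !player.built.has(cost.requires)) return `Requires ${cost.requires}`;
  if (player.minerals < cost.minerals) return 'Not enough minerals';
  if (player.plasma < cost.plasma) return 'Not enough plasma';
  if (player.supply + cost.supply > player.maxSupply) return 'Supply blocked';
  return null; // command is affordable; not disabled for these reasons
}
```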
- Follow-up repro showed the real first-click placement bug still existed even with enough resources:
  - after `setresources 500 0`, clicking `Build Basic -> Supply Cache` entered build mode
  - but the preview still initialized at the preview object's default `(0,0)` until a `mousemove` arrived in building context
  - the first placement click therefore used stale preview coordinates, so a quick button-click -> map-click flow could cancel without placing anything
- Fixed `src/engine/input/handlers/BuildingInputHandler.ts` so:
  - `onActivate()` seeds the preview from the current pointer via `InputManager.containerToWorld()`
  - the left-click placement path re-samples `event.worldPosition` before reading `getSnappedPosition()`/`isPlacementValid()`
- Added `tests/engine/input/handlers/buildingInputHandler.test.ts` covering both activation seeding and first-click placement using the actual click world position.
- Follow-up browser repro showed one more race: the HUD switched `isBuilding` immediately, but `WebGPUGameCanvas` only changed `InputManager` context in a later React effect, so a fast menu-click -> terrain-click sequence still hit `GameplayInputHandler`.
- Fixed `src/components/game/CommandCard/hooks/useUnitCommands.ts` so build and wall command actions switch `InputManager` context synchronously when they arm placement mode.
- Live DOM instrumentation then showed terrain clicks were reaching the canvas/container, so the remaining blocker was inside the build handler path.
- Root cause: `useGameInput()` only pushed `placementPreviewRef.current`/`wallPlacementPreviewRef.current` into the handlers during a one-shot effect keyed on the ref objects, but those refs are populated later by `useWebGPURenderer`. The handlers could therefore keep a permanent `null` preview reference.
- Fixed `src/components/game/hooks/useGameInput.ts` to retry preview-ref wiring with `requestAnimationFrame` until the renderer-created preview instances exist, then hand them to the building/landing/wall handlers.
- After that fix, the blueprint started following the cursor and invalid terrain clicks now cancel cleanly instead of doing nothing.
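The retry wiring can be sketched like this; `wirePreviewWhenReady` is a hypothetical name, and the scheduler is injectable for testing (the real hook polls via `requestAnimationFrame`; the default here uses `setTimeout` so the sketch stays runnable outside a browser):

```typescript
// Hypothetical sketch: poll each frame until the renderer-created preview
// exists, then hand it to the input handler exactly once.
type Schedule = (cb: () => void) => void;

function wirePreviewWhenReady<T>(
  getPreview: () => T | null,
  attach: (preview: T) => void,
  schedule: Schedule = (cb) => { setTimeout(cb, 16); },
  maxAttempts = 300,
): void {
  let attempts = 0;
  const tick = () => {
    const preview = getPreview();
    if (preview !== null) {
      attach(preview);
      return;
    }
    attempts += 1;
    if (attempts < maxAttempts) schedule(tick); // keep retrying next frame
  };
  tick();
}
```

Bounding the attempts avoids polling forever if the renderer never produces a preview (for example, after an early unmount).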
- Found a second authoritative-state bug that explains the user's `100 minerals` report: `WorkerGame.spawnInitialEntities()` was hardcoding every spawned player back to `50` minerals / `0` plasma, so the worker could still reject structure builds even when the setup UI or temporary HUD state showed more.
- Fixed `useWorkerBridge`/`WorkerBridge`/`GameWorker` to forward numeric starting-resource values into the worker spawn message and apply them to worker-side player resources at base spawn.
- Added `tests/engine/workers/gameWorker.test.ts` coverage to lock worker spawn resources to the provided starting-resource payload.
- The new worker regression test initially failed because `spawnInitialEntities()` now reaches `sendRenderState()` and Vitest does not define worker `postMessage`; fixed the test harness by stubbing `postMessage` in `tests/engine/workers/gameWorker.test.ts`.
- Verified `npm test -- tests/engine/workers/gameWorker.test.ts tests/engine/input/handlers/buildingInputHandler.test.ts tests/components/game/getDisabledCommandFeedback.test.ts` passes.
- Verified `npm run build` passes after the worker starting-resource fix.
- Live browser repro on `http://127.0.0.1:3101/game/setup` now succeeds end to end:
  - selected `High` starting resources from the setup UI
  - started the game, used the `Idle` button to select a worker, opened `Build Basic`, chose `Supply Cache`
  - clicked valid terrain and confirmed the placement banner cleared, the scaffold appeared, minerals dropped from `500` to `400`, and idle workers dropped from `6` to `5`
  - artifacts: `output/live-build-verify-2/05-supply-cache-mode.png`, `06-before-place-click.png`, `07-after-place-click.png`, plus `result.json`
- The required `develop-web-game` wrapper script still hangs in this environment before producing artifacts, so the successful gameplay verification used a direct Playwright script against the same live page instead.
- Investigated the lobby start regression where the first `Start Game` click flashed `/game` and dumped the player back to `/game/setup`.
- Reproduced the exact route bounce in `next dev`: `/game/setup -> /game -> /game/setup` on the first click, which matched a React Strict Mode mount/unmount/remount probe rather than a form submit.
- Root cause: `src/app/game/page.tsx` called `endGame()` directly from the gameplay page effect cleanup. In development, Strict Mode immediately invokes that cleanup during its remount probe, which cleared `gameStarted` before the route could settle.
- Applied fix:
  - added `src/app/game/gamePageLifecycle.ts` so gameplay teardown is deferred by one microtask and only runs if the page stays unmounted
  - updated `src/app/game/page.tsx` to use that helper instead of unconditionally clearing the session on every effect cleanup
  - added `tests/app/game/gamePageLifecycle.test.ts` to lock the immediate-remount case and the real-unmount teardown case
- Updated `docs/architecture/OVERVIEW.md` to document that `/game` teardown is Strict-Mode-safe.
- Verified `npm test -- tests/app/game/gamePageLifecycle.test.ts` passes.
- Verified `npm run build` passes.
- Verified browser automation against both `http://127.0.0.1:3101/game/setup` (next dev) and `http://127.0.0.1:3102/game/setup` (next start) now keeps the first click on `/game` and advances into the loading screen instead of bouncing back.
- Ran the required Playwright client after the fix and captured `output/web-game-start-fix/shot-0.png`.
- Re-verified the same lobby-start report against the current workspace on 2026-03-15.
- Confirmed the existing Strict-Mode-safe `/game` teardown fix is still present in `src/app/game/page.tsx` and `src/app/game/gamePageLifecycle.ts`; no additional code change was needed.
- Browser checks:
  - manual Playwright probe on `http://127.0.0.1:3101/game/setup` (next dev) transitions to `/game` on the first `Start Game` click
  - manual Playwright probe on `http://127.0.0.1:3001/game/setup` (launch path) also transitions to `/game` on the first click
  - required `develop-web-game` client run against `http://127.0.0.1:3001/game/setup` captured `output/web-game-lobby-start-verify/shot-0.png`, which shows the in-game loading screen after a single click
- Verified `npm test -- tests/app/game/gamePageLifecycle.test.ts` still passes.
- Verified `npm run build` still passes.
- Continued the lobby-start investigation after the manual retest still reported “click start twice.”
- Found a second real root cause beyond the earlier `/game` teardown bounce:
  - the visible `Start Game` button could render before the setup page finished hydrating, so an early click was silently dropped because the client handler was not attached yet
  - regular browser sessions could also stay on a stale cached `/game/setup` shell because `public/sw.js` served navigation HTML with stale-while-revalidate
- Applied fixes:
  - added `src/app/game/setup/getStartGameButtonState.ts` and updated `src/app/game/setup/page.tsx` so `Start Game` stays disabled with a `Preparing lobby...` hint until hydration completes
  - changed `public/sw.js` navigation requests to network-first with cache fallback and bumped the service-worker cache namespace to `v2`, while keeping hashed static shell assets on stale-while-revalidate
  - added regressions in `tests/app/game/setup/getStartGameButtonState.test.ts` and `tests/app/serviceWorkerRouting.test.ts`
- Verified:
  - `npm test -- tests/app/game/gamePageLifecycle.test.ts tests/app/game/setup/getStartGameButtonState.test.ts tests/app/serviceWorkerRouting.test.ts`
  - `npm run build`
  - required `develop-web-game` client against `http://127.0.0.1:3200/game/setup` now captures `output/web-game-lobby-start-hydration-fix/shot-0.png`, which shows the loading screen after a single automated click in the same workflow that previously stayed on setup
  - fresh browser cache inspection on `http://127.0.0.1:3200/game/setup` now reports `voidstrike-assets-v2`, `voidstrike-shell-v2`, and `voidstrike-data-v2`
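A sketch of the hydration-gated button state, assuming boolean `hydrated`/`starting` inputs; the real `getStartGameButtonState.ts` may take different parameters:

```typescript
// Hypothetical sketch: keep Start Game disabled until hydration completes, so
// an enabled-looking button cannot silently drop the first click before the
// client handler is attached.
type StartGameButtonState = { disabled: boolean; hint: string | null };

function getStartGameButtonState(hydrated: boolean, starting: boolean): StartGameButtonState {
  if (!hydrated) return { disabled: true, hint: 'Preparing lobby...' };
  if (starting) return { disabled: true, hint: null }; // avoid double submits
  return { disabled: false, hint: null };
}
```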
- Observation-only repro on 2026-03-15 against a fresh `next dev` instance at `http://127.0.0.1:3001`.
- Setup used for the watch:
  - map: `Scorched Basin`
  - 4 players, FFA, all AI
  - fog of war disabled
  - observed in spectator mode via headed Playwright session
- Observed for just over 10 minutes of in-game time (from roughly `00:27` to `11:18` on the game clock).
- Captured screenshots:
  - initial upper-right sample: `.playwright-cli/page-2026-03-15T17-22-22-429Z.png` (00:55)
  - upper-left sample: `.playwright-cli/page-2026-03-15T17-24-07-806Z.png` (02:40)
  - upper-right sample: `.playwright-cli/page-2026-03-15T17-25-42-498Z.png` (04:15)
  - upper-left ramp-exit sample: `.playwright-cli/page-2026-03-15T17-28-18-587Z.png` (06:51)
  - upper-right combat sample: `.playwright-cli/page-2026-03-15T17-31-03-878Z.png` (09:36)
  - end-of-window sample: `.playwright-cli/page-2026-03-15T17-32-45-830Z.png` (11:18)
- Result from this specific repro:
  - no browser/game freeze occurred during the 10-minute watch
  - both upper spawns appeared able to leave their base area
  - at `06:51`, the upper-left spawn clearly had multiple units moving down the ramp/off the plateau, so this run did not reproduce a general “cannot pathfind out of base” failure on `Scorched Basin`
  - no new browser-console errors appeared beyond the existing startup warnings about definitions initialization
- Current suspicion remains code-side rather than conclusively disproven:
  - AI rally/recovery logic in `src/engine/systems/ai/AITacticsManager.ts` still hard-codes `basePos + (10, 10)` for several army regroup/recovery paths, which could fail on some rotated/elevated spawns even though it did not fail in this `Scorched Basin` run
- Next useful repros:
  - exact map/slot that previously showed “stuck at the edge of the upper base portion”
  - enable AI/pathfinding debug logging before the match if we need to catch repeated regroup commands to one coordinate
- Audited shipped map JSONs on 2026-03-15 against the current generator pipeline.
- Verified the game ships maps from `src/data/maps/json/index.ts`, which currently loads: `battle_arena`, `contested_frontier`, `crystal_caverns`, `scorched_basin`, `test_6p_flat`, `titans_colosseum`, `void_assault`.
- Validation result:
  - `npx --yes tsx scripts/validate-maps.ts` passed for every bundled JSON
  - schema validation, deserialization, terrain dimensions, and “spawn on walkable terrain” checks all passed
- Direct comparison against generator outputs:
  - regenerated the current LLM-script maps into a temp directory and compared shipped JSONs semantically
  - `crystal_caverns`, `void_assault`, `scorched_basin`, and `titans_colosseum` matched the LLM generator on gameplay-relevant top-level fields (spawns, expansions count, watch towers, ramps count, destructibles)
  - `contested_frontier` did not: shipped JSON has `36` ramps while current LLM generation produces `49`
  - regenerated the older script as well; shipped `contested_frontier` matches the older generator, while the other ranked maps do not
- Implication:
  - the current 4-player `Scorched Basin` issue is unlikely to be caused by stale/legacy map JSON, because the shipped `scorched_basin.json` already aligns with the LLM-generated structure and bundled pathfinding connectivity tests pass
- Investigated and fixed the multiplayer lobby-start regression reported on 2026-03-28 where guests could join but then got stuck loading or hit `Connection Lost`, while the host saw the remote player as immediately defeated.
- Root causes confirmed in live two-browser repros:
  - `Join Game` was only reachable from a fresh setup page after enabling public-host mode because the header action was incorrectly gated on `lobbyStatus === 'hosting'`
  - private code-join lobbies tore themselves down as soon as the last `Open` slot was filled because networking enablement only looked for remaining `open` slots or a public lobby flag
  - the `/game/setup` lobby hook closed active peer/signing state during the `/game/setup -> /game` navigation, disconnecting the guest right after `Start Game`
  - guest-side multiplayer store wiring used a synthetic host peer ID instead of the host's real signaling pubkey, which would break signed command verification once the match was running
- Applied fixes:
  - added `src/app/game/setup/lobbySessionPolicy.ts` and switched `useLobbySync` to keep networking alive for connected guest slots
  - updated `src/hooks/useMultiplayer.ts` to preserve active lobby sessions and reconnect callbacks across the setup-to-game transition, defer real teardown to game exit, and map the guest's host peer to the real host pubkey
  - updated `src/store/multiplayerStore.ts` and `src/app/game/page.tsx` so real multiplayer cleanup runs on actual `/game` exit instead of on setup-page unmount
  - updated `src/app/game/setup/page.tsx` so `Join Game` and `Browse Lobbies` are available from a fresh setup page without requiring public-host mode
- Added regressions:
  - `tests/app/game/setup/lobbySessionPolicy.test.ts`
  - `tests/store/multiplayerStore.test.ts`
- Verified:
  - `npm test -- tests/app/game/setup/lobbySessionPolicy.test.ts tests/app/game/setup/getStartGameButtonState.test.ts tests/app/game/gamePageLifecycle.test.ts tests/store/multiplayerStore.test.ts`
  - `npx eslint src/app/game/setup/lobbySessionPolicy.ts src/hooks/useLobbySync.ts src/hooks/useMultiplayer.ts src/store/multiplayerStore.ts src/app/game/page.tsx src/app/game/setup/page.tsx tests/app/game/setup/lobbySessionPolicy.test.ts tests/store/multiplayerStore.test.ts`
  - `npm run build`
  - live headed Playwright two-browser verification in `next dev`:
    - private code join: `output/playwright/multiplayer-verify-private-1774741115339/result.json`
    - public-host code join: `output/playwright/multiplayer-verify-public-1774741179155/result.json`
  - live headed Playwright two-browser verification in production (`next start`):
    - private code join: `output/playwright/multiplayer-verify-private-prod-1774741262397/result.json`
- `npm run type-check` is still blocked by pre-existing unrelated test-harness errors in `tests/engine/input/handlers/buildingInputHandler.test.ts` and `tests/engine/workers/gameWorker.test.ts`; the multiplayer changes themselves build and lint cleanly.
- Carried-over map-audit note: `contested_frontier` is the only ranked shipped map that still appears to be on the older non-LLM layout.
- Additional catalog note:
  - `battle_arena` is correctly hidden from the regular lobby via `isSpecialMode: true`
  - `test_6p_flat` is still bundled as a normal selectable map and is not generated by either map-regeneration script
- Regenerated and swapped the bundled `src/data/maps/json/contested_frontier.json` to the current output from `scripts/regenerate-maps-llm.ts`.
- Post-swap validation:
  - `npx --yes tsx scripts/validate-maps.ts` passed for the full bundled map set
  - `npm test -- tests/engine/pathfinding/recastRampConnectivity.test.ts` passed after the swap
- Important nuance in the regenerated `contested_frontier`:
  - the new JSON adds 13 ramp entries compared with the previously shipped version (49 total vs 36)
  - the validator now reports all six spawn cells as `terrain=ramp elev=220` rather than `ground`, though they remain walkable and the existing connectivity regression still passes
- Follow-up worth doing in a live match:
  - spectate a `Contested Frontier` game after the swap to confirm AI movement around main-base exits and initial worker behavior still looks sane with spawn cells marked as ramp terrain
-
Investigated multiplayer command transmission and determinism under sustained live play on 2026-03-28 after the earlier lobby-start regression fix.
- Additional root cause found during deeper multiplayer tracing: the UI-thread proxy `Game` instance could still register inbound multiplayer handlers even when the worker-owned simulation was already handling them, which produced duplicate verification paths and false `[Game] SECURITY:` rejections for otherwise valid remote commands.
- Applied follow-up multiplayer determinism fixes:
  - added `src/engine/core/multiplayerMessageHandling.ts` plus a `multiplayerMessageHandling` ownership flag in `src/engine/core/GameCore.ts`
  - updated `src/engine/core/Game.ts` and `src/components/game/hooks/useWorkerBridge.ts` so multiplayer inbound message handling is owned by the worker in worker-bridge matches and only by the main thread in direct-mode matches
  - added a browser debug hook in `src/components/game/hooks/useWorkerBridge.ts` exposing `globalThis.__voidstrikeMultiplayerDebug__`, so live browser automation can request authoritative simulation checksums and read multiplayer sync state from the running worker
  - added `scripts/verify-multiplayer-checksum.js` to spin up a real 2-human + 2-AI match, issue commands from both humans, and verify remote action visibility plus checksum parity over time
  - added `tests/engine/core/multiplayerMessageHandling.test.ts` to lock the ownership split between main-thread and worker-managed multiplayer sessions
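The ownership split above can be sketched as a single resolver that both sides consult before registering inbound handlers. The names (`resolveMessageOwner`, `shouldRegisterHandlers`, `SessionMode`) are illustrative, not the real exports of `multiplayerMessageHandling.ts`:

```typescript
// Sketch of the message-handling ownership split. Exactly one side may
// register inbound multiplayer handlers; registering on both is what
// produced duplicate verification paths and false SECURITY rejections.
type MessageOwner = "worker" | "main-thread";

interface SessionMode {
  // true when the simulation runs behind the worker bridge in a web worker
  usesWorkerBridge: boolean;
}

function resolveMessageOwner(mode: SessionMode): MessageOwner {
  return mode.usesWorkerBridge ? "worker" : "main-thread";
}

function shouldRegisterHandlers(side: MessageOwner, mode: SessionMode): boolean {
  return resolveMessageOwner(mode) === side;
}

// Worker-bridge match: the UI-thread proxy Game must stay unregistered.
console.log(shouldRegisterHandlers("main-thread", { usesWorkerBridge: true })); // false
console.log(shouldRegisterHandlers("worker", { usesWorkerBridge: true })); // true
// Direct-mode match: the main thread owns inbound handling instead.
console.log(shouldRegisterHandlers("main-thread", { usesWorkerBridge: false })); // true
```

Centralizing the decision in one resolver means a test can lock the split for both modes, which is what `tests/engine/core/multiplayerMessageHandling.test.ts` does for the real implementation.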
- Sustained production verification passed on `http://127.0.0.1:3308`:
  - scenario: `Scorched Basin`, 2 humans + 2 AI, `High` resources, `Fastest` speed, fog disabled
  - the host created a private lobby, the guest joined by code, and both human players issued live commands (move/hold/stop plus repeated move orders) during the five-minute run
  - command visibility checks showed the same commanded remote unit state on both clients for host-issued and guest-issued actions, confirming network transmission and remote render-state updates
  - authoritative checksum checkpoints matched on both clients throughout the run:
    - initial: tick `20`, checksum `767486150`
    - minute 1: tick `1285`, checksum `282837654`
    - minute 2: tick `2495`, checksum `1162993385`
    - minute 3: tick `3700`, checksum `3528400692`
    - minute 4: tick `4885`, checksum `3938796473`
    - final: tick `6085`, checksum `2918746334`
  - final multiplayer state on both clients remained `connectionStatus: connected` and `desyncState: synced`, with the host still bound to `player1` and the guest still bound to `player2`
  - no `Connection Lost`, no `Game Desynchronized`, and no `[Game] SECURITY:`/`CRITICAL` log entries were emitted during the run
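The parity check behind those checkpoints reduces to comparing host and guest checksums at matching ticks. A minimal sketch (checkpoint shape and `findDesync` are illustrative, not the real `verify-multiplayer-checksum.js` code), using the ticks and checksums recorded in the run above:

```typescript
// Sketch of checksum-parity verification: parity holds when both clients
// report identical checksums at every shared tick; the first mismatch is
// the tick at which the simulations diverged.
interface Checkpoint {
  tick: number;
  checksum: number;
}

function findDesync(host: Checkpoint[], guest: Checkpoint[]): number | null {
  const guestByTick = new Map(guest.map((c) => [c.tick, c.checksum]));
  for (const { tick, checksum } of host) {
    const other = guestByTick.get(tick);
    if (other !== undefined && other !== checksum) return tick;
  }
  return null; // all shared checkpoints matched
}

// Checkpoints from the five-minute run (identical on both clients).
const run: Checkpoint[] = [
  { tick: 20, checksum: 767486150 },
  { tick: 1285, checksum: 282837654 },
  { tick: 2495, checksum: 1162993385 },
  { tick: 3700, checksum: 3528400692 },
  { tick: 4885, checksum: 3938796473 },
  { tick: 6085, checksum: 2918746334 },
];

console.log(findDesync(run, run)); // prints null: checksums matched throughout
```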
- Verification artifacts:
  - result bundle: `output/playwright/multiplayer-checksum-five-minute-1774744461322/`
  - summary JSON: `output/playwright/multiplayer-checksum-five-minute-1774744461322/result.json`
  - logs: `output/playwright/multiplayer-checksum-five-minute-1774744461322/host.log` and `output/playwright/multiplayer-checksum-five-minute-1774744461322/guest.log`
  - captured end-state visuals/text: `host-final.png`, `guest-final.png`, `host-final.txt`, `guest-final.txt`
- Investigated the follow-up report (2026-03-28) that the guest seemed to start a multiplayer match without a loading progress bar.
- Findings from live two-browser production captures:
  - the in-canvas `LoadingScreen` already rendered correctly for the guest once `WebGPUGameCanvas` mounted
  - the real UX gap was earlier, in the `/game` route handoff: both host and guest could briefly show the plain route-level black fallback before the game canvas chunk mounted, making the guest transition look like "no progress bar" when the black frame lasted longer on that machine
- Applied fix:
  - added `src/app/game/GameLoadingFallback.tsx`, a lightweight route-level loading shell with an immediately visible progress bar
  - updated `src/app/game/page.tsx` so both the pre-hydration state and the dynamic-import fallback use `GameLoadingFallback` instead of the blank black screen whenever `gameStarted` is true
  - added `tests/app/game/GameLoadingFallback.test.ts` to lock in the presence of the immediate loading shell and progress-bar markup
  - updated `docs/architecture/OVERVIEW.md` to document that `/game` now shows a lightweight loading shell during the route-to-canvas handoff
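The route-level decision the fix implements can be sketched as a small view selector: whenever a started game is still waiting on hydration or on the dynamically imported canvas chunk, the route renders the loading shell rather than a blank frame. The state shape and function name below are illustrative, not the real `page.tsx` code:

```typescript
// Sketch of the /game route-to-canvas handoff decision. Only the
// gameStarted flag comes from the notes above; the rest is illustrative.
type RouteView = "setup" | "loading-shell" | "game-canvas";

interface GameRouteState {
  gameStarted: boolean; // the gameStarted flag mentioned above
  hydrated: boolean; // pre-hydration renders used to fall back to black
  canvasChunkLoaded: boolean; // dynamic import of the game canvas resolved
}

function pickGameRouteView(state: GameRouteState): RouteView {
  if (!state.gameStarted) return "setup";
  // Previously this branch showed a plain black fallback; now both the
  // pre-hydration state and the dynamic-import fallback show the shell.
  if (!state.hydrated || !state.canvasChunkLoaded) return "loading-shell";
  return "game-canvas";
}

// Guest mid-handoff: game started but the canvas chunk is still downloading,
// so the user sees a progress bar rather than a black frame.
console.log(
  pickGameRouteView({ gameStarted: true, hydrated: true, canvasChunkLoaded: false }),
); // prints "loading-shell"
```

Routing both fallback states through one component is what makes the `host-100ms.png`/`guest-100ms.png` captures show the shell instead of a blank screen.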
- Verified:
  - `npm test -- tests/app/game/GameLoadingFallback.test.ts tests/app/game/setup/lobbySessionPolicy.test.ts tests/app/game/setup/getStartGameButtonState.test.ts tests/app/game/gamePageLifecycle.test.ts tests/store/multiplayerStore.test.ts tests/engine/core/multiplayerMessageHandling.test.ts`
  - `npx eslint src/app/game/page.tsx src/app/game/GameLoadingFallback.tsx src/app/game/setup/lobbySessionPolicy.ts src/hooks/useLobbySync.ts src/hooks/useMultiplayer.ts src/store/multiplayerStore.ts src/components/game/hooks/useWorkerBridge.ts src/engine/core/Game.ts src/engine/core/GameCore.ts src/engine/core/multiplayerMessageHandling.ts tests/app/game/GameLoadingFallback.test.ts tests/app/game/setup/lobbySessionPolicy.test.ts tests/store/multiplayerStore.test.ts tests/engine/core/multiplayerMessageHandling.test.ts`
  - `npm run build`
  - fresh production (`next start`) two-browser loading-handoff capture on `http://127.0.0.1:3309`:
    - artifact bundle: `output/playwright/host-guest-loading-compare-fixed-1774745792096/`
    - both `host-100ms.png` and `guest-100ms.png` now show the new immediate loading shell instead of a blank screen
    - by `200ms`, the regular in-canvas loading screen is already visible and progressing on both clients