Memory leaks in BMP are well-known across the community, but nobody's actually measured them, so we don't know where the growth is or what's causing it.
The most recent visible incident is from the Season 3 Grand Finals between DrSpectred and Bean. After Game 1 (the 1h20m Zodiac match), DrSpectred said on stream he was going to relaunch Balatro between games of the BO5 because of memory pressure (VOD, 3h30m4s). A few seconds later, the commentators noted that the modding framework itself probably leaks too, and mentioned a "double memory lobby" scenario in passing. So whatever's going on isn't isolated to BMP: it spans BMP, Steamodded, and vanilla.
Community workarounds that have already converged: relaunch between games, play windowed, close Balatro between sessions.
Filing this as the umbrella issue for digging in.
The plan
First step is some lightweight instrumentation, since right now there's no measurement of any of this. Specifically: a small batch of `MEM_DEBUG ...` lines emitted through the existing `sendDebugMessage(..., "MULTIPLAYER")` path at per-ante, match-start, and match-end boundaries, capturing both Lua heap (`collectgarbage("count")`) and GPU texture memory (`love.graphics.getStats().texturememory`). Plus a one-shot watchdog if `networkToUi:getCount()` exceeds some threshold. The format is chosen so both `lib/log_parser.lua` and the web parser at balatromp.com/log-parser silently ignore the lines. Opt-out via a new `MP.EXPERIMENTAL.mem_debug` (default true, `.env`-overridable). I DM'd steph about all this before opening.
Then wait ~2 weeks for uploaded logs to accumulate and see what the data says. People upload to the web parser anyway, so it's basically free.
Why both Lua heap and GPU texture memory: at least one suspected vector (below) is a pure GPU leak that `collectgarbage` won't see, so heap-only metrics would read clean while the user's RAM is being eaten.
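To make the shape of this concrete, here's a minimal sketch of what the snapshot could look like. The `MEM_DEBUG` field names, the helper name, the watchdog threshold, and the exact call sites are assumptions, not final; `sendDebugMessage`, `collectgarbage("count")`, `love.graphics.getStats().texturememory`, and `Channel:getCount()` are the real APIs named above.

```lua
-- Sketch only: one MEM_DEBUG line per boundary, plus a one-shot channel watchdog.
local WATCHDOG_THRESHOLD = 500 -- hypothetical queue-depth threshold
local watchdog_fired = false   -- fire at most once per session

local function mem_debug_snapshot(tag)
    if not (MP and MP.EXPERIMENTAL and MP.EXPERIMENTAL.mem_debug) then return end

    local lua_heap_kb = collectgarbage("count")               -- Lua heap, in KB
    local tex_bytes = love.graphics.getStats().texturememory  -- GPU textures, in bytes

    -- Formatted as a single prefixed line so lib/log_parser.lua and the web parser skip it.
    sendDebugMessage(string.format(
        "MEM_DEBUG %s lua_heap_kb=%.0f texture_mem_mb=%.1f",
        tag, lua_heap_kb, tex_bytes / (1024 * 1024)
    ), "MULTIPLAYER")

    -- One-shot watchdog on networkToUi depth (covers the channel item below).
    if not watchdog_fired and networkToUi and networkToUi:getCount() > WATCHDOG_THRESHOLD then
        watchdog_fired = true
        sendDebugMessage(string.format(
            "MEM_DEBUG watchdog networkToUi_count=%d", networkToUi:getCount()
        ), "MULTIPLAYER")
    end
end

-- Hypothetical call sites at the boundaries described above:
--   mem_debug_snapshot("match_start")
--   mem_debug_snapshot("ante_" .. tostring(G.GAME.round_resets.ante))
--   mem_debug_snapshot("match_end")
```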
Where to look
- Vanilla, fullscreen-related:
`love.resize` at `src/main.lua:386` reassigns `G.CANVAS` without releasing the previous canvas. Zero canvas `:release()` calls exist anywhere in vanilla. Every fullscreen toggle, monitor switch, vsync change, or window resize leaks ~8 MB of GPU memory (1920×1080 RGBA32). Matches the community reports that windowed mode avoids the leak. This is GPU memory specifically, so `collectgarbage("count")` won't catch it (hence the texture-memory metric above). A sketch of the fix direction follows this list.
- BMP, network channel:
`networkToUi` is single-consumer at `networking/action_handlers.lua:1214` inside `Game:update(dt)`. If the UI thread stalls, pushes accumulate unbounded. The most plausible BMP-side culprit from reading the code alone; the channel watchdog above is meant to confirm or rule this out.
- BMP, append-only state:
`MP.GAME.enemy.spent_in_shop` and `sells_per_ante` are append-only (`networking/action_handlers.lua:277, 564`), but `MP.reset_game_states()` clears them per match (`core.lua:215-216`). Bounded, so probably not the culprit, but worth confirming with a counter (sketched after this list).
- Open question: "Double memory lobby" — if anyone reading knows what this refers to, please comment.
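For the vanilla canvas item, the fix direction is roughly the following; whether it lands as a Lovely patch in BMP or an upstream bug report is one of the tasks below. This is a sketch written as a pre-hook around vanilla's `love.resize`, and it assumes `G.CANVAS` is not the active render target when the resize fires.

```lua
-- Sketch only: release the old canvas before vanilla's love.resize replaces it.
-- A Lovely patch would splice the same :release() into src/main.lua directly.
local vanilla_resize = love.resize

function love.resize(w, h)
    if G and G.CANVAS then
        G.CANVAS:release() -- free the previous canvas's ~8 MB of GPU memory
    end
    vanilla_resize(w, h)   -- vanilla reassigns G.CANVAS at src/main.lua:386
end
```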
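And for the append-only tables, a counter along these lines would settle it. The field paths are the ones listed above; the helper name and appending its output to the per-ante `MEM_DEBUG` line are assumptions.

```lua
-- Sketch: count entries in the two append-only tables so the per-ante
-- MEM_DEBUG line can confirm they stay bounded across a match.
local function enemy_table_sizes()
    local enemy = MP.GAME and MP.GAME.enemy
    if not enemy then return "spent_in_shop=0 sells_per_ante=0" end
    local function count(t)
        local n = 0
        for _ in pairs(t or {}) do n = n + 1 end
        return n
    end
    return string.format("spent_in_shop=%d sells_per_ante=%d",
        count(enemy.spent_in_shop), count(enemy.sells_per_ante))
end

-- e.g. appended to the per-ante snapshot:
--   sendDebugMessage("MEM_DEBUG ante_4 ... " .. enemy_table_sizes(), "MULTIPLAYER")
```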
Tasks
- `mem_debug` flag (heap + texture memory + channel watchdog)
- `G.CANVAS` vanilla leak (Lovely patch in BMP vs. upstream bug report)
Not in scope: instrumenting `networking-old/` (off by default via `MP.EXPERIMENTAL.use_new_networking`) and building a new logger (the existing `sendDebugMessage` is fine).