PPLNS + Group-Solo pool support#118
Merged
Implements PPLNS (Pay Per Last N Shares) as an additional payout mode alongside existing solo mining. Miners on the PPLNS port receive proportional payouts directly in the coinbase transaction.

Core engine:
- PplnsService with Redis sorted set share window (N = 4 * network_difficulty)
- Payout distribution calculator with dust accumulation (546 sat threshold)
- Periodic float drift correction via full window recalculation
- Payout history logging per block per miner

Database:
- PplnsBalanceEntity for pending/paid balance tracking
- PplnsPayoutHistoryEntity for per-block payout audit trail
- Migration for both tables with indexes

API endpoints:
- GET /pplns/status — window stats, miner count, enabled state
- GET /pplns/distribution — current share distribution across all miners
- GET /pplns/:address — pending balance, total paid, window share
- GET /pplns/:address/history — payout history per block

Integration:
- StratumPortConfig: optional payoutMode field (solo/pplns)
- ProtocolDetectorService: PPLNS port via PPLNS_PORT env variable
- StratumV1Client + StratumV2Client: PPLNS share recording, shared coinbase construction, and block-found payout processing
- Separate PPLNS_FEE_ADDRESS and PPLNS_FEE_PERCENT configuration

Solo mining on existing ports remains completely unchanged.
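The window rule (N = 4 * network_difficulty, oldest shares trimmed once the window overflows) can be sketched as a pure function. This is an illustrative model only — the real PplnsService keeps the window in a Redis sorted set, and treating cumulative share difficulty as the window's accounting unit is an assumption, not a confirmed detail:

```typescript
// Hypothetical sketch of the PPLNS window: names (Share, trimWindow) are
// illustrative and do not mirror the pool's actual API.

interface Share {
  address: string;
  difficulty: number; // a share's weight in the window (assumption)
}

function windowSize(networkDifficulty: number): number {
  return 4 * networkDifficulty;
}

// Keep the newest shares whose cumulative difficulty fits inside N;
// the real service performs the equivalent trim on a Redis sorted set.
function trimWindow(shares: Share[], networkDifficulty: number): Share[] {
  const n = windowSize(networkDifficulty);
  let total = 0;
  const kept: Share[] = [];
  // Walk from newest (end of array) to oldest.
  for (let i = shares.length - 1; i >= 0; i--) {
    if (total + shares[i].difficulty > n) break;
    total += shares[i].difficulty;
    kept.unshift(shares[i]);
  }
  return kept;
}
```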
Addresses with pending balance >= dust but no current window shares were included in the coinbase (via getPayoutDistribution) but never processed in onBlockFound, causing double-payment in the next block. Now onBlockFound processes all pending-only addresses after the main share loop and marks their pending as paid.
24 tests covering:
- Share recording and Redis sorted set storage
- Window trimming when exceeding N = 4 * network_difficulty
- Payout distribution: proportional split, fee calculation, remainder
- Dust filtering: sub-dust miners excluded from coinbase
- Pending balance: inclusion in payout calc, pending-only addresses
- Block found: paid marking, sub-dust accumulation, multi-block accumulation
- Pending-only address processing (double-payment prevention)
- Window stats and address status queries
- Current distribution sorted output
- Disabled service behavior
PplnsService now subscribes to StratumV1JobsService.newMiningJob$ to automatically update networkDifficulty on every new block template. Removes redundant per-client setNetworkDifficulty call from StratumV1Client.sendNewMiningJob. Window size is now correct from the first job, not only after the first PPLNS client builds a job.
Only the primary PM2 instance (NODE_APP_INSTANCE=0 or standalone) processes onBlockFound payouts. Non-primary instances skip payout processing to prevent double-crediting pending balances. Share recording (recordShare) remains on all instances — Redis sorted set operations are atomic and safe for concurrent writes.
5 end-to-end tests simulating the complete PPLNS flow:
- Shares → distribution → real MiningJob coinbase → output verification
- Block found payout processing with balance updates
- Sub-dust accumulation across 20 blocks until payout
- Large miner count coinbase weight validation
- Consistency check: coinbase outputs match payout history

Uses real bitcoinjs-lib Transaction objects and valid regtest addresses.
Distribution is cached in memory and reused for all clients until invalidated by a new share (recordShare), block found (onBlockFound), reward change, or 30-second TTL expiry. Reduces Redis ZRANGE + DB reads from once-per-client-per-job to once-per-invalidation-event.
Prevents invalid blocks when many PPLNS miners exceed the coinbase weight budget (bitcoin.conf blockreservedweight).
- Calculate max miner outputs from the weight budget (default 50000 WU)
- When miners exceed the limit, keep the largest outputs and trim the smallest to pending
- Configurable via PPLNS_COINBASE_WEIGHT_BUDGET env variable
- 5 new tests: trimming, largest-first ordering, budget limits
- Updated example env with blockreservedweight documentation
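The keep-largest / trim-smallest rule can be sketched as follows. The per-output weight constant is an assumption for illustration (roughly a P2WPKH output); the real budget math lives in the coinbase-distribution code:

```typescript
// Hypothetical trimming sketch: the weight budget caps the output count,
// and when miners exceed it the largest payouts stay in the coinbase
// while the smallest are moved to pending.

const OUTPUT_WEIGHT = 124; // ~31 vbytes * 4 WU per P2WPKH output (assumption)

interface Payout { address: string; sats: number; }

function trimToWeightBudget(
  payouts: Payout[],
  budgetWeight: number,
): { kept: Payout[]; trimmedToPending: Payout[] } {
  const maxOutputs = Math.floor(budgetWeight / OUTPUT_WEIGHT);
  const sorted = [...payouts].sort((a, b) => b.sats - a.sats); // largest first
  return {
    kept: sorted.slice(0, maxOutputs),
    trimmedToPending: sorted.slice(maxOutputs), // accumulate for a later block
  };
}
```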
onBlockFound now uses the distribution snapshot from getPayoutDistribution instead of recalculating from the current window. This ensures bookkeeping matches exactly what was put on-chain in the coinbase.
- Snapshot saved when getPayoutDistribution builds a coinbase
- onBlockFound consumes the snapshot and processes all entries
- Miners not in the snapshot (sub-dust/trimmed) get their share added to pending
- Falls back to window recalculation if no snapshot is available
Race condition: multiple miners finding a block simultaneously would trigger concurrent onBlockFound calls. Added an in-memory lock that skips duplicate processing.

Window design: added a comment explaining why the PPLNS window does NOT reset after a block — this is correct PPLNS behavior (not PROP). Shares within the window intentionally contribute to multiple blocks to protect against pool-hopping.
Master removed PM2 cluster mode (commit 3a75d3e) in favor of single-process Node. The PPLNS PM2 guard (the isPrimaryInstance check in onBlockFound) is no longer needed — blockFoundInProgress still prevents concurrent processing within a single process. Also adds the pplnsService argument expectation to the StratumV2Service spec that was missing after the rebase.
Lets friends mine together on the pool with their rewards split among
the group's members (PROP-style: window resets on each found block).
Group membership is looked up per-address on any stratum port, so
miners don't need to reconfigure — the admin just adds their BTC
address to the group.
## Membership & auth
- `GroupService` manages groups and members, with admin-token auth
(pool-generated at creation, SHA-256 hashed, shown to creator exactly
once). Members never need to sign anything; the creator acts on their
behalf via the token.
- One address = max one group (unique constraint).
- Creator can kick members / transfer role / dissolve; non-creators can
self-leave without auth.
- Groups activate at ≥ 2 members; below that they're rejected from
recording shares.
- Creator leaving auto-transfers the creator role to the oldest
remaining member (admin token is rotated); group dissolves if alone.
## Payout engine
- `GroupSoloService` maintains per-group Redis keys
(`groupsolo:{id}:shares|counter|total`). On share accept, the miner's
address is looked up and the share lands in the matching group's
bucket.
- `getPayoutDistribution` builds a fee + members distribution
(same dust/weight-budget/pending rules as the PPLNS engine) and saves
a snapshot. `onBlockFound` uses that snapshot for bookkeeping so
on-chain payouts and DB history match exactly, then clears the group's
Redis keys (PROP reset).
## Stratum integration
- Group membership is detected at authorize (SV1) / channel open (SV2)
and stored per-session as `groupSoloGroupId`. When set, group-solo
takes precedence over the port's `payoutMode` for coinbase build,
share record, and block-found paths.
- No new port required — any port the miner uses routes them into
group-solo if their address is in an active group.
## API (`/api/pplns/groups`)
- `POST /`, `GET /`, `GET /:id`, `GET /:id/distribution`,
`GET /:id/history` — public.
- `POST /:id/members`, `DELETE /:id/members/:address`,
`POST /:id/transfer`, `DELETE /:id` — require `X-Admin-Token`.
- `DELETE /:id/members/:address/self` — unauth, non-creators only.
## Tests
- `group.service.spec.ts` — 16 unit tests (CRUD, token verify/rotate,
activation threshold, auto-transfer, cache invalidation).
- `group-solo.service.spec.ts` — 7 unit tests (share routing, PROP
distribution, round reset, sub-dust pending).
- `group-solo-regtest.spec.ts` — end-to-end regtest integration:
records shares, builds a real coinbase from the service's
distribution, submits the block to Bitcoin Core, verifies round
reset. Self-bootstraps to height ≥ 17 if needed.
Covers the PROP + pending interaction: a sub-dust miner gets credited to pendingSats in round 1, then in round 2 their new share plus the accumulated pending crosses dust and they appear in the coinbase, with pending cleared and the amount moved to totalPaidSats.
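The PROP + pending interaction described above can be sketched as round-settlement math. Names are illustrative; the real bookkeeping lives in the group-solo balance entities:

```typescript
// Hypothetical sketch: a round payout below the 546-sat dust threshold
// accumulates in pendingSats; once share + pending crosses dust, the
// whole amount appears in the coinbase and pending is cleared.

const DUST_SATS = 546;

interface Balance { pendingSats: number; totalPaidSats: number; }

function settleRound(
  balance: Balance,
  roundSats: number,
): { paidOnChain: number; balance: Balance } {
  const total = balance.pendingSats + roundSats;
  if (total < DUST_SATS) {
    // Sub-dust: accumulate, nothing goes in the coinbase this round.
    return { paidOnChain: 0, balance: { pendingSats: total, totalPaidSats: balance.totalPaidSats } };
  }
  // Crossed dust: pay everything, clear pending, book as paid.
  return { paidOnChain: total, balance: { pendingSats: 0, totalPaidSats: balance.totalPaidSats + total } };
}
```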
Discovered by running two Bitaxes against a regtest group: when a
miner submits shares AFTER the winning job's snapshot was built but
BEFORE the block is found, onBlockFound was crediting them to pending
in addition to paying the snapshot distribution via the coinbase. Net
effect: more than 100% of block reward being distributed (snapshot
claims the full miner cut, late shares get extra pending on top).
Fix distinguishes two "not in coinbase" classes:
- sub-dust / weight-trimmed: was in the Redis window at
snapshot-build time → credit to pending so it accumulates for
future coinbase inclusion (same as PPLNS)
- late arriver: share landed in Redis after the snapshot was built
→ under PROP rules this work is lost for the current block
(coinbase is immutable; crediting would double-count)
The snapshot now records `consideredAddresses` at build time so
onBlockFound can tell the two cases apart. Also adds:
- explicit UUID assignment on group create (PG column has no
gen_random_uuid() default, TypeORM's @PrimaryGeneratedColumn('uuid')
wasn't generating one in practice — caused a not-null violation
when creating groups via the REST API)
New unit test: "late-arriving shares (post-snapshot) are logged but
NOT credited to pending — prevents double-counting" — locks in the
PROP behavior and asserts that total coinbase payout never exceeds
the block reward.
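The two "not in coinbase" classes the fix distinguishes can be sketched with a set-membership check — an illustrative model of the `consideredAddresses` idea, not the service's actual code:

```typescript
// Hypothetical classification: consideredAddresses is the set of
// addresses that were in the Redis window when the snapshot was built.

type MissReason = "credit-pending" | "late-drop";

function classifyMissingMiner(
  address: string,
  consideredAddresses: Set<string>,
): MissReason {
  // In the window at snapshot-build time but excluded from the coinbase
  // (sub-dust or weight-trimmed) → credit to pending for a future block.
  if (consideredAddresses.has(address)) return "credit-pending";
  // Share landed after the snapshot: the coinbase is immutable and the
  // snapshot already claims the full miner cut, so crediting this share
  // would distribute more than 100% of the block reward.
  return "late-drop";
}
```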
GET /api/pplns/groups/:id now includes:
- totalHashrate: sum over all members
- members[].hashrate: per-member live hashrate
Plus a new dedicated endpoint for lightweight polling:
GET /api/pplns/groups/:id/hashrate
→ { groupId, totalHashrate, members: [{ address, hashrate }] }
Uses ClientService.getByAddress (same source /api/client/:address
uses for totalHashrate), not a Redis time-series lookup — so group
hashrate matches what the user sees on their per-address dashboard.
Drop-in compatible with /api/info/chart: same 10-minute slots, same
share-based hashrate formula (shares * DIFFICULTY_1 / 600), same
{label, data} output shape — just filtered to current PPLNS
participants so dashboards can show group-specific charts without
fetching per-address series.
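The share-based formula mentioned above (shares * DIFFICULTY_1 / 600) can be written out directly. DIFFICULTY_1 = 2^32 — the expected number of hashes per difficulty-1 share — is an assumption about this codebase's constant, not confirmed by the source:

```typescript
// Hypothetical sketch of the 10-minute-slot hashrate formula.

const DIFFICULTY_1 = 2 ** 32;  // assumed hashes per difficulty-1 share
const SLOT_SECONDS = 600;      // 10-minute chart slots

function slotHashrate(sumShareDifficulty: number): number {
  // Each unit of share difficulty represents DIFFICULTY_1 expected hashes;
  // divide by the slot length to get hashes per second.
  return (sumShareDifficulty * DIFFICULTY_1) / SLOT_SECONDS;
}
```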
Implementation sums ClientStatisticsService.getChartDataForAddress
across every address currently in the PPLNS window and merges points
with the same slot label. Sparse slots (where only some addresses
have data) are preserved without phantom zeros.
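The merge-by-label rule can be sketched as follows — an illustrative standalone version of the aggregation, assuming the `{label, data}` point shape described above:

```typescript
// Hypothetical merge: points with the same 10-minute slot label are
// summed; a slot appears if at least one address contributed to it
// (no phantom zeros for addresses without data in that slot).

interface ChartPoint { label: string; data: number; }

function mergeCharts(perAddress: ChartPoint[][]): ChartPoint[] {
  const byLabel = new Map<string, number>();
  for (const series of perAddress) {
    for (const p of series) {
      byLabel.set(p.label, (byLabel.get(p.label) ?? 0) + p.data);
    }
  }
  return Array.from(byLabel.entries())
    .sort(([a], [b]) => a.localeCompare(b))
    .map(([label, data]) => ({ label, data }));
}
```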
Supports range = 1d | 3d | 7d (same as per-address chart).
Adds 5 unit tests covering:
- Aggregation math (same-label sums)
- Sparse-slot handling
- Empty window returns []
- Range passthrough
- Unknown-range fallback to 1d
Also adds docs/pplns-api.md documenting all PPLNS endpoints (status,
distribution, chart, :address, :address/history).
Mirrors /api/info (blockData/userAgents/...) but restricted to addresses currently contributing to the PPLNS window. Lets dashboards show "who's mining on the PPLNS pool right now" without having to intersect /api/info.userAgents with /api/pplns/distribution client-side.

Response combines /api/pplns/status (window stats + enabled flag) with a userAgents breakdown — the same aggregation query as /api/info.userAgents but filtered to PPLNS addresses. Also adds ClientService.getUserAgentsForAddresses(addresses), which performs the filtered aggregation in a single SQL query.

2 new unit tests cover the populated and empty-window paths.
Returns the mining mode the given BTC address is currently in:
- group-solo: address is a member of an active group
- pplns: address has shares in the PPLNS window
- solo: otherwise (default for any fresh address)

Used by the UI to pick the right dashboard layout per user without having to cross-reference /pplns/distribution and /pplns/groups client-side. 5 unit tests cover all mode transitions including the inactive-group edge case (a group with only 1 member falls through to pplns/solo).
Previously returned a solo-shaped coinbase (miner at 100%, or fee+miner with dev-fee config) regardless of the miner's actual mode. For PPLNS and group-solo users that's misleading — the real coinbase is a multi-output distribution across pool/group members, not a solo payout.

Now picks payoutInformation to match the real coinbase the pool would produce:
- solo: unchanged (miner address, optionally fee + miner)
- pplns: PplnsService.getPayoutDistribution(coinbasevalue)
- group-solo: GroupSoloService.getPayoutDistribution(groupId, coinbasevalue)

Response now also includes `mode` and `payoutInformation` fields so the UI can show the right coinbase shape without guessing, plus `groupId` when mode is group-solo. Falls back to solo if the PPLNS/group distribution is empty (e.g. a fresh pool with no shares yet) — matches StratumV1Client behavior on an empty PPLNS port.

## Refactor
Extracted mode detection into a dedicated `MiningModeService`. Both PplnsController.getMiningMode (REST endpoint) and the block-template endpoint in AppController now delegate to it instead of duplicating the group-then-pplns lookup logic. Moves the detailed mode-detection test cases to mining-mode.service.spec.ts where the logic now lives; PplnsController's spec is simplified to a delegation check.

## Impact on existing specs
AppController's constructor gained three new deps (MiningModeService, PplnsService, GroupSoloService). All four spec files that instantiate AppController for HTTP-integration tests are updated to provide the new deps as simple mocks — no existing assertions change.
Mirrors /api/pplns/chart but restricted to a specific payout group's
members. Returns the same { label, data } 10-minute-slot format as
/api/info/chart and /api/pplns/chart so a future group-dashboard
chart widget can drop in without adapter code.
Implementation: sums ClientStatisticsService.getChartDataForAddress
across every member and merges points by label. Sparse per-member
slots are preserved without phantom zeros — a slot only appears if
at least one member contributed to it.
Supports range = 1d | 3d | 7d (same as the per-address chart).
Returns 404 for missing or dissolved groups, empty array for
member-less groups.
7 unit tests cover sum math, sparse handling, empty group,
non-existent/dissolved group, range passthrough, unknown-range
fallback to 1d.
Exposes the existing PPLNS_FEE_ADDRESS / PPLNS_FEE_PERCENT config (shared
by PPLNS and group-solo payout paths) as a public endpoint so the
groups-landing UI can render the current pool fee dynamically instead
of hard-coding a percentage that drifts when backend config changes.
Response: { feePercent, feeAddress, coinbaseWeightBudget }
…-group

MiningModeService.getMode only returns 'group-solo' for active groups (≥ 2 members), so the creator of a freshly-created 1-member group would see "No payout group" in the UI until a second member joined. This endpoint returns group details for any address that's a member of a non-dissolved group, active or not — letting the UI open the creator's dashboard before the group is mining-eligible. Mining-side behavior is unchanged; the new endpoint is UI-facing only.
Mirrors the existing /api/pplns/groups/:id/chart pattern — same
aggregation the UI was doing client-side via forkJoin, now done once
on the pool. Saves N-1 HTTP calls per chart refresh for a group with
N members.
GET /api/pplns/groups/:id/accepted?range=1d|3d|7d
→ { slotData: [{ time, counts: { accepted } }, ...] }
GET /api/pplns/groups/:id/rejected?range=1d|3d|7d
→ { slotData: [{ time, counts: { JobNotFound, DuplicateShare,
LowDifficultyShare } }, ...] }
Shapes match the per-address /api/client/:address/{accepted,rejected}
endpoints so the UI can plug the new source in without any chart
transformation changes.
POST /api/pplns/groups/:id/members/batch
Body: { addresses: string[] }
Returns: { added: [...], skipped: [{ address, reason }] }
One token verification, one address-cache rebuild per batch — instead
of N per call. `skipped` carries benign per-address failures
(already-member / address-in-group / invalid-address / duplicate-in-batch)
so the UI can surface them in a summary toast without the client having
to reconstruct them from thrown errors.
Token/auth failures still throw as before; they're not a per-address
issue and should abort the batch.
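The added/skipped semantics can be sketched as a single pass that collects benign per-address failures instead of throwing. The reason strings mirror the ones listed above; the membership sets and `isValidAddress` stub are illustrative stand-ins for GroupService state:

```typescript
// Hypothetical batch-add sketch: benign failures land in `skipped`,
// successes in `added`; auth failures would throw before reaching here.

interface BatchResult {
  added: string[];
  skipped: { address: string; reason: string }[];
}

function batchAdd(
  addresses: string[],
  existingMembers: Set<string>,       // already in this group
  inAnyGroup: Set<string>,            // bound to some group (unique constraint)
  isValidAddress: (a: string) => boolean,
): BatchResult {
  const added: string[] = [];
  const skipped: BatchResult["skipped"] = [];
  const seen = new Set<string>();
  for (const address of addresses) {
    if (seen.has(address)) { skipped.push({ address, reason: "duplicate-in-batch" }); continue; }
    seen.add(address);
    if (!isValidAddress(address)) { skipped.push({ address, reason: "invalid-address" }); continue; }
    if (existingMembers.has(address)) { skipped.push({ address, reason: "already-member" }); continue; }
    if (inAnyGroup.has(address)) { skipped.push({ address, reason: "address-in-group" }); continue; }
    added.push(address);
  }
  return { added, skipped };
}
```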
Cached groupSoloGroupId in Stratum V1/V2 clients was set once at authorize and never refreshed. A miner whose address was added to a payout group after connecting had its shares routed as solo for the rest of the session — shares never landed in the group's Redis round, and the coinbase build used the wrong payout path, producing inconsistent coinbases across group members.

- Remove the groupSoloGroupId field and the detectGroupSoloMembership helper. Replace with an activeGroupId() method that queries GroupService live at every share record, job build, and block-found dispatch. The in-memory address cache in GroupService is O(1) Map.get and is already refreshed synchronously on every addMember/removeMember call, so the live lookup is always current.
- GroupSoloService: add per-address rejected counters (rejected-diff and rejected-count Redis hashes), cleared on round reset. recordReject() aggregates per address; getRoundStats() returns totalRejectedDifficulty, rejectedShareCount, and per-address rejected fields alongside accepted.
- Wire reject dispatch from all Stratum reject paths: V1 gets a dispatchGroupReject helper called after each of the 4 reject blocks (duplicate-share, two job-not-found paths, low-difficulty); V2 adds the dispatch once to recordRejectedShare, which covers all reject sites.
- Test Redis mock extended with hash ops (hIncrByFloat, hIncrBy, hGetAll); 3 new unit tests cover recordReject guards, per-address aggregation in getRoundStats, and round-reset clearing of the rejected counters.
K2 (privacy) email controller: mask /api/email/by-address response.
Helper extracted to src/utils/email-mask.utils.ts and reused
from pplns-group-invitation.service. /app/:address is a
public URL — owner already knows their full email.
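The masking rule (`a***@domain`) can be sketched as follows. A standalone sketch assuming the helper's behavior — the actual implementation lives in src/utils/email-mask.utils.ts and may differ in edge-case handling:

```typescript
// Hypothetical mask: keep the first character of the local part,
// replace the rest with ***, keep the domain intact.

function maskEmail(email: string): string {
  const at = email.indexOf("@");
  if (at <= 0) return "***"; // malformed or empty local part: mask everything
  return `${email[0]}***${email.slice(at)}`;
}
```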
K3 (money) coinbase-distribution: feeSats now only deducted from
miner reward when the fee output is actually emitted.
Previously feeSats was always subtracted; if feeAddress
was unset (or the fee was below dust) the sats were
silently forfeited in the coinbase under-claim. +2
regression tests.
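The K3 fix can be sketched as a conditional fee split — an illustrative model with assumed names, not the coinbase-distribution module's actual signature:

```typescript
// Hypothetical sketch: feeSats reduces the miner reward only when the
// fee output is actually emitted (fee address set AND fee >= dust);
// otherwise the full reward stays with the miners instead of being
// silently forfeited as a coinbase under-claim.

const DUST_SATS = 546;

function splitFee(
  rewardSats: number,
  feeSats: number,
  feeAddress: string | null,
): { feeOutputSats: number; rewardForMiners: number } {
  const emitFee = feeAddress !== null && feeSats >= DUST_SATS;
  return {
    feeOutputSats: emitFee ? feeSats : 0,
    rewardForMiners: emitFee ? rewardSats - feeSats : rewardSats,
  };
}
```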
K4 (money) SV2 buildPayoutInformationAsync: empty PPLNS / group-solo
window now logs and returns null instead of falling
through to a sync solo coinbase. The 3 external call
sites no longer use ?? buildPayoutInformation() fallback.
Dead PPLNS / group guards in sync variant removed (the
method is now solo-only by contract). User chose option
A (warm-up reject). SV1 already correct, gained a parallel
log line. Audit's claim that SV1 had the same fall-through
was wrong — only SV2 was affected.
K5 (money) SV2 handleSubmitSolution (TDP path): now dispatches
pplnsService.onBlockFound / groupSoloService.onBlockFound
after SUCCESS! TemplateDistributionService.handleSubmitSolution
return type carries coinbasevalue so the SV2 client can
route without a second template lookup. Mirrors the
extended-channel routing block. +1 regression test.
M1 (ops) PoolModeHashrateService.incrementAccepted: replaced raw
Postgres-only ON CONFLICT SQL with TypeORM's
database-agnostic Repository.increment() / .insert()
flow. Race-retry on concurrent cold-slot writers via
insert-throws → re-increment. Fixes silent stat-loss on
SQLite dev environments. Tests rewritten + new race
regression.
M2 (ops) typeorm-cli.config.ts: add 5 missing entities
(AddressEmailEntity, EmailVerificationEntity,
PoolModeHashrateEntity, PplnsGroupInvitationEntity,
WorkerSharesEntity). migration:generate now sees them.
Replaces the original K1 plan (BIP-322 + Password + Email atomic registration). Re-evaluation showed the K1 attack EV is essentially zero — the PROP-fairness theorem means a malicious group-add of a victim yields the same expected reward as the victim mining alone, only with lower variance for the attacker (= the same benefit as joining any pool). Full BIP-322 was over-engineering for a near-zero-motivation attack.

Minimal fix:
* AddressEmailService.register now refuses to re-bind a verified address with a different email (`409 already-bound`), and fires a notification to the bound email so the legitimate owner sees the attempt. Same-email re-register is idempotent.
* AddressEmailService.verify gains a defense-in-depth check: a stale pending token (e.g. from before this fix was deployed) cannot overwrite a verified binding with a different email — it's refused with the same `already-bound` code.
* EmailService.sendBindingChangeAttempt: themed inline-HTML + plaintext alert sent to the currently-bound email when a rebind is refused. The attempted email is masked (`a***@domain`) before being shown, to avoid leaking the attacker's identity.
* EmailController maps `already-bound` to HTTP 409.
* Notification dispatch is fire-and-forget — its SMTP success or failure cannot change the refusal outcome, so the response shape doesn't depend on email deliverability (otherwise the response would leak binding presence to probing).

Residual risk: a user who has not yet registered an email can be front-run by an attacker who registers first. Accepted because (a) the attack EV is ~0 and (b) any user concerned about it can register their own email at any time. Self-leave from a group remains a known UX limitation — current advice is to repoint the miner to a different address or contact the group admin. Documented as such.

Tests: 8 new specs (FCFS-lock, same-email idempotent, case-insensitive match, notification-decoupled-from-refusal, pending-row-doesn't-lock, defense-in-depth in verify).
11 existing invitation tests still green.
…claude/, add launch announcement
* Remove POSTGRESQL_MIGRATION_SUMMARY.md (the PG migration is long-completed; the file was a transient task summary, not reference material).
* Ignore .claude/ — Claude Code local config dir, user-private state, not for sharing.
* Track docs/announcement-pplns-group-solo.md — three language variants (Swiss German, standard German, English), each in a short and a long version, for the PPLNS + Group-Solo launch communication.
Adds five nullable columns to pplns_group:
- roundResetIntervalDays INT NULL
- roundResetHourLocal INT NULL (0-23 in the group's TZ)
- roundResetTimezone VARCHAR(64) NULL (IANA, e.g. 'Europe/Berlin')
- lastRoundResetAt TIMESTAMPTZ NULL
- finderBonusSats BIGINT NULL

All NULL = feature off; existing groups behave unchanged. The admin opts in via PATCH /pplns/groups/:id/settings (Stage 5). Subsequent stages add the cron service, the scheduled-reset wipe, the finder-bonus coinbase math, and the admin API.
Adds optional `finderBonusSats` + `finderAddress` to
CoinbaseDistributionInput. Bonus is paid as a SEPARATE coinbase
output to the finder, on top of the normal proportional share.
Pipeline:
1. Bonus subtracted from rewardForMiners BEFORE the proportional
split, so the rest of the group is split on (reward - fee - bonus)
2. Bonus capped at 95 % of post-fee miner-cut at runtime —
defends against post-halving overshoot if admin's configured
value is too large for current subsidy
3. If bonus < minPayout, suppressed entirely (rewardForMiners
restored, no output emitted)
4. Bonus output counts against the coinbase weight budget, so the
trim ceiling is correct when the budget is tight
5. Bonus output is its own entry — visible/auditable on-chain even
when finder is also in addressShares
Disabled when `finderBonusSats` or `finderAddress` is unset/0 — all
existing PPLNS / group-solo callers continue to receive identical
distributions.
Tests: 7 new specs (basic emission, runtime cap, sub-dust suppression,
0/undefined/negative no-op, missing-address no-op, lone-finder, tight
weight budget). 37/37 coinbase-distribution tests green.
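Steps 1-3 of the pipeline above can be sketched as one function — an illustrative model of the cap/suppress/carve-out order, not the exact CoinbaseDistributionInput implementation:

```typescript
// Hypothetical sketch: cap the bonus at 95% of the post-fee miner cut,
// suppress it entirely below minPayout, otherwise carve it out of
// rewardForMiners BEFORE the proportional split.

function applyFinderBonus(
  rewardForMiners: number,  // post-fee miner cut, in sats
  finderBonusSats: number,
  minPayout: number,        // e.g. the 546-sat dust threshold
): { bonusOutputSats: number; remainingForSplit: number } {
  // Runtime cap: defends against post-halving overshoot of a stale config.
  const capped = Math.min(finderBonusSats, Math.floor(rewardForMiners * 0.95));
  if (capped < minPayout) {
    // Suppressed entirely: the full miner cut goes to the proportional split.
    return { bonusOutputSats: 0, remainingForSplit: rewardForMiners };
  }
  return { bonusOutputSats: capped, remainingForSplit: rewardForMiners - capped };
}
```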
… B wipe)
Adds GroupSoloService.scheduledRoundReset(groupId) — the timer-driven
counterpart to onBlockFound. Variant B semantics chosen by the user
("everything gets reset"): full wipe of round state AND every member's
pending balance.
What gets wiped:
- Redis live shares, counter, total, rejectedShares
- Redis lastShareAt (members start with fresh inactivity timers)
- Redis distribution snapshot
- All pplns_group_balance rows for the group (positive AND negative
pending — symmetric, ledger-neutral within the group)
What survives:
- pplns_group_block_history (audit trail)
- pplns_group row + member roster
Anti-double-fire guards:
- Skipped if a block-found is in flight on the same group
- Skipped if a reset (block-found OR scheduled) ran in the last 60 s
Block-found behaviour is unchanged. Stage 3 will wire the cron service
that drives this method per the group's configured interval / hour /
TZ; until then the method is reachable only via direct unit-test
calls or via Stage 5's admin API.
Constructor gained an injected `PplnsGroupEntity` repo (for the
lastRoundResetAt write + dissolved-check). 5 spec files updated
to pass a mock groupRepo. New tests: full-wipe happy path,
block-found-lock guard, recent-reset debounce, dissolved-group
skip. 45/45 group + onblockfound unit tests green.
- Each regtest spec's beforeAll now lists wallets and unloads any non-default one. Fixes "Multiple wallets are loaded" RPC errors that broke the v1-solo / v2-standard / v2-extended specs whenever the mempool spec had run earlier and left mempool_test_wallet attached.
- group-solo-regtest-mempool: afterAll unloads its test wallet so subsequent specs see a single-wallet node. ensureChainHeight also tops up the wallet's spendable balance (mines 110 blocks to it when the balance is empty), so the spec works even when the chain is already past height 120 from earlier specs.
- Result: full suite green at 698/698 in any spec order.
…resets

Introduces the time-based variant of the Group-Solo "wipe everything" round reset. Each group with a non-NULL roundResetIntervalDays gets its own cron job, registered via @nestjs/schedule's SchedulerRegistry, that fires daily at roundResetHourLocal in roundResetTimezone.

Why daily-fire + elapsed-check rather than an "every N days" cron: cron natively supports "every Sunday" / "1st of the month" but not arbitrary "every N days from creation". Firing daily and checking elapsed-since-last-reset is the simplest way to support any integer interval uniformly.

Lifecycle:
- onApplicationBootstrap: arms cron jobs for every active group with a configured interval. One bad group (e.g. invalid TZ) no longer prevents the others from being scheduled.
- applyConfig(group): idempotent — tears down the existing job before arming the new one, so settings changes pick up cleanly.
- unschedule(groupId): no-op when no job exists.
- fireIfDue (cron callback): re-loads the group fresh on every tick so settings changes take effect without re-arming. Self-unschedules when the group has been dissolved or had its interval cleared between firings (avoids hot-looping into a deleted group).

DST tolerance: a 12h skew off the configured interval is allowed before firing. Covers DST transitions where the daily fire time lands 23h or 25h after the previous one.

17 unit tests cover: validation gates (dissolved/interval/hour/TZ), idempotent re-apply, unschedule no-op + actual delete, never-reset, too-early skip, interval-elapsed fire, DST-tolerance edges (6.6d fires, 6.4d skips), dissolved + cleared-interval self-cleanup, bootstrap loads all groups, bootstrap survives a single bad group.

Wired into AppModule providers; PplnsGroupEntity is already in the forFeature import list, so groupRepo injects without further setup. Stage 5 (the admin PATCH endpoint that calls applyConfig) follows next.
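The daily-fire + elapsed-check decision with the 12h DST tolerance can be sketched as one predicate — illustrative names, matching the 6.6d-fires / 6.4d-skips edges the tests pin down:

```typescript
// Hypothetical fire decision: fire when the time since the last reset
// is within 12h of the configured interval or beyond it, so a
// DST-shifted 23h/25h daily tick neither skips nor double-fires.

const HOUR_MS = 3_600_000;
const DAY_MS = 24 * HOUR_MS;
const DST_TOLERANCE_MS = 12 * HOUR_MS;

function isDue(lastResetAt: number, now: number, intervalDays: number): boolean {
  const elapsed = now - lastResetAt;
  return elapsed >= intervalDays * DAY_MS - DST_TOLERANCE_MS;
}
```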
…point
Closes the loop on Stage 1-4: the migration, entity columns, coinbase
math, scheduledRoundReset, and GroupRoundResetService are all in place;
this commit gives admins an HTTP surface to actually configure them.
GroupService.updateRoundResetConfig:
- Admin-token auth (reuses requireAdminToken — same code path as
transfer/dissolve/etc.)
- PATCH semantics: undefined leaves the column alone, null clears it,
value sets it. JSON-friendly: finderBonusSats accepts string or number
since raw JSON has no bigint.
- Validation rules:
- intervalDays: integer in [1, 365] or null
- hourLocal: integer in [0, 23]
- timezone: must be a valid IANA zone (verified via Intl.DateTimeFormat)
- finderBonusSats: non-negative integer ≤ 100M sats (1 BTC). The cap is a
  fat-finger guard — even a 1 BTC bonus is absurd; it just prevents a
  typo from stranding the entire reward.
- "incomplete-schedule": fail-fast when intervalDays is set but
hourLocal/timezone aren't (would otherwise silently skip cron arming).
- Persists, then idempotently re-arms the per-group cron via
roundResetService.applyConfig — clearing the interval naturally maps to
unschedule because applyConfig short-circuits on a null interval.
Lifecycle correctness:
- dissolveInternal now also calls roundResetService.unschedule(groupId)
so a freshly-dissolved group doesn't keep firing its cron (the
fireIfDue callback would self-clean on the next tick, but that's up
to 24h of pointless wakeups).
Controller:
- @patch(':id/settings') wraps updateRoundResetConfig with admin-token
header + the standard toHttpError mapping (BAD_REQUEST for the new
invalid-* / incomplete-schedule codes).
Tests (12 new in group.service.spec.ts, 29 total):
- Auth (missing + invalid token both rejected)
- Full valid config persists + applyConfig invoked
- intervalDays=null short-circuits to applyConfig (which unschedules)
- Validation: invalid interval/hour/timezone/bonus codes
- incomplete-schedule when interval is set but hour/tz missing
- PATCH semantics: undefined leaves columns alone
- finderBonusSats null → 0; accepts string + number
- dissolveGroup also unschedules the cron
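The validation gates listed above can be sketched as a single function returning the error codes from the commit message. The DTO shape and the rule that incomplete-schedule is judged from the patch alone are assumptions (the real check presumably also considers the group's existing columns):

```typescript
// Hypothetical validation sketch for the PATCH /pplns/groups/:id/settings body.
// null clears a column, undefined leaves it alone — both skip validation here.

interface RoundResetPatch {
  intervalDays?: number | null;
  hourLocal?: number | null;
  timezone?: string | null;
  finderBonusSats?: number | string | null;
}

function validateRoundResetConfig(p: RoundResetPatch): string | null {
  if (p.intervalDays != null &&
      (!Number.isInteger(p.intervalDays) || p.intervalDays < 1 || p.intervalDays > 365)) {
    return "invalid-interval";
  }
  if (p.hourLocal != null &&
      (!Number.isInteger(p.hourLocal) || p.hourLocal < 0 || p.hourLocal > 23)) {
    return "invalid-hour";
  }
  if (p.timezone != null) {
    try {
      // Intl throws on a non-IANA zone — the same trick the commit describes.
      new Intl.DateTimeFormat("en-US", { timeZone: p.timezone });
    } catch {
      return "invalid-timezone";
    }
  }
  if (p.finderBonusSats != null) {
    const n = Number(p.finderBonusSats); // JSON-friendly: string or number
    if (!Number.isInteger(n) || n < 0 || n > 100_000_000) return "invalid-bonus";
  }
  // Fail fast: an interval without hour + TZ would silently never arm the cron.
  if (p.intervalDays != null && (p.hourLocal == null || p.timezone == null)) {
    return "incomplete-schedule";
  }
  return null; // valid
}
```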
…test halvings

The "every sub-dust miner has pending" assertion silently broke once the local regtest chain crossed 3 halvings (≈ block 450).

Mechanism:
- 30 tiny miners with weight=0.01 each, total weight ≈ 6.5M
- tiny percent-share ≈ 1.54e-9
- At regtest's initial 50-BTC reward → 7.7 sats per tiny miner (above the 1-sat rounding floor, below 546-sat dust → pending row created)
- After 3 halvings → 6.25-BTC reward → 0.96 sats per tiny miner → Math.floor(0.96) = 0 → no pending row → assertion fails

The code is correct. Sub-1-sat shares legitimately round to 0 — emitting a pending row for a literal zero would be an accounting ghost. The test just specified the input poorly: weight=0.01 was chosen for a fixed 50-BTC reward that doesn't survive a long regtest session.

Bumping the tiny weight to 1 keeps it deeply sub-dust (≈ 6 sats at the 8th-halving floor of 0.39 BTC) while staying comfortably above 1 sat. The power-law shape is preserved — the heavy : tiny ratio is now 1e6 instead of 1e8, but the sub-dust filter still gets exercised on all 30 tiny miners. The comment block is updated to document why this wasn't done earlier.
The Stage-5 PATCH endpoint accepts these fields but the GET view never returned them, so the UI couldn't render the member-card schedule display or initialise the admin settings form. Adding them to publicGroupView (used by both /pplns/groups/:id and /by-address/:address):
- roundResetIntervalDays (number | null)
- roundResetHourLocal (number | null)
- roundResetTimezone (string | null)
- finderBonusSats (number, defaults to 0)
- lastRoundResetAt (Date | null) — anchor for the next-reset countdown

All public-readable on purpose: every member needs them to render the "Resets every Xd, next in Yd Zh" badge and the finder-bonus row in their dashboard. No auth-leaking concern — the values are configuration the admin chose to publish, not membership/auth state.
The finder-bonus column existed on the entity but was never passed through
to buildCoinbaseDistribution — coinbases never contained a bonus output.
User caught this on real hardware.
Wires the existing math layer to actual stratum sessions:
- GroupSoloService.getPayoutDistribution(groupId, reward, finderAddress?)
reads finderBonusSats from the live group entity and forwards it together
with the miner's address to buildCoinbaseDistribution
- StratumV1Client + StratumV2Client (standard + extended) pass their
authorized address as finderAddress, so each session's coinbase template
names that miner as the bonus recipient
- app.controller.ts /client/:address/block-template endpoint also passes
the address so the UI's preview matches what the stratum layer sends
- Snapshots become per-finder: groupsolo:{groupId}:snapshot:{finderAddress}.
onBlockFound reads the snapshot for the actual finder so the booked
history rows match the on-chain coinbase exactly. Legacy '__none__' key
is used for unauthorized sessions.
- deleteAllSnapshots(groupId) walks the snapshot prefix via Redis SCAN
and is called on every onBlockFound + scheduledRoundReset + dissolve
Mid-round changes: every getPayoutDistribution call reads the live entity,
so enabling/disabling the bonus applies on the next template (~30s).
Disabling reverts to a bit-identical original PROP path because
buildCoinbaseDistribution gates the entire bonus branch on
'bonusSats >= minPayout'.
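The gating that makes "disable reverts to a bit-identical PROP path" can be sketched as below. This is an illustrative reduction, not the real buildCoinbaseDistribution signature — the function name and input shape here are assumptions based on the commit text:

```typescript
// Hypothetical sketch: how the bonus branch is gated so that a disabled or
// sub-min-payout bonus yields exactly the original proportional split.
interface BonusInput {
  bonusSats: number;      // finderBonusSats read from the live group entity
  minPayout: number;      // pool min-payout floor (>= 546-sat dust limit)
  finderAddress?: string; // authorized address of this stratum session
}

function effectiveBonus(input: BonusInput): number {
  // No authorized finder ('__none__' sessions) or sub-floor bonus:
  // skip the whole bonus branch — the PROP math runs untouched.
  if (!input.finderAddress || input.bonusSats < input.minPayout) return 0;
  return input.bonusSats;
}
```

When this returns 0, the reward split downstream is identical to a group with no bonus configured, which is what makes mid-round disabling safe.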
Tests:
- 7 new finder-bonus unit tests (per-miner snapshots, snapshot-cleanup,
bonus-cap interaction, no-bonus passthrough)
- New regtest in group-solo-regtest.spec.ts: 4-output coinbase
(fee + bonus + alice-prop + bob-prop) submitted to real Bitcoin Core 29.0,
block accepted, snapshots wiped after onBlockFound
- Mock Redis updated in 3 spec files: array-del + cursor-scan support
The previous schema mapped daily/weekly/monthly to '1/7/30 days from last
reset', not real calendar boundaries. User wants the actual end-of-day,
end-of-Sunday, last-day-of-actual-month — in the admin's browser timezone.
Schema:
- New roundResetPreset column ('daily'|'weekly'|'monthly'|'custom'|null)
is now authoritative; roundResetIntervalDays is only meaningful when
preset='custom'.
- Migration 1778000000000 backfills preset='custom' for any group that
had intervalDays set before the column existed (semantics-preserving).
- roundResetHourLocal is locked to 0 (= midnight); kept on the entity
for backward compat but no longer admin-configurable.
Cron expressions (group-round-reset.service.ts):
- daily '0 0 0 * * *' in admin TZ
- weekly '0 0 0 * * 1' in admin TZ (Mon 00:00 = end of Sunday)
- monthly '0 0 0 1 * *' in admin TZ (1st 00:00 = end of previous month,
respects 28/29/30/31-day months)
- custom '0 0 0 * * *' in admin TZ + fireIfDue elapsed-check
Calendar presets fire unconditionally on every cron tick; only custom
keeps the elapsed-check. The 60s anti-double-fire guard inside
GroupSoloService.scheduledRoundReset still gates against block-found
overlap.
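The preset-to-cron mapping above can be sketched as a single exhaustive switch (six-field cron with a seconds column, as the expressions in this commit use). Function and type names are taken from the commit text, but the body is an assumption, not the actual implementation:

```typescript
type RoundResetPreset = 'daily' | 'weekly' | 'monthly' | 'custom';

// Sketch of cronExprForPreset; all expressions fire at 00:00 in the admin's
// timezone. The throwing default mirrors the hardening described for this
// switch elsewhere in this PR.
function cronExprForPreset(preset: RoundResetPreset): string {
  switch (preset) {
    case 'daily':   return '0 0 0 * * *'; // midnight every day
    case 'weekly':  return '0 0 0 * * 1'; // Monday 00:00 = end of Sunday
    case 'monthly': return '0 0 0 1 * *'; // 1st 00:00 = end of previous month
    case 'custom':  return '0 0 0 * * *'; // daily tick + fireIfDue elapsed-check
    default: {
      // Exhaustiveness guard: compile error if the union grows unhandled.
      const unreachable: never = preset;
      throw new Error(`unhandled preset: ${unreachable}`);
    }
  }
}
```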
New computeNextResetAt(group) helper exported from the service: uses a
non-started CronJob to ask cron for the next fire time, walks forward
through daily fires for custom (picking the first one ≥ elapsed
threshold). Exposed in publicGroupView.nextResetAt so the UI just shows
a live countdown — no client-side TZ math needed.
GroupService validation:
- accepts 'preset' in GroupRoundResetSettings (replaces hourLocal field)
- calendar presets clear intervalDays automatically; custom requires it
- timezone is mandatory for any preset
Tests: 18 new (preset cron expressions, computeNextResetAt for each
preset including custom-with-elapsed-threshold, applyConfig short-
circuits for invalid config). 753 backend tests green incl. 5 regtest
specs against real Bitcoin Core 29.0.
Briefs an external reviewer cold on the two features that landed in this branch (per-miner coinbase + calendar-aligned presets) and asks them to find functional bugs, edge cases, security issues. The known sub-second snapshot-vs-job race is explicitly flagged as out-of-scope so the reviewer doesn't waste time rediscovering it.
…, exhaustive cron switch

- group.service: reject finderBonusSats > 0 below pool minPayoutSats (it
  would silently be cleared at coinbase build time, leaving the admin to
  think their config was active)
- group.service: reject preset+intervalDays inconsistency (intervalDays is
  only meaningful for preset='custom'; non-custom presets with intervalDays
  would persist dead state)
- group-solo.service: removeMemberState now also deletes the kicked member's
  per-finder snapshot — it would otherwise live until the 1h TTL or the next
  block-found wipe
- group-solo.service: onBlockFound clears the share window before deleting
  snapshots — a concurrent getPayoutDistribution between the two steps now
  hits the empty-window early-exit and writes no stale snapshot
- group-solo.service: scheduledRoundReset docstring corrected — the 60s
  lastRoundResetAt guard only catches scheduled-vs-scheduled (the actual
  race protection against onBlockFound is the in-process blockFoundLocks
  Set, not the DB column)
- group-round-reset.service: cronExprForPreset switch gets an exhaustive
  default that throws — prevents a silent undefined return on out-of-range
  enum drift
- group-solo.service: getMinPayoutSats() exposes the pool minPayout floor to
  GroupService for the bonus-rejection check
- regression tests: cover the two new validation rules + mock update for
  GroupSoloService.getMinPayoutSats
…when smallest credit-claimer absorbs floor-rounding residual
A 500-trial property-based fuzz test (added in this commit) caught a real
consensus-level bug: when the abandoned-debtor solvency cap fired with
multiple small credit-claimers whose summed balance ≈ overshoot, the
DESCENDING-sort + last-absorbs-residual pattern could push the SMALLEST
claimer's cut beyond their balanceOld. Their c.onChain went negative;
Phase 6's emission filter dropped them; the Phase 5b residuum check
included the negative onChain in its sum-balance assumption — so the
EMITTED total ended up at rewardForMiners + 1 sat, which Bitcoin Core
rejects with bad-cb-amount.
Concrete trial that triggered it (block reward = 1.5625 BTC = post-2028
subsidy, 5 active miners + 1 pending-only with +5640 credit):
overshoot = 30306, claimers: addr3=17274, addr4=7394, addr5=5640
descending cuts: 17042 + 7393 + 5641 = 30076 (last gets residual)
→ addr5.onChain = 5640 - 5641 = -1
→ emitted sum = rewardForMiners + 1
→ bad-cb-amount
Fix:
- Sort credit-claimers ASCENDING by balanceOld so the LARGEST absorbs the
floor-rounding residual at the end of the loop. Largest claimer always
has the most headroom for the residual (≤ N-1 sats).
- Defence-in-depth clamp `cut = min(cut, balanceOld)` for any pathological
configuration that ascending sort doesn't already prevent.
- Fall back to fee-100 % if the clamp swallowed sats (`applied < overshoot`)
rather than build an invalid block. Mathematically unreachable given the
totalCredit >= overshoot precondition + ascending sort, but the safety
net is cheap and closes the door for any future refactor.
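A minimal sketch of the fixed cap loop, under the assumptions stated in the bullets above (names like `applySolvencyCap` and the `Claimer` shape are illustrative, not the real code):

```typescript
interface Claimer {
  address: string;
  balanceOld: number; // pending credit before the cap
  onChain: number;    // what actually lands in this block's coinbase
}

// Sketch: absorb `overshoot` sats from credit-claimers. Ascending sort means
// the LARGEST claimer takes the floor-rounding residual (<= N-1 sats), which
// it always has headroom for; the clamp is defence-in-depth.
function applySolvencyCap(claimers: Claimer[], overshoot: number): number {
  const sorted = [...claimers].sort((a, b) => a.balanceOld - b.balanceOld);
  const totalCredit = sorted.reduce((s, c) => s + c.balanceOld, 0);
  let applied = 0;
  sorted.forEach((c, i) => {
    let cut = i === sorted.length - 1
      ? overshoot - applied // last (largest) claimer absorbs the residual
      : Math.floor((c.balanceOld / totalCredit) * overshoot);
    cut = Math.min(cut, c.balanceOld); // never push onChain negative
    c.onChain = c.balanceOld - cut;
    applied += cut;
  });
  return applied; // caller falls back to fee-100% when applied < overshoot
}
```

Re-running the concrete trial from above (overshoot = 30306; balances 17274 / 7394 / 5640) now cuts 5639 + 7393 + 17274, leaving every `onChain` non-negative instead of pushing addr5 to −1.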
Tests:
- New 500-trial fuzz test in coinbase-distribution.spec.ts that randomizes
the entire input surface (rewards across halvings, shares, balances,
feePercent, feeAddress optional, suppressMatchingDebits, finderBonus,
budget tightness) and asserts BOTH consensus-critical invariants:
1. sum(payouts.sats) ≤ blockReward (no bad-cb-amount)
2. coinbase weight ≤ configured budget (no oversize)
Plus: every emitted output has positive sats. Runs ~50 ms.
- pplns-regtest.spec.ts: removed brittle "Charlie must appear in coinbase"
assertion that was coupled to the old descending-sort behaviour. The
test now asserts only the consensus-meaningful invariants (fee + active
miners present, total ≤ blockReward, Core accepts). Pending-only
miners may legitimately drop out of a single block's coinbase when
their balance is fully absorbed by the solvency cap — their carry-
forward credit is preserved per Phase 5a.5 abandoned-debtor semantics.
…ldCoinbaseDistribution

Pre-refactor, onBlockFoundFromWindow had its own simplified payout math that
silently dropped finder-bonus, weight-budget trim, the Phase 5a.5
solvency-cap, and the Phase 5b residuum. The snapshot path used the central
buildCoinbaseDistribution; the fallback didn't — so the two paths could book
different ledger states for the same block input.

Both paths now route through buildCoinbaseDistribution (same params:
suppressMatchingDebits, finderBonusSats, finderAddress, weight budget,
min-payout) and share a new private applyDistributionTx for persistence.
Also threads finderAddress from onBlockFound into the fallback so the bonus
output reaches the right miner.

Adds a CRITICAL RECOMPUTE warning mirroring PPLNS so operators can reconcile
against the explorer when the recompute kicks in. A regression test locks in
finder-bonus parity between the snapshot and fallback paths.
inCoinbase (boolean) was duplicated state alongside rowType (enum:
'coinbase' | 'pending' | 'dust-sweep'). Every writer set both columns in
lockstep, but only rowType was ever read — the UI styles + filters by
rowType exclusively, and no backend code reads inCoinbase. Each new
history-write path was one mistake away from the two columns disagreeing.

Migration 1779000000000 drops the column from both pplns_payout_history and
pplns_group_block_history. Reversible: down() re-adds the column and
backfills inCoinbase = (rowType = 'coinbase'), preserving the original
semantics for any rollback scenario.

Spec tests rewritten to assert rowType directly. Two history-entity
docblocks updated to point at the dropped-column migration.
…-sat fee output
Pre-fix: getPayoutDistribution called this.fallback() (no arg) on the
two early-exit paths (!isEnabled, empty zRange). The fallback signature
treated blockRewardSats as optional and emitted [{feeAddress, percent: 100,
sats: blockRewardSats ?? 0}] — i.e. a 0-sat fee output. Caller-side
checks (`if (distribution.length > 0)` in StratumV1Client and
app.controller) happily accepted that array, so a 0-sat fee output
could land in the coinbase.
Post-fix: fallback's blockRewardSats parameter is required (matches
PplnsService.fallbackDistribution). Both early-exit call sites already
have the value in scope; threading it through is mechanical. Regression
test exercises the empty-window path explicitly.
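The shape of the post-fix fallback can be sketched as follows; the entry shape matches the `{ address, percent, sats }` triple used throughout this PR, but the function body is an assumption:

```typescript
interface PayoutEntry {
  address: string;
  percent: number;
  sats: number;
}

// Sketch: blockRewardSats is now a required parameter, so the old
// zero-argument call sites (which produced a 0-sat fee output) no longer
// compile — both early-exit paths must thread the reward through.
function fallbackDistribution(
  feeAddress: string,
  blockRewardSats: number,
): PayoutEntry[] {
  return [{ address: feeAddress, percent: 100, sats: blockRewardSats }];
}
```

Making the parameter required turns the bug class into a compile error rather than a runtime check, which is why "threading it through is mechanical" closes the hole.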
addPending didn't touch lastAcceptedShareAt. The dust-sweep cron gates on
(pendingSats < minPayout AND lastAcceptedShareAt past the dormancy cutoff),
so a kick that redistributed sats to a recipient with a stale or null
timestamp left the just-credited sats eligible for absorption on the very
next sweep run.

Convention: any row that just received money gets a fresh dormancy anchor.
The credit doesn't make the recipient "active" in the mining sense, but it
resets the clock so the sweep doesn't immediately reclaim what the kick
handed them. A regression test stages a stale recipient and verifies the
post-kick timestamp is fresh.
PplnsService, GroupSoloService, and DustSweepService each parsed
PPLNS_MIN_PAYOUT_SATS and clamped to DUST_LIMIT_SATS with literally
identical code — three sites that had to stay in sync any time the
parse/clamp logic changed.

Pulled into resolveMinPayoutSats() in coinbase-distribution.ts (single
source of truth for min-payout policy alongside the dust constants). Pure
function — takes the raw env value, no NestJS coupling, testable in
isolation. Six unit tests cover the parse + clamp branches. No behavioural
change in production code paths.
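A plausible shape for the extracted helper, assuming the 546-sat dust limit used throughout this PR; the exact parse/clamp branches of the real function are not shown in this thread, so treat this as a sketch:

```typescript
// 546 sats: the dust threshold this PR uses as the hard floor for outputs.
const DUST_LIMIT_SATS = 546;

// Sketch of resolveMinPayoutSats: pure function over the raw env value,
// no NestJS coupling, so all three services can share it.
function resolveMinPayoutSats(rawEnvValue: string | undefined): number {
  const parsed = Number.parseInt(rawEnvValue ?? '', 10);
  // Unset / unparseable env var → default to the dust limit itself.
  if (!Number.isFinite(parsed)) return DUST_LIMIT_SATS;
  // Clamp: a configured floor below the dust limit would emit unrelayable
  // outputs, so the dust limit always wins.
  return Math.max(parsed, DUST_LIMIT_SATS);
}
```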
…te helper
Four code sites called the same Redis cleanup combination
(resetRound + optional lastShareAt del + optional deleteAllSnapshots)
with slightly different opt-ins:
onBlockFound keep lastShareAt, drop snapshots
onBlockFoundFromWindow keep lastShareAt, snapshots already gone
removeGroupState drop everything
scheduledRoundReset drop everything
Pulled into a single private wipeRoundState(groupId, opts) helper. Each
caller now states its intent declaratively
({ includeSnapshots, includeLastShareAt }) instead of stitching the
three Redis ops together by hand.
Minor behavioural change: onBlockFoundFromWindow's four early-exit /
post-TX cleanups now also call deleteAllSnapshots. Defensive — covers
the pathological case where a concurrent stratum getPayoutDistribution
slipped in between deleteAllSnapshots and the recompute TX. No-op
under normal operation.
The snapshot-mismatch pre-call (line 463) keeps its isolated
deleteAllSnapshots — semantically distinct from a round wipe, it's a
"drop the stale snapshot before recomputing" step.
…stributionEntry alias
PplnsPayoutEntry and GroupSoloPayoutEntry were structurally identical to
CoinbaseDistributionEntry — { address, percent, sats } across all three.
PplnsService even shipped a toPayoutEntry() identity-mapper to convert
between them.
Both engine-specific types are now `export type X = CoinbaseDistributionEntry`
re-exports. External callers can keep importing them by name (no breaking
change) but the underlying shape is one definition. Identity-map calls
(`result.payouts.map(this.toPayoutEntry)` / `.map(p => ({...}))`) are
gone — the math layer's output is the same shape as the API layer's
input, so no conversion is needed.
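The resulting shape, as described above — one interface plus two type re-exports (the field set `{ address, percent, sats }` is from this PR; the file layout is assumed):

```typescript
// The single underlying shape shared by the math layer and the API layer.
export interface CoinbaseDistributionEntry {
  address: string;
  percent: number;
  sats: number;
}

// Engine-specific names survive for external importers (no breaking change),
// but there is now exactly one definition — no identity-mapper needed.
export type PplnsPayoutEntry = CoinbaseDistributionEntry;
export type GroupSoloPayoutEntry = CoinbaseDistributionEntry;
```

Because TypeScript types are structural, callers that already constructed `{ address, percent, sats }` literals keep compiling unchanged.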
PplnsService and GroupSoloService each defined a near-identical
StoredSnapshot interface plus copy-pasted writeSnapshot / readSnapshot
methods (SET … EX with fallback, JSON parse with legacy-sats derivation,
Set/Map hydration). Drift risk on every snapshot-shape change.
Pulled the JSON wire format + Redis low-level write/read into a new
coinbase-snapshot.ts module:
- StoredCoinbaseSnapshot : JSON-serializable wire shape (one place)
- ParsedCoinbaseSnapshot : hydrated read result (Set + Map)
- writeStoredSnapshot() : SET … EX, falls back to SET+EXPIRE
- readStoredSnapshot() : JSON parse + legacy-sats derivation +
Set/Map hydration
Both services keep their service-specific wrappers but they're now
two-line pass-throughs that resolve the Redis key + delegate. No
behavioural change. Seven unit tests cover the persistence module:
roundtrip, missing-key, legacy-sats derivation, ioredis-fallback,
unparseable JSON, Set/Map hydration.
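The write side of the module can be sketched like this; the `RedisLike` shape and the fallback trigger are assumptions (the commit only says "SET … EX, falls back to SET+EXPIRE"):

```typescript
// Minimal client surface the helper needs; real code uses the pool's client.
type RedisLike = {
  set(key: string, value: string, mode?: string, ttl?: number): Promise<unknown>;
  expire?(key: string, seconds: number): Promise<unknown>;
};

// Sketch of writeStoredSnapshot: serialize once, prefer the atomic
// SET … EX form, fall back to SET + EXPIRE for clients that reject it.
async function writeStoredSnapshot(
  redis: RedisLike,
  key: string,
  snapshot: unknown,
  ttlSeconds: number,
): Promise<void> {
  const json = JSON.stringify(snapshot);
  try {
    await redis.set(key, json, 'EX', ttlSeconds);
  } catch {
    await redis.set(key, json);
    await redis.expire?.(key, ttlSeconds);
  }
}
```

Keeping serialization and TTL handling in one module is what removes the drift risk the commit describes: a snapshot-shape change now touches exactly one writer and one reader.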
Same concept, three names:
Redis key : groupsolo:{id}:last-share-at (kebab)
JS access : keys.lastShareAt (camel)
API field : lastShareAt (camel)
DB column : lastAcceptedShareAt (camel, longer)
The DB column tracks the authoritative state for the dust-sweep cron;
the Redis hash is its hot-path cache for the admin-kick gate. Same
concept, different storage, but the naming made them look like two
distinct quantities — easy to misread when refactoring DustSweep
(e.g. "I should read the fresher Redis value" → wrong, sweep needs
DB-consistent reads).
Renamed the cache + API to mirror the column:
Redis key : groupsolo:{id}:last-accepted-share-at
JS access : keys.lastAcceptedShareAt
API field : lastAcceptedShareAt
Caches take their name from the source of truth, not the other way
round. UI sync follows in a separate commit on blitzpool-ui-master.
Test-server caveat: existing Redis hashes under the old key name are
orphaned by this rename. Cleanup is manual:
docker exec <valkey> redis-cli --scan --pattern 'groupsolo:*:last-share-at' \
| xargs -r -I{} docker exec <valkey> redis-cli DEL {}
Stats endpoints kept old `totalDifficulty`/`currentWindowDifficulty` names
even though distribution endpoints already moved to `totalShares` /
`totalRejected` weeks ago. The mismatch made the API self-inconsistent
(same concept, two names depending on which route you hit) and also
confused operators — "difficulty" suggested per-share network difficulty,
not summed share work.
- `getWindowStats()` returns `{ totalShares, windowSize, minerCount }`
(dropped unused `shareCount` — never read by any caller)
- `getAddressStatus()` returns `currentWindowShares`
- Controller doc + specs updated; UI sync follows in blitzpool-ui.
Each of the eight regtest specs carried its own copy of `rpcCall`,
`buildCoinbase`, `buildBlock`, `mineBlock`, plus (in five of them)
`createMockRedis` and `createMockRepo`. The mocks had drifted — group-solo
specs had `zRem` + `scan` but no `zRemRangeByRank`, PPLNS specs had
`zRemRangeByRank` but no `scan`/`hSet`/`hDel`/`hGet` — even though every
spec was conceptually mocking the same Redis client.

Centralized in `__test-helpers__/regtest-harness.ts` with superset fakes
(every method anyone needed) so future specs don't have to reinvent the
wheel and existing specs stay aligned. Net: ~970 lines removed, 0 behavior
change, all 772 tests still green.
What this PR adds
Three payout modes to the pool, all built on the same non-custodial coinbase distribution engine:
The actual novelty: coinbase outputs go directly to the miners
The coinbase the pool builds is a multi-output transaction, one output per active miner, paying their share directly on-chain in the same transaction that mines the block. There is no custodial pool wallet between block-find and miner payout. Members of an active payout group / PPLNS window receive their proportional share as a direct coinbase output, paid at the same block height the work was done for.
Concretely:
- Payouts below the configured floor (PPLNS_MIN_PAYOUT_SATS) are deferred —
  they roll forward as a pending credit until they accumulate past the floor
  in a future block, where they ride along on that block's coinbase.
- sum(balances) = 0 is preserved pool-wide. No silent operator skim.

Result: miners receive their payouts on-chain, automatically, without the
operator ever holding funds. The pool's job is purely to schedule who gets
which slice; the satoshis themselves never visit a pool wallet.
Per-miner coinbase with finder-bonus
Each miner's stratum session gets its own block template with their address
as the optional finder-bonus recipient. Whoever finds the block has the
bonus already in the on-chain coinbase — no after-the-fact pool transfer
needed.

The bonus is funded out of the post-fee miner cut (capped at 95 %), so the
finder gets bonus + (their proportional share of (reward − bonus)), and the
rest of the group keeps a slightly smaller proportional slice.

Snapshots are stored per finder-address so onBlockFound reconstructs the
exact distribution that was put on-chain.

Group-Solo lifecycle
Tests

(`npx jest --runInBand` runs all 755 specs)

Deployment

- New env vars: `PPLNS_*`, `SMTP_*`, `POOL_BASE_URL`,
  `GROUP_INACTIVITY_KICK_DAYS`
- Valkey/Redis config: `appendonly yes`, `appendfsync everysec`,
  `maxmemory-policy volatile-lru` (so round-state Redis keys without TTL
  are never silently evicted)

Test plan