
Add GET /pplns/groups/:id/chart (group hashrate time-series) #122

Closed

warioishere wants to merge 28 commits into feature/block-template-mode-aware from feature/group-chart-endpoint

Conversation

@warioishere
Owner

Stacks on #121. Merge order: #118 → #119 → #120 → #121 → this PR.

Summary

Adds the group-specific counterpart to `/api/pplns/chart`. This was flagged as missing earlier when planning the payout-group UI — the UI in #164 doesn't currently render a chart widget, but adding the endpoint now means a future group-dashboard chart can drop in without backend work.

```
GET /api/pplns/groups/:id/chart?range=1d|3d|7d
→ [
{ "label": "2026-04-19T09:50:00.000Z", "data": 800000000000 },
{ "label": "2026-04-19T10:00:00.000Z", "data": 830000000000 },
...
]
```

Same `{ label, data }` shape as `/api/info/chart` and `/api/pplns/chart`. Same 10-minute slots. Same share-based hashrate formula (`shares × DIFFICULTY_1 / 600`).
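The share-based formula can be illustrated with a small sketch. Note the `DIFFICULTY_1` value here is an assumption (the conventional ~2^32 hashes represented by one difficulty-1 share), not a constant taken from the repo:

```typescript
// Hedged sketch of the slot hashrate formula above.
// DIFFICULTY_1 is assumed to be the conventional ~2^32 hashes that one
// difficulty-1 share represents on average; the pool's actual constant
// may be defined elsewhere.
const DIFFICULTY_1 = 2 ** 32;
const SLOT_SECONDS = 600; // 10-minute slots

// shareDifficultySum: sum of diff-1-weighted shares landing in one slot
function slotHashrate(shareDifficultySum: number): number {
  return (shareDifficultySum * DIFFICULTY_1) / SLOT_SECONDS;
}
```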

Implementation

Sums `ClientStatisticsService.getChartDataForAddress` across every group member, then merges data points by label. Sparse slots stay sparse (no phantom zeros) — consistent with the pool-wide PPLNS chart endpoint.
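The merge step described above can be sketched as follows (the helper name is hypothetical; the real aggregation lives in the controller):

```typescript
interface ChartPoint { label: string; data: number; }

// Sketch: merge per-member series by label, summing values.
// A slot appears in the output only if at least one member contributed,
// so sparse slots stay sparse (no phantom zeros).
function mergeChartData(series: ChartPoint[][]): ChartPoint[] {
  const byLabel = new Map<string, number>();
  for (const points of series) {
    for (const p of points) {
      byLabel.set(p.label, (byLabel.get(p.label) ?? 0) + p.data);
    }
  }
  return [...byLabel.entries()]
    .map(([label, data]) => ({ label, data }))
    .sort((a, b) => a.label.localeCompare(b.label)); // ISO labels sort chronologically
}
```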

Returns 404 for unknown or dissolved groups and an empty array for member-less groups.

Tests

7 unit tests (`pplns-group.controller.spec.ts`) cover:

  • sum math across multiple members
  • sparse per-member data
  • empty-group short circuit
  • 404 for not-found / dissolved
  • range passthrough + unknown-range fallback to 1d

Test plan

  • Full Jest suite green (517 tests, 61 suites)
  • Build clean

Mirrors /api/pplns/chart but restricted to a specific payout group's
members. Returns the same { label, data } 10-minute-slot format as
/api/info/chart and /api/pplns/chart so a future group-dashboard
chart widget can drop in without adapter code.

Implementation: sums ClientStatisticsService.getChartDataForAddress
across every member and merges points by label. Sparse per-member
slots are preserved without phantom zeros — a slot only appears if
at least one member contributed to it.

Supports range = 1d | 3d | 7d (same as the per-address chart).
Returns 404 for missing or dissolved groups, empty array for
member-less groups.

7 unit tests cover sum math, sparse handling, empty group,
non-existent/dissolved group, range passthrough, unknown-range
fallback to 1d.

Exposes the existing PPLNS_FEE_ADDRESS / PPLNS_FEE_PERCENT config (shared
by PPLNS and group-solo payout paths) as a public endpoint so the
groups-landing UI can render the current pool fee dynamically instead
of hard-coding a percentage that drifts when backend config changes.

Response: { feePercent, feeAddress, coinbaseWeightBudget }

…-group

MiningModeService.getMode only returns 'group-solo' for active groups
(≥ 2 members), so a creator of a freshly-created 1-member group would
see "No payout group" in the UI until a second member joined. This
endpoint returns group details for any address that's a member of a
non-dissolved group, active or not — letting the UI open the creator's
dashboard before the group is mining-eligible.

Mining-side behavior is unchanged; the new endpoint is UI-facing only.

Mirrors the existing /api/pplns/groups/:id/chart pattern — same
aggregation the UI was doing client-side via forkJoin, now done once
on the pool. Saves N-1 HTTP calls per chart refresh for a group with
N members.

GET /api/pplns/groups/:id/accepted?range=1d|3d|7d
  → { slotData: [{ time, counts: { accepted } }, ...] }

GET /api/pplns/groups/:id/rejected?range=1d|3d|7d
  → { slotData: [{ time, counts: { JobNotFound, DuplicateShare,
      LowDifficultyShare } }, ...] }

Shapes match the per-address /api/client/:address/{accepted,rejected}
endpoints so the UI can plug the new source in without any chart
transformation changes.

POST /api/pplns/groups/:id/members/batch
  Body: { addresses: string[] }
  Returns: { added: [...], skipped: [{ address, reason }] }

One token verification, one address-cache rebuild per batch — instead
of N per call. `skipped` carries benign per-address failures
(already-member / address-in-group / invalid-address / duplicate-in-batch)
so the UI can surface them in a summary toast without the client having
to reconstruct them from thrown errors.

Token/auth failures still throw as before; they're not a per-address
issue and should abort the batch.
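The collect-instead-of-throw pattern can be sketched like this (the membership/validity callbacks are illustrative stand-ins for the real service checks):

```typescript
interface SkipEntry { address: string; reason: string; }
interface BatchResult { added: string[]; skipped: SkipEntry[]; }

// Sketch: benign per-address failures are collected into `skipped`
// rather than thrown, so one bad address never aborts the batch.
function addMembersBatch(
  addresses: string[],
  isMember: (a: string) => boolean,   // assumed helper
  isValid: (a: string) => boolean,    // assumed helper
): BatchResult {
  const added: string[] = [];
  const skipped: SkipEntry[] = [];
  const seen = new Set<string>();
  for (const address of addresses) {
    if (seen.has(address)) { skipped.push({ address, reason: "duplicate-in-batch" }); continue; }
    seen.add(address);
    if (!isValid(address)) { skipped.push({ address, reason: "invalid-address" }); continue; }
    if (isMember(address)) { skipped.push({ address, reason: "already-member" }); continue; }
    added.push(address);
  }
  return { added, skipped };
}
```

Auth failures would still throw before this loop runs, matching the commit's intent.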
Cached groupSoloGroupId in Stratum V1/V2 clients was set once at authorize
and never refreshed. A miner whose address was added to a payout group after
connecting had its shares routed as solo for the rest of the session — shares
never landed in the group's Redis round, and the coinbase build used the
wrong payout path, producing inconsistent coinbases across group members.

- Remove the groupSoloGroupId field and the detectGroupSoloMembership
  helper. Replace with an activeGroupId() method that queries GroupService
  live at every share record, job build, and block-found dispatch. The
  in-memory address cache in GroupService is O(1) Map.get and is already
  refreshed synchronously on every addMember/removeMember call, so the
  live lookup is always current.

- GroupSoloService: add per-address rejected counters (rejected-diff and
  rejected-count Redis hashes), cleared on round reset. recordReject()
  aggregates per address, getRoundStats() returns totalRejectedDifficulty,
  rejectedShareCount, and per-address rejected fields alongside accepted.

- Wire reject dispatch from all Stratum reject paths: V1 gets a
  dispatchGroupReject helper called after each of the 4 reject blocks
  (duplicate-share, two job-not-found paths, low-difficulty), V2 adds
  the dispatch once to recordRejectedShare which covers all reject sites.

- Test Redis mock extended with hash ops (hIncrByFloat, hIncrBy, hGetAll)
  and 3 new unit tests cover recordReject guards, per-address aggregation
  in getRoundStats, and round-reset clearing the rejected counters.
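The live-lookup change in the first bullet can be sketched as follows. The class and wrapper here are illustrative (the real cache lives in GroupService and the real method on the stratum client classes); the point is that a Map lookup per share is O(1) and the cache is rebuilt synchronously on membership changes:

```typescript
// Sketch (names hypothetical): in-memory address -> groupId cache,
// rebuilt synchronously on every addMember/removeMember.
class GroupServiceSketch {
  private addressCache = new Map<string, string>();

  rebuildCache(members: { address: string; groupId: string }[]): void {
    this.addressCache = new Map(members.map(m => [m.address, m.groupId]));
  }

  getGroupIdForAddress(address: string): string | null {
    return this.addressCache.get(address) ?? null; // O(1) Map.get
  }
}

// Stratum-side sketch: query live at every share record / job build
// instead of caching a groupSoloGroupId once at authorize.
function activeGroupId(groups: GroupServiceSketch, address: string): string | null {
  return groups.getGroupIdForAddress(address);
}
```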
The distribution-endpoint field naming was misleading: the values are
diff-1-weighted real work, but were called 'difficulty' and parallel
'shareCount' fields suggested they were raw share counts. Rename to make
the semantic explicit.

GroupSoloService.getRoundStats response (returned by GET
/api/pplns/groups/:id/distribution):
  totalDifficulty          -> totalShares
  totalRejectedDifficulty  -> totalRejected
  shareCount               -> dropped
  rejectedShareCount       -> dropped (top-level + per-address)
  perAddress[].difficulty  -> totalShares
  perAddress[].rejectedDifficulty -> totalRejected

PplnsService.getCurrentDistribution items (GET /api/pplns/distribution):
  difficulty               -> totalShares

GroupSoloService also drops the now-unused rejected-count Redis hash;
rejected aggregation lives in a single per-address hash storing the
diff-1-weighted total. Internal redis key `rejected-diff` renamed to
`rejected-shares` to match.

New endpoint:
  GET /api/pplns/groups/:id/best-difficulty
Returns { bestDifficulty, address, time } — the highest single-share
diff submitted in the current round across all group members and the
member who submitted it. Round-based — resets on block-found together
with the rest of the round state. Drives the "Your Best Difficulty"
tile on the payout-group dashboard.

Tests updated for the renamed fields, plus 2 new tests covering
getRoundBestDifficulty (max picker + reset behaviour). 14/14 group-solo
unit tests pass.
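The max-picker behind the best-difficulty endpoint reduces to a small pure function (sketch; the real state lives in round-scoped Redis and resets on block-found):

```typescript
interface BestShare { bestDifficulty: number; address: string; time: number; }

// Sketch: keep the current round record only if the incoming share's
// difficulty beats it. Resetting to null on block-found restarts the race.
function updateRoundBest(
  current: BestShare | null,
  share: { difficulty: number; address: string; time: number },
): BestShare {
  if (current === null || share.difficulty > current.bestDifficulty) {
    return { bestDifficulty: share.difficulty, address: share.address, time: share.time };
  }
  return current;
}
```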
Two production-durability hardenings on the existing Valkey 8 setup:

1. AOF was already on but fsync policy was implicit. Pin it to everysec.
   Worst-case loss on a hard crash is now bounded at ~1 second of share
   writes — well under the 5s tolerance budget for a payout pool, and
   negligible perf impact (Valkey buffers the AOF and fsyncs in a
   background thread once per second).

2. Eviction policy was allkeys-lru with maxmemory 1GB. That was a silent
   data-corruption risk: when Valkey filled up it would evict the LRU
   keys regardless of category, including share/round state in
   groupsolo:* and pplns:* (those keys are written without a TTL on
   purpose — they're durable round state, not cache). A round eviction
   would corrupt the PROP coinbase split for the next block-find, and
   for PPLNS shrink the window silently.

   Switch to volatile-lru. Only keys with an explicit TTL get evicted,
   which means application cache entries (CLIENT_INFO_*, etc., set via
   REDIS_TTL=600) shrink under pressure while share state is always
   preserved. If memory does fill up entirely with non-TTL keys the
   server returns OOM on writes, which fails fast instead of silently
   losing money-bearing data.

RDB snapshots stay at default cadence as a faster cold-start path —
AOF is the truth, RDB is just a snapshot to skip replaying the AOF on
boot.

…lent-add

Direct addMember by an admin was a silent-add attack vector: anyone with
the admin token could route an unsuspecting miner's payouts into the
group. Replace with a two-phase invitation flow whose trust anchor is a
verified email binding.

Backend additions:
- 3 new entities: AddressEmailEntity (verified address↔email binding),
  EmailVerificationEntity (pending email-verify tokens, 24h TTL),
  PplnsGroupInvitationEntity (pending group invitations, 7d TTL).
- EmailService: nodemailer transport from env (SMTP_HOST/PORT/SECURE/
  USER/PASS/FROM, plus POOL_BASE_URL for the in-email links).
  Two themed HTML templates (verification + invitation), inline-styled
  to match the dashboard's mdc-dark-indigo palette so emails feel like
  part of the app. Plain-text fallback for both.
- AddressEmailService: register (sends verification email) + verify
  (consumes token, persists binding) + getVerified lookup.
- PplnsGroupInvitationService: createInvitation (admin-token-auth, only
  succeeds if address has verified email; sends invitation email),
  getByToken (public details for accept page), accept (no auth — token
  IS the auth, since possession of it implies access to the email
  account that received it), decline (open — no incentive to decline
  on someone's behalf), listPendingForAddress (drives dashboard banner),
  listPendingForGroup + cancelInvitation (admin-only).
- GroupService.addMemberWithoutAdmin: bypasses admin-token check,
  intended only for invitation-accept which has its own proof of
  authorization.

HTTP surface:
- Removed: POST /pplns/groups/:id/members and /:id/members/batch
  (the silent-add endpoints).
- Added: POST /pplns/groups/:id/invitations,
  POST /pplns/groups/:id/invitations/batch (admin),
  GET /pplns/groups/:id/invitations,
  DELETE /pplns/groups/:id/invitations/:token,
  GET /pplns/invitations/by-address/:address,
  GET /pplns/invitations/:token,
  POST /pplns/invitations/:token/accept,
  POST /pplns/invitations/:token/decline,
  POST /email/register, GET /email/verify/:token,
  GET /email/by-address/:address.

Email addresses are masked for admin-side responses (a***@Domain) — the
admin doesn't need to read other members' emails, only know the
invitation reached someone.

Tests: 10 new unit tests cover the invitation lifecycle (no-email
rejection, success path, bad token, idempotent accept, decline →
accept refused, expiration, pending-list filtering). All 537 unit
tests pass.

New env vars (documented in full-setup/blitzpool-example.env):
SMTP_HOST, SMTP_PORT, SMTP_SECURE, SMTP_USER, SMTP_PASS, SMTP_FROM,
POOL_BASE_URL. All needed before the email flow becomes operable.

Phase 2 (UI) follows in a separate commit: settings page for email
binding, accept/decline pages, admin-side invite UI, dashboard banner
for pending invitations.

…e URLs

Two issues with the previous email design:

1. The accept/decline URLs were /invite/{token}/accept and /invite/{token}/decline,
   but the Angular router only has a /invite/:token route. Clicking those
   buttons in the email landed on a 404. Wrong path baked into the email.

2. Even with the route fixed, having two buttons in the email that both
   point at the SAME page (the only safe design — clicking a link should
   never trigger a state change without an explicit confirmation, since
   email-preview link prefetchers and one-tap email-client gestures
   exist) was confusing UX: the recipient clicks "Accept", lands on a
   page that asks "Accept or Decline?".

Replace with a single "Open invitation" button. The link goes to
/invite/{token}, the recipient sees the group context and the
inviter, and confirms there with an explicit Accept or Decline click.
Standard email-link → confirmation-page flow.

Email template + InvitationEmailContext interface simplified accordingly:
acceptUrl + declineUrl → single inviteUrl. Plain-text fallback
mirrors the change. Test updated.

Previous commit added the entities but no migration. With
DB_RUN_MIGRATIONS=true (the production path for this deployment),
entities without a matching migration don't get their tables
created. Result: first email-register call blew up with
'relation pplns_address_email does not exist'.

This migration creates the three tables with indexes:
- pplns_address_email (primary key on address)
- pplns_email_verification (primary key on token, index on address)
- pplns_group_invitation (primary key on token, indexes on groupId
  and address)

Down() drops all three in reverse order. Column types, lengths,
defaults and nullability match the entity declarations exactly.

UI ships with HashLocationStrategy (app.module.ts / electron / android
all provide { useClass: HashLocationStrategy }), so the routable path
lives in the URL fragment. Without the hash the server only ever sees
/ and the Angular router reads an empty fragment, matching the default
route — the user lands on the splash page.

Fix: prefix verification and invitation URLs with /#/. Both services
affected: AddressEmailService (verify link) and
PplnsGroupInvitationService (invite link). Test expectation updated.

listPendingForAddress was returning the invitation token alongside
group metadata. /api/pplns/invitations/by-address/:address has no
auth — any visitor to /app/:address on the UI, or anyone hitting the
endpoint directly, could read the tokens. With a token, anyone can
POST /invitations/:token/accept and hijack the acceptance flow,
defeating the whole point of the email-based trust anchor.

Drop the token from the response. Add a maskedEmail hint so the user
still knows which inbox received the invitation (helpful when they
have multiple addresses with different emails, or the invitation
went to spam). Acceptance now happens ONLY via the link in the
invitation email — whoever can read that email accepts.

maskEmail helper moved from the controller into this file for the
service to use; controller still uses its own copy for admin-side
masking of batch responses. (Duplication is trivial — kept local
to each layer for clarity.)

The admin listing at GET /pplns/groups/:id/invitations was echoing the
raw invitation token in the response. That defeated the email-as-trust-
anchor defense against silent-add: a malicious admin could list pending
invitations, pull the token, and POST /invitations/:token/accept on the
invitee's behalf before the invitee ever opened the email.

Drop the token from the admin response. Switch cancel from DELETE
/invitations/:token to DELETE /invitations/by-address/:address so the
admin never needs the secret. The token now lives only in the email
body and in the URL the invitee opens.

Incidentally: the admin already needs to contact invitees out-of-band
in group-solo use cases (friends pool), so the previously-masked email
in the admin listing + create responses is now unmasked. The listing is
admin-token-gated and the invitee address is already visible in the
member list, so showing the verified email adds no meaningful PII leak.

Two linked problems.

1. DELETE /pplns/groups/:id/members/:address/self had no authentication.
Anyone who knew a member's BTC address could remove them from their
group. Classic denial-of-payout: Mallory reads Bob's address from the
public member list, calls selfLeave, Bob is out until an admin notices.
Drop the endpoint entirely. If a miner wants out, they repoint their
miner to a different address; their stale member row gets pruned by the
admin once the inactivity window has passed.

2. The admin could unilaterally kick any member at any time, which
works against the 'we mine as a team' semantics of payout groups. Gate
removeMember on GROUP_INACTIVITY_KICK_DAYS (default 14) measured from
the member's most recent accepted share — or from joinedAt if they
never mined. Creator can no longer be kicked via removeMember at all;
they exit through transferCreator or dissolveGroup.

Behind the scenes the new flow clears the member's in-flight round
state before dropping the member row: their entries in the Redis
sorted-set window are removed, the total diff is decremented, their
rejected-shares hash slot is cleared, their lastShareAt slot is cleared,
and their pplns_group_balance row is deleted. The remaining members'
proportional share of the round grows automatically on the next block.

GroupSoloService.recordShare now writes a lastShareAt timestamp per
address into groupsolo:{gid}:last-share-at. The hash is NOT cleared on
round reset — it has to survive across blocks to power the kick gate.

dissolveInternal also calls the new removeGroupState which wipes all
round keys, balance rows, and history rows for the group. Fixes the
'dissolved groups leak Redis memory forever' side of the review.

Group name is interpolated into the invitation mail Subject and the
plain-text body. Without sanitization, a group name containing CR/LF
would split headers (adding Bcc, changing Reply-To) or inject phishing
content into multipart text viewers.

The input-side validation landed in the previous commit (group.service
createGroup rejects control chars at the create boundary). Add a
defense-in-depth sanitizer at the mail sendpoint: any caller that
forgets to validate still produces a safe header through sanitizeHeader,
which strips CR/LF/NUL and caps length at 200 chars.
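A minimal sketch of the sanitizer as described (signature assumed):

```typescript
// Defense-in-depth sketch: strip CR/LF/NUL so an interpolated value can
// never split an SMTP header or inject extra headers, and cap length.
function sanitizeHeader(value: string, maxLength = 200): string {
  return value.replace(/[\r\n\0]/g, "").slice(0, maxLength);
}
```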
addPending was keying balance rows by address alone. When a miner moved
between groups over time (and pplns_group_balance rows survived member-
removal and dissolve, which they did), the next group's addPending
would find the stale row from the prior group and mutate it — the
groupId field stayed on whatever it was. The next block in the *old*
group would then pick that row up via getPayoutDistribution's
{groupId} filter and pay the balance out from a block the miner no
longer participated in.

Move the schema to a composite primary key. At any single point in
time a miner is still a member of at most one group (enforced by the
global unique index on pplns_group_member.address), so the migration
has no duplicates to dedupe. Historical (address, groupId) pairings
now each get their own row — exactly what the money math assumed all
along.

Also scope the three balance lookups in group-solo.service to
{ address, groupId } so the code matches the schema invariant.

Captures the findings from the independent audit of the payout-group
feature (C1-C4 critical, H1-H5 high, M1-M8 medium, plus positives).
Reference material while the fixes land; safe to archive once the
feature has baked.

getByToken already bailed on dissolved groups, so the pre-click page
never rendered — but accept() itself never re-read the group, meaning
an invitee who had already fetched the page could still POST /accept
after the admin dissolved the group in between. The addMemberWithoutAdmin
call would happily write a member row into the dissolved group;
rebuildCache would filter it back out at load time, but the DB row
lingered and the UI/stratum layers disagreed about the membership
until someone noticed.

Re-check group.dissolvedAt inside accept() between the expiry check and
addMemberWithoutAdmin. New error code 'group-dissolved' maps to HTTP
410 Gone on both the admin controller and the public invite controller.

Both expireOld() on PplnsGroupInvitationService and purgeExpiredTokens()
on AddressEmailService were defined but never called — the tables grew
forever, with stale plaintext invitation tokens aging out semantically
but not in the DB. The review flagged this as H5.

Wire both behind @nestjs/schedule's @Interval (1 h). ScheduleModule was
already imported in app.module, no module-side changes needed. Each
call is wrapped to swallow + log failures so a transient DB hiccup
doesn't kill the process; the next tick retries. The accompanying
migration adds the expiresAt index both methods' WHERE clauses now
depend on.

Two hardening changes bundled because they land on the same child
tables and need a single migration window.

FK cascades (review H4): pplns_group_member, pplns_group_balance,
pplns_group_block_history and pplns_group_invitation all carried a
groupId column but no FK to pplns_group. Dissolve was entirely
app-enforced, and a half-completed dissolve transaction or a forgotten
child-delete in a future code path would have stranded orphans. Wire
ON DELETE CASCADE so the DB guarantees children disappear with their
parent regardless of which path initiated the dissolve.

expiresAt indexes (review H5): the newly-scheduled expiry sweep
filters pplns_group_invitation and pplns_email_verification by
`expiresAt < NOW()`. At any real volume that's a full scan; add a
single-column index on each.

Both changes are idempotent (migration skips existing FKs/indexes) so
a pool that happened to acquire these manually won't double-create.

The in-memory snapshots Map was lost on every pool restart. A crash
between getPayoutDistribution (which built the coinbase the miner
signed against) and onBlockFound would fall back to the current-window
fallback path, whose distribution can differ from the coinbase already
in the chain — the on-chain payout is frozen in the block template,
but pplns_group_block_history + pplns_group_balance would then reflect
a different split.

Move the snapshot into Redis keyed as groupsolo:{groupId}:snapshot,
JSON-encoded, 1h TTL. AOF persistence carries it through restarts.
1 h covers normal block-find cadence plus an outage window; longer
would just pile up stale snapshots, shorter would risk losing them
before the block lands.

removeGroupState (called from group dissolve) now also drops the
snapshot key so nothing lingers past dissolve.

Prior behavior on admin kick: the member's in-round Redis shares were
dropped (remaining members' share of the round grew proportionally)
and the pplns_group_balance row was deleted outright — their
accumulated sub-dust balance from earlier rounds was forfeited.

Intent per product review: remaining members should absorb the full
value of the departure, not just the current round. Split the kicked
miner's pendingSats equally across the remaining members' pending
balance rows before deleting the row. Integer-division remainder
(<N sats where N = member count) is dropped; at real block rewards
that's single-digit sats and not worth the complexity of a carry.

GroupService.internalRemove snapshots the remaining member list
before handing off so GroupSoloService doesn't need a second DB
lookup — memberRepo is GroupService's concern, not GroupSoloService's.

New spec covers: 600-diff round with three miners, 900-sat prior
pending on the kicked one; after kick, round total drops to 400
and the two survivors get +450 pending each.
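The redistribution rule reduces to a few lines; this sketch (names hypothetical, real code mutates pplns_group_balance rows) reproduces the spec scenario above:

```typescript
// Sketch: split the kicked member's pending sats equally across the
// survivors with integer division. The < N-sat remainder is
// intentionally dropped, as described in the commit.
function redistributePending(
  pending: Map<string, number>, // address -> pendingSats
  kicked: string,
  survivors: string[],
): void {
  const amount = pending.get(kicked) ?? 0;
  pending.delete(kicked);
  if (survivors.length === 0 || amount === 0) return;
  const share = Math.floor(amount / survivors.length);
  for (const s of survivors) {
    pending.set(s, (pending.get(s) ?? 0) + share);
  }
}
```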
The FK migration failed on the invitation table at pool startup:

  QueryFailedError: foreign key constraint "FK_..." cannot be implemented
  at addGroupFk('pplns_group_invitation')

Postgres speak for 'the referenced column is uuid but the referencing
column is varchar — I won't bridge unequal types even if the content
is identical in practice'.

pplns_group.id has always been uuid (1776000000000). pplns_group_invitation
picked up varchar(36) in 1776200000000 by accident — the content was
always a UUID string (GroupService.createGroup uses crypto.randomUUID())
but the column type drifted. The FK migration didn't catch it until
runtime on the actual Postgres server because the types happen to
accept each other in most contexts but not in REFERENCES.

Fix: ALTER COLUMN ... TYPE uuid USING ...::uuid before createForeignKey.
The cast is safe — existing rows are already UUID-shaped strings.
Entity updated to match. The down() casts back to varchar(36) so
rollback preserves the original schema shape.

Adds @nestjs/throttler@^4.2.1 as a global APP_GUARD with a lenient
60 req/min per-IP default, plus tighter per-endpoint limits on the
paths that touch SMTP or do mail-side work:

  POST /pplns/groups/:id/invitations          10/min  (single invite)
  POST /pplns/groups/:id/invitations/batch     5/min  (each call can
                                                      send many mails)
  POST /api/email/register                     5/min  (verification
                                                      emails)
  POST /api/pplns/invitations/:token/accept   20/min  (token brute
                                                      force isn't the
                                                      real risk at 256
                                                      bits, this just
                                                      caps retry loops)
  POST /api/pplns/invitations/:token/decline  20/min

Addresses review M3. Admin-token auth alone isn't a DoS defence — a
legitimate admin with a compromised token could spam thousands of
invites and get the pool's sender domain block-listed at the SMTP
provider in minutes.

Bech32 / bech32m addresses (BIP-173, BIP-350) are protocol-specified
as lowercase on the wire. Wallets sometimes present them uppercase
for QR-code compactness, so a human could legitimately paste one
form while the miner transmits the other. Without normalization:

  1. Admin invites BC1QALICE...
  2. Alice mines as bc1qalice...
  3. GroupService.addressCache was keyed on the admin-typed form;
     the per-share lookup misses and Alice's shares route to
     solo/PPLNS instead of the intended group — silent payout
     loss with no error path.

Add a normalizeBtcAddress() helper that lowercases bech32 prefixes
(bc1 / tb1 / bcrt1 / sb1) and leaves legacy base58 (P2PKH / P2SH)
case-sensitive because those checksums are case-sensitive.

Apply at every write + read boundary:
  GroupService.createGroup / addMemberWithoutAdmin / getGroupForAddress
  AddressEmailService.register / getVerified
  PplnsGroupInvitationService.createInvitation / accept

Review M4.
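The helper as described can be sketched directly (the prefix list is taken from the commit text; the function name matches it):

```typescript
// Sketch per the commit: bech32/bech32m (BIP-173/350) addresses are
// case-insensitive by checksum but lowercase on the wire, so normalize
// them to lowercase. Legacy base58 (P2PKH/P2SH) checksums are
// case-sensitive, so those pass through untouched.
const BECH32_PREFIXES = ["bc1", "tb1", "bcrt1", "sb1"];

function normalizeBtcAddress(address: string): string {
  const lower = address.toLowerCase();
  if (BECH32_PREFIXES.some(p => lower.startsWith(p))) {
    return lower;
  }
  return address; // legacy base58: leave case intact
}
```

Applying it at both write boundaries (invite/create) and read boundaries (per-share lookup) makes the BC1QALICE/bc1qalice mismatch impossible by construction.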
Three calibration fixes to the payout-distribution trim math, all
verified against the real coinbase builder in MiningJob.ts.

1. M6 — the OP_RETURN output that carries the segwit witness
   commitment is ~188 WU on the wire (38-byte script + overhead),
   not 124 WU like a regular payout output. Earlier code reserved
   124 which under-counted by 64 WU per block. With 50 000 WU
   default budget this never materialised in practice, but a
   pathologically dense group could overshoot and produce an
   over-budget coinbase that the MAX_BLOCK_WEIGHT guard in
   MiningJob would reject. Split the two into distinct constants.

2. Also M6 — outputs are now sized at 172 WU (P2TR worst case)
   rather than 124 WU (P2WPKH). A group entirely on Taproot
   addresses is harder to fit than the P2WPKH-only assumption
   suggested. Conservative-high sizing is the right move for a
   hard block-weight limit.

3. M7 — the pool fee output now gets a dust check before being
   added to the distribution. At 3.125 BTC subsidy + 2 % fee it's
   never dust on mainnet, but a regtest / signet / future-halving
   scenario can produce fee < 546 sats and the whole block would
be rejected by Bitcoin Core policy. When the fee is dust, we log a
warning and drop the fee output — miners keep 100 % of the
   block rather than the pool losing the block itself. Applied
   in both getPayoutDistribution (snapshot path) and
   onBlockFoundFromWindow (fallback path) for consistency.
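The dust gate in item 3 can be sketched as a pure check. The 546-sat figure is Core's standard dust floor for non-segwit-sized outputs as the commit states; the function name here is illustrative:

```typescript
// Sketch of the M7 dust gate: below the dust floor the fee output is
// dropped entirely (returning 0 means "omit the output"), rather than
// risking the whole block being rejected by Core policy.
const DUST_LIMIT_SATS = 546;

function feeOutputSats(blockRewardSats: number, feePercent: number): number {
  const fee = Math.floor((blockRewardSats * feePercent) / 100);
  return fee < DUST_LIMIT_SATS ? 0 : fee;
}
```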
getPayoutDistribution previously did this:

    baseSats      = ratio * rewardForMiners     (98 % of blockReward)
    totalSats     = baseSats + pendingMap[addr]
    percent       = totalSats / blockRewardSats * 100

Pending balances from prior rounds were being layered ON TOP of the
block's miner cut, so the sum of coinbase outputs came out as

    rewardForMiners + totalPending + feeSats = blockReward + totalPending

— bigger than blockRewardSats. Bitcoin Core rejects that with
'bad-cb-amount' the moment the block is submitted. The unit tests
never caught it because they mock the service end; the output total
only matters when a real node validates the coinbase.

Our new regtest/lifecycle spec (group-solo-regtest-lifecycle.spec) hit
exactly this the first time it submitted a kick-redistribute block to
Core. The kick path creates pending on active miners by design, so
the latent bug surfaced immediately.

Fix: subtract totalPending from rewardForMiners before the per-miner
ratio split, so pending is paid from the same pool the base shares
come from. Each miner still gets baseSats + own-pending; total sums
to exactly blockRewardSats again. Applied to both the snapshot path
(getPayoutDistribution) and the fallback path (onBlockFoundFromWindow).
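The corrected arithmetic can be sketched as follows (names follow the commit text; per-miner flooring and remainder handling are simplified relative to the real builder):

```typescript
// Sketch of the fix: pending is subtracted from the miners' cut BEFORE
// the ratio split, so coinbase outputs sum to blockReward - fee exactly
// and never exceed blockRewardSats ('bad-cb-amount' territory).
function splitReward(
  blockRewardSats: number,
  feeSats: number,
  ratios: Map<string, number>,   // per-address share of round work, sums to 1
  pending: Map<string, number>,  // prior-round pending sats per address
): Map<string, number> {
  const rewardForMiners = blockRewardSats - feeSats;
  const totalPending = [...pending.values()].reduce((a, b) => a + b, 0);
  const base = rewardForMiners - totalPending; // pending paid from the same pool
  const out = new Map<string, number>();
  for (const [addr, ratio] of ratios) {
    // (flooring remainder handling omitted in this sketch)
    out.set(addr, Math.floor(base * ratio) + (pending.get(addr) ?? 0));
  }
  return out;
}
```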

New regtests: kick-redistribute, dust-fee-gate, snapshot-persist. All
three build and submit real blocks to Core, each returns null (success)
— proving the math matches what Core will accept in the wild.

Also: updated the existing regtest's Redis mock to include hSet/hGet/
hDel/zRem/expire so it runs under the new lastShareAt + snapshot TTL
code paths.

warioishere added a commit that referenced this pull request Apr 22, 2026
All of the following features were originally stacked as separate PRs
on top of PPLNS:

  #119 feature/group-solo-mining         group-solo engine + API
  #120 feature/mining-mode-endpoint      GET /pplns/mode/:address
  #121 feature/block-template-mode-aware mode-aware block-template
  #122 feature/group-chart-endpoint      chart + all security
                                         hardening + regtests

Landing them as separate PRs would have exposed master to intermediate
vulnerable states (selfLeave DoS, silent-add token leak, pending-out-
of-coinbase math bug, etc.) until #122 closed the stack. All security
and regtest work lived exclusively on #122, so merging the lower PRs
first would ship a group-solo surface without its hardening.

Rolling everything into #118 means the whole feature lands atomically
on master with its full test + security story. PR #115 (JDP
integration) stays separate, still upstream-blocked.

# Conflicts:
#	src/controllers/pplns/pplns.controller.spec.ts
#	src/controllers/pplns/pplns.controller.ts
@warioishere
Owner Author

Superseded by #118 — all commits from this branch were rolled into feature/pplns-pool-support in the Collapse-Stack merge commit 75987d4 on 2026-04-22. The branch on origin is preserved (not deleted) so the history is recoverable if needed. See #118 for the consolidated diff and atomic merge.
