
no connection limit allows memory exhaustion under concurrent connections #98

@enirox001

Description

Observed behavior

sv2-tp places no limit on the number of concurrent incoming connections. Each accepted connection spawns a thread and constructs an IPC waiter before the client sends anything useful. Under concurrent connection load, this causes significant memory growth, to the point of exhaustion.

To reproduce, I ran concurrent pool instances against a running sv2-tp:

# replace N with the desired connection count (100 and 200 in the tests below)
for i in {1..N}; do
  cargo run -- -c config-examples/regtest/pool-config-local-sv2-tp-example.toml &
done
wait

Two tests were run:

  • 100 concurrent connections: memory increased from 8.5GB to 13.5GB (59% increase)
  • 200 concurrent connections: memory increased from 8.5GB to 15.4GB (81% increase); the system became unresponsive during this test

Memory only recovered after all processes exited.

Why this is concerning

In this test, connections were short-lived and memory was eventually recovered. A persistent attacker holding connections open would not release allocated resources, potentially exhausting memory with far fewer than 100 connections.

Expected behavior

sv2-tp should reject connections exceeding a configurable limit at accept(): before the Noise handshake, before a thread is spawned, and before an IPC waiter is constructed.

A maxconnection flag would give operators control over this limit depending on their use case (solo miner vs. serving multiple pools).

Environment

  • Bitcoin Core: 9ca6954a99 (compiled from source)
  • sv2-tp: 0e5d1e6 (compiled from source)
  • OS: Fedora 43
