Per-peer m_tx_inventory_to_send queues have CPU and memory costs that scale with both queue size and peer count. Under high transaction volume, this has previously caused severe issues (May 2023 disclosure) and still can cause measurable delays (Feb 2026 Runestone surge, with the msghand thread observed hitting 100% CPU and queue memory reaching ~95 MB).
This PR replaces the per-peer rate limiting with a global queue using dual token buckets (limiting transactions by both count and serialized size). Transactions that arrive within the bucket capacity still relay nearly immediately, but excess transactions queue in a global backlog and drain as the token buckets refill.
Key parameters:
- Count bucket: 14 tx/s, 420 capacity (30s buffer)
- Size bucket: 20 kB/s (~12 MB/600s), 50 MB capacity
- Outbound peers refill faster by a factor of 2.5
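The dual-bucket behavior with the parameters above can be sketched as follows. This is an illustrative model, not the PR's actual implementation; all names (`TokenBucket`, `DualBucket`, `TryRelay`, etc.) are hypothetical, and the outbound refill multiplier is shown only as a comment:

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical sketch of one token bucket: refills at a fixed rate up to a
// capacity, and relay consumes tokens from it.
struct TokenBucket {
    double tokens;    // currently available tokens
    double capacity;  // maximum tokens (burst allowance)
    double rate;      // tokens added per second

    void Refill(double elapsed_secs) {
        // Outbound peers would refill 2.5x faster; inbound rate shown here.
        tokens = std::min(capacity, tokens + rate * elapsed_secs);
    }
};

// A transaction relays immediately only if BOTH buckets have room;
// otherwise it would sit in the global backlog until they refill.
struct DualBucket {
    TokenBucket count{420.0, 420.0, 14.0};   // 14 tx/s, 420 capacity (30s buffer)
    TokenBucket size{50e6, 50e6, 20000.0};   // 20 kB/s, 50 MB capacity

    bool TryRelay(double tx_bytes) {
        if (count.tokens < 1.0 || size.tokens < tx_bytes) return false;
        count.tokens -= 1.0;
        size.tokens -= tx_bytes;
        return true;
    }
    void Advance(double secs) {
        count.Refill(secs);
        size.Refill(secs);
    }
};
```

For typical small transactions the count bucket binds first: a burst of 500 transactions of ~250 bytes exhausts the 420-token count bucket while barely touching the 50 MB size bucket, after which relay proceeds at the 14 tx/s refill rate.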
Per-peer queues are retained solely for privacy batching and are always fully emptied, removing the old INVENTORY_BROADCAST_MAX cap.
When the queuing logic is engaged during transaction spikes, this reduces the memory and CPU burden from O(queue * peers) to O(queue), since queued transactions no longer need to be retained or re-sorted per peer.
Design discussion: https://gist.github.com/ajtowns/d61bea974a07190fa6c6c8eaef3638b9