Exponentially-decaying tx inventory queue #33449

pull sipa wants to merge 1 commit into bitcoin:master from sipa:202509_exp_decay_inv_queue changing 1 file +53 −9
  1. sipa commented at 11:44 am on September 21, 2025: member

    Change the number of transactions sent per transaction INV trickle to:

    INVENTORY_BROADCAST_TARGET + queue.size() * (1 - exp(-time_in_queue / INVENTORY_AVG_TIME_IN_QUEUE))

    where:

    • INVENTORY_BROADCAST_TARGET is the current INVENTORY_BROADCAST_PER_SECOND-derived constant 70.
    • INVENTORY_AVG_TIME_IN_QUEUE is a new constant (60s).
    • time_in_queue is the time since the last INV trickle.

    The second term here implements an exponentially-decaying queue size with time constant INVENTORY_AVG_TIME_IN_QUEUE, so under high constant load, the queue size will converge to however many transactions arrived within that time (60s).

    To account for the fact that only an integral number of inventories can be sent, time_in_queue is adjusted so that it corresponds to the time since the moment at which the actual (integral) number of transactions sent in the previous trickle would have matched the formula exactly.
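For illustration, the per-trickle send count implied by the formula can be sketched as follows. This is a Python sketch, not the PR's C++; the constants are from the description above, but the min/ceil handling of the integral count is an assumption:

```python
import math

# Constants from the PR description.
INVENTORY_BROADCAST_TARGET = 70     # baseline txs per trickle
INVENTORY_AVG_TIME_IN_QUEUE = 60.0  # seconds (time constant of the decay)

def invs_to_send(queue_size: int, time_in_queue: float) -> int:
    """Target number of inventory items to announce at this trickle.

    The second term drains a fraction (1 - exp(-t/A)) of the queue,
    which makes the queue size decay exponentially under constant load.
    """
    decay = 1.0 - math.exp(-time_in_queue / INVENTORY_AVG_TIME_IN_QUEUE)
    target = INVENTORY_BROADCAST_TARGET + queue_size * decay
    # Only an integral number of invs can go out (rounding is assumed here).
    return min(queue_size, math.ceil(target))
```

For example, a 1000-entry queue with a 60 s gap since the last trickle yields min(1000, ceil(70 + 1000·(1 − e⁻¹))) = 703 invs.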

  2. DrahtBot commented at 11:44 am on September 21, 2025: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage & Benchmarks

    For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33449.

    Reviews

    See the guideline for information on the review process. A summary of reviews will appear here.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #30277 ([DO NOT MERGE] Erlay: bandwidth-efficient transaction relay protocol (Full implementation) by sr-gi)
    • #30116 (p2p: Fill reconciliation sets (Erlay) attempt 2 by sr-gi)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  3. sipa renamed this:
    txgraph: exponentially-decaying tx inventory queue
    Exponentially-decaying tx inventory queue
    on Sep 21, 2025
  4. net_processing: exponentially-decaying tx inventory queue
    Change the number of transactions sent per transaction INV trickle to:
    
      INVENTORY_BROADCAST_TARGET + queue.size() * (1 - exp(-time_in_queue / INVENTORY_AVG_TIME_IN_QUEUE))
    
    where:
    * INVENTORY_BROADCAST_TARGET is the current INVENTORY_BROADCAST_PER_SECOND-derived constant 70.
    * INVENTORY_AVG_TIME_IN_QUEUE is a new constant (60s).
    * time_in_queue is the time since the last INV trickle.
    
    The second term here implements an exponentially-decaying queue size with time
    constant INVENTORY_AVG_TIME_IN_QUEUE, so under high constant load, the queue size
    will converge to however many transactions arrived within that time (60s).
    
    To account for the fact that only an integral number of inventories can be sent, time_in_queue
    is adjusted so that it corresponds to the time since the moment at which the actual (integral)
    number of transactions sent in the previous trickle would have matched the formula exactly.
    d1af551e4a
  5. sipa force-pushed on Sep 21, 2025
  6. ajtowns commented at 2:25 am on September 22, 2025: contributor

    It’s not the queue that’s exponentially decaying, but the inv queue drain rate tracking the relay rate with an exponential decay? shrug

    The previous PR, #33448, may be helpful for observing the behaviour of this PR in the wild. I use the following command occasionally:

    bitcoin-cli getpeerinfo | jq -j '.[] | (.inv_to_send, " ", .last_inv_sequence, " ", .connection_type, " ", .id, " ", .relaytxes, " ", .synced_blocks, " ", (.network|sub("not_publicly_routable";"onion")), " inv-rx:", ((.bytesrecv_per_msg.inv//0)/1024|floor), "kB inv-tx:", ((.bytessent_per_msg.inv//0)/1024|floor), "kB gd-rx:", ((.bytesrecv_per_msg.getdata//0)/1024|floor), "kB nf-tx:", ((.bytessent_per_msg.notfound//0)/1024|floor), "kB blktxn:", ((.bytessent_per_msg.blocktxn//0)/1024|floor), "kB minfee:", (.minfeefilter*1e6|floor)/10, "s/vb ", (.subver|gsub(" |(^$)"; "_")), " ", .addr, "\n")' | sort -rn | column -t
    

    I believe the behaviour we would ideally like here is for peers’ inv queues to be fairly constrained (peaking at e.g. 500-2000; values closer to 20,000 or 50,000 are likely to cause observable performance problems). The inv queue is already constrained by the size of the mempool, so extreme cases will only be observable with large, non-empty mempools.

    But equally we’d like peers’ INV messages to be fairly stable in size; since the network can only confirm 7-15 txs/second, there’s not much value in relaying more txs than that on an ongoing basis, as many of the txs will simply expire or be rbf’ed before being confirmed. So having a non-zero queue to smooth out the number of txs we relay per second is helpful – even delaying relay by a short period may be enough time for that tx to be rbf’ed and never need relaying by most of the network (eg, when a public auction is occurring and people are submitting bids by rbf’ing their own bids or other bids, eg for airdrop mints and Babylon staking).

    I think for monitoring the latter, it might be good to add a per-peer map from txs-per-inv to number-of-invs and see how that behaves on mainnet over an extended period?

  7. sipa commented at 2:34 am on September 22, 2025: member

    @ajtowns What I meant is actually the queue itself whose size decays exponentially, though only in a fairly constrained setting.

    I believe the following statement holds:

    If at time $T$ the size of the queue is $N$, and after that, no more than $B$ inv elements get added to the queue per trickle, then after any trickle occurring at time $T+t$, the size of the queue is bounded by $\left\lceil N \exp(-t/A) \right\rceil$, regardless of how many trickles happened in between, or when those happened.

    (with $B$ = INVENTORY_BROADCAST_TARGET and $A$ = INVENTORY_AVG_TIME_IN_QUEUE.)
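The claimed bound can be checked numerically with a small sketch (Python rather than the PR's C++). It assumes the adversarial worst case of exactly $B$ arrivals before every trickle, and exponentially distributed trickle gaps with a 5 s mean; the queue is measured immediately after each trickle sends:

```python
import math
import random

A = 60.0  # INVENTORY_AVG_TIME_IN_QUEUE (seconds)
B = 70    # INVENTORY_BROADCAST_TARGET

def max_bound_ratio(n_start: float, total_time: float, seed: int = 1) -> float:
    """Simulate trickles and return the worst observed ratio of the
    post-trickle queue size to the claimed bound n_start * exp(-t/A)."""
    rng = random.Random(seed)
    queue, t, worst = float(n_start), 0.0, 0.0
    while t < total_time:
        gap = rng.expovariate(1.0 / 5.0)  # Poisson trickle timing, 5 s mean
        t += gap
        queue += B  # worst case: B new invs arrived since the last trickle
        sent = min(queue, B + queue * (1.0 - math.exp(-gap / A)))
        queue -= sent
        worst = max(worst, queue / (n_start * math.exp(-t / A)))
    return worst
```

In this continuous (un-rounded) model the ratio never exceeds 1 for any trickle schedule, consistent with the statement; the ceiling in the stated bound only absorbs the rounding to integral send counts.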

  8. ajtowns commented at 5:58 am on September 22, 2025: contributor

    Oh! I hadn’t noticed this was a different formula to what (I thought) we were discussing earlier.

    I think our exponential-based inv timing implies that 1% of invs will be at least 23s after the previous inv, and the formula above will mean we immediately send 32% of our current queue at that point, which seems excessive to me? Compared to the previous sim I ran, I get 45% of the max queue size (2045 vs 4500), but 270%-580% of the max inv size.
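The two numbers in the comment above can be reproduced from the constants (a Python sketch; it assumes a 5 s average exponentially distributed trickle interval for inbound peers, and the 60 s time constant from the PR):

```python
import math

AVG_TRICKLE_INTERVAL = 5.0           # seconds, assumed inbound trickle mean
INVENTORY_AVG_TIME_IN_QUEUE = 60.0   # seconds, from the PR

# Gap length exceeded by 1% of exponentially distributed trickle intervals:
# P(gap > t) = exp(-t/5) = 0.01  =>  t = 5 * ln(100)
gap_99 = -AVG_TRICKLE_INTERVAL * math.log(0.01)  # about 23 s

# Fraction of the current queue the formula sends after such a gap.
fraction = 1.0 - math.exp(-gap_99 / INVENTORY_AVG_TIME_IN_QUEUE)  # about 0.32
```

That is, roughly 1% of trickles follow a gap of about 23 s, at which point the formula drains about 32% of the queue in a single INV.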

  9. sipa commented at 10:45 am on September 22, 2025: member
    Needs more thought, closing for now.
  10. sipa closed this on Sep 22, 2025


github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2025-09-26 15:13 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me