p2p: Replace per-peer transaction rate-limiting with global rate limits #34628

pull ajtowns wants to merge 11 commits into bitcoin:master from ajtowns:202602-mempool-invtosend changing 14 files +638 −107
  1. ajtowns commented at 3:09 AM on February 20, 2026: contributor

    Per-peer m_tx_inventory_to_send queues have CPU and memory costs that scale with both queue size and peer count. Under high transaction volume, this has previously caused severe issues (May 2023 disclosure) and still can cause measurable delays (Feb 2026 Runestone surge, with the msghand thread observed hitting 100% CPU and queue memory reaching ~95MB).

    This PR replaces the per-peer rate limiting with a global queue using dual token buckets (limiting transactions by both count and serialized size). Transactions that arrive within the bucket capacity still relay nearly immediately, but excess transactions queue in a global backlog and drain as the token buckets refill.

    Key parameters:

    • Count bucket: 14 tx/s, 420 capacity (30s buffer)
    • Size bucket: 20 kB/s (~12 MB/600s), 50 MB capacity
    • Outbound peers refill faster by a factor of 2.5

    Per-peer queues are retained solely for privacy batching and are always fully emptied, removing the old INVENTORY_BROADCAST_MAX cap.

    This reduces the memory and CPU burden during transaction spikes, when the queuing logic is engaged, from O(queue * peers) to O(queue), as the queued transactions no longer need to be retained or re-sorted per-peer.

    Design discussion: https://gist.github.com/ajtowns/d61bea974a07190fa6c6c8eaef3638b9
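The dual-bucket gating described above can be sketched as follows (a minimal illustration in the spirit of the PR, not its actual TokenBucket/InvToSendBucket code; all names here are invented):

```cpp
#include <algorithm>
#include <cassert>

// Minimal sketch of the dual token-bucket idea: a transaction relays
// immediately only if both the count bucket and the size bucket can pay
// for it; otherwise it joins the global backlog and is drained later as
// the buckets refill. Illustrative only -- not the PR's actual API.
struct SimpleBucket {
    double value;  // current token balance
    double cap;    // maximum balance
    double rate;   // tokens added per second

    void refill(double elapsed_seconds)
    {
        value = std::min(cap, value + rate * elapsed_seconds);
    }
};

// Returns true (and consumes tokens) if the tx may relay immediately.
bool TryRelay(SimpleBucket& count_b, SimpleBucket& size_b, double tx_bytes)
{
    if (count_b.value < 1.0 || size_b.value < tx_bytes) return false;
    count_b.value -= 1.0;
    size_b.value -= tx_bytes;
    return true;
}
```

With the PR's defaults this corresponds roughly to a count bucket of 420 capacity refilled at 14/s and a size bucket of 50 MB capacity refilled at 20 kB/s, with outbound peers using a separate pair refilled 2.5x faster.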

  2. DrahtBot added the label P2P on Feb 20, 2026
  3. DrahtBot commented at 3:10 AM on February 20, 2026: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage & Benchmarks

    For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/34628.

    Reviews

    See the guideline for information on the review process.

    Type Reviewers
    Concept ACK 0xB10C, polespinasa, instagibbs

    If your review is incorrectly listed, please copy-paste `<!--meta-tag:bot-skip-->` into the comment that the bot should ignore.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #35252 (net: send decoy transactions via private broadcast by andrewtoth)
    • #35016 (net: deduplicate private broadcast state and snapshot types by takeshikurosawaa)
    • #34824 (net: refactor: replace Peer::TxRelay RecursiveMutex instances with Mutex by w0xlt)
    • #34271 (net_processing: make m_tx_for_private_broadcast optional by vasild)
    • #31260 (scripted-diff: Type-safe settings retrieval by ryanofsky)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  4. DrahtBot added the label CI failed on Feb 20, 2026
  5. ajtowns force-pushed on Feb 20, 2026
  6. ajtowns commented at 12:43 PM on February 20, 2026: contributor

    The CI failure is presumably either #34631 or #34387.

  7. DrahtBot removed the label CI failed on Feb 24, 2026
  8. in src/util/tokenbucket.h:57 in 869a1ae012 outdated
      52 | +    }
      53 | +
      54 | +    /** Consume n tokens. Returns false if the balance dropped below m_max_debt. */
      55 | +    bool decrement(double n = 1.0)
      56 | +    {
      57 | +        m_value -= n;
    


    chriszeng1010 commented at 5:33 PM on March 2, 2026:

    Decrement can still go below m_max_debt before checking is complete.


    ajtowns commented at 12:57 PM on March 4, 2026:

    decrement() can always go below m_max_debt, it only reports when it has done so -- it leaves it up to the caller to not go further into debt.
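The consume-then-report semantics described here can be sketched as a standalone illustration (hypothetical names, not the PR's exact tokenbucket.h code):

```cpp
#include <cassert>

// Sketch of the decrement() semantics discussed above: the balance is
// always reduced, and the return value only reports whether it has now
// dropped below the allowed debt floor -- it is the caller's job to stop
// issuing further decrements. Illustrative, not the PR's exact code.
struct DebtBucket {
    double m_value{0.0};
    double m_max_debt{-50e3};

    // Consume n tokens. Returns false if the balance dropped below m_max_debt.
    bool decrement(double n = 1.0)
    {
        m_value -= n;
        return m_value >= m_max_debt;
    }
};
```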

  9. DrahtBot added the label Needs rebase on Mar 11, 2026
  10. ajtowns force-pushed on Mar 12, 2026
  11. DrahtBot removed the label Needs rebase on Mar 12, 2026
  12. 0xB10C commented at 9:45 AM on March 12, 2026: contributor

    Concept ACK!

    I've been running this for a few days now and have written down a few observations on a small mass-broadcast event that happened a few hours ago: https://bnoc.xyz/t/increased-b-msghand-thread-utilization-due-to-runestone-transactions-on-2026-02-17/81/11

    The node with this patch was significantly less affected than the others running a recent master.

    I haven't set up any monitoring for the newly added getnetworkinfo fields yet.

  13. DrahtBot added the label CI failed on Mar 12, 2026
  14. DrahtBot closed this on Mar 12, 2026

  15. DrahtBot reopened this on Mar 12, 2026

  16. ajtowns added this to the milestone 32.0 on Mar 12, 2026
  17. DrahtBot removed the label CI failed on Mar 12, 2026
  18. in src/txmempool.cpp:541 in 344de4b8dd outdated
     537 | @@ -538,6 +538,55 @@ void CTxMemPool::check(const CCoinsViewCache& active_coins_tip, int64_t spendhei
     538 |          for (const auto& input: tx.vin) mempoolDuplicate.SpendCoin(input.prevout);
     539 |          AddCoins(mempoolDuplicate, tx, std::numeric_limits<int>::max());
     540 |      }
     541 | +
    


    sipa commented at 6:22 PM on March 20, 2026:

    In commit "txmempool: Add SortMiningScoreWithTopology"

    This feels more like something for a fuzz or unit test. CTxMemPool::check is for internal consistency checks in the CTxMemPool representation, I feel.


    ajtowns commented at 1:20 AM on March 21, 2026:

    .... I might have spent too much time vibecoding and caught hallucinations? I could have sworn I was replacing existing code here. EDIT: Dropped this code.

  19. in src/txmempool.cpp:605 in 344de4b8dd outdated
     601 | @@ -553,6 +602,27 @@ void CTxMemPool::check(const CCoinsViewCache& active_coins_tip, int64_t spendhei
     602 |      assert(innerUsage == cachedInnerUsage);
     603 |  }
     604 |  
     605 | +std::vector<CTxMemPool::txiter> CTxMemPool::SortMiningScoreWithTopology(std::span<const Wtxid> wtxids, size_t n) const
    


    sipa commented at 6:46 PM on March 20, 2026:

    It looks like both eventual production call sites of this function (BumpInvVecForProcessing and PeerManagerImpl::SendMessages) do a deduplication pass on the results.

    Would it make sense to do this on the fly inside this function? It can't use std::partial_sort anymore, but it can use std::make_heap and friends to implement partial sorting, with a dynamic end point until n distinct elements have been found? Something like

    std::vector<CTxMemPool::txiter> CTxMemPool::SortMiningScoreWithTopology(std::span<const Wtxid> wtxids, size_t n) const
    {
        auto cmp = [&](const auto& a, const auto& b) EXCLUSIVE_LOCKS_REQUIRED(cs) noexcept { return m_txgraph->CompareMainOrder(*a, *b) > 0; };
    
        std::vector<txiter> res;
    
        n = std::min(wtxids.size(), n);
        if (n > 0) {
            // Construct a heap with txiters for all wtxids that exist in the mempool.
            std::vector<txiter> heap;
            heap.reserve(wtxids.size());
            for (auto& wtxid : wtxids) {
                if (auto i{GetIter(wtxid)}; i.has_value()) {
                    heap.push_back(i.value());
                }
            }
            std::ranges::make_heap(heap, cmp);
    
            // Pop transactions until n distinct ones in res have been found.
            res.reserve(heap.size());
            while (res.size() < n && !heap.empty()) {
                std::ranges::pop_heap(heap, cmp);
                if (res.empty() || heap.back() != res.back()) {
                    res.push_back(heap.back());
                }
                heap.pop_back();
            }
    
            // Copy the remainder over, without sorting or deduplication.
            res.insert(res.end(), heap.begin(), heap.end());
        }
    
        return res;
    }
    

    With even more low-level code the duplicate vector can be avoided, I think. Tests don't pass with this, I haven't investigated why.


    ajtowns commented at 1:14 AM on March 21, 2026:

    Without having looked, is the comparison backwards?

    My understanding is partial_sort has two benefits:

    • it only makes a heap out of the target size, so iterates through the source array once and then does log(k) work for each element, with better locality
    • when updating the k elements in the heap with a new element from the source, it does the sift-down algorithm which is more efficient than heap_push()/heap_pop(), but isn't exposed via the STL so would mean writing your own heap implementation

    Deduping the fairly small output list as you pass through it, when duplicates are rare anyway, seemed fine to me?


    sipa commented at 2:20 PM on March 21, 2026:

    Without having looked, is the comparison backwards?

    I don't think so. It's a max heap, but I want to pop the "lowest" elements off first, so I needed to swap the comparator I think.

    My understanding is partial_sort has two benefits:

    Interesting. So it has complexity O(n log m) (with n = number of elements, m = sorted prefix size), while the approach I have in mind is O(n + m log n): O(n) to construct the heap of all elements, and then m operations of O(log n) each to extract the best m elements. Complexity-wise, my approach seems better, since n > m, except it has worse memory locality. This makes me wonder if I'm missing something, since the cppreference.com documentation seems to imply std::partial_sort is intended for low m values.

    Deduping the fairly small output list as you pass through it, when duplicates are rare anyway, seemed fine to me?

    Yeah, it probably doesn't matter that much. It just looked like the deduplication is something that CTxMemPool::SortMiningScoreWithTopology could do internally since both call sites need it anyway. And then it seemed possible to have the count be dynamic, but only when using a different approach than what std::partial_sort seems to enable.


    ajtowns commented at 12:58 AM on March 22, 2026:

    Complexity-wise, my approach seems better, since n > m, except it has worse memory locality.

    Yeah. I think the ratio between the number of swaps each approach performs in the worst case is log2(m) : 2 -- if you give the input in exactly the wrong order, each element goes to the top of the heap with log(m) steps, whereas for the full heap it adds up to 2. So for m ~= 100, that's 3.3x more swaps, but the swaps are contained to a set of 100 elements, which might get you more than a 3.3x speedup per swap due to locality? (In the average case, for most elements you'll just compare to the top-of-heap element, find it's worse and do 0 swaps, and overall it reduces from O(n log(m) + m log(m)) to O(n + m log(m)), which is better than the O(n + m log(n)) from the full heap.)

    Oh, hmm; in the per-peer thread we're always taking everything, so I think there we should be explicitly using std::sort then and not any of this partial business anyway (pushed). That should ensure that the m sizes we're using in practice are always about 70 (-txsendrate times inbound broadcast interval).
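For illustration, the two strategies being compared can be sketched on plain ints instead of mempool iterators (hypothetical helper names; both return the m smallest elements in ascending order, without the deduplication the real code needs):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// std::partial_sort: maintains a heap of only m elements, O(n log m),
// with the better locality discussed above.
std::vector<int> TopMPartialSort(std::vector<int> v, size_t m)
{
    m = std::min(m, v.size());
    std::partial_sort(v.begin(), v.begin() + m, v.end());
    v.resize(m);
    return v;
}

// Full heap: heapify all n elements in O(n), then pop m times, O(m log n).
std::vector<int> TopMFullHeap(std::vector<int> v, size_t m)
{
    m = std::min(m, v.size());
    // std::make_heap builds a max-heap under the comparator, so invert it
    // to pop the smallest elements first (the comparator swap sipa mentions).
    auto cmp = [](int a, int b) { return a > b; };
    std::make_heap(v.begin(), v.end(), cmp);
    std::vector<int> res;
    res.reserve(m);
    while (res.size() < m && !v.empty()) {
        std::pop_heap(v.begin(), v.end(), cmp);  // smallest moves to back
        res.push_back(v.back());
        v.pop_back();
    }
    return res;
}
```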

  20. in src/txmempool.cpp:483 in ea647debfd
     480 | @@ -481,10 +481,10 @@ void CTxMemPool::check(const CCoinsViewCache& active_coins_tip, int64_t spendhei
     481 |          const CTransaction& tx = it->GetTx();
     482 |  
     483 |          // CompareMiningScoreWithTopology should agree with GetSortedScoreWithTopology()
    


    sipa commented at 7:46 PM on March 20, 2026:

    In commit "txmempool: Drop CompareMiningScoreWithTopology"

    Comment is outdated now.

  21. in src/util/tokenbucket.h:16 in 859e8eb020
      11 | +
      12 | +/** A token bucket rate limiter.
      13 | + *
      14 | + * Tokens are added at a steady rate (m_rate per second) up to a capacity
      15 | + * cap (m_cap). Tokens are removed by calling decrement(). The balance
      16 | + * may go negative down to m_max_debt; decrement() returns false when
    


    sipa commented at 7:54 PM on March 20, 2026:

    In commit "util/tokenbucket.h: Provide a generic TokenBucket class"

    Is it useful to support debt? I believe it can be avoided by a transformation that raises both m_value and m_cap by -m_max_debt.


    ajtowns commented at 4:04 AM on March 21, 2026:

    The distinction is in InvToSendBucket::avail() which says "you can start doing stuff as long as the size_bucket's value is >=0", which would have to get incremented as well to be equivalent.

    The main effect is that when the size bucket is under pressure, you get a chance to relay at least 50kB each iteration, rather than the avail test passing as soon as you can relay 1B, then the loop ending immediately after you relay the first tx.

    I think it can be simplified a bit by moving the max_debt value to just being a parameter of decrement() though. Will update.

  22. ajtowns force-pushed on Mar 21, 2026
  23. ajtowns force-pushed on Mar 22, 2026
  24. ajtowns force-pushed on Mar 22, 2026
  25. DrahtBot added the label CI failed on Mar 22, 2026
  26. DrahtBot removed the label CI failed on Mar 22, 2026
  27. in src/net_processing.cpp:6190 in ba3a81d036 outdated
    6187 | @@ -6028,63 +6188,56 @@ bool PeerManagerImpl::SendMessages(CNode& node)
    6188 |                  // Determine transactions to relay
    6189 |                  if (fSendTrickle) {
    6190 |                      // Produce a vector with all candidates for sending
    


    xyzconstant commented at 3:04 AM on April 15, 2026:

    In commit: "net_processing: Change m_tx_inventory_to_send from set to vector" (2998d73f692059f149fa2d5a4108b172b21c8cac)

    nit: Forgot to remove this comment as well?


    ajtowns commented at 4:11 AM on April 15, 2026:

    No? inv_tx is a vector with all candidates for sending in the new code.


    xyzconstant commented at 4:33 AM on April 15, 2026:

    Yeah, you're right, but now the filterrate definition sits in between:

        // Produce a vector with all candidates for sending
        const CFeeRate filterrate{tx_relay->m_fee_filter_received.load()};
    
        // Topologically and fee-rate sort the inventory we send for privacy and priority reasons.
        // sorted from lowest priority to highest, skipping low fee
        auto inv_tx = [&]() EXCLUSIVE_LOCKS_REQUIRED(tx_relay->m_tx_inventory_mutex) {
    

    ajtowns commented at 4:32 AM on April 26, 2026:

    Tweaked the comments a bit

  28. DrahtBot added the label Needs rebase on Apr 23, 2026
  29. ajtowns force-pushed on Apr 23, 2026
  30. DrahtBot removed the label Needs rebase on Apr 23, 2026
  31. in src/node/transaction.cpp:64 in cafa97202e outdated
      59 | @@ -60,15 +60,15 @@ TransactionError BroadcastTransaction(NodeContext& node,
      60 |              if (!existingCoin.IsSpent()) return TransactionError::ALREADY_IN_UTXO_SET;
      61 |          }
      62 |  
      63 | -        if (auto mempool_tx = node.mempool->get(txid); mempool_tx) {
      64 | +        mempool_tx = node.mempool->get(txid);
      65 | +        if (mempool_tx) {
    


    polespinasa commented at 11:38 AM on April 24, 2026:

    In cafa97202e95df278153369ccda9c1bb61880a5e I don't think we should have an if statement without any logic inside. Why not just `if (!mempool_tx)` and do what is inside the else block?


    ajtowns commented at 3:05 AM on April 26, 2026:

    The if block provides a place for the detailed comments on what happens when the tx is already in the mempool.

  32. in src/net_processing.cpp:174 in 466dccc89c
     175 | -static constexpr unsigned int INVENTORY_BROADCAST_TARGET = INVENTORY_BROADCAST_PER_SECOND * count_seconds(INBOUND_INVENTORY_BROADCAST_INTERVAL);
     176 | -/** Maximum number of inventory items to send per transmission. */
     177 | -static constexpr unsigned int INVENTORY_BROADCAST_MAX = 1000;
     178 | -static_assert(INVENTORY_BROADCAST_MAX >= INVENTORY_BROADCAST_TARGET, "INVENTORY_BROADCAST_MAX too low");
     179 | -static_assert(INVENTORY_BROADCAST_MAX <= node::MAX_PEER_TX_ANNOUNCEMENTS, "INVENTORY_BROADCAST_MAX too high");
     180 | +// static constexpr unsigned int INVENTORY_BROADCAST_TARGET = INVENTORY_BROADCAST_PER_SECOND * count_seconds(INBOUND_INVENTORY_BROADCAST_INTERVAL);
    


    polespinasa commented at 3:49 PM on April 24, 2026:

    In 466dccc89c5748c7f4ccfb10b2079c2eb54589c5 you can delete these two lines that are commented out and then deleted in 0491df8e7876b9b2b86e25190823f3ff632311c6.


    ajtowns commented at 3:10 AM on April 26, 2026:

    They're commented in the earlier commit then uncommented (not deleted) in the later commit. Commenting them makes it easy to see how they're (not) changed, while still allowing the in-between commits to compile without hitting "unused variable" warnings/errors.

  33. in src/txmempool.cpp:541 in 798db790a8
     537 | @@ -538,6 +538,7 @@ void CTxMemPool::check(const CCoinsViewCache& active_coins_tip, int64_t spendhei
     538 |          for (const auto& input: tx.vin) mempoolDuplicate.SpendCoin(input.prevout);
     539 |          AddCoins(mempoolDuplicate, tx, std::numeric_limits<int>::max());
     540 |      }
     541 | +
    


    polespinasa commented at 3:57 PM on April 24, 2026:

    in 798db790a8a5b9c6b11b3e8599cb88f9d6015d87 nit: random empty line added here

  34. in src/txmempool.cpp:567 in 798db790a8 outdated
     562 | +
     563 | +    n = std::min(wtxids.size(), n);
     564 | +    if (n > 0) {
     565 | +        res.reserve(wtxids.size());
     566 | +        for (auto& wtxid : wtxids) {
     567 | +            if (auto i{GetIter(wtxid)}; i.has_value()) {
    


    polespinasa commented at 4:07 PM on April 24, 2026:

    in 798db790a8a5b9c6b11b3e8599cb88f9d6015d87

    nit: I think this can be simplified: if (auto i = GetIter(wtxid)) res.push_back(*i);


    ajtowns commented at 3:13 AM on April 26, 2026:

    The i.has_value() check is to catch if any of the txs have been removed from the mempool and to ensure that all the returned txiters are valid. Having that check be implicit in the if is probably fine, but seems worse to me than writing it explicitly.

  35. in src/txmempool.cpp:572 in 798db790a8 outdated
     567 | +            if (auto i{GetIter(wtxid)}; i.has_value()) {
     568 | +                res.push_back(i.value());
     569 | +            }
     570 | +        }
     571 | +
     572 | +        if (n >= res.size()) {
    


    polespinasa commented at 4:47 PM on April 24, 2026:

    in 798db790a8a5b9c6b11b3e8599cb88f9d6015d87

    I think std::partial_sort gives the same result as std::sort if the mid iterator reaches the end. So the whole if ... else ... block could be replaced with:

    std::partial_sort(res.rbegin(),
                      res.rbegin() + std::min(n, res.size()),
                      res.rend(),
                      cmp);
    

    I think it should work.


    ajtowns commented at 5:14 AM on April 25, 2026:

    std::partial_sort is less efficient than std::sort when sorting the entire container, eg https://stackoverflow.com/questions/45455345/performance-of-stdpartial-sort-versus-stdsort-when-sorting-the-whole-ran

  36. polespinasa commented at 4:51 PM on April 24, 2026: member

    Concept ACK

    Code reviewed up to 798db790a8a5b9c6b11b3e8599cb88f9d6015d87, will continue soon :)

    Left small comments and suggestions but nothing important, feel free to ignore.

  37. DrahtBot added the label Needs rebase on Apr 24, 2026
  38. net_processing: Pass full tx to InitiateTxBroadcastToAll()
    All callers already have the full transaction available, so just pass that
    through. This matches the signature for InitiateTxBroadcastPrivate()
    and prepares for a later commit that needs the full transaction to
    compute its serialized size for rate limiting.
    f940743fac
  39. net_processing: Remove per-peer rate-limiting
    Per-peer rate limiting introduces storage and compute costs proportional
    to the number of peers. This has caused severe bugs in the past, and
    continues to be a risk in the event of periods of extremely high rates
    of transaction submission. Avoid these problems by always completely
    emptying the m_tx_inventory_to_send queue when processing it.
    
    Note that this increases the potential size of INV messages we send
    for normal tx relay from ~1000 (limited by INVENTORY_BROADCAST_MAX)
    to potentially 50000 (limited by MAX_INV_SZ).
    b47c81af3b
  40. txmempool: Add SortMiningScoreWithTopology
    Add a method for sorting a batch of transactions (specified as a vector
    of wtxids) per mempool order, designed for transaction relay.
    3a72379141
  41. net_processing: Change m_tx_inventory_to_send from set to vector
    Change the per-peer tx relay queue from std::set to std::vector. This
    reduces the memory usage and improves locality, at the cost of not
    automatically deduping entries.
    1ace17fc1b
  42. txmempool: Drop CompareMiningScoreWithTopology
    Now unused; replaced by SortMiningScoreWithTopology.
    0fd1773804
  43. ajtowns force-pushed on Apr 26, 2026
  44. DrahtBot removed the label Needs rebase on Apr 26, 2026
  45. ajtowns force-pushed on Apr 26, 2026
  46. DrahtBot added the label CI failed on Apr 26, 2026
  47. DrahtBot removed the label CI failed on Apr 26, 2026
  48. ajtowns commented at 5:28 AM on April 26, 2026: contributor

    Rebased past #35097, addressed review comments

  49. in src/util/tokenbucket.h:43 in f632171cca
      38 | +    /** Refill tokens based on elapsed time since last call. No refill
      39 | +     *  occurs on the first call (establishes the time baseline). */
      40 | +    void increment(const time_point& now)
      41 | +    {
      42 | +        if (now > m_last_updated) {
      43 | +            if (m_value < m_cap && m_last_updated.time_since_epoch().count() > 0) {
    


    polespinasa commented at 2:53 PM on April 27, 2026:

    In f632171 I think this is a slightly clearer way to say "this is not the first call", since it checks whether m_last_updated has been initialized or not:

    if (m_value < m_cap && m_last_updated != time_point{}) {...}
    

    ajtowns commented at 6:55 AM on April 28, 2026:

    Yeah, that's nicer. Introduced a MIN_TIME to compare against instead.
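The "no refill on first call" pattern under discussion can be sketched as (illustrative names, not the PR's exact tokenbucket.h code):

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>

// Sketch of the pattern discussed above: a default-constructed
// time_point marks "never updated", so the first increment() only
// records the baseline without adding tokens. Illustrative only.
using Clock = std::chrono::steady_clock;

struct RefillState {
    double m_value{0.0};
    double m_rate{14.0};                 // tokens per second
    double m_cap{420.0};
    Clock::time_point m_last_updated{};  // default value == "uninitialized"

    void increment(Clock::time_point now)
    {
        if (now > m_last_updated) {
            if (m_value < m_cap && m_last_updated != Clock::time_point{}) {
                const std::chrono::duration<double> elapsed = now - m_last_updated;
                m_value = std::min(m_cap, m_value + m_rate * elapsed.count());
            }
            m_last_updated = now;
        }
    }
};
```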

  50. in src/net_processing.cpp:544 in 2a9ab8062b outdated
     539 | +        count_bucket.increment(now);
     540 | +    }
     541 | +
     542 | +    bool decrement(double size)
     543 | +    {
     544 | +        bool x = size_bucket.decrement(size, /*floor=*/-50e3);
    


    polespinasa commented at 3:00 PM on April 27, 2026:

    in 2a9ab8062bcadb5e8128572671bba762101efaa6 why negative floor?


    ajtowns commented at 6:40 AM on April 28, 2026:

    Because the comparison is m_value > floor not m_value > -floor


    polespinasa commented at 11:13 AM on April 28, 2026:

    Sorry, maybe the question was not clear. I mean, why negative? Why can it go below 0 in the first place?


    ajtowns commented at 3:23 PM on April 28, 2026:

    So that if/when outgoing txs are bandwidth constrained (ie your bucket is often empty) you send ~50kB of tx data in each burst, rather than just the single highest priority tx in the backlog (since you'll immediately process the backlog when your size bucket goes above zero, even by just one byte).
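The batching effect being described can be illustrated numerically (a hypothetical simplification of the avail()/decrement interaction, not the PR's code):

```cpp
#include <cassert>
#include <cstddef>

// Sketch of why the floor is negative: relay proceeds while the size
// bucket's balance is above `floor`. With floor == 0, a bucket that has
// just crept above zero relays a single tx before stopping; with
// floor == -50e3, the same wakeup drains a ~50 kB batch. Illustrative only.
size_t TxsRelayedPerWakeup(double bucket_value, double floor, double tx_size)
{
    size_t sent = 0;
    while (bucket_value > floor) {
        bucket_value -= tx_size;  // consume size tokens for one tx
        ++sent;
    }
    return sent;
}
```

For example, with 500-byte txs and a balance barely above zero, a floor of 0 sends one tx per wakeup, while a floor of -50e3 sends roughly a hundred.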

  51. polespinasa commented at 3:12 PM on April 27, 2026: member

    Code reviewed up to 54bbc5649c65395eadbff7f237359ad33a6862e5.

    Probably this needs a release note for -txsendrate and the new info in getnetworkinfo.

    Left a small comment and a question

  52. fanquake added the label Needs release note on Apr 27, 2026
  53. util/tokenbucket.h: Provide a generic TokenBucket class
    This is a simple token bucket parameterized on clock type, used in the
    following commit.
    c4df8d78cc
  54. net_processing: add a global delay queue for sending txs
    Without the per-peer rate limiting, nodes can act as an amplifier for
    transaction spam -- receiving many transactions from one node, but
    relaying each of them to over 100 other nodes. Limit the impact of this
    by providing a global rate limit.
    
    This is implemented using dual token buckets, one that consumes a
    token for every transaction, and one that consumes a token for every
    serialized byte. This rate limits both per-tx resource usage (eg INV
    messages) and overall relay bandwidth.
    
    Main bucket parameters:
     * Count: 14tx/s rate, 420tx (30s) capacity
     * Size: 12MB/600s rate (4-6 blocks per target block interval), 50MB capacity
    
    The size bucket is expected to be large enough to almost never have an
    impact in normal usage, even during transaction storms, and is primarily
    intended to mitigate attack-like scenarios.
    
    Outbound connections get a separate pair of buckets, with rates boosted
    by a 2.5x multiplier.
    
    This avoids the excessive memory and CPU usage due to the 100x multiplier
    from the queues being per-peer.
    
    Note that this also reduces the size of INV messages we send for general
    tx relay back to a more reasonable level of under 600 txs in 99.999%
    of cases.
    b1f4efbc2e
  55. init: add -txsendrate configuration parameter
    Adds a debug-only configuration option to set the target
    transaction/second rate for relay to inbound connections. This is mostly
    intended to be set to artificially low values to aid in testing behaviour
    when a backlog occurs, but is also available in case the default 14tx/s
    target is somehow too low in practice.
    54748fd4ed
  56. rpc: report -txsendrate and bucket info via getnetworkinfo
    Add `tx_send_rate` and `inv_buckets` fields to getnetworkinfo. The latter
    has `inbound` and `outbound` entries, each reporting the backlog count,
    count tokens, and size tokens. Useful for monitoring relay behavior.
    93ac263d0e
  57. tests: basic functional test for tx rate limiting fd16ded426
  58. doc: Add release note for -txsendrate etc 3fc10abce7
  59. ajtowns force-pushed on Apr 28, 2026
  60. ajtowns commented at 6:57 AM on April 28, 2026: contributor

    Added a release note. Are we meant to remove the "Needs release note" label when there's a release note included in the PR, or is it more like an "I need to breathe, and I am breathing" arrangement where the latter doesn't negate the former?

  61. polespinasa commented at 7:14 AM on April 28, 2026: member

    Added a release note. Are we meant to remove the "Needs release note" label when there's a release note included in the PR, or is it more like an "I need to breathe, and I am breathing" arrangement where the latter doesn't negate the former?

    I think there's no policy on that; I've seen both cases in the past: #31278 got it and then had it removed, while #32138 got it and the label was never removed even though the note was there.

    IMHO it's good to keep it as a reminder in case the release note is dropped by mistake at some point, so reviewers can notice that something is missing.

  62. maflcko removed the label Needs release note on Apr 28, 2026
  63. polespinasa commented at 11:18 AM on April 28, 2026: member

    How can I help test this? @0xB10C are you using a patch to measure it, or do you catch inv_to_send just by enabling the net debug flag?

  64. 0xB10C commented at 3:59 PM on April 29, 2026: contributor

    @0xB10C are you using a patch to measure it or just by enabling debug and net flag you catch inv_to_send

    No, I've been running this PR. The measurements described on https://bnoc.xyz/t/increased-b-msghand-thread-utilization-due-to-runestone-transactions-on-2026-02-17/81/11 were done collecting data from a few different interfaces with peer-observer:

    • inv-to-send set sizes across multiple nodes via the getpeerinfo RPC
    • time spent in b-msghand thread via a prometheus process-exporter
    • the localhost ping-pong duration, via a custom p2p client that measures the time it takes for the node to respond. This measures the message backlog
    • the size of the INVs the node sends us, also with a custom P2P client on localhost that listens for INVs from the node.

    Not sure if this helps much.

  65. instagibbs commented at 8:11 AM on May 5, 2026: member

    concept ACK, will review

  66. in src/node/transaction.cpp:133 in f940743fac
     129 | @@ -130,7 +130,7 @@ TransactionError BroadcastTransaction(NodeContext& node,
     130 |      case TxBroadcast::MEMPOOL_NO_BROADCAST:
     131 |          break;
     132 |      case TxBroadcast::MEMPOOL_AND_BROADCAST_TO_ALL:
     133 | -        node.peerman->InitiateTxBroadcastToAll(txid, wtxid);
     134 | +        node.peerman->InitiateTxBroadcastToAll(mempool_tx ? mempool_tx : tx);
    


    instagibbs commented at 2:26 PM on May 12, 2026:

    f940743fac03f27d8cf3c9f2d8a0dd5ba36209bd

    Why not just tx unconditionally?


    polespinasa commented at 7:29 PM on May 14, 2026:

    I think it's because we might have a tx in the mempool with the same txid but a different witness. So we would be announcing a tx that we don't have in our mempool, because it would conflict with our version of the tx.


    instagibbs commented at 8:26 PM on May 14, 2026:

    this is extremely non-obvious and should be documented if so


    polespinasa commented at 8:34 PM on May 14, 2026:

    It is :)

    See my other comment: https://github.com/bitcoin/bitcoin/pull/34628/changes/BASE..f940743fac03f27d8cf3c9f2d8a0dd5ba36209bd#r3137421379

    Maybe the comment could be moved, removing the empty if?


    instagibbs commented at 8:37 PM on May 14, 2026:

    resolving, didn't notice the unmoved/unchanged comment in the diff

    edit: github doesn't want to let me

  67. in src/net_processing.cpp:173 in b47c81af3b
     168 | @@ -169,13 +169,9 @@ static constexpr auto INBOUND_INVENTORY_BROADCAST_INTERVAL{5s};
     169 |  static constexpr auto OUTBOUND_INVENTORY_BROADCAST_INTERVAL{2s};
     170 |  /** Maximum rate of inventory items to send per second.
     171 |   *  Limits the impact of low-fee transaction floods. */
     172 | -static constexpr unsigned int INVENTORY_BROADCAST_PER_SECOND{14};
     173 | +// static constexpr unsigned int INVENTORY_BROADCAST_PER_SECOND{14};
     174 |  /** Target number of tx inventory items to send per transmission. */
    


    instagibbs commented at 2:36 PM on May 12, 2026:

    b47c81af3b3e7ae59bc09a3f621ecdf8f3dc62da

    unrelated to PR: appears to also be the target for blocks, not just tx?


    ajtowns commented at 8:24 PM on May 14, 2026:

    No: we do reserve that much space (INVENTORY_BROADCAST_TARGET) for the vector before announcing blocks by inv, but we'll just spam all the blocks we have queued (splitting into new messages as needed).

  68. in src/txmempool.cpp:556 in 3a72379141
     552 | @@ -553,6 +553,31 @@ void CTxMemPool::check(const CCoinsViewCache& active_coins_tip, int64_t spendhei
     553 |      assert(innerUsage == cachedInnerUsage);
     554 |  }
     555 |  
     556 | +std::vector<CTxMemPool::txiter> CTxMemPool::SortMiningScoreWithTopology(std::span<const Wtxid> wtxids, size_t n) const
    


    instagibbs commented at 2:43 PM on May 12, 2026:

    3a7237914178c9aa8018f86ae5e446f263e0c843

    n is overly terse imo, and the help isn't clear to me either. n_best?

  69. in src/net_processing.cpp:304 in 1ace17fc1b
     300 | @@ -301,7 +301,7 @@ struct Peer {
     301 |           *  we retrieve the txid from the corresponding mempool transaction when
     302 |           *  constructing the `inv` message. We use the mempool to sort transactions
     303 |           *  in dependency order before relay, so this does not have to be sorted. */
     304 | -        std::set<Wtxid> m_tx_inventory_to_send GUARDED_BY(m_tx_inventory_mutex);
     305 | +        std::vector<Wtxid> m_tx_inventory_to_send GUARDED_BY(m_tx_inventory_mutex);
    


    instagibbs commented at 2:46 PM on May 12, 2026:

    1ace17fc1bbf265bcb1100e8b025f279223c9da7

    Still being called a set in the help

  70. in src/net_processing.cpp:6049 in 1ace17fc1b
    6068 | +                    }();
    6069 | +                    tx_relay->m_tx_inventory_to_send.clear();
    6070 | +
    6071 | +                    LOCK(tx_relay->m_bloom_filter_mutex);
    6072 | +                    vInv.reserve(std::min<size_t>(MAX_INV_SZ, vInv.size() + inv_tx.size()));
    6073 | +                    while (!inv_tx.empty()) {
    


    instagibbs commented at 3:01 PM on May 12, 2026:

    1ace17fc1bbf265bcb1100e8b025f279223c9da7

    Feels like just iterating it in reverse, without popping anything, would be faster?

      for (auto it = inv_tx.rbegin(); it != inv_tx.rend(); ++it) {
          const auto& tx = *it;
          ...
      }
    

    ajtowns commented at 8:23 PM on May 14, 2026:

    "faster" ? Iterating over inv_tx a second time when destructing to decrement the CTxRefs would probably be slower I would have thought, but it seems likely to basically unmeasurable either way?


    instagibbs commented at 8:25 PM on May 14, 2026:

    ok, "less indirect"?

  71. instagibbs commented at 6:08 PM on May 14, 2026: member

    Some initial comments while I still work through the approach.

    To be honest I'm finding it a little difficult to follow the lifetime of invs.

    In this branch https://github.com/instagibbs/bitcoin/commit/3f87d24eea279fb6b68f5f9af9579cd8b8909db3 , I considered forcing all announcements through the backlog, and then draining this every tick if:

    1. avail() is large enough to drain entire backlog (replacement for immediate path)
    2. same as before in this PR, for when avail() batch gets "big enough" to cost a partial sort

    I also am finding it difficult to understand the negative budgeting. InvToSendBucket::decrement's return value is never checked, and in my branch it is deleted anyway. Does /*floor=*/-50e3 even do anything in the PR?

    This change would mean in the immediate path we wouldn't be checking m_tx_inventory_known_filter and deduped later, fwiw.

    Probably other issues with divergence in your attempt, but I can't make heads or tails of it right now.

  72. ajtowns commented at 8:38 PM on May 14, 2026: contributor

    I also am finding it difficult to understand the negative budgeting. InvToSendBucket::decrement return value is never checked and in my branch is deleted anyways. Does /*floor=*/-50e3 even do anything in the PR?

    BumpInvVecForProcessing calls if (!inv_bucket.size_bucket.decrement(itervec[i]->GetTx().ComputeTotalSize())) which is where the -50e3 param should be having an effect (allowing a larger batch of txs when the size limit is in effect), but is missing.


github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-05-15 03:12 UTC
