net processing: avoid serving non-announced txs as a result of a MEMPOOL message #27602

pull sr-gi wants to merge 1 commit into bitcoin:master from sr-gi:mempool-not-inved changing 2 files +44 −20
  1. sr-gi commented at 9:23 pm on May 8, 2023: member

    Currently, non-announced transactions can be served after a BIP35 MEMPOOL message has been received from a peer. By following this pattern, we are potentially allowing peers to fingerprint transactions that may have originated in our node and to perform timing analysis on the propagation of transactions that enter our mempool.

    This simply removes the relay bypass so only announced transactions are served to peers.
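
    For illustration, the serving rule after this change boils down to the following simplified model (MayServeTx and the std::set are hypothetical stand-ins for FindTxForGetData and the rolling bloom filter used in the real code):

        #include <chrono>
        #include <cstdint>
        #include <set>

        using namespace std::chrono_literals;
        constexpr auto UNCONDITIONAL_RELAY_DELAY = 2min; // as in master

        // Serve a mempool tx on GETDATA only if it is old enough to be
        // relayed unconditionally, or if it was actually announced to
        // this peer; the MEMPOOL-message bypass is gone.
        bool MayServeTx(std::chrono::seconds tx_time, std::chrono::seconds now,
                        const std::set<uint64_t>& recently_announced, uint64_t wtxid)
        {
            if (tx_time <= now - UNCONDITIONAL_RELAY_DELAY) return true;
            return recently_announced.count(wtxid) > 0;
        }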

  2. DrahtBot commented at 9:23 pm on May 8, 2023: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Reviews

    See the guideline for information on the review process.

    Type Reviewers
    Concept ACK glozow, willcl-ark
    Approach NACK vasild

    If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #27675 (net_processing: Drop m_recently_announced_invs bloom filter by ajtowns)
    • #27625 (p2p: Stop relaying non-mempool txs by MarcoFalke)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  3. DrahtBot added the label P2P on May 8, 2023
  4. sr-gi commented at 9:24 pm on May 8, 2023: member

    Motivation: this fix was discussed over IRC by @willcl-ark, @glozow, @sipa and @darosior

    https://gnusha.org/bitcoin-core-dev/2023-05-02.log

  5. glozow renamed this:
    net: avoid serving non-announced txs as a result of a MEMPOOL message
    net processing: avoid serving non-announced txs as a result of a MEMPOOL message
    on May 8, 2023
  6. in src/net_processing.cpp:5636 in c04f145894 outdated
    @@ -5639,7 +5633,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
                                if (!tx_relay->m_bloom_filter->IsRelevantAndUpdate(*txinfo.tx)) continue;
                            }
                            tx_relay->m_tx_inventory_known_filter.insert(hash);
    -                        // Responses to MEMPOOL requests bypass the m_recently_announced_invs filter.
    +                        tx_relay->m_recently_announced_invs.insert(hash);
    


    ajtowns commented at 8:11 am on May 9, 2023:
    Doesn’t this just change the fingerprinting vector from GETDATA txid to FILTERCLEAR; FILTERADD whatever; MEMPOOL? In which case we’re just as easily fingerprinted, but we spend more CPU while being fingerprinted?

    sr-gi commented at 3:01 pm on May 9, 2023:

    Do you mean keep adding what you’re interested in into the filter so it triggers an INV from the node and, therefore, we can request it (or, similarly, keep resetting the filter and just add the one single transaction that we want to probe)? If so, isn’t that equivalent to having no filter?

    I think in both of the aforementioned cases (which boil down to the same logic if I’m not mistaken) the patch prevents the attacker from requesting unannounced transactions straight-away, they will have to wait for m_next_inv_send_time to go off.


    ajtowns commented at 0:46 am on May 10, 2023:

    Ah, I see what you mean. That makes sense.

    Next point along those lines though: adding many things to m_recently_announced_invs can overflow the bloom filter, either causing false positives (leading to privacy bugs), or since it’s a rolling bloom filter, causing overflow leading to false negatives (leading to functionality bugs where you can’t get the txs from the MEMPOOL response). Wouldn’t it be better to make the insertion conditional on txinfo.m_time <= now - UNCONDITIONAL_RELAY_DELAY? Even then this approach seems like it can trigger false positives fairly easily, since the rolling bloom filter capacity is only 3500 txs?


    sr-gi commented at 6:56 pm on May 10, 2023:

    Wouldn’t it be better to make the insertion conditional on txinfo.m_time <= now - UNCONDITIONAL_RELAY_DELAY?

    Yeah, I think that makes perfect sense, given there is no point in adding something to the filter if we are going to unconditionally serve it anyway.

    Even then this approach seems like it can trigger false positives fairly easily, since the rolling bloom filter capacity is only 3500 txs?

    It may indeed. I haven’t got my head around how to properly compute the odds of this happening though, given it feels like it depends on how often MEMPOOL messages are sent to us, which may be unbounded (how many MEMPOOL requests can we receive in an UNCONDITIONAL_RELAY_DELAY period, and how full may our mempool be?). In any case, this feels detrimental to the peer implementing this behavior rather than to us (?)

    Something I’m really wondering is why this is actually bound by INVENTORY_MAX_RECENT_RELAY (3500) instead of by MAX_PEER_TX_ANNOUNCEMENTS (5000). It feels quite inconsistent that we are able to accept up to 5000 transaction announcements from someone but are only able to remember the last 3500 that we have announced to someone else.

    I think what may make even more sense would be to reply to mempool messages only with transactions that fall into the txinfo.m_time <= now - UNCONDITIONAL_RELAY_DELAY check, that way we don’t really have to add transactions to m_recently_announced_invs here, and we are also discouraging peers from bloating us with MEMPOOL messages. As long as they have a filter set, any transaction that matches their filter will be announced to them anyway (h/t @sdaftuar). I guess this could be done as a followup if it makes sense.
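
    For concreteness, a minimal sketch of the suggested guard (the NeedsAnnouncementTracking helper is hypothetical; UNCONDITIONAL_RELAY_DELAY is master’s two minutes):

        #include <chrono>

        using namespace std::chrono_literals;
        constexpr auto UNCONDITIONAL_RELAY_DELAY = 2min;

        // Only txs that cannot yet be served unconditionally need to be
        // remembered in m_recently_announced_invs; anything older is
        // served on GETDATA regardless of the filter contents.
        bool NeedsAnnouncementTracking(std::chrono::seconds tx_time, std::chrono::seconds now)
        {
            return tx_time > now - UNCONDITIONAL_RELAY_DELAY;
        }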


    sr-gi commented at 7:42 pm on May 10, 2023:

    Wouldn’t it be better to make the insertion conditional on txinfo.m_time <= now - UNCONDITIONAL_RELAY_DELAY?

    Also, wouldn’t that apply here too? https://github.com/bitcoin/bitcoin/blob/27c40e3dcf280a137bcfaa787674c26781d911b8/src/net_processing.cpp#L5688


    ajtowns commented at 0:26 am on May 11, 2023:

    MAX_PEER_TX_ANNOUNCEMENTS doesn’t really provide a rate limit – it stops the queue from being more than 5k at any point in time, but the flow through the queue could still be 400tx/s, for instance. Beyond that, m_recently_announced_invs only tracks things we announced, not things we received, and we rate limit that (as per the static_assert on INVENTORY_MAX_RECENT_RELAY).

    Delaying MEMPOOL results for 2 minutes seems annoying. But how about just recording the timestamp of the last MEMPOOL request into the txrelay structure, and skipping the bloom lookup if m_time <= max(now-2min, txrelay->last_mempool_request) ?

    You could make the inv-trickle insertion into the bloom filter conditional too, but I think that would so rarely actually have an effect that it wouldn’t matter?


    ajtowns commented at 9:51 am on May 11, 2023:

    But how about just recording the timestamp of the last MEMPOOL request …

    Patch might look something like… (EDIT: Err, except that’s just essentially reverting back to what we have now?)

     --- a/src/net_processing.cpp
     +++ b/src/net_processing.cpp
     @@ -295,6 +295,8 @@ struct Peer {
              *  permitted if the peer has NetPermissionFlags::Mempool or we advertise
              *  NODE_BLOOM. See BIP35. */
             bool m_send_mempool GUARDED_BY(m_tx_inventory_mutex){false};
     +        std::chrono::microseconds m_last_mempool_time GUARDED_BY(m_tx_inventory_mutex){0};
     +
             /** The next time after which we will send an `inv` message containing
              *  transaction announcements to this peer. */
             std::chrono::microseconds m_next_inv_send_time GUARDED_BY(m_tx_inventory_mutex){0};
     @@ -2258,7 +2260,7 @@ CTransactionRef PeerManagerImpl::FindTxForGetData(const Peer::TxRelay& tx_relay,
         auto txinfo = m_mempool.info(gtxid);
         if (txinfo.tx) {
             // If a TX is older than UNCONDITIONAL_RELAY_DELAY, permit the request unconditionally.
     -        if (txinfo.m_time <= now - UNCONDITIONAL_RELAY_DELAY) {
     +        if (txinfo.m_time <= now - UNCONDITIONAL_RELAY_DELAY || WITH_LOCK(tx_relay.m_tx_inventory_mutex, return txinfo.m_time <= tx_relay.m_last_mempool_time)) {
                 return std::move(txinfo.tx);
             }
         }
     @@ -5619,6 +5621,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
                         tx_relay->m_send_mempool = false;
                         const CFeeRate filterrate{tx_relay->m_fee_filter_received.load()};

     +                    tx_relay->m_last_mempool_time = current_time;
                         LOCK(tx_relay->m_bloom_filter_mutex);

                         for (const auto& txinfo : vtxinfo) {
     @@ -5632,7 +5635,6 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
                             if (tx_relay->m_bloom_filter) {
                                 if (!tx_relay->m_bloom_filter->IsRelevantAndUpdate(*txinfo.tx)) continue;
                             }
     -                        tx_relay->m_tx_inventory_known_filter.insert(hash);
                             tx_relay->m_recently_announced_invs.insert(hash);
                             vInv.push_back(inv);
                             if (vInv.size() == MAX_INV_SZ) {
    

    ajtowns commented at 10:28 am on May 11, 2023:
    Okay, other idea: how about just making sure you only send at most 3500 mempool entries younger than 2 minutes, in response to a MEMPOOL message, and add them all to the filter, and don’t worry if that causes overflow on results from a previous response?
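
    A self-contained toy model of that idea (ToyTx and CappedMempoolReply are hypothetical; the cap and delay mirror INVENTORY_MAX_RECENT_RELAY and UNCONDITIONAL_RELAY_DELAY from master):

        #include <chrono>
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        using namespace std::chrono_literals;
        constexpr auto UNCONDITIONAL_RELAY_DELAY = 2min;
        constexpr std::size_t INVENTORY_MAX_RECENT_RELAY{3500};

        struct ToyTx { uint64_t wtxid; std::chrono::seconds time; };

        // Announce every mature tx, but at most 3500 young ones, so a single
        // MEMPOOL reply can never roll the announcement filter over.
        std::vector<uint64_t> CappedMempoolReply(const std::vector<ToyTx>& mempool, std::chrono::seconds now)
        {
            std::vector<uint64_t> inv;
            std::size_t young{0};
            for (const auto& tx : mempool) {
                const bool is_young{tx.time > now - UNCONDITIONAL_RELAY_DELAY};
                if (is_young && young >= INVENTORY_MAX_RECENT_RELAY) continue; // left for a later request
                if (is_young) ++young; // these are the only ones needing filter tracking
                inv.push_back(tx.wtxid);
            }
            return inv;
        }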

    maflcko commented at 10:42 am on May 11, 2023:

    how about just making sure you only send at most 3500 mempool entries younger than 2 minutes, in response to a MEMPOOL message

    Wouldn’t that imply that peers are now expected to send the mempool message in a loop (or at least twice over a span of two minutes) to detect if there were more transactions to get? If yes, I wonder if it conflicts with the idea to potentially only allow a single mempool message per connection.


    maflcko commented at 11:25 am on May 11, 2023:
    I wonder if we can remove mapRelay and as a result get more efficient data structures to remember (w)txids per Peer, knowing that only transactions in the mempool can be in them.

    glozow commented at 1:07 pm on May 11, 2023:

    I wonder if we can remove mapRelay and as a result get more efficient data structures to remember (w)txids per Peer, knowing that only transactions in the mempool can be in them.

    mapRelay is mostly what’s in the mempool, but can also contain entries that were announced less than 15min ago + fell out of the mempool since then. This is sometimes good for fingerprinting :shrug: but also seems like largely redundant information to me.


    ajtowns commented at 1:17 pm on May 11, 2023:

    I don’t think we can remove mapRelay – we need it so we can continue to relay recently announced txs that we’ve removed from the mempool?

    How about this approach: when we get a MEMPOOL message, at the next fSendTrickle time, for each tx that matches the filter, if it’s older than 2 minutes, announce it immediately, otherwise just put it into m_tx_inventory_to_send and let it be sent out at the usual rate. Only use m_recently_announced_invs for txs relayed via m_tx_inventory_to_send. (Maybe bump the “2 minutes” by a small random amount)
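
    A self-contained toy model of that dispatch (the ToyTx type and HandleMempoolRequest are hypothetical stand-ins; in the real code the immediate announcements would go into vInv and the queued ones into m_tx_inventory_to_send):

        #include <chrono>
        #include <cstdint>
        #include <set>
        #include <vector>

        using namespace std::chrono_literals;
        constexpr auto UNCONDITIONAL_RELAY_DELAY = 2min;

        struct ToyTx { uint64_t wtxid; std::chrono::seconds time; };

        // Mature txs are announced immediately (and served unconditionally on
        // GETDATA anyway); young ones join the normal trickle, the only path
        // that records entries in m_recently_announced_invs.
        void HandleMempoolRequest(const std::vector<ToyTx>& matching_txs, std::chrono::seconds now,
                                  std::vector<uint64_t>& announce_now, std::set<uint64_t>& inventory_to_send)
        {
            for (const auto& tx : matching_txs) {
                if (tx.time <= now - UNCONDITIONAL_RELAY_DELAY) {
                    announce_now.push_back(tx.wtxid);
                } else {
                    inventory_to_send.insert(tx.wtxid);
                }
            }
        }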


    maflcko commented at 1:58 pm on May 11, 2023:

    This is sometimes good for fingerprinting

    fingerprinting seems bad, so another reason for removal?

    I don’t think we can remove mapRelay – we need it so we can continue to relay recently announced txs that we’ve removed from the mempool?

    Sure that is what it does, but if the only use case is for spy nodes to fingerprint, it may be better to remove it?

    See #27625. Feel free to NACK.


    ajtowns commented at 2:02 pm on May 11, 2023:
    Knowing when a tx is removed from the mempool means you know exactly when the tx that replaced it was added to the mempool… Txs are also removed from the mempool when a block comes in; if we process the block first, then see a GETDATA for a tx in the block, replying with the tx is better for block propagation (including compact blocks) than sending a NOTFOUND…

    glozow commented at 3:00 pm on May 11, 2023:

    This is sometimes good for fingerprinting

    fingerprinting seems bad, so another reason for removal?

    Sorry I meant that it’s sometimes good for preventing fingerprinting. In the replacement scenario which @ajtowns is describing. Timing of entry of replacement == notfound of replaced tx. With mapRelay, since you have the replaced tx for 15 minutes, you’ll still serve it instead of immediately saying notfound. But mapRelay is useless against this if the spy node just waits 15 minutes before doing the replacement. So this could just be a lost cause.

    Anyway, will take this to #27625 instead.


    ajtowns commented at 3:59 pm on May 11, 2023:
    I don’t think the timing of removal from mapRelay reveals a secret – that’s 15 minutes after it was added, but the time it was added is the time it was first announced (probably to an outbound peer), which is already randomised/batched, compared to the time we received it?

    glozow commented at 7:23 pm on May 11, 2023:

    What I mean is Leak available without mapRelay:

    1. spy sends txA. It’s added to mempool.
    2. spy sends txB (through the network) and continuously queries our node for txA
    3. once we accept txB to mempool, we serve notfound for txA (because it’s not there anymore). spy learns exactly what time we received txB.

    ^This is prevented with mapRelay because after txA leaves mempool, we’ll still be able to serve it from mapRelay.

    This is still possible with mapRelay:

    1. spy sends txA. It’s added to mempool + mapRelay.
    2. 15 minutes later, txA expires from mapRelay
    3. spy sends txB (through the network) and continuously queries our node for txA
    4. once we accept txB to mempool, we serve notfound for txA (because it’s not in mempool or mapRelay anymore). spy learns exactly what time we received txB.

    maflcko commented at 1:16 pm on May 12, 2023:

    Even then this approach seems like it can trigger false positives fairly easily, since the rolling bloom filter capacity is only 3500 txs?

    I didn’t understand this. How is this different from a peer connecting and waiting for 3500 “normal” newly announced transactions to be announced and added to the bloom filter to achieve the same false positive rate?

    or since it’s a rolling bloom filter, causing overflow leading to false negatives (leading to functionality bugs where you can’t get the txs from the MEMPOOL response).

    Same here. How is this different from a “normal” sustained spike in tx announcements on the network?

    I guess we’d need to bypass the bloom filter for fSendTrickle=always-true peers, but that seems unrelated to this pull, unless there is a change so that the mempool permission also forces fSendTrickle to true?


    sr-gi commented at 7:31 pm on May 12, 2023:

    I think those are rate limited so they don’t overflow in less than two minutes, after which they are not relevant anymore.

    For outbound connections, INVs are sent out every 2s (OUTBOUND_INVENTORY_BROADCAST_INTERVAL), while for inbound they are sent every 5s (INBOUND_INVENTORY_BROADCAST_INTERVAL). In a two-minute timespan, that results in at most 2100 and 840 INVs [1], respectively. The m_recently_announced_invs rolling filter is only checked in FindTxForGetData after making sure the transaction cannot be sent unconditionally (i.e. after 2 min), so if the filter overflows after that it’s not an issue.

    [1] Data is also added to m_recently_announced_invs when relaying a transaction with a recent but unconfirmed parent (https://github.com/bitcoin/bitcoin/blob/master/src/net_processing.cpp#L2343). I’m unsure if that is rate-limited at all, or if the remaining 1400 slots (in the worst case) are there to account for that not overflowing.
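
    A quick sanity check of those figures (assuming master’s constants at the time, with INVENTORY_BROADCAST_MAX = 35 txs per trickle):

        #include <cstdio>

        int main()
        {
            const int delay_s{120};          // UNCONDITIONAL_RELAY_DELAY: 2 minutes
            const int inv_broadcast_max{35}; // INVENTORY_BROADCAST_MAX: txs per INV trickle
            const int inbound_s{5};          // INBOUND_INVENTORY_BROADCAST_INTERVAL
            const int outbound_s{2};         // OUTBOUND_INVENTORY_BROADCAST_INTERVAL (average)

            // Maximum announcements within the window in which the
            // m_recently_announced_invs check (capacity 3500) still matters.
            std::printf("inbound:  %d\n", delay_s / inbound_s * inv_broadcast_max);  // 840
            std::printf("outbound: %d\n", delay_s / outbound_s * inv_broadcast_max); // 2100
        }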


    maflcko commented at 7:27 am on May 15, 2023:
    I was trying to say that if any false negative or false positive arises as a result of a mempool message in this pull request, the same should already be observable (on master or releases) in the absence of a mempool message with normal tx relay? So addressing it here seems unrelated? I presume #27630 is addressing it.

    ajtowns commented at 8:49 am on May 15, 2023:

    The scenario here is:

    • issue MEMPOOL with a large mempool, returning 6000 txs from the mempool, all of which are added to the bloom filter, which then causes the first ~750 or more to be removed from the rolling bloom filter as it rolls over
    • request one of those txs via GETDATA, which then fails if the tx hasn’t been in the mempool for more than 2 minutes

    In normal relay, over the course of two minutes we’d only advertise ~840 txs (~2100 for an outbound), and only add the txs to the bloom filter when advertised, so the rollover would only happen very rarely (less than 1-in-a-million times for an outbound, fewer for an inbound). After #27610 we’ll relay more txs during spikes, so these error rates can increase, though.


    maflcko commented at 9:03 am on May 15, 2023:
    Whoops. I was incorrectly assuming that a reply to a mempool message was rate-limited and size-limited. (Which, on second thought, probably doesn’t make sense to do.) So a single reply can hold up to 50'000 inv entries, which will certainly overflow. Sorry for the distraction.

    maflcko commented at 9:33 am on May 15, 2023:

    Side note: The 50k max inv size also seems to overflow, which is probably another source for false negatives, depending on the use case.

    Image stolen from the comment #27426 (comment) by 0xB10C.


    sr-gi commented at 2:26 pm on May 15, 2023:

    Side note: The 50k max inv size also seems to overflow, which is probably another source for false negatives, depending on the use case.

    You lost me there. Do you mean this may be a possible source of false negatives for the current patch, or for both?

    In this patch, I can see that potentially being an issue (that’s where AJ’s original concern comes from), but I don’t see how it is an issue for master, given nothing is added to m_recently_announced_invs when building these huge INVs as a result of a MEMPOOL message.


    maflcko commented at 2:46 pm on May 15, 2023:

    You lost me there.

    Apologies, I typed something incorrect, again.


    sr-gi commented at 3:03 pm on May 15, 2023:
    All good, I was just wondering if I may have been missing something.
  7. in src/net_processing.cpp:2263 in c04f145894 outdated
    -        // unconditionally.
    -        if ((mempool_req.count() && txinfo.m_time <= mempool_req) || txinfo.m_time <= now - UNCONDITIONAL_RELAY_DELAY) {
    -            return std::move(txinfo.tx);
    -        }
    +        // If a TX is older than UNCONDITIONAL_RELAY_DELAY, permit the request unconditionally.
    +        if (txinfo.m_time <= now - UNCONDITIONAL_RELAY_DELAY) return std::move(txinfo.tx);
    


    maflcko commented at 3:13 pm on May 9, 2023:
    m_last_mempool_req is now unused and can be removed completely? Also, it might be good to not touch the return here when there is no reason, to avoid code churn and review burden.

    sr-gi commented at 6:00 pm on May 9, 2023:

    m_last_mempool_req is now unused and can be removed completely?

    Looks like it

    Also, it might be good to not touch the return here when there is no reason, to avoid code churn and review burden.

    I thought the only reason this was split into two lines was that it was a bit too long (?). I tried to follow the same approach as in L2267 and, while I think it looks more compact the way it is now, I don’t have a strong opinion on styling atm, so I’m happy to amend it if there is consensus that this is best.

  8. glozow commented at 3:36 pm on May 9, 2023: member

    Concept ACK

    The issue on master that I believe this addresses is being able to learn exactly when a transaction enters your mempool. You send mempool and get the response, then put out the tx, then getdata getdata getdata getdata... until the node tells you the tx instead of notfound. Also in general I think it’s quite sensible to only serve transactions you announced.

  9. sr-gi force-pushed on May 9, 2023
  10. sr-gi commented at 7:33 pm on May 9, 2023: member
    Addressed comments by @MarcoFalke and fixed a couple of logs from the tests
  11. willcl-ark commented at 8:00 pm on May 10, 2023: member
    Concept ACK
  12. darosior commented at 8:20 am on May 11, 2023: member

    I’m only a casual lurker on the net_processing side, but I was under the impression we were thinking of BIP35 as permissioned anyways. If we are already trusting the peer that sends us a MEMPOOL message, why not just include everything we’ve got? A practical counter-argument to this could be: yeah, but at this time around 20% to 30% of reachable nodes set peerbloomfilters, so it’s still worth closing an obvious fingerprinting vector.

    Another thing is whether it’s breaking any usecase. Since the message is from a trusted peer a client may be relying on getting the whole mempool. For instance BTCPay/NBXplorer needs whitelist=mempool iirc. What’s the proportion of announced vs non-announced transactions? How much would be left out with this patch?

  13. willcl-ark commented at 10:34 am on May 11, 2023: member

    i was under the impression we were thinking of the BIP35 as permissioned anyways. … For instance BTCPay/NBXplorer needs whitelist=mempool iirc.

    Just on these points: currently the MEMPOOL message will be responded to either if the peer has the mempool whitelist permission or if NODE_BLOOM is enabled (with -peerbloomfilters). In the latter case I don’t think we can accurately describe these as “trusted peers”. See the changes in 4581a68 (#27559) which clarify this.

  14. vasild commented at 12:34 pm on May 11, 2023: contributor
    The title “avoid serving non-announced txs as a result of a MEMPOOL message” is confusing - it gives the impression that this PR will change the response to MEMPOOL messages, but it does not do that. Before and after the PR we would send the full mempool in a reply to mempool (correct me if I got it wrong).
  15. glozow commented at 12:56 pm on May 11, 2023: member

    What’s the proportion of announced vs non-announced transactions? How much would be left out with this patch?

    The PR (as it is right now) still announces everything in response to mempool. This is saying “I haven’t even announced this to you, why are you requesting it from me?” A normal client, who presumably does not already know what it wants to request until it receives an announcement (i.e. using mempool to learn about transactions), shouldn’t be affected at all.

  16. in test/functional/p2p_filter.py:161 in 27c40e3dcf outdated
    +        self.log.info("Create a third transaction (that was therefore not part of the mempool message) and try to retrieve it")
    +        add_txid, _ = self.wallet.send_to(from_node=self.nodes[0], scriptPubKey=getnewdestination()[1], amount=3 * COIN)
    +        filter_peer.send_and_ping(msg_getdata([CInv(t=MSG_WTX, h=int(self.nodes[0].getmempoolentry(add_txid)["wtxid"], 16))]))
    +        assert not filter_peer.tx_received
    +
    +        self.log.info("Check this is also sound for a peer that has a match-all filter set")
    


    willcl-ark commented at 1:31 pm on May 11, 2023:
            self.log.info("Check all transactions are returned by the match-all filter")
    
  17. in test/functional/p2p_filter.py:166 in 27c40e3dcf outdated
    +        self.log.info("Check this is also sound for a peer that has a match-all filter set")
    +        unfilter_peer = P2PBloomFilter()
    +        self.nodes[0].add_p2p_connection(unfilter_peer)
    +        unfilter_peer.send_and_ping(msg_mempool())
    +
    +        self.log.info("Here the all transactions are relevant")
    


    willcl-ark commented at 1:32 pm on May 11, 2023:
    I think this log line could be removed as this seems part of the previous check?
  18. vasild commented at 3:05 pm on May 11, 2023: contributor

    Is it the case that this PR will only change the behavior in the following two cases (from the PR title and description I got the impression that the scope is wider):

    Case 1

    • the node receives MEMPOOL request and replies to it
    • in the same second a new transaction enters the mempool (after the MEMPOOL message)
    • the node receives GETDATA for that transaction

    In master it would send the transaction even though it never advertised it with INV due to the txinfo.m_time <= mempool_req check (both values would be equal).

    In this PR it would claim “not found”.

    This can be resolved by increasing the precision of TxRelay::m_last_mempool_req and TxMempoolInfo::m_time or by changing the condition to <.

    Case 2

    • the node receives MEMPOOL request and replies to it, but some transaction is omitted from the reply because it is not in TxRelay::m_bloom_filter.
    • the node receives GETDATA for that transaction

    In master it would send the transaction even though it never advertised it with INV because PeerManagerImpl::FindTxForGetData() does not check the bloom filter.

    In this PR it would claim “not found”.

    Is this a problem at all? If yes, then it can be resolved by checking the bloom filter from PeerManagerImpl::FindTxForGetData().
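
    Case 1 comes down to second-precision comparisons; a minimal self-contained illustration (values arbitrary):

        #include <cassert>
        #include <chrono>

        int main()
        {
            using namespace std::chrono;
            // TxMempoolInfo::m_time and TxRelay::m_last_mempool_req are both
            // held in seconds, so a tx entering the mempool in the same second
            // as (but after) the MEMPOOL request still satisfies <=.
            const seconds mempool_req{1000};
            const seconds tx_time{1000}; // entered just after the request
            assert(tx_time <= mempool_req);   // master serves it, though never announced
            assert(!(tx_time < mempool_req)); // '<' (or finer precision) would exclude it
        }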

  19. sr-gi commented at 3:07 pm on May 11, 2023: member

    I’m only a casual lurker on the net_processing side but i was under the impression we were thinking of the BIP35 as permissioned anyways. If we are already trusting the peer that sends use MEMPOOL message, why not just include everything we’ve got? A practical counter-argument to this could be - yeah but at this time around 20% to 30% of reachable nodes set peerbloomfilters so it’s still worth closing an obvious fingerprinting vector.

    We still do. This is not modifying the behavior of the announcements that result from a mempool message, but the behavior of the node in response to the getdata messages that may follow. Before the patch, a peer was able to send us a mempool message and, from that point on, request any transaction from our mempool (even if we hadn’t announced it to them at all).

    Another thing is whether it’s breaking any usecase. Since the message is from a trusted peer a client may be relying on getting the whole mempool. For instance BTCPay/NBXplorer needs whitelist=mempool iirc. What’s the proportion of announced vs non-announced transactions? How much would be left out with this patch?

    As of 27c40e3 we still do, even though I would argue that, by doing this, we are encouraging peers to spam us with mempool messages instead of waiting for us to announce the transactions that match their filters in due time.

  20. sr-gi commented at 3:10 pm on May 11, 2023: member

    The title “avoid serving non-announced txs as a result of a MEMPOOL message” is confusing - it gives the impression that this PR will change the response to MEMPOOL messages, but it does not do that. Before and after the PR we would send the full mempool in a reply to mempool (correct me if I got it wrong).

    Maybe it should read “after receiving a MEMPOOL message” instead of “as a result of a MEMPOOL message”.

    What this PR is trying to achieve is to prevent us from responding to getdata requests about unannounced transactions after receiving a mempool message.

  21. sr-gi commented at 3:30 pm on May 11, 2023: member

    Is it the case that this PR will change the behavior in the following two cases (from the PR title and description I got the impression that it is wider scope):

    Case 1

    • the node receives MEMPOOL request and replies to it
    • in the same second a new transaction enters the mempool (after the MEMPOOL message)
    • the node receives GETDATA for that transaction

    In master it would send the transaction even though it never advertised it with INV due to the txinfo.m_time <= mempool_req check (both values would be equal).

    In this PR it would claim “not found”.

    This can be resolved by increasing the precision of TxRelay::m_last_mempool_req and TxMempoolInfo::m_time or by changing the condition to <.

    This is exactly what we are trying to prevent, but I don’t think your approach fixes the issue. This condition is checking that our inv (txinfo) is more recent than the last mempool_req, meaning that once a mempool request has been processed the peer can request any transaction from our mempool. Why is this bad? It makes transactions that enter our mempool easily probeable. The peer can create a transaction (tx), send it through any other link, and spam us with getdata messages until we reply with it.

    Case 2

    • the node receives MEMPOOL request and replies to it, but some transaction is omitted from the reply because it is not in TxRelay::m_bloom_filter.
    • the node receives GETDATA for that transaction

    In master it would send the transaction even though it never advertised it with INV because PeerManagerImpl::FindTxForGetData() does not check the bloom filter.

    In this PR it would claim “not found”.

    Is this a problem at all? If yes, then it can be resolved by checking the bloom filter from PeerManagerImpl::FindTxForGetData().

    Arguably this is the expected behavior, master’s is not.

    The response to the mempool message didn’t include an INV for that specific transaction (both in the patch and in master) because it didn’t match the filter set by the peer; therefore, the peer should not be interested in it. As for why we claim not found when the peer requests the transaction (which is what differs from master), I think the same reasoning as in Case 1 applies.

  22. vasild commented at 4:00 pm on May 11, 2023: contributor

    txinfo.m_time <= mempool_req

    This condition is checking that our inv (txinfo) is more recent than the last mempool_req, meaning that once a mempool request has been processed the peer can request any transaction from our mempool.

    I don’t get it. Is it not that the condition is checking whether txinfo is older than the last mempool_req? In that case the transaction has been part of the MEMPOOL reply, so we have sent an INV about it.

  23. glozow commented at 7:17 pm on May 11, 2023: member

    txinfo.m_time <= mempool_req

    This condition is checking that our inv (txinfo) is more recent than the last mempool_req, meaning that once a mempool request has been processed the peer can request any transaction from our mempool.

    I don’t get it. Is it not that the condition is checking whether txinfo is older than the last mempool_req? In that case the transaction has been part of the MEMPOOL reply, so we have sent an INV about it.

    Ah, I think you’re right! So this changes Case 2, not Case 1.

  24. sr-gi commented at 9:16 pm on May 11, 2023: member

    txinfo.m_time <= mempool_req

    This condition is checking that our inv (txinfo) is more recent than the last mempool_req, meaning that once a mempool request has been processed the peer can request any transaction from our mempool.

    I don’t get it. Is it not that the condition is checking whether txinfo is older than the last mempool_req? In that case the transaction has been part of the MEMPOOL reply, so we have sent an INV about it.

    Ah, I think you’re right! So this changes Case 2, not Case 1.

    He is indeed. The test for Case 1 was passing because both requests were processed within the same second 😅

  25. vasild commented at 8:32 am on May 12, 2023: contributor

    No, wait! This PR changes both Case 1 and Case 2, but Case 1 is only about things happening in the same second.

    What I tried yesterday:

    • Run the newly added tests, they were passing, of course. Cool!
    • Revert the net_processing.cpp changes; both tests (“Request the irrelevant transaction even though it has not being offered to us” and “Create a third transaction (that was therefore not part of the mempool message) and try to...”) failed. Good! To reach the second test I disabled the assert from the first.

    But I couldn’t explain why the second test would fail (without net_processing.cpp changes aka on master). Then I suspected things happen in the same second in the test and added time.sleep(1) before creating the third transaction and then the test passed.

    Concept ACK. I agree that we should not respond to GETDATA asking for a transaction which we did not INV to that peer and this PR does that.

    However, I agree with @ajtowns’s concern raised above about blowing up m_recently_announced_invs. That bloom filter has size 3500 and if we put 300k elements in it, then it is probably not going to work as expected. Approach NACK due to that.

    So, to achieve the goal of not replying to GETDATA without a prior INV we have to remember all INVs sent to the peer, but that is kind of “hard” considering a MEMPOOL reply sends 300k INVs…

  26. vasild commented at 8:49 am on May 12, 2023: contributor
    Looking at the comment on CRollingBloomFilter, if m_recently_announced_invs is changed from 3500 to 300000 then its size would increase from 37KB to 3MB. I guess that is too much, and maybe that was the reason why MEMPOOL replies don’t use m_recently_announced_invs and instead use the m_last_mempool_req timing mechanism.
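
    Those numbers are consistent with the roughly 1.8 bytes per element per factor 0.1 of false positive rate cited in the CRollingBloomFilter comment (m_recently_announced_invs uses an fpRate of 0.000001, i.e. six factors of 0.1); a quick arithmetic check:

        #include <cstdio>

        int main()
        {
            const double bytes_per_element{1.8 * 6}; // six factors of 0.1 in the fp rate
            std::printf("3500 elements:   ~%.0f KB\n", 3500 * bytes_per_element / 1e3);   // ~38 KB
            std::printf("300000 elements: ~%.1f MB\n", 300000 * bytes_per_element / 1e6); // ~3.2 MB
        }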
  27. sr-gi commented at 4:14 pm on May 12, 2023: member

    No, wait! This PR changes both Case1 and Case2, but Case1 is only about things happening in the same second.

    I meant regarding the time checks in master: the checks were indeed restricting requests to transactions that were already in the mempool before the MEMPOOL message was received. Meaning that while this patch fixes the issues around requesting data after a mempool message is received, they are not as severe as initially assumed.

    However, I agree with @ajtowns’s concern raised above about blowing up m_recently_announced_invs. That bloom filter has size 3500 and if we put 300k elements in it, then it is probably not going to work as expected. Approach NACK due to that.

    I think at this point there are two easy solutions to the problem (with their corresponding tradeoffs, of course):

    1. Do things as stated in the patch, but guard additions to the m_recently_announced_invs rolling filter by checking that the data we are adding is not older than two minutes, as @ajtowns originally suggested. This will prevent us from adding data that will be served unconditionally anyway, reducing the odds of overflowing the filter. However, I’m not sure how AJ computed the odds of the filter overflowing due to high traffic, so I’m not able to convince myself this is enough.

    2. Keep the m_recently_announced_invs bypass as it is in master, but do check the bloom filter when deciding whether to relay something that may have been announced to a peer as a response to a mempool message. This has two main drawbacks:

    • First, we will be checking the filter twice, once when deciding whether to include the transaction announcement as part of the response to the MEMPOOL message (https://github.com/bitcoin/bitcoin/blob/master/src/net_processing.cpp#L5649-L5651) and again as part of the combined check in FindTxForGetData (https://github.com/bitcoin/bitcoin/blob/master/src/net_processing.cpp#L2271).
    • Second, this can potentially result in the filter differing between the two checks. A peer could clear its filter after sending the MEMPOOL message and before requesting the data, so we would end up serving something we haven’t really announced. However, I cannot think of a proper use case (or attack, FWIW) that takes advantage of this in a meaningful way. If the goal was for us to include all the data we had in our mempool when they sent the MEMPOOL message, they could have had a clear filter to begin with.
  28. sr-gi force-pushed on May 12, 2023
  29. sr-gi commented at 7:55 pm on May 12, 2023: member
    Addresses @willcl-ark comments and fixes tests to make sure the MEMPOOL message and the second GETDATA message are not processed within the same second, as per @vasild comment.
  30. DrahtBot added the label Needs rebase on May 18, 2023
  31. sr-gi force-pushed on May 22, 2023
  32. sr-gi commented at 2:21 pm on May 22, 2023: member
    Adds invs to m_recently_announced_invs conditional on them not being unconditionally relayable, and rebases on master.
  33. DrahtBot removed the label Needs rebase on May 22, 2023
  34. DrahtBot added the label CI failed on May 22, 2023
  35. net: avoid serving non-announced txs as a result of a MEMPOOL message
    Currently, non-announced transactions can be served after a BIP35 MEMPOOL
    message has been received from a peer. By following this pattern, we are
    potentially allowing peers to fingerprint transactions that may have
    originated in our node and to perform timing analysis on the propagation
    of transactions that enter our mempool.
    
    This simply removes the relay bypass so only announced transactions are
    served to peers.
    f543b7dcdb
  36. sr-gi force-pushed on May 23, 2023
  37. DrahtBot removed the label CI failed on May 23, 2023
  38. DrahtBot added the label Needs rebase on Jun 12, 2023
  39. DrahtBot commented at 10:43 am on June 12, 2023: contributor

    🐙 This pull request conflicts with the target branch and needs rebase.

  40. luke-jr commented at 5:18 pm on June 23, 2023: member
    I wonder if there should be a noprivacy permission flag…
  41. maflcko commented at 10:01 am on August 17, 2023: member
    Are you still working on this?
  42. sr-gi commented at 10:55 am on August 17, 2023: member

    Are you still working on this?

    I wonder if this is still meaningful now that #27675 has landed

  43. maflcko commented at 10:59 am on August 17, 2023: member
    Maybe the test? Though, I haven’t looked at it recently.
  44. sr-gi commented at 4:58 pm on August 17, 2023: member

    Maybe the test? Though, I haven’t looked at it recently.

    That may make sense. I’ll take a look and rebase

  45. sr-gi commented at 7:59 pm on September 14, 2023: member

    Maybe the test? Though, I haven’t looked at it recently.

    The test can be adapted to fit the logic after #27675, but the patch itself does not seem relevant anymore. Would it be worth reworking this PR or just closing it and opening one for the test?

  46. maflcko commented at 8:19 pm on September 14, 2023: member
    Thanks for taking a look. Seems fine to open a new test-only pull
  47. sr-gi commented at 8:45 pm on September 14, 2023: member
    I’m closing this one then
  48. sr-gi closed this on Sep 14, 2023

  49. bitcoin locked this on Sep 13, 2024
