p2p: Use network-dependent timers for inbound inv scheduling #33464

mzumsande wants to merge 2 commits into bitcoin:master from mzumsande:202509_inv_inbound_per_network, changing 8 files (+76 −39)
  1. mzumsande commented at 5:19 pm on September 23, 2025: contributor

    Currently, NextInvToInbounds schedules each round of invs at the same time for all inbound peers. This is done because with a separate timer per peer (as is done for outbounds), an attacker could open multiple connections to learn the time a transaction arrived (#13298).

    However, having a single timer for inbounds of all networks is also an obvious fingerprinting vector: Connecting to a suspected pair of privacy-network and clearnet addresses and observing the inv pattern makes it trivial to confirm or refute that they are the same node.

    This PR changes it such that a separate timer is used for each network. It reuses the existing key-derivation method from getaddr caching and generalizes it into a new CNode field, m_network_key, which is used for both getaddr caching and inv scheduling and can also serve any future anti-fingerprinting measures.
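
    For intuition about what such a key could look like (an illustrative model, not the PR's exact code: the Network enum, the salt handling and the Mix() stand-in for Bitcoin Core's SipHash-based deterministic randomizer are all assumptions), a per-node, per-network 64-bit key can be derived by mixing a secret startup salt with the network identity, so that all peers reached over one network share a key while keys remain unlinkable across networks and across nodes:

     // Illustrative sketch only: derive one 64-bit key per network identity.
     #include <cstdint>
     #include <random>

     enum class Network : uint8_t { IPV4, IPV6, ONION, I2P, CJDNS };

     class NetworkKeyer
     {
         const uint64_t m_salt; // per-node secret, regenerated at every startup

         static uint64_t Mix(uint64_t x)
         {
             // SplitMix64 finalizer, standing in for a keyed hash such as the
             // SipHash-based deterministic randomizer used in Bitcoin Core.
             x += 0x9E3779B97F4A7C15ULL;
             x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
             x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
             return x ^ (x >> 31);
         }

     public:
         NetworkKeyer()
             : m_salt{(static_cast<uint64_t>(std::random_device{}()) << 32) ^ std::random_device{}()} {}

         // Same network -> same key; different networks -> independent-looking keys.
         uint64_t KeyFor(Network net) const { return Mix(m_salt ^ static_cast<uint64_t>(net)); }
     };

    Any per-identity behavior (getaddr response caching, inbound inv scheduling) can then be keyed by such a value, without revealing which networks belong to the same node.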

  2. net: use generic network key for addrcache
    The generic key can also be used in other places
    where behavior between different network identities should
    be uncorrelated to avoid fingerprinting.
    This also changes RANDOMIZER_ID - since it is not
    being persisted to disk, there are no compatibility issues.
    94db966a3b
  3. DrahtBot added the label P2P on Sep 23, 2025
  4. DrahtBot commented at 5:19 pm on September 23, 2025: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage & Benchmarks

    For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33464.

    Reviews

    See the guideline for information on the review process.

    Type Reviewers
    ACK sipa, stratospher, naiyoma, danielabrozzoni
    Concept ACK jonatack
    Approach ACK 0xB10C

    If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #32015 (net: replace manual reference counting of CNode with shared_ptr by vasild)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  5. mzumsande marked this as ready for review on Sep 23, 2025
  6. danielabrozzoni commented at 4:42 pm on September 25, 2025: contributor

    tACK beb75e48ae1d5771932f427a490c7e1b6c1720d3

    I tested locally by adding a log line, and checking that different networks schedule different inv times:

     diff --git a/src/net_processing.cpp b/src/net_processing.cpp
     index f5f04c7a17..fec1cfcb6b 100644
     --- a/src/net_processing.cpp
     +++ b/src/net_processing.cpp
     @@ -5716,6 +5716,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
                          fSendTrickle = true;
                          if (pto->IsInboundConn()) {
                              tx_relay->m_next_inv_send_time = NextInvToInbounds(current_time, INBOUND_INVENTORY_BROADCAST_INTERVAL, pto->m_network_key);
     +                        LogInfo("Scheduling next inv send time for inbound peer on network %d at time %d", pto->m_network_key, tx_relay->m_next_inv_send_time);
                          } else {
                              tx_relay->m_next_inv_send_time = current_time + m_rng.rand_exp_duration(OUTBOUND_INVENTORY_BROADCAST_INTERVAL);
                          }
    
     $ tail -f ~/.bitcoin/debug.log | grep "Scheduling next inv send time"
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 2336814389803672515 at time 1758809657846106us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 2336814389803672515 at time 1758809657846106us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 2336814389803672515 at time 1758809657846106us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 2336814389803672515 at time 1758809657846106us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 2336814389803672515 at time 1758809657846106us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 2336814389803672515 at time 1758809657846106us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 2336814389803672515 at time 1758809657846106us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 2336814389803672515 at time 1758809657846106us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 2336814389803672515 at time 1758809657846106us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 2336814389803672515 at time 1758809657846106us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 2336814389803672515 at time 1758809657846106us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 2336814389803672515 at time 1758809657846106us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 406641846476317050 at time 1758809650020936us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 406641846476317050 at time 1758809650020936us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 406641846476317050 at time 1758809650020936us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 406641846476317050 at time 1758809650020936us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 406641846476317050 at time 1758809650020936us
     2025-09-25T14:14:09Z Scheduling next inv send time for inbound peer on network 406641846476317050 at time 1758809650020936us
    
  7. 0xB10C commented at 3:56 pm on September 26, 2025: contributor

    Approach ACK.

    Nice. I had noticed that we announce INVs to everyone at the same time, but it didn’t occur to me that this is a fingerprinting leak. To test this change, I got creative and compared a 30.0 release candidate node with a node running this PR using my p2p-circle peer-observer tool.

    In the middle is our node and around it are our peers. Peers are colored by the network over which we are connected to them. I highlight INVs from us to a peer with a thicker line and color the line by network.

    https://github.com/user-attachments/assets/46c672b2-148d-460b-be0f-e294f554440c

    On master, we schedule and send to all networks at the same time. This is shown by the lines with different colors appearing at the same time.

    https://github.com/user-attachments/assets/ee9185c2-8342-4b60-8778-154088aa7359

    With this PR, the scheduling is per network, so only one color of lines (INVs being sent to one network) appears at a time.

  8. stratospher commented at 12:37 pm on September 30, 2025: contributor

    ACK beb75e4. liked the way the cache_id logic from getaddr caching was reused here. observed something similar with @danielabrozzoni’s patch!

    2025-09-30T12:29:05Z Scheduling next inv send time for inbound peer id = 0 on network 8495781695020407500 at time 1759235352548196µs
    2025-09-30T12:29:05Z Scheduling next inv send time for inbound peer id = 4 on network 8495781695020407500 at time 1759235352548196µs
    2025-09-30T12:29:06Z Scheduling next inv send time for inbound peer id = 2 on network 8350849156210999220 at time 1759235353947734µs
    2025-09-30T12:29:12Z Scheduling next inv send time for inbound peer id = 0 on network 8495781695020407500 at time 1759235364556172µs
    2025-09-30T12:29:12Z Scheduling next inv send time for inbound peer id = 4 on network 8495781695020407500 at time 1759235364556172µs
    2025-09-30T12:29:13Z Scheduling next inv send time for inbound peer id = 2 on network 8350849156210999220 at time 1759235356097336µs
    
  9. DrahtBot requested review from 0xB10C on Sep 30, 2025
  10. p2p: Use different inbound inv timer per network
    Currently, nodes schedule their invs to all inbound peers at the same time.
    It is trivial to use this timing pattern for fingerprinting
    identities on different networks. Using a separate timer for each network
    makes this fingerprinting harder.
    0f7d4ee4e8
  11. in src/net_processing.cpp:810 in beb75e48ae
    @@ -807,7 +807,7 @@ class PeerManagerImpl final : public PeerManager
    
         uint32_t GetFetchFlags(const Peer& peer) const;
    
    -    std::atomic<std::chrono::microseconds> m_next_inv_to_inbounds{0us};
    +    std::map<uint64_t, std::atomic<std::chrono::microseconds>> m_next_inv_to_inbounds_per_network_key;
    


    stratospher commented at 1:00 pm on September 30, 2025:
    shouldn’t we initialise this with 0?

    mzumsande commented at 3:20 pm on September 30, 2025:
    Good question - while the map is initially empty, we insert entries with auto& timer{m_next_inv_to_inbounds_per_network_key[network_key]};. Even though a default-initialised std::atomic is left uninitialised in general, this is still correct because std::map::operator[] value-initialises the mapped value (to 0). That said, I think it's better to use try_emplace to make the intent clearer, so I will change it to that.
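
    A small standalone illustration of that point (it uses a plain std::chrono::microseconds mapped value instead of the PR's std::atomic wrapper, purely to keep it self-contained): std::map::operator[] and try_emplace both value-initialise a missing entry to zero, while try_emplace makes the insert-if-absent intent explicit.

     // Illustration only, not PR code: map accessors value-initialise new entries.
     #include <cassert>
     #include <chrono>
     #include <cstdint>
     #include <map>

     int main()
     {
         using namespace std::chrono_literals;

         std::map<uint64_t, std::chrono::microseconds> timers;

         // operator[] inserts a value-initialised entry (0us) for a missing key.
         assert(timers[42] == 0us);

         // try_emplace only inserts if the key is absent; the bool reports
         // whether a new (value-initialised) entry was created.
         auto [it, inserted] = timers.try_emplace(7);
         assert(inserted && it->second == 0us);

         // A second try_emplace leaves the existing entry untouched.
         it->second = 5us;
         assert(!timers.try_emplace(7).second && timers.at(7) == 5us);

         return 0;
     }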
  12. mzumsande force-pushed on Sep 30, 2025
  13. mzumsande commented at 3:27 pm on September 30, 2025: contributor

    beb75e4 to 0f7d4ee:

    • used try_emplace instead of [] for map insertion
    • Removed the atomic and guarded m_next_inv_to_inbounds_per_network_key by g_msgproc_mutex. NextInvToInbounds is already guarded by g_msgproc_mutex anyway, so I'm not sure what the benefit of the atomic was (there was even a comment, removed now, saying that it wouldn't be sufficient in a multi-threaded case), and the map itself would also not be threadsafe. Let me know if there's something wrong with that reasoning (a rough standalone sketch of the resulting shape follows below).
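
    A rough standalone model of this per-network scheduling (assumptions: the InvScheduler class, the std::mutex standing in for g_msgproc_mutex and the std::exponential_distribution draw are illustrative, not the PR's exact code): each network key owns one shared timer, and a fresh exponentially distributed delay is only rolled once the previous one has expired.

     // Illustrative model only: one shared inv timer per network key,
     // protected by a single message-processing mutex (no atomics needed).
     #include <chrono>
     #include <cstdint>
     #include <map>
     #include <mutex>
     #include <random>

     class InvScheduler
     {
         std::mutex m_msgproc_mutex; // stand-in for g_msgproc_mutex
         std::map<uint64_t, std::chrono::microseconds> m_next_inv_per_network_key;
         std::default_random_engine m_rng{std::random_device{}()};

     public:
         // All inbound peers sharing a network key get the same next-inv time;
         // a new exponentially distributed delay is drawn once it has passed.
         std::chrono::microseconds NextInvToInbounds(std::chrono::microseconds now,
                                                     std::chrono::seconds average_interval,
                                                     uint64_t network_key)
         {
             std::lock_guard lock{m_msgproc_mutex};
             // try_emplace value-initialises the timer to 0us on first use.
             auto it = m_next_inv_per_network_key.try_emplace(network_key).first;
             if (it->second < now) {
                 std::exponential_distribution<double> delay_s{1.0 / std::chrono::duration<double>(average_interval).count()};
                 it->second = now + std::chrono::duration_cast<std::chrono::microseconds>(
                                        std::chrono::duration<double>{delay_s(m_rng)});
             }
             return it->second;
         }
     };

    Calling this with the same network_key twice returns the same time, while a different network_key gets an independently drawn time, which matches the behavior visible in the logs earlier in this thread.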
  14. jonatack commented at 4:35 pm on September 30, 2025: member
    Concept ACK. Good to see proposals like this and #33498 to address fingerprinting.
  15. sipa commented at 1:14 pm on October 1, 2025: member

    utACK 0f7d4ee4e8281ed141a6ebb7e0edee7b864e4dcf

    As a follow-up, I think we can give every outbound onion connection a separate network_id, because they all look distinct to external observers.

  16. DrahtBot requested review from jonatack on Oct 1, 2025
  17. DrahtBot requested review from stratospher on Oct 1, 2025
  18. DrahtBot requested review from danielabrozzoni on Oct 1, 2025
  19. stratospher commented at 7:01 am on October 3, 2025: contributor

    reACK 0f7d4ee.

    Removed the atomic and guarded m_next_inv_to_inbounds_per_network_key by g_msgproc_mutex.

    Didn't notice any change in the rate at which the m_next_inv_send_time = NextInvToInbounds(..) logs were printed after removing the atomic, and there are no locking-related calls in the assembly dump. Would be curious as well to know what others think of this diff.

  20. naiyoma commented at 1:15 pm on October 3, 2025: contributor

    Tested ACK 0f7d4ee4e8281ed141a6ebb7e0edee7b864e4dcf

    I've started testing and observing the difference before and after this change using my own dual-homed node. Before, inbound connections get the same schedule_time regardless of network:

    2025-10-03T09:44:44Z id=3 ……..schedule_time=1759484710518878us
    2025-10-03T09:44:47Z id=11……..schedule_time=1759484710518878us
    

    After, different networks get different schedule times:

    2025-10-03T09:22:45Z INBOUND: id=20…….m_network_key=10888365335265728969 schedule_time=1759483369015197us
    2025-10-03T09:22:53Z INBOUND: id=19………m_network_key=7190563574777134380 schedule_time=1759483374441089us
    
  21. danielabrozzoni commented at 10:36 pm on October 3, 2025: contributor

    reACK 0f7d4ee4e8

    I'm not very familiar with this portion of the code, but using g_msgproc_mutex seems fine to me (and cleaner than having the atomic). The lock already needs to be held in SendMessages, which is the only function from which we call NextInvToInbounds.

  22. fanquake merged this on Oct 3, 2025
  23. fanquake closed this on Oct 3, 2025

