p2p: Reduce bandwidth during initial headers sync when a block is found #25720

sdaftuar wants to merge 2 commits into bitcoin:master from sdaftuar:2022-07-reduce-headers-sync-bandwidth, changing 3 files (+139 −6)
  1. sdaftuar commented at 3:52 pm on July 27, 2022: member

    On startup, if our headers chain is more than a day behind the current time, we’ll pick one peer to sync headers with until our best headers chain is caught up (at that point, we’ll try to sync headers with all peers).

    However, if an INV for a block is received before our headers chain is caught up, we’ll then start to sync headers from each peer announcing the block. This can result in doing a big headers sync with many (if not all) of our peers simultaneously, which wastes bandwidth.

    This PR would reduce that overhead by picking (at most) one new peer to try syncing headers with whenever a new block is announced, prior to our headers chain being caught up.

    Post-merge edit: See #25720 (review) for more context about this change.
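
    For orientation, here is a minimal Python sketch of the gating rule this PR adds (the actual change is C++ in net_processing.cpp; the class and function names here are illustrative stand-ins, while the two tracked fields mirror m_inv_triggered_getheaders_before_sync and m_last_block_inv_triggering_headers_sync from the diff discussed below):

        from dataclasses import dataclass

        @dataclass
        class PeerSketch:
            sync_started: bool = False                        # fSyncStarted
            inv_triggered_getheaders_before_sync: bool = False

        class HeadersSyncSketch:
            def __init__(self):
                # Hash of the last block inv that added an extra sync peer
                self.last_block_inv_triggering_headers_sync = None

            def should_send_getheaders(self, peer, block_hash, caught_up):
                if caught_up or peer.sync_started:
                    # Normal behavior: respond to the announcement.
                    return True
                # Before our headers chain is caught up: add at most one
                # new getheaders peer per distinct block announcement.
                if (not peer.inv_triggered_getheaders_before_sync
                        and block_hash != self.last_block_inv_triggering_headers_sync):
                    self.last_block_inv_triggering_headers_sync = block_hash
                    peer.inv_triggered_getheaders_before_sync = True
                    return True
                return False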

  2. fanquake added the label P2P on Jul 27, 2022
  3. sdaftuar force-pushed on Jul 27, 2022
  4. sdaftuar force-pushed on Jul 27, 2022
  5. in test/functional/p2p_initial_headers_sync.py:81 in 4a3376376c outdated
    76+                    count += 1
    77+                    peer_receiving_headers = p
    78+                    p.last_message.pop("getheaders", None)
    79+                    p.send_message(msg_headers()) # Send empty response
    80+
    81+        assert count == 1
    


    brunoerg commented at 2:00 pm on July 29, 2022:
        assert_equal(count, 1)
    

    Because of #23119


    sdaftuar commented at 1:48 pm on August 3, 2022:
    Done.
  6. in src/net_processing.cpp:688 in 4a3376376c outdated
    683@@ -681,6 +684,9 @@ class PeerManagerImpl final : public PeerManager
    684     /** Number of nodes with fSyncStarted. */
    685     int nSyncStarted GUARDED_BY(cs_main) = 0;
    686 
    687+    /** Hash of the last block we received via INV */
    688+    uint256 m_last_block_inv_triggering_headers_sync GUARDED_BY(cs_main) {};
    


    MarcoFalke commented at 3:03 pm on August 2, 2022:

    nit: Looks like either:

    • GUARDED_BY is not needed, as this is only accessed by a single thread (and if another thread is added in the future, it is unclear whether this should use cs_main or a different mutex)
    • If it is needed, it might be best to use a mutex different from the validation mutex cs_main to avoid further bundling net_processing with chainstatemanager logic and fields.

    sdaftuar commented at 1:48 pm on August 3, 2022:
    Removed.
  7. in src/net_processing.cpp:3281 in 4a3376376c outdated
    3279+            // consider sending a getheaders now. On initial startup, there's a
    3280+            // reliability vs bandwidth tradeoff, where we only trying doing
    3281+            // initial headers sync with one peer at a time, with a long
    3282+            // timeout (at which point, if the sync hasn't completed, we will
    3283+            // disconnect the peer and then choose another).  In the meantime,
    3284+            // as new blocks are found, we are willing to add one new peer per
    


    MarcoFalke commented at 3:14 pm on August 2, 2022:
    It seems like a corner case that blocks are announced to us via inv, no? So I am wondering why such a new block is a good signal to add the peer as an initial headers sync peer, as opposed to, let’s say, a random timer.

    mzumsande commented at 3:46 pm on August 2, 2022:
    While syncing headers, I found this to actually be the typical case. I think it’s because we only announce via headers if we know that the peer has the previous header (which is not the case for a new peer currently syncing headers), and otherwise revert to INV-mode. (code) A sketch of this decision follows at the end of this thread.

    MarcoFalke commented at 4:23 pm on August 2, 2022:
    In that case, could make sense to correct the doc, which says “If a node fell back to sending blocks by inv, // it’s probably for a re-org.”

    sdaftuar commented at 1:29 pm on August 3, 2022:
    I guess words like “probably” are confusing in comments, as there is context that is missing. I’ll update both comments.

    sdaftuar commented at 1:49 pm on August 3, 2022:
    Improved the comments and removed the “probably for a re-org” language.
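
    As a companion to mzumsande’s observation above, a rough Python sketch of the announcement-mode decision (simplified from Bitcoin Core’s SendMessages logic; prefers_headers and known_header_hashes are illustrative stand-ins for the real bookkeeping):

        from dataclasses import dataclass, field

        @dataclass
        class AnnouncePeer:
            prefers_headers: bool = True          # peer sent sendheaders
            known_header_hashes: set = field(default_factory=set)

        def announcement_mode(peer, prev_hash):
            """Decide how to announce a new block building on prev_hash."""
            if peer.prefers_headers and prev_hash in peer.known_header_hashes:
                # The peer can connect the new header to its known chain,
                # so announce via a headers message.
                return "headers"
            # Otherwise fall back to inv. A peer still doing its initial
            # headers sync typically lands here, which is why inv
            # announcements are the common case during that phase.
            return "inv"

        # e.g. a freshly connected peer that hasn't synced any headers:
        print(announcement_mode(AnnouncePeer(), "prev"))  # -> "inv"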
  8. in src/net_processing.cpp:3284 in 4a3376376c outdated
    3282+            // timeout (at which point, if the sync hasn't completed, we will
    3283+            // disconnect the peer and then choose another).  In the meantime,
    3284+            // as new blocks are found, we are willing to add one new peer per
    3285+            // block to sync with as well, to sync quicker in the case where
    3286+            // our initial peer is unresponsive (but less bandwidth than we'd
    3287+            // use if we turned on sync with all peers).
    


    MarcoFalke commented at 3:17 pm on August 2, 2022:
    If we assume that the initial headers sync logic is robust, why add the additional complexity here? Wouldn’t it be better to just skip the getheaders message? Or if the logic isn’t assumed to be robust, lower the timeout or add a new peer on a random timeout?

    sdaftuar commented at 2:04 pm on August 3, 2022:

    I’m not sure I completely understand what you’re asking, but the topic is more complicated than just an assumption around whether the initial headers sync logic is robust:

    • Ideally, we would only download the full headers chain from a single peer when we are starting up, because it saves bandwidth to do so.
    • It’s possible that the peer we pick for initial headers sync could be (a) slow, (b) on a chain that is not the main chain, (c) adversarial, or some other terrible combination of those factors. So we cannot just have our logic rely on the initial peer to serve us the honest chain in a reasonable amount of time.
    • We currently have two behaviors that help protect us from choosing a bad initial peer. The main protection we have is that when a block INV is received, we send a getheaders to all peers that announce the block, resulting in us getting the main chain with high probability. However, this is bandwidth wasting if we have many peers that serve us an INV at the same time, which is probably the common case when we’re in a scenario that our initial peer is slow.
    • The second protection we have is that after about 20 minutes, we’ll evict our initial headers-sync peer if our tip’s timestamp isn’t within a day of the current time. This could kick in if we have a bad initial peer and no blocks are found for a while. (A rough sketch of this timeout follows this comment.)

    I think we could do a variety of things to improve the current situation on master; I think that adding (say) one additional headers sync peer on some kind of timer (maybe every 5 or 10 minutes) could make sense. I think that choosing a random peer among the set of peers announcing a block is probably better peer selection than choosing a random peer (or random outbound peer) on a timer, just because if a peer sends an INV there’s more reason to believe that they are responsive and going to be helpful in getting us the chain, but probably some combination of both would be even better.

    However, the complexity I ran into when thinking about other strategies for initial sync has to do with the eviction logic. Right now, I think it’s mostly good that we evict our (single) initial headers-sync peer if we can’t get a chain tip that is recent within 20 minutes. However, triggering that logic on all our peers at the same time seems over the top to me, because there are edge-case scenarios (such as: no blocks have been found on the network for a day, or the honest chain is some kind of billion-block timewarp chain that takes more than 20 minutes to download) where I think such logic could be badly behaved for the network, because we could end up with no peers or we could fall out of consensus.

    I think what I’m proposing in this patch is a narrow change that exactly addresses the bandwidth problem, and maximizes the chance we find a good peer quickly, without making our behavior in those edge-case scenarios any worse. Nevertheless, a bigger overhaul of this logic that carefully considers these things could certainly be an improvement and make this whole thing easier to think about.
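
    A rough rendering of the ~20-minute figure in the second protection above (the constants approximate net_processing.cpp’s HEADERS_DOWNLOAD_TIMEOUT_BASE and HEADERS_DOWNLOAD_TIMEOUT_PER_HEADER, with one header expected per 10 minutes the tip is behind; treat the exact values as assumptions, not a verbatim port):

        HEADERS_DOWNLOAD_TIMEOUT_BASE_S = 15 * 60      # 15 minutes
        HEADERS_DOWNLOAD_TIMEOUT_PER_HEADER_S = 0.001  # 1 ms per expected header
        BLOCK_INTERVAL_S = 600

        def headers_sync_timeout(seconds_behind):
            expected_headers = seconds_behind / BLOCK_INTERVAL_S
            return (HEADERS_DOWNLOAD_TIMEOUT_BASE_S
                    + HEADERS_DOWNLOAD_TIMEOUT_PER_HEADER_S * expected_headers)

        # A tip one month behind: ~15.1 minutes
        print(headers_sync_timeout(30 * 24 * 3600) / 60)
        # A tip back at genesis (~13 years behind in 2022): ~26 minutes
        print(headers_sync_timeout(13 * 365 * 24 * 3600) / 60)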


    JeremyRubin commented at 4:30 pm on August 15, 2022:
    @sdaftuar can you move this comment into the top of the PR; I think this context helps quite a bit (and I was going to ask for this in a comment on the PR directly).
  9. in src/net_processing.cpp:3285 in 4a3376376c outdated
    3284+            // as new blocks are found, we are willing to add one new peer per
    3285+            // block to sync with as well, to sync quicker in the case where
    3286+            // our initial peer is unresponsive (but less bandwidth than we'd
    3287+            // use if we turned on sync with all peers).
    3288+            CNodeState *state = State(pfrom.GetId());
    3289+            assert(state != nullptr);
    


    MarcoFalke commented at 3:17 pm on August 2, 2022:
            CNodeState& state{*Assert(State(pfrom.GetId()))};
    

    nit: Can be written in one line


    sdaftuar commented at 2:04 pm on August 3, 2022:
    Done.
  10. MarcoFalke commented at 3:35 pm on August 2, 2022: member
    left a few questions
  11. sdaftuar force-pushed on Aug 3, 2022
  12. in test/functional/p2p_initial_headers_sync.py:43 in 17f2822c0d outdated
    38+    def run_test(self):
    39+        self.log.info("Adding a peer to node0")
    40+        peer1 = self.nodes[0].add_p2p_connection(P2PInterface())
    41+
    42+        # Wait for peer1 to receive a getheaders
    43+        peer1.wait_until(lambda: "getheaders" in peer1.last_message, timeout=30)
    


    dergoegge commented at 10:46 am on August 6, 2022:
        peer1.wait_for_getheaders()
    

    Should work for all the other locations below as well.


    sdaftuar commented at 2:46 pm on August 12, 2022:
    Thanks! Fixed.
  13. in test/functional/p2p_initial_headers_sync.py:55 in 17f2822c0d outdated
    50+        peer3 = self.nodes[0].add_p2p_connection(P2PInterface())
    51+
    52+        all_peers = [peer1, peer2, peer3]
    53+
    54+        self.log.info("Verify that peer2 and peer3 don't receive a getheaders after connecting")
    55+        time.sleep(5)
    


    dergoegge commented at 11:04 am on August 6, 2022:

    I think you can just remove this timeout?

    PeerManagerImpl::SendMessages is called twice (once after verack is received in the initial handshake and once after calling sync_with_ping on the next line), so we can be sure that getheaders hasn’t been sent here because it would have been sent after the verack but before the pong.


    sdaftuar commented at 2:47 pm on August 12, 2022:
    Sounds good, done.
  14. in test/functional/p2p_initial_headers_sync.py:40 in 17f2822c0d outdated
    35+        new_block_announcement = msg_inv(inv=[CInv(MSG_BLOCK, random.randint(0, 1<<256 - 1))])
    36+        [ p.send_and_ping(new_block_announcement) for p in peers ]
    37+
    38+    def run_test(self):
    39+        self.log.info("Adding a peer to node0")
    40+        peer1 = self.nodes[0].add_p2p_connection(P2PInterface())
    


    dergoegge commented at 11:11 am on August 6, 2022:
    Doesn’t need to happen in this PR but in the future this test could also cover our preference for outbound peers for the initial headers sync.
  15. dergoegge commented at 11:13 am on August 6, 2022: member
    Concept ACK
  16. in test/functional/p2p_initial_headers_sync.py:35 in 17f2822c0d outdated
    30+    def set_test_params(self):
    31+        self.setup_clean_chain = True
    32+        self.num_nodes = 1
    33+
    34+    def announce_random_block(self, peers):
    35+        new_block_announcement = msg_inv(inv=[CInv(MSG_BLOCK, random.randint(0, 1<<256 - 1))])
    


    jonas-ott commented at 8:51 pm on August 7, 2022:

    Doesn’t really matter but: 1<<256 - 1 == 1 << (256 - 1)

        new_block_announcement = msg_inv(inv=[CInv(MSG_BLOCK, random.randint(0, (1 << 256) - 1))])
    

    sdaftuar commented at 2:47 pm on August 12, 2022:
    Thanks! Fixed.
  17. LarryRuane commented at 5:28 pm on August 10, 2022: contributor
  18. pablomartin4btc approved
  19. pablomartin4btc commented at 6:23 pm on August 10, 2022: contributor

    Code review ACK.

    Tested ACK (unit & functional tests).

  20. hernanmarino approved
  21. hernanmarino commented at 6:37 pm on August 10, 2022: contributor
    Tested ACK
  22. in src/net_processing.cpp:3286 in fdbbc6ab91 outdated
    3285+            // as new blocks are found, we are willing to add one new peer per
    3286+            // block to sync with as well, to sync quicker in the case where
    3287+            // our initial peer is unresponsive (but less bandwidth than we'd
    3288+            // use if we turned on sync with all peers).
    3289+            CNodeState& state{*Assert(State(pfrom.GetId()))};
    3290+            if (state.fSyncStarted || (!peer->m_inv_triggered_getheaders_before_sync && *best_block != m_last_block_inv_triggering_headers_sync)) {
    


    ajtowns commented at 6:17 am on August 12, 2022:
    If there are two blocks found at the same time/height, this could still trigger a getheaders to most peers, no? (Someone sends block A, then someone else sends block B, then someone else sends A, etc) Might it be better to set m_last_extra_headers_sync = SteadyClock::now(); and check that it’s been at least a minute before adding an extra one?

    sdaftuar commented at 2:42 pm on August 12, 2022:

    That’s true, but I think your suggestion is less robust overall than what I have proposed here: sending an INV to a node is free (there is no proof-of-work attached to a block hash), so an adversary could take advantage of your proposed strategy by continually connecting, sending an INV, and disconnecting, to prevent a node from trying to sync with any of its honest peers that are announcing main-chain blocks.

    On the other hand, I just checked one of my long-running, well-connected nodes, and it’s seen about 10 stale blocks in the last 50000. So that seems like pretty good odds that a node starting up is unlikely to run into this?
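
    Putting a number on that, under the assumption that the observed ~10-in-50,000 stale rate holds and blocks are independent:

        stale_rate = 10 / 50_000  # observed stale blocks per block

        def p_any_stale(k):
            """P(at least one of the first k blocks seen while syncing
            is part of a same-height race)."""
            return 1 - (1 - stale_rate) ** k

        for k in (1, 3, 10):
            print(k, f"{p_any_stale(k):.4%}")  # 0.0200%, 0.0600%, 0.1998%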

  23. sdaftuar force-pushed on Aug 12, 2022
  24. in src/net_processing.cpp:372 in 61931d7535 outdated
    368@@ -369,6 +369,9 @@ struct Peer {
    369     /** Set of txids to reconsider once their parent transactions have been accepted **/
    370     std::set<uint256> m_orphan_work_set GUARDED_BY(g_cs_orphans);
    371 
    372+    /** Whether we've sent a peer a getheaders in response to an inv prior to initial-headers-sync completing */
    


    LarryRuane commented at 6:08 pm on August 12, 2022:
    /** Whether we've sent this peer a getheaders in response to an inv prior to initial-headers-sync completing */
    

    sdaftuar commented at 9:06 pm on August 12, 2022:
    Done.
  25. in src/net_processing.cpp:3277 in 61931d7535 outdated
    3276-                LogPrint(BCLog::NET, "getheaders (%d) %s to peer=%d\n",
    3277-                        m_chainman.m_best_header->nHeight, best_block->ToString(),
    3278-                        pfrom.GetId());
    3279+            // If we haven't started initial headers-sync with this peer, then
    3280+            // consider sending a getheaders now. On initial startup, there's a
    3281+            // reliability vs bandwidth tradeoff, where we only trying doing
    


    LarryRuane commented at 6:17 pm on August 12, 2022:
            // reliability vs bandwidth tradeoff, where we are only trying to do
    

    sdaftuar commented at 9:07 pm on August 12, 2022:
    Fixed
  26. in src/net_processing.cpp:3280 in 61931d7535 outdated
    3279+            // If we haven't started initial headers-sync with this peer, then
    3280+            // consider sending a getheaders now. On initial startup, there's a
    3281+            // reliability vs bandwidth tradeoff, where we only trying doing
    3282+            // initial headers sync with one peer at a time, with a long
    3283+            // timeout (at which point, if the sync hasn't completed, we will
    3284+            // disconnect the peer and then choose another).  In the meantime,
    


    LarryRuane commented at 6:18 pm on August 12, 2022:
            // disconnect the peer and then choose another). In the meantime,
    

    sdaftuar commented at 9:07 pm on August 12, 2022:
    Oops, fixed.
  27. in test/functional/p2p_initial_headers_sync.py:8 in 61931d7535 outdated
    0@@ -0,0 +1,101 @@
    1+#!/usr/bin/env python3
    2+# Copyright (c) 2022 The Bitcoin Core developers
    3+# Distributed under the MIT software license, see the accompanying
    4+# file COPYING or http://www.opensource.org/licenses/mit-license.php.
    5+"""Test initial headers download
    6+
    7+Test that we only try to initially sync headers from one peer (until our chain
    8+is close to caught up), and that block announcements result in only one
    


    LarryRuane commented at 6:35 pm on August 12, 2022:
is close to caught up), and that each block announcement results in only one
    

    sdaftuar commented at 9:08 pm on August 12, 2022:
    Done, thanks.
  28. in test/functional/p2p_initial_headers_sync.py:36 in 61931d7535 outdated
    31+        self.setup_clean_chain = True
    32+        self.num_nodes = 1
    33+
    34+    def announce_random_block(self, peers):
    35+        new_block_announcement = msg_inv(inv=[CInv(MSG_BLOCK, random.randint(0, (1<<256) - 1))])
    36+        [ p.send_and_ping(new_block_announcement) for p in peers ]
    


    LarryRuane commented at 7:02 pm on August 12, 2022:

    nit, this would be clearer since there’s no reason to build up a list from the return values (there aren’t actually any return values)

        for p in peers: p.send_and_ping(new_block_announcement)
    

    sdaftuar commented at 9:08 pm on August 12, 2022:
    Done.
  29. in test/functional/p2p_initial_headers_sync.py:35 in 61931d7535 outdated
    30+    def set_test_params(self):
    31+        self.setup_clean_chain = True
    32+        self.num_nodes = 1
    33+
    34+    def announce_random_block(self, peers):
    35+        new_block_announcement = msg_inv(inv=[CInv(MSG_BLOCK, random.randint(0, (1<<256) - 1))])
    


    LarryRuane commented at 7:08 pm on August 12, 2022:

    nit

        new_block_announcement = msg_inv(inv=[CInv(MSG_BLOCK, random.randrange(1<<256))])
    

    sdaftuar commented at 9:08 pm on August 12, 2022:
    Done.
  30. in test/functional/p2p_initial_headers_sync.py:47 in 61931d7535 outdated
    40+        peer1 = self.nodes[0].add_p2p_connection(P2PInterface())
    41+
    42+        # Wait for peer1 to receive a getheaders
    43+        peer1.wait_for_getheaders()
    44+        # Give an empty reply
    45+        peer1.send_message(msg_headers())
    


    LarryRuane commented at 7:11 pm on August 12, 2022:
        # An empty reply causes the reply handler to not initiate another getheaders
        peer1.send_message(msg_headers())
    

    sdaftuar commented at 9:09 pm on August 12, 2022:
    I improved this comment in the latest commit.
  31. in test/functional/p2p_initial_headers_sync.py:68 in 61931d7535 outdated
    63+        self.log.info("Have all peers announce a new block")
    64+        self.announce_random_block(all_peers)
    65+
    66+        self.log.info("Check that peer1 receives a getheaders in response")
    67+        peer1.wait_for_getheaders()
    68+        peer1.send_message(msg_headers()) # Send empty response
    


    LarryRuane commented at 7:12 pm on August 12, 2022:
        peer1.send_message(msg_headers()) # Send empty response, no follow-on getheaders
    

    sdaftuar commented at 9:11 pm on August 12, 2022:
    I updated this comment as well (to reference the longer explanation above).
  32. in test/functional/p2p_initial_headers_sync.py:38 in 61931d7535 outdated
    33+
    34+    def announce_random_block(self, peers):
    35+        new_block_announcement = msg_inv(inv=[CInv(MSG_BLOCK, random.randint(0, (1<<256) - 1))])
    36+        [ p.send_and_ping(new_block_announcement) for p in peers ]
    37+
    38+    def run_test(self):
    


    LarryRuane commented at 7:16 pm on August 12, 2022:

    nit, may make the test slightly more readable (and doesn’t give the casual reader the impression that there may be more than one node running).

    def run_test(self):
        node = self.nodes[0]
    

    sdaftuar commented at 9:09 pm on August 12, 2022:
    Leaving this as-is; I think the self.num_nodes = 1 in set_test_params makes it clear that only one node is involved.
  33. in test/functional/p2p_initial_headers_sync.py:55 in 61931d7535 outdated
    50+        peer3 = self.nodes[0].add_p2p_connection(P2PInterface())
    51+
    52+        all_peers = [peer1, peer2, peer3]
    53+
    54+        self.log.info("Verify that peer2 and peer3 don't receive a getheaders after connecting")
    55+        [ p.sync_with_ping() for p in all_peers ]
    


    LarryRuane commented at 7:20 pm on August 12, 2022:
        for p in all_peers: p.sync_with_ping()
    

    sdaftuar commented at 9:09 pm on August 12, 2022:
    Done.

    sdaftuar commented at 9:13 pm on August 12, 2022:
    Oops, turns out the linter doesn’t like this, so this is now split into two lines.
  34. in test/functional/p2p_initial_headers_sync.py:78 in 61931d7535 outdated
    69+        peer1.last_message.pop("getheaders", None)
    70+
    71+        self.log.info("Check that exactly 1 of {peer2, peer3} received a getheaders in response")
    72+        count = 0
    73+        peer_receiving_headers = None
    74+        for p in [peer2, peer3]:
    


    LarryRuane commented at 7:30 pm on August 12, 2022:
        for p in [peer2, peer3]:
            p.sync_with_ping()
    

    sdaftuar commented at 8:50 pm on August 12, 2022:
    Not sure if I’m missing something; I believe this sync_with_ping() is unnecessary because we use send_and_ping() in announce_random_block(), does that sound right?

    LarryRuane commented at 0:49 am on August 13, 2022:
    You’re right, ignore my suggestion.
  35. in test/functional/p2p_initial_headers_sync.py:78 in 61931d7535 outdated
    73+        peer_receiving_headers = None
    74+        for p in [peer2, peer3]:
    75+            with p2p_lock:
    76+                if "getheaders" in p.last_message:
    77+                    count += 1
    78+                    peer_receiving_headers = p
    


    LarryRuane commented at 7:36 pm on August 12, 2022:
                    peer_receiving_getheaders = p
    

    sdaftuar commented at 9:09 pm on August 12, 2022:
    Fixed.
  36. in test/functional/p2p_initial_headers_sync.py:99 in 61931d7535 outdated
    90+        self.log.info("Check that the remaining peer received a getheaders as well")
    91+        expected_peer = peer2
    92+        if peer2 == peer_receiving_headers:
    93+            expected_peer = peer3
    94+
    95+        expected_peer.wait_for_getheaders()
    


    LarryRuane commented at 7:38 pm on August 12, 2022:
        expected_peer.wait_for_getheaders()
        peer_receiving_headers.sync_with_ping()
        with p2p_lock:
            assert "getheaders" not in peer_receiving_headers.last_message
    

    sdaftuar commented at 8:56 pm on August 12, 2022:

    So I intentionally did not include this test (that a peer that we’ve started sync with doesn’t receive a subsequent getheaders in response to an INV) because I felt like that was overspecifying what I think of as reasonable behavior. I think the most important property is that we add a new peer with each new block, but whether we continue trying to sync with existing peers is (in my view) up for debate.

    For instance, if the test were rewritten a bit so that the peer receiving the initial getheaders in response to an INV had a big long headers chain to serve the node, then you’d expect there to be a getheaders in flight as headers sync proceeds. Granted that is not the situation in this test right now, but I feel like it’s helpful for future test maintenance to not overspecify behavior if it doesn’t really matter for what we’re trying to achieve.

  37. in test/functional/p2p_initial_headers_sync.py:69 in 61931d7535 outdated
    64+        self.announce_random_block(all_peers)
    65+
    66+        self.log.info("Check that peer1 receives a getheaders in response")
    67+        peer1.wait_for_getheaders()
    68+        peer1.send_message(msg_headers()) # Send empty response
    69+        peer1.last_message.pop("getheaders", None)
    


    LarryRuane commented at 7:45 pm on August 12, 2022:

    lock needed?

        with p2p_lock:
            peer1.last_message.pop("getheaders", None)
    

    sdaftuar commented at 9:09 pm on August 12, 2022:
    Yes, thanks! Fixed.
  38. LarryRuane commented at 8:22 pm on August 12, 2022: contributor

    I tested this patch pretty carefully; it’s working as intended. It’s not a huge improvement (I know it’s not expected to be), I started IBD on mainnet and there definitely are fewer redundant getheaders. Here’s a summary of what I observed by enabling the “p2p” debug category:

    The node begins by getting headers from a single peer (that’s the same as without this PR). When the first block announcement arrives (via inv), which usually comes (redundantly) from all of our peers, the node begins requesting headers from only one of those peers (instead of all peers that sent the same inv), which is the intended effect of this PR. A moment later, the headers reply from this peer triggers another getheaders request. This is also occurring with the original peer, so now there are two redundant “threads” (not OS threads, but threads in the sense that each headers reply launches another getheaders request to that peer).

    This means that each header is received twice from here onward. If and when a second block announcement appears, a third getheaders “thread” begins, so now we’re getting each header 3 times. But, again, this is much better than getting each header N times (if N is the number of our peers), which is what happens without this PR.

    As others have said, this initial headers sync process can be further improved, but it’s a tricky area so caution is advised! This is a step in the right direction and it seems safe to me.

    Note that this isn’t a huge problem in practice because, depending on network speed, the node can often download headers all the way out to the current tip before even one new block arrives, since the full header sync typically takes only a few minutes. And if a new block does arrive during this time, it’s not a big problem to be receiving these redundant headers; each is only just over 80 bytes in serialized form.

    I’m going to hold off on ack for now because there are a couple of changes I’d like to see in the test.

  39. Reduce bandwidth during initial headers sync when a block is found
    If our headers chain is behind on startup, then if a block is found we'll try
    to catch up from all peers announcing the block, in addition to our initial
    headers-sync peer. This commit changes behavior so that in this situation,
    we'll choose at most one peer announcing a block to additionally sync headers
    from.
    05f7f31598
  40. in test/functional/p2p_initial_headers_sync.py:26 in 61931d7535 outdated
    21+    P2PInterface,
    22+)
    23+from test_framework.util import (
    24+    assert_equal,
    25+)
    26+import time
    


    mzumsande commented at 8:48 pm on August 12, 2022:
    remove this line to make the linter happy

    sdaftuar commented at 9:10 pm on August 12, 2022:
    Thanks, fixed.
  41. sdaftuar force-pushed on Aug 12, 2022
  42. Add functional test for block announcements during initial headers sync f6a916683d
  43. sdaftuar force-pushed on Aug 12, 2022
  44. LarryRuane commented at 0:59 am on August 13, 2022: contributor
    ACK f6a916683d75ed5489666dbfbd711f000ad0707f
  45. ajtowns commented at 2:46 am on August 15, 2022: contributor
    ACK f6a916683d75ed5489666dbfbd711f000ad0707f
  46. mzumsande commented at 6:15 am on August 15, 2022: contributor

    ACK f6a916683d75ed5489666dbfbd711f000ad0707f I reviewed the code and tested this on testnet during a period where blocks were mined multiple times in a minute - each time that happened, one more peer was added to the headers sync.

    I think that this would be good to get in, especially with #25717 doubling the amount of headers download data. While testing #25717 on a slow connection on testnet, I would sometimes run into timeouts - with 2,300,000 headers (~180MB) being downloaded twice (two phases) from 10 outbound peers, it might be necessary to be able to download up to ~3.6GB within 20 minutes - this can be a problem on slower connections.

    While I think that this PR is a good improvement of the existing logic, longer-term I would probably prefer a deterministic mechanism instead of one tied to a random process - sometimes multiple blocks are found immediately after the start of headers sync, sometimes none for > 30 minutes, and I don’t really see why the number of peers for our headers download should be tied to this randomness, instead of doing something like simply adding another additional peer every 10 minutes.
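
    For reference, the arithmetic behind the ~3.6GB figure above (headers serialize to just over 80 bytes each; sizes here are approximate):

        headers = 2_300_000   # testnet headers at the time
        header_bytes = 80     # plus ~1 byte overhead per header, ignored
        phases = 2            # 25717 downloads the headers chain twice
        peers = 10            # outbound full-relay peers

        per_peer = headers * header_bytes * phases   # ~368 MB per peer
        total = per_peer * peers                     # ~3.7 GB worst case
        print(f"{per_peer / 1e6:.0f} MB per peer, {total / 1e9:.1f} GB total")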

  47. fanquake requested review from dergoegge on Aug 15, 2022
  48. dergoegge commented at 4:26 pm on August 15, 2022: member

    Code review ACK f6a916683d75ed5489666dbfbd711f000ad0707f

    I think I agree with Martin, that in the long-term a deterministic mechanism would be preferable but for now this is a good improvement (especially w.r.t. #25717).

  49. fanquake added this to the milestone 24.0 on Aug 15, 2022
  50. achow101 commented at 7:25 pm on August 15, 2022: member
    ACK f6a916683d75ed5489666dbfbd711f000ad0707f
  51. achow101 merged this on Aug 15, 2022
  52. achow101 closed this on Aug 15, 2022

  53. sidhujag referenced this in commit ad7bc814ae on Aug 16, 2022
  54. bitcoin locked this on Aug 16, 2023
