net: avoid being disconnected from pruned nodes when syncing up #14507

pull kallewoof wants to merge 2 commits into bitcoin:master from kallewoof:net-pruned-limit-requests changing 3 files +116 −0
  1. kallewoof commented at 4:33 AM on October 18, 2018: member

    Nodes right now will end up triggering a self-disconnection if

    1. the peer is a pruned node, and
    2. the node is lagging behind > 290 blocks.

    The end result is arguably even helpful (it weeds out pruning nodes so you can find a full node to sync up against), but I think this should be handled in a less indirect manner, i.e. a node should drop all or a portion of its limited (pruned) peers if it discovers that there are blocks beyond a certain height.
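
    The mechanism at play, as a sketch (illustrative function name and safety margin, not code from the PR): a peer signalling NODE_NETWORK_LIMITED guarantees only roughly the last 288 blocks (NODE_NETWORK_LIMITED_MIN_BLOCKS) from its tip, so a node lagging further behind that asks such a peer for an older block gets disconnected by the peer. The check the PR aims for might look like:

    ```cpp
    #include <cstdint>

    // A NODE_NETWORK_LIMITED peer serves only the last
    // NODE_NETWORK_LIMITED_MIN_BLOCKS (288) blocks from its tip.
    static const int NODE_NETWORK_LIMITED_MIN_BLOCKS = 288;
    // Illustrative safety margin so a request near the boundary does not
    // race the peer's tip advancing (made-up value, not from the PR).
    static const int SAFETY_MARGIN = 2;

    // Returns true if a limited (pruned) peer whose best known height is
    // peer_height can safely be asked for the block at block_height.
    bool CanRequestFromLimitedPeer(int peer_height, int block_height)
    {
        return block_height > peer_height - NODE_NETWORK_LIMITED_MIN_BLOCKS + SAFETY_MARGIN;
    }
    ```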

  2. net: avoid asking pruning peers for blocks that would result in a remote disconnection of self
    Due to the NODE_NETWORK_LIMITED_MIN_BLOCKS setting, if a node ends up connecting to peers which are pruning nodes, it may inadvertently disconnect itself from a peer by asking for a block beyond the limit. Instead, nodes now avoid asking pruning peers for blocks that are beyond or close to the limit.
    18f4248b49
  3. kallewoof force-pushed on Oct 18, 2018
  4. fanquake added the label P2P on Oct 18, 2018
  5. test: add a test case where a node will trigger a self-disconnection from a pruning node during sync-up fdf6ec0eb1
  6. kallewoof force-pushed on Oct 18, 2018
  7. gmaxwell commented at 7:13 PM on October 18, 2018: contributor

    So how do we avoid the case where we're behind and all peers are pruned, leaving us stuck?

  8. in src/net_processing.cpp:3697 in fdf6ec0eb1
    3693 | @@ -3694,7 +3694,12 @@ bool PeerLogicValidation::SendMessages(CNode* pto)
    3694 |              std::vector<const CBlockIndex*> vToDownload;
    3695 |              NodeId staller = -1;
    3696 |              FindNextBlocksToDownload(pto->GetId(), MAX_BLOCKS_IN_TRANSIT_PER_PEER - state.nBlocksInFlight, vToDownload, staller, consensusParams);
    3697 | +            uint64_t max_height = std::max<uint64_t>(pto->nStartingHeight, state.pindexBestKnownBlock ? state.pindexBestKnownBlock->nHeight : 0);
    


    practicalswift commented at 6:34 AM on October 19, 2018:

    Please make the conversion here explicit since it changes signedness.
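
    One hedged way to make the conversion explicit (illustrative helper, not code from the PR): `CNode::nStartingHeight` is a signed `int` and is -1 until the version handshake completes, so a bare implicit conversion to `uint64_t` would wrap a negative value to a huge height. Clamping before the cast avoids that:

    ```cpp
    #include <algorithm>
    #include <cstdint>

    // Sketch of an explicit, clamped signed-to-unsigned conversion for the
    // std::max<uint64_t>(...) call the reviewer flags. Negative heights
    // (e.g. nStartingHeight == -1 pre-handshake) are treated as 0 rather
    // than wrapping around.
    uint64_t MaxKnownHeight(int starting_height, int best_known_height)
    {
        uint64_t start = starting_height > 0 ? static_cast<uint64_t>(starting_height) : 0;
        uint64_t best = best_known_height > 0 ? static_cast<uint64_t>(best_known_height) : 0;
        return std::max(start, best);
    }
    ```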

  9. in test/functional/feature_pruning_fast.py:64 in fdf6ec0eb1
      59 | +        # would cause a disconnect-slap.
      60 | +        self.log.info("- connecting to pruning node should not result in invalid requests")
      61 | +        self.start_node(2)
      62 | +        n2 = self.nodes[2]
      63 | +        n1 = self.nodes[1]
      64 | +        assert_equal(n2.getblockcount(),  200)
    


    practicalswift commented at 6:36 AM on October 19, 2018:

    Two spaces instead of one :-)

  10. kallewoof commented at 3:46 AM on October 22, 2018: member

    @gmaxwell

    So how do we avoid the case where we're behind and all peers are pruned, leaving us stuck?

    Sorry for the lack of response -- I am trying to work out a case which shows where this is a really bad scenario for a node, but it's taking longer than I expected. I will show that soon. In the meantime, I think you're right that I am probably doing this out of order (i.e. need to address the case "all peers are pruning nodes out of range" first, before doing this). @practicalswift Thanks for the review! I will address soon.

  13. kallewoof commented at 4:45 AM on October 25, 2018: member

    I am unable to reproduce a scenario where this causes undesired behavior and am closing this PR. Will reopen if I manage to reproduce. Sorry for wasting time! :(

  14. kallewoof closed this on Oct 25, 2018

  15. kallewoof deleted the branch on Oct 17, 2019
  16. DrahtBot locked this on Dec 16, 2021

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-04-14 18:15 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me