Autoprune: Do relay blocks if possible #6122

Pull request: cozz wants to merge 1 commit into bitcoin:master from cozz:cozz16, changing 1 file (+15 −7)
  1. cozz commented at 8:46 am on May 12, 2015: contributor

    Currently we don’t relay blocks in prune mode, which seems very bad, as such a negative side effect is not even mentioned in the help message.

    So with this PR we still relay if the peer is synced at least up to our pruning threshold. I added another 6 blocks of headroom to guard against the unlikely case that we prune the block while the request is in flight. This means that we still relay if the peer is not more than 282 blocks behind.
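    To make the arithmetic explicit: with MIN_BLOCKS_TO_KEEP = 288 and 6 blocks of headroom, the cutoff is 288 - 6 = 282 blocks behind our tip. A minimal sketch of the condition (illustration only, not the patch itself; nPeerBestHeight is a hypothetical stand-in for the best height we know for the peer):

        // Sketch: relay only if the peer is known to be past the oldest block
        // we are guaranteed to still have, plus 6 blocks of safety margin.
        int nLastBlockWeMustKeep = chainActive.Height() - MIN_BLOCKS_TO_KEEP + 6;
        bool fRelayToPeer = (nPeerBestHeight >= nLastBlockWeMustKeep);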

  2. sipa commented at 9:19 am on May 12, 2015: member
    We should do something like this, but it won’t help without also having some means of advertising that we can relay blocks, as right now nobody should even be connecting to pruned nodes (which advertise as SPV nodes).
  3. cozz commented at 9:29 am on May 12, 2015: contributor
    @sipa Can you point me to the code where we don’t connect to pruned nodes, please? Because I assumed we don’t distinguish on service bits in the connection logic.
  4. sipa commented at 10:08 am on May 12, 2015: member
    Unsure, addnode may work, but automatic discovery shouldn’t.
  5. cozz commented at 10:21 am on May 12, 2015: contributor

    So you don’t know the code? Didn’t you write the address manager?

    Well, I will go test now whether a node without NODE_NETWORK announces its IP, and whether or not we add it and connect to it via the address manager. Because that is exactly what I was assuming.

  6. laanwj added the label P2P on May 12, 2015
  7. cozz commented at 1:25 pm on May 12, 2015: contributor

    So I don’t know how to test this NODE_NETWORK thing.

    The address manager does not care about nServices, so as I understand it, there is no difference between NODE_NETWORK nodes and non-NODE_NETWORK nodes. We make a distinction after we have connected, in the version message, but not before.

    In any case, this pull request does not hurt. Block relay is highest priority, and I believe we really should announce the block whenever possible.

    Let’s say someone runs a bunch of full nodes because he wants to contribute to the network; then our -prune feature is really bad if suddenly all those nodes stop relaying blocks. The guy thinks he is doing something good for the network and himself (with pruning) while he is not.

    If what you are saying is true, that nobody connects to pruned nodes, that would also be bad, because it is not mentioned in the help message either. If so, there is a big fat warning missing. But again, I cannot even find this behavior in the source code.

  8. petertodd commented at 2:39 pm on May 12, 2015: contributor

    ACK concept.

    I don’t think this pull-req should be gated on whether or not we advertise that other nodes should connect to us; relaying is always valuable for keeping the P2P network well connected. Note that if you start a node with -listen=0, nobody will ever connect to you, yet we still relay blocks. The same logic should apply here.

    What I don’t get is why make this dependent on the peer’s advertised starting height? Why should their height (which may be out of date) have any relevance to whether or not we relay a block? Instead I think this should be based only on our height, with an appropriate window to prevent stalls.

    As for the size of that window… isn’t this code only triggered when we get a new block? You’d have to get a burst of >288 blocks in a row in non-initial-download mode to run into that problem, which seems rather unlikely. I’m not sure it’s really worth having extra code for such an unlikely case.

  9. cozz commented at 4:16 pm on May 12, 2015: contributor

    @petertodd The code I added only cares about the height of the other node, via both nStartingHeight and pindexBestKnownBlock. The reason is that this avoids the other node asking us for blocks we don’t have. If we know that the other node has at least what we have (minus 288), nothing can go wrong. The code is already only executed for (!fInitialDownload), which is based on our height; there must have been a reason it was disabled in the first place. So I added a check on the other node’s height as well.

    I am just very unhappy with block relay being completely disabled for pruning. To me, this is a bug.

  10. petertodd commented at 4:41 pm on May 12, 2015: contributor

    @cozz Whoops, yeah, brain fart on my part. That exception is reasonable, although the fact that we need it is annoying. :( A pity that we still deal with stalling peers so badly.

    Definitely agree re: the total disabling. Heck, I’d even argue that SPV nodes should be forwarding blocks.

  11. in src/main.cpp: in f9da3b2d14 outdated
    2282@@ -2283,11 +2283,20 @@ bool ActivateBestChain(CValidationState &state, CBlock *pblock) {
    2283                 nBlockEstimate = Checkpoints::GetTotalBlocksEstimate(chainParams.Checkpoints());
    2284             // Don't relay blocks if pruning -- could cause a peer to try to download, resulting
    2285             // in a stalled download if the block file is pruned before the request.
    2286-            if (nLocalServices & NODE_NETWORK) {
    2287+            {
    


    sdaftuar commented at 6:23 pm on May 12, 2015:
    Comment above needs to be updated.
  12. in src/main.cpp: in f9da3b2d14 outdated
    2293+                    if (chainActive.Height() > (pnode->nStartingHeight != -1 ? pnode->nStartingHeight - 2000 : nBlockEstimate)) {
    2294+                        if (!fPruneMode) // not pruning, always relay
    2295+                            pnode->PushInventory(CInv(MSG_BLOCK, hashNewTip));
    2296+                        else { // pruning, still relay if peer has at least up to our pruning threshold
    2297+                            int nLastBlockWeMustKeep = chainActive.Height() - MIN_BLOCKS_TO_KEEP + 6; // add 6 to ensure we dont prune while the request is in flight
    2298+                            CNodeState *state = State(pnode->GetId());
    


    sdaftuar commented at 6:23 pm on May 12, 2015:
    I believe you need to hold cs_main for this.
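    A sketch of the fix being suggested (illustration only; note that in the real code cs_main must be acquired before cs_vNodes, so the lock would have to wrap the whole relay loop rather than sit inside it):

        // CNodeState (via State()) and chainActive are both guarded by cs_main,
        // so the height comparison below needs the lock held.
        LOCK(cs_main);
        CNodeState* state = State(pnode->GetId());
        int nPeerHeight = pnode->nStartingHeight;
        if (state != NULL && state->pindexBestKnownBlock != NULL)
            nPeerHeight = std::max(nPeerHeight, state->pindexBestKnownBlock->nHeight);
        if (nPeerHeight >= chainActive.Height() - MIN_BLOCKS_TO_KEEP + 6)
            pnode->PushInventory(CInv(MSG_BLOCK, hashNewTip));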
  13. sdaftuar commented at 7:17 pm on May 12, 2015: member

    I made a couple code comments, but in reviewing this I noticed another issue.

    I think pruning nodes should respond to a “getblocks” message differently than non-pruning nodes, so that they never generate an inv for a block they don’t have on disk. I haven’t constructed this scenario in a test yet, but I believe it’s possible for a 0.9 (or earlier) node to send a getblocks message to a pruning node, and if the 0.9 node is far enough behind, the pruning node could serve up invs for blocks it doesn’t have (which the 0.9 node would then request, never receive, and eventually disconnect over).

    Assuming we agree that behavior should change, I think this pull could be much simpler: we don’t need to bother with a special case for pruning at all (i.e. we just remove the if (nLocalServices & NODE_NETWORK) on line 2286; see the sketch at the end of this comment). My reasoning:

    • 0.10 and later nodes will not download non-inv’ed blocks except from NODE_NETWORK peers (there’s a guard around the call to FindNextBlocksToDownload)
    • 0.9 and 0.8 nodes rely on inv’s, including those generated in response to a getblocks, to determine what to ask for from a peer

    (I’m not all that familiar with the p2p behavior of 0.9 and 0.8 nodes though, hopefully someone more familiar can confirm that the above is correct.)

    I guess one thing I’m not sure how to weigh is the p2p behavior of even older Bitcoin Core clients (which I haven’t looked at) and of non-Bitcoin-Core clients. But my first thought is that if we’re advertising ourselves as not being NODE_NETWORK, and we’re only generating invs for blocks that we have (and are likely to still have for some reasonable time window), then that ought to be acceptable network behavior.
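    A sketch of that simplification (illustration only, reconstructed from the surrounding code in ActivateBestChain shown above): the relay loop stays as it was before pruning, with the NODE_NETWORK guard simply dropped.

        // Relay the new tip to every peer past the checkpoint estimate, whether
        // or not we are pruning; getblocks and FindNextBlocksToDownload are
        // restricted separately so peers won't request blocks we have pruned.
        {
            LOCK(cs_vNodes);
            BOOST_FOREACH(CNode* pnode, vNodes)
                if (chainActive.Height() > (pnode->nStartingHeight != -1 ? pnode->nStartingHeight - 2000 : nBlockEstimate))
                    pnode->PushInventory(CInv(MSG_BLOCK, hashNewTip));
        }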

  14. sipa commented at 7:34 pm on May 12, 2015: member

    @sdaftuar Sounds very reasonable to me. 0.8 and 0.9 should never fetch anything that was not inv’ed, as far as I know.

    For 0.10, we should only inv in case the last common ancestor of pindexLastCommonBlock and the block being inv’ed is not pruned (otherwise we may trigger an async fetch that crosses pruned block ranges). That is probably a safe mechanism in any case. Sorry if the code already does this; reviewing is currently hard for me.

    @cozz Yes, I understand. I think the current pruning facility is extremely limited in usefulness (e.g. you can’t even -addnode to it within a private network and expect a useful result), and I agree the relaying issue is independent from the discovery/announcement issue.
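    A sketch of the check sipa describes (illustration only; the helper name SafeToInv is hypothetical, while LastCommonAncestor and the BLOCK_HAVE_DATA status bit exist in the 0.10-era code):

        // Only inv a block to a peer if the fork point between the peer's known
        // chain and the new block is still on disk, so that an async fetch
        // starting there never crosses a pruned range.
        bool SafeToInv(CBlockIndex* pindexLastCommonBlock, CBlockIndex* pindexNew)
        {
            const CBlockIndex* pfork = LastCommonAncestor(pindexLastCommonBlock, pindexNew);
            return pfork != NULL && (pfork->nStatus & BLOCK_HAVE_DATA);
        }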

  15. Autoprune: Do relay blocks if possible b37d61ee43
  16. cozz force-pushed on May 12, 2015
  17. cozz commented at 10:11 pm on May 12, 2015: contributor

    Fixed nits.

    So I believe this pruning feature might be better declared “experimental” for 0.11, given such limited usefulness and potential negative side effects. For example, besides block relay: if a lot of nodes decide to use this feature, the address manager gets “spammed” with pruned nodes. Non-pruned nodes should have priority over pruned ones for address relay and address selection, which is not currently implemented.

    As for this pull, I’d also say we can just always inv, because we don’t call FindNextBlocksToDownload for pruned peers. But then again, it’s unclear why it was disabled in the first place.

  18. sipa commented at 0:26 am on May 13, 2015: member
    Hmm, @sdaftuar mentions that 0.10 does not download from fClient nodes. That’s not entirely true - the direct fetching loop (in immediate response to an inv) may fetch from anyone who sends an inv, but the asynchronous fetching won’t. That means that even with this code, 0.10 and up will not be able to retrieve a reorg from a pruned peer.
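    For reference, the guard being discussed, paraphrased from the 0.10-era SendMessages in main.cpp (fClient is set for peers that do not advertise NODE_NETWORK):

        // Asynchronous block fetching only targets full (NODE_NETWORK) peers;
        // the direct-fetch path in the inv handler has no such restriction.
        if (!pto->fDisconnect && !pto->fClient &&
            state.nBlocksInFlight < MAX_BLOCKS_IN_TRANSIT_PER_PEER) {
            std::vector<CBlockIndex*> vToDownload;
            NodeId staller = -1;
            FindNextBlocksToDownload(pto->GetId(),
                MAX_BLOCKS_IN_TRANSIT_PER_PEER - state.nBlocksInFlight,
                vToDownload, staller);
            // ... queue getdata requests for vToDownload ...
        }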
  19. sdaftuar commented at 0:48 am on May 13, 2015: member
    @sipa Sorry if I was unclear, but yes, that was what I was referring to when I said 0.10 wouldn’t download non-inv’ed blocks except from NODE_NETWORK peers. I agree it seems unfortunate that 0.10 clients specifically wouldn’t be able to retrieve a reorg from a pruned peer (while 0.8/0.9 would, via their getblocks logic). Perhaps we should consider #5307 with some small cap on the number of blocks to inv, so that small reorgs would be successfully relayed from a pruning node to a 0.10 node (though with the n² overhead mentioned in that pull)?
  20. sdaftuar commented at 1:33 pm on May 14, 2015: member

    EDIT: the suggestion I posted in this comment doesn’t actually work; invs received for blocks that are already in mapBlockIndex don’t get directly fetched, so this approach wouldn’t help 0.10/0.11 nodes receive reorgs after all. #5307 may still be an option if we think this is important…

    Another idea: perhaps pruning nodes could behave differently when responding to a “getheaders” message, so that in addition to returning headers, they would also send an inv if the headers being requested are for blocks that are all on disk and unlikely to be pruned (just as we would do for getblocks in #6130). This could serve as a hint to a peer that it can download those blocks from us, and would assist a peer in either handling a reorg or completing a small-ish sync (after, say, being down for less than two days).
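    A rough sketch of that workaround (illustration only; vHeadersIndex is a hypothetical list of the block indexes behind the headers being returned):

        // After answering "getheaders", additionally inv the same blocks when we
        // are pruning, provided they are all on disk and comfortably above the
        // pruning threshold, as a hint that they can be downloaded from us.
        if (fPruneMode) {
            bool fCanServe = true;
            BOOST_FOREACH(const CBlockIndex* pindex, vHeadersIndex) {
                if (!(pindex->nStatus & BLOCK_HAVE_DATA) ||
                    pindex->nHeight < chainActive.Height() - MIN_BLOCKS_TO_KEEP + 6)
                    fCanServe = false;
            }
            if (fCanServe) {
                BOOST_FOREACH(const CBlockIndex* pindex, vHeadersIndex)
                    pfrom->PushInventory(CInv(MSG_BLOCK, pindex->GetBlockHash()));
            }
        }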

    This feels a little ugly, and I’m guessing not how we’d want to behave in the long-run, but maybe this could be a workaround for the time being.

  21. laanwj commented at 8:15 am on May 16, 2015: member
    Continued in #6148
  22. laanwj closed this on May 16, 2015

  23. laanwj commented at 8:17 am on May 16, 2015: member
    Oh, misunderstood: #6148 is not based on this but is a different implementation. Reopening.
  24. laanwj reopened this on May 16, 2015

  25. cozz commented at 11:40 am on May 16, 2015: contributor

    Closing in favor of #6148.

    We only announce what we have and won’t delete soon anyway. And the asynchronous fetching is not called for pruned peers. So what I did here is not even necessary.

  26. cozz closed this on May 16, 2015

  27. MarcoFalke locked this on Sep 8, 2021
