Revert "net: Avoid duplicate getheaders requests." PR #8054 #8306

pull gmaxwell wants to merge 1 commit into bitcoin:master from gmaxwell:revert_8054 changing 1 file +1 −8
  1. gmaxwell commented at 8:47 PM on July 5, 2016: contributor

    This reverts commit f93c2a1b7ee912f0651ebb4c8a5eca220e434f4a.

    This can cause synchronization to get stuck.

  2. Revert "net: Avoid duplicate getheaders requests." PR #8054
    This reverts commit f93c2a1b7ee912f0651ebb4c8a5eca220e434f4a.
    
    This can cause synchronization to get stuck.
    4fbdc4365b
  3. gmaxwell commented at 8:50 PM on July 5, 2016: contributor

    I observed a testnet node persistently stuck (even across restarts). Its best header chain was invalid, while the public network had a valid best chain with much more work than that invalid header chain, but connecting to it required more than one headers message.

    Suhas identified this PR as the probable cause, and on revert the node immediately became unstuck. Considering how near we are to release, I think simply reverting this is the right action currently. The issue it was fixing should have been rare and largely inconsequential on the Bitcoin network.

  4. sdaftuar commented at 10:07 PM on July 5, 2016: member

    utACK. We should tag this for 0.13.0.

  5. pstratem commented at 11:28 PM on July 5, 2016: contributor

    I ran into this problem, this fixed it.

  6. gmaxwell commented at 3:30 AM on July 6, 2016: contributor

    I've run into at least two people on IRC with this issue (in addition to Patrick).

  7. laanwj added the label P2P on Jul 6, 2016
  8. laanwj merged this on Jul 6, 2016
  9. laanwj closed this on Jul 6, 2016

  10. laanwj referenced this in commit 005d3b6430 on Jul 6, 2016
  11. domob1812 commented at 11:55 AM on July 9, 2016: contributor

    Even if the issue my original patch fixed is rare in practice, I would like to see it fixed. I understand that it is best to roll the change back after finding the issue now before the upcoming release, though. Can I open an issue to track a "fixed fix" for 0.14? Also, I do not yet fully understand how a node could become stuck with the patch even if it is on an invalid chain. Does anyone have a good explanation for what the issue is exactly?

  12. sdaftuar commented at 11:38 AM on July 10, 2016: member

    @domob1812 One example: before reverting this patch, suppose there are two competing forks with tips A and B, the node is at tip A, and the fork point C between A and B is more than 2000 blocks in the past. If the node already has the first 2000 headers from C towards B but none after that, the hasNewHeaders check added by #8054 can prevent the node from ever requesting the remaining headers up to B, causing chain sync to fail on that fork.
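    The failure mode above can be sketched as a tiny self-contained simulation (hypothetical code, not Bitcoin Core itself; integer heights stand in for header hashes, and the names `mapBlockIndex`/`hasNewHeaders` only mirror the variables discussed in this thread):

    #include <cassert>
    #include <cstdio>
    #include <set>

    int main() {
        const int MAX_HEADERS = 2000;       // one headers message per request
        const int forkPoint = 100000;       // fork point C between tips A and B
        const int tipB = forkPoint + 5000;  // tip B, more than 2000 past C

        // Headers the node already has stored: everything up to C + 2000,
        // i.e. the first batch of headers from C towards B.
        std::set<int> mapBlockIndex;
        for (int h = 0; h <= forkPoint + MAX_HEADERS; ++h)
            mapBlockIndex.insert(h);

        // The peer replies with a full 2000-header message whose last header
        // the node already knows, so the #8054 check sees nothing "new".
        int lastHeaderInMsg = forkPoint + MAX_HEADERS;
        bool hasNewHeaders = (mapBlockIndex.count(lastHeaderInMsg) == 0);

        // With the check in place, no follow-up getheaders is sent, so the
        // node never walks the remaining headers from C + 2000 up to B.
        std::printf("follow-up getheaders sent: %s\n", hasNewHeaders ? "yes" : "no");
        assert(!hasNewHeaders);                  // request suppressed
        assert(mapBlockIndex.count(tipB) == 0);  // tip B never learned: stuck
        return 0;
    }

    Running it prints `follow-up getheaders sent: no`, i.e. the sync stalls before ever reaching tip B.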

  13. domob1812 commented at 4:15 PM on July 13, 2016: contributor

    Thanks @sdaftuar, makes sense. I'll think about it.

  14. DeckerSU referenced this in commit a9231ef60e on Apr 16, 2020
  15. DeckerSU referenced this in commit 431204467c on Apr 16, 2020
  16. DeckerSU commented at 9:53 PM on April 16, 2020: none

    @sdaftuar I know it's been a long time since the initial patch, but what do you think: if we limit the initial @domob1812 fix to IBD only, like:

    // Only apply the duplicate-headers check during initial block download
    bool hasNewHeaders = true;
    if (IsInitialBlockDownload()) {
        hasNewHeaders = (mapBlockIndex.count(headers.back().GetHash()) == 0);
    }
    ...
    // Skip the follow-up getheaders if the peer sent nothing new
    if (nCount == MAX_HEADERS_RESULTS && pindexLast && hasNewHeaders) {
    ...
        pfrom->PushMessage("getheaders", chainActive.GetLocator(pindexLast), uint256());
    }
    

    Will it cause the stuck-sync case described in #8306 (comment)?

    I'm trying to solve the duplicate getheaders requests issue in Zcash and Komodo, because in these chains duplicate getheaders requests cause really significant overhead (additional download traffic) due to the much larger block header size (1488 bytes). Just an example:

    2020-04-16 02:03:29 more getheaders (1641598) to end to peer=1 (startheight:1836680)
    2020-04-16 02:03:29 sending: getheaders (1061 bytes) peer=1
    2020-04-16 02:03:29 received: headers (238081 bytes) peer=2
    2020-04-16 02:03:29 more getheaders (1641598) to end to peer=2 (startheight:1836680)
    2020-04-16 02:03:29 sending: getheaders (1061 bytes) peer=2
    2020-04-16 02:03:29 sending: getheaders (1061 bytes) peer=3
    2020-04-16 02:03:29 getheaders (1641598) 08685ebc864678e16e9094fafce2119a9c63aae7872dc6fcc974f29cfc201e1e to peer=3
    ...
    2020-04-16 02:03:29 more getheaders (1641598) to end to peer=5 (startheight:1836680)
    2020-04-16 02:03:29 sending: getheaders (1061 bytes) peer=5
    2020-04-16 02:03:29 received: headers (238081 bytes) peer=5
    

    As we can see here, the same getheaders request was sent to peers 1, 2 and 5, and we got 3 replies (MAX_HEADERS_RESULTS x sizeof(CBlockHeader) = 160 x 1488 = 238080 bytes of headers, plus the count prefix = 238081 bytes each). With ~1.8M blocks in the chain this can cause as much as ~500 GB of download traffic during IBD and the initial headers sync stage, while the entire blockchain is only ~25-30 GB. I did a small test-lab experiment, syncing from scratch from 2 peers on a local 1 Gbps network: it downloaded 1 GB of duplicate headers in just a few minutes.
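    A quick back-of-the-envelope check of the numbers in the comment above (the 1488-byte header size and the 160-header MAX_HEADERS_RESULTS limit are taken from that comment, not verified here):

    #include <cassert>
    #include <cstdio>

    int main() {
        const long headerSize = 1488;  // Zcash/Komodo header size per the comment
        const long maxHeaders = 160;   // MAX_HEADERS_RESULTS in those chains
        const long countPrefix = 1;    // compact-size prefix for counts < 253

        // Size of one full headers message, matching the 238081-byte log lines.
        long msgBytes = maxHeaders * headerSize + countPrefix;
        std::printf("headers message: %ld bytes\n", msgBytes);
        assert(msgBytes == 238081);

        // One full pass over ~1.8M headers; every duplicated request repeats
        // this cost, which is how the total balloons during headers sync.
        const long blocks = 1800000;
        double passGB = (double)blocks * headerSize / 1e9;
        std::printf("one full headers pass: ~%.1f GB\n", passGB);
        return 0;
    }

    One pass over the full header chain is about 2.7 GB, so the ~500 GB figure corresponds to that pass being repeated many times across peers and restarts.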

    So, any advice is appreciated.

    p.s. Limiting the initial fix with IsInitialBlockDownload() seems to work: download traffic during initial headers sync drops significantly, but I just want to make sure there are no pitfalls.

  17. hdevalence referenced this in commit 7d1dc174ba on Dec 2, 2020
  18. dconnolly referenced this in commit c04cc39a03 on Dec 3, 2020
  19. DrahtBot locked this on Feb 15, 2022

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-04-14 15:15 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me