During IBD, prune as much as possible until we get close to where we will eventually keep blocks #20827

pull luke-jr wants to merge 1 commit into bitcoin:master from luke-jr:ibd_prune_max changing 1 file +8 −4
  1. luke-jr commented at 1:49 am on January 2, 2021: member

    This should reduce pruning flushes even more, speeding up IBD with pruning on systems that have a sufficient dbcache.

    It assumes 1 MB per block between the tip and the best header chain, and simply adds this to the buffer that pruning tries to leave available. This results in pruning almost everything until we get close to where we will eventually need to keep blocks.
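
    A minimal, self-contained sketch of that buffer calculation (the helper name IbdPruneBuffer is mine; the other names follow the diff further down, and this is an illustration rather than the merged code):

     #include <cstdint>

     // Returns the prune buffer to use while in IBD: the existing buffer plus an
     // assumed 1 MB for every block between the validated tip and the best header.
     uint64_t IbdPruneBuffer(uint64_t base_buffer,        // existing nBuffer
                             uint64_t chain_tip_height,   // height of the validated chain tip
                             uint64_t target_sync_height, // height of the best known header
                             bool is_ibd)
     {
         static constexpr uint64_t average_block_size = 1'000'000; // assume >= 1 MB per block
         if (is_ibd && target_sync_height > chain_tip_height) {
             // Reserve room for every block still to be downloaded, so each prune
             // removes (almost) everything until the tip nears the best header.
             base_buffer += average_block_size * (target_sync_height - chain_tip_height);
         }
         return base_buffer;
     }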

  2. DrahtBot commented at 2:01 am on January 2, 2021: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage

    For detailed information about the code coverage, see the test coverage report.

    Reviews

    See the guideline for information on the review process.

    Type          Reviewers
    ACK           andrewtoth, fjahr, achow101
    Concept ACK   jonasschnelli, theStack, 0xB10C, jonatack, kristapsk

    If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

    Conflicts

    No conflicts as of last run.

  3. luke-jr force-pushed on Jan 2, 2021
  4. DrahtBot added the label Validation on Jan 2, 2021
  5. jonatack commented at 0:16 am on January 4, 2021: contributor
    Interesting, will have a look.
  6. jonasschnelli commented at 8:26 am on January 4, 2021: contributor
    Nice! Concept ACK. Could we further reduce flushes by allowing a user parameter (-pruneflushbuffer [or similar])? Some users are probably okay with providing a few GB of extra space for speeding up pruning (assuming a high dbcache), but would want the extra space to be freed after IBD. Or we could automatically use the dbcache size as the buffer size.
  7. Sjors commented at 5:50 pm on January 8, 2021: member
    I also like the idea of temporarily assigning more RAM for IBD. This could even be done by default for GUI users (with opt-out in the intro screen).
  8. luke-jr commented at 6:33 pm on January 8, 2021: member
    dbcache adjustments are ultimately an unrelated feature. I see #19873 as the next step in that area.
  9. DrahtBot commented at 11:45 am on January 13, 2021: contributor

    🕵️ @sipa has been requested to review this pull request as specified in the REVIEWERS file.

  10. DrahtBot added the label Needs rebase on Feb 18, 2021
  11. in src/validation.cpp:3648 in 2c35ce710d outdated
    4023-            // Since this is only relevant during IBD, we use a fixed 10%
    4024-            nBuffer += nPruneTarget / 10;
    4025+        // So when pruning in IBD, increase the buffer to avoid a re-prune too soon.
    4026+        if (is_ibd && target_sync_height > (uint64_t)chain_tip_height) {
    4027+            // Since this is only relevant during IBD, we assume blocks are at least 1 MB on average
    4028+            static constexpr uint64_t average_block_size = 1000000;  /* 1 MB */
    


    luke-jr commented at 11:28 pm on July 6, 2021:
    AFAIK we don’t have access to future block sizes at this point.
  12. luke-jr force-pushed on Aug 31, 2021
  13. DrahtBot removed the label Needs rebase on Aug 31, 2021
  14. stickies-v commented at 2:15 pm on December 16, 2021: contributor

    Without rebase, compiling fails for me:

     $ make clean && ./configure CXXFLAGS="-O0 -g" CFLAGS="-O0 -g" && make -j 9
     ...
     touch src/config/bitcoin-config.h.in
       CXX      util/libbitcoin_util_a-moneystr.o
       CXX      util/libbitcoin_util_a-readwritefile.o
       CXX      util/libbitcoin_util_a-settings.o
       CXX      util/libbitcoin_util_a-serfloat.o
       CXX      util/libbitcoin_util_a-spanparsing.o
       CXX      util/libbitcoin_util_a-strencodings.o
       CXX      util/libbitcoin_util_a-string.o
       CXX      util/libbitcoin_util_a-url.o
     make[3]: *** No rule to make target `libunivalue.la'.  Stop.
     make[2]: *** [univalue/libunivalue.la] Error 2
     make[2]: *** Waiting for unfinished jobs....
     make[1]: *** [all-recursive] Error 1
     make: *** [all-recursive] Error 1
    

    After git rebase master, everything runs smoothly again. Will report back shortly with my review results!

  15. maflcko commented at 2:21 pm on December 16, 2021: member
    @stickies-v If you want to compile older commits of master, you’ll need to make distclean first.
  16. luke-jr commented at 4:44 pm on December 16, 2021: member

    Without rebase, compiling fails for me:

    Note: It’s usually best to merge PRs into master for testing. Sometimes (for bugfixes) they are based on very old commits (where the bug was introduced), and if there’s a silent problem with the merge, you’d want to notice that too.

    @stickies-v If you want to compile older commits of master, you’ll need to make distclean first.

    Can we fix this build system bug?

  17. theStack commented at 1:15 am on December 20, 2021: contributor
    Concept ACK
  18. 0xB10C commented at 11:36 am on December 20, 2021: contributor

    Concept ACK

    We’ll be doing a Bitcoin Core PR Review club covering this PR on the 29th: bitcoincore.reviews/20827

  19. 0xB10C commented at 12:03 pm on December 26, 2021: contributor

    #12404 (not merged) may also be interesting to reviewers. @Sjors did run a few benchmarks back then and found #11658 (which this PR overrides) to be the better option at that time.

    I’m planning on benchmarking this change too.

  20. fanquake referenced this in commit e9ee023f6e on Jan 2, 2022
  21. sidhujag referenced this in commit 67c037da14 on Jan 2, 2022
  22. DrahtBot added the label Needs rebase on Jan 4, 2022
  23. fjahr commented at 6:36 pm on January 9, 2022: contributor
    Concept ACK, I read the PR Review Club and will review after rebase.
  24. 0xB10C commented at 3:52 pm on January 10, 2022: contributor

    fwiw: #23581 moved BlockManager to node/blockstorage, that’s where the conflict with master happens.

    I’ve rebased this PR in https://github.com/0xB10C/bitcoin/tree/2022-01-lukejr-ibd-prune-max-rebased and started benchmarking https://github.com/0xB10C/bitcoin/commit/db2a9e71a748c4806b85f767441fb3f090d62163 (PR) vs https://github.com/0xB10C/bitcoin/commit/2e01b6986099715afa40ed6464da4b321b630e9c (mergebase; MB).

    I have 8 different prune and dbcache configurations set up. Each configuration runs three times per binary (PR and MB). That’s 8 configs * 3 runs * 2 binaries = 48 runs. I expect this to take about 6 days. Configurations:

     1. -dbcache=300 -prune=550
     2. -dbcache=4000 -prune=550

     3. -dbcache=300 -prune=1100
     4. -dbcache=4000 -prune=1100

     5. -dbcache=300 -prune=2200
     6. -dbcache=4000 -prune=2200

     7. -dbcache=300 -prune=4400
     8. -dbcache=4000 -prune=4400
    

    Benchmarks run on a dedicated, properly cooled, and otherwise idle machine with two HDDs in a RAID0. The node syncs from a local node on the same machine and disks (connected via addnode and otherwise connect=0). The sync starts at block height 500_000 and ends at height 600_000. Between each run, the machine idles for 5 minutes. The debug.log of each run, with extra debug=coindb and debug=prune logging enabled, is saved. I’ll post results and graphs next week.

  25. 0xB10C commented at 10:29 am on January 15, 2022: contributor

    With a prune just before IBD ends, there is more blk and undo data available when running master than with this change. One potential issue that was mentioned during the PR review club:

    On non-mainnet networks, the 1MB per block assumption doesn’t hold. When we prune with a large nBuffer just before IBD ends, the prune might cause problems on larger reorgs as not enough undo data is present to revert back to the last shared block. Testnet has seen large reorgs in the past.

    It might be worth thinking about potential alternatives to increasing the nBuffer by 1MB * remaining_blocks. There might be cleaner ways to implement “prune everything as long as we are not close to IBD being done”. I haven’t come up with a practical threshold for “IBD being close to done” yet that works for all networks.
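
    As a thought experiment (nothing proposed in this PR), “not close to done” could be keyed off the number of missing blocks rather than an assumed block size, which would behave the same on every network. A hypothetical, self-contained sketch; the helper name, threshold, and fallback are invented for illustration:

     #include <cstdint>

     // Hypothetical alternative: while far from the best header, treat the whole
     // prune target as reclaimable (prune everything prunable); near the end,
     // fall back to the existing "+10%" buffer.
     uint64_t IbdPruneBufferAlt(uint64_t base_buffer,
                                uint64_t prune_target,
                                uint64_t chain_tip_height,
                                uint64_t target_sync_height,
                                bool is_ibd)
     {
         constexpr uint64_t far_from_done_blocks = 7 * 144; // illustrative only: ~one week of blocks
         if (!is_ibd || target_sync_height <= chain_tip_height) return base_buffer;
         const uint64_t remaining = target_sync_height - chain_tip_height;
         return base_buffer + (remaining > far_from_done_blocks ? prune_target : prune_target / 10);
     }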

  26. luke-jr commented at 7:09 pm on January 15, 2022: member
    The 1 MB per block assumption would mean at least 550 blocks. Testnet has deeper reorgs than that?
  27. 0xB10C commented at 11:05 am on January 16, 2022: contributor

    The 1 MB per block assumption would mean at least 550 blocks. Testnet has deeper reorgs than that?

    I haven’t observed this personally, but yes, testnet has been unreliable with regard to deep reorgs in the past. That’s one motivation for Signet in BIP 325. Also:

  28. 0xB10C commented at 3:07 pm on January 20, 2022: contributor

    benchmarking 0xB10C@db2a9e7 (PR) vs 0xB10C@2e01b69 (mergebase; MB).

    I have 8 different prune and dbcache configurations set up. Each configuration runs three times per binary (PR and MB). That’s 8 configs * 3 runs * 2 binaries = 48 runs. I expect this to take about 6 days. Configurations:

     1. -dbcache=300 -prune=550
     2. -dbcache=4000 -prune=550

     3. -dbcache=300 -prune=1100
     4. -dbcache=4000 -prune=1100

     5. -dbcache=300 -prune=2200
     6. -dbcache=4000 -prune=2200

     7. -dbcache=300 -prune=4400
     8. -dbcache=4000 -prune=4400
    

    Benchmarks run on a dedicated, properly cooled, and otherwise idle machine with two HDDs in a RAID0. The node syncs from a local node on the same machine and disks (connected via addnode and otherwise connect=0). The sync starts at block height 500_000 and ends at height 600_000. Between each run, the machine idles for 5 minutes. The debug.log of each run, with extra debug=coindb and debug=prune logging enabled, is saved. I’ll post results and graphs next week.

    Benchmarks done. Happy to share debug.logs and code if anyone wants to look into it or reproduce.

    tl;dr: With some configurations, an IBD performance improvement of 15% or more between block 500k and 600k was measured.

    Number of prunes, pruned files, dbcache flushes

    This PR makes pruning more aggressive. We prune more blk/rev files in a single prune operation to minimize the total number of prune operations needed. This also reduces the number of expensive dbcache flushes as each prune operation requires a dbcache flush.

    config | MB prunes | PR prunes | PR/MB prunes | MB pruned files/prune | PR pruned files/prune | PR/MB pruned files | MB flushes | PR flushes | PR/MB flushes
    0: prune=550 dbcache=300 | 726 | 670 | 92.29% | 1 | 1.08 | 1.08x | 726 | 670 | 92%
    1: prune=550 dbcache=4000 | 726 | 670 | 92.29% | 1 | 1.08 | 1.08x | 726 | 670 | 92%
    2: prune=1100 dbcache=300 | 723 | 150 | 20.75% | 1 | 4.83 | 4.83x | 723 | 150 | 21%
    3: prune=1100 dbcache=4000 | 723 | 150 | 20.75% | 1 | 4.83 | 4.83x | 723 | 150 | 21%
    4: prune=2200 dbcache=300 | 358 | 58 | 16.2% | 2 | 12.47 | 6.23x | 359 | 76 | 21%
    5: prune=2200 dbcache=4000 | 358 | 58 | 16.2% | 2 | 12.47 | 6.23x | 358 | 58 | 16%
    6: prune=4400 dbcache=300 | 175 | 26 | 14.86% | 4 | 27.69 | 6.92x | 178 | 69 | 39%
    7: prune=4400 dbcache=4000 | 175 | 26 | 14.86% | 4 | 27.69 | 6.92x | 175 | 26 | 15%

    In the IBDs for configurations 0 & 1 (prune=550MB; the minimum prune target), we prune and flush only about 8% less often with PR than with MB. With MB, every prune operation prunes 1 blk/rev file pair. PR prune operations sometimes prune 2 blk/rev file pairs, averaging 1.08 file pairs per prune. Increasing the dbcache doesn’t affect the number of flushes, likely because the maximum cache size is not reached.

    Similarly, a larger dbcache doesn’t affect configurations 2 & 3. However, PR only has 21% of the flushes and prune operations of MB. MB prunes 1 file pair per prune operation while PR prunes 4.83 file pairs on average.

    We first see a larger dbcache affecting the number of flushes in configurations 4 & 5. Configuration 4 (c4) has a dbcache of 300MB and configuration 5 (c5) a dbcache of 4000MB. With c4, PR performs only 21% as many flushes as MB; with c5, it’s 16%. PR prunes on average 12.47 file pairs per prune operation, while MB prunes 2 pairs.

    With configurations 6 & 7 PR prunes 27.7 file pairs on average while MB prunes 4 pairs. However, in configuration 6 (c6) the small dbcache (300MB) is a limiting factor. PR flushes 69 times with c6 and 26 times with configuration 7 (dbcache=4000MB) compared to 178 (c6) and 175 (c7) flushes with MB.

    IBD performance (between block 500000 and 600000)

    Fewer dbcache flushes due to fewer prune operations should yield IBD performance improvements.

    Configuration | MB run 1 | MB run 2 | MB run 3 | MB mean | PR run 1 | PR run 2 | PR run 3 | PR mean | improvement
    0: prune=550 dbcache=300 | 9799s | 10046s | 10003s | 9949s | 9478s | 9531s | 9518s | 9509s | 4.4%
    1: prune=550 dbcache=4000 | 10219s | 10047s | 9915s | 10060s | 9649s | 9549s | 9712s | 9636s | 4.2%
    2: prune=1100 dbcache=300 | 9970s | 10191s | 10073s | 10078s | 8249s | 8467s | 8310s | 8342s | 17.2%
    3: prune=1100 dbcache=4000 | 10084s | 9887s | 9976s | 9982s | 8652s | 8483s | 8329s | 8488s | 15.0%
    4: prune=2200 dbcache=300 | 8991s | 9071s | 8817s | 8959s | 7494s | 7687s | 7496s | 7559s | 15.6%
    5: prune=2200 dbcache=4000 | 8973s | 8865s | 9063s | 8967s | 7405s | 7169s | 7661s | 7411s | 17.4%
    6: prune=4400 dbcache=300 | 8378s | 8102s | 7972s | 8150s | 7645s | 7533s | 7515s | 7564s | 7.2%
    7: prune=4400 dbcache=4000 | 8144s | 8263s | 8002s | 8136s | 6833s | 6982s | 6792s | 6869s | 15.6%

    The performance improves for all configurations.

    In configurations 0 (c0) and 1 (c1), the increased dbcache had no effect. We saw 8% fewer prunes and flushes with PR than MB for both configurations. We measured a 4.4% (c0) and a 4.2% (c1) improvement in IBD performance with PR.

    image

    In configurations 2 (c2) and 3 (c3), PR had only 21% of the flush and prune operations that MB had (see above). We measured a performance improvement of 17.2% for c2 and 15.0% for c3. The difference of 2.2% likely stems from PR run 1 of c3, which took about 400s (~5%) longer than PR run 1 of c2.

    image

    The smaller dbcache of configuration 4 (c4) was already noticeable in the number of flushes and prunes. The PR showed a 15.6% improvement over MB in c4 and a 17.4% improvement in c5.

    image

    With configurations 6 (c6) and 7 (c7), the effect of the smaller dbcache of c6 became more noticeable. We saw a 7.2% improvement for PR with c6 compared to a 15.6% improvement with c7.

    image

    Conclusion

    With larger prune targets, PR significantly reduces the number of flushes compared to MB by deleting more blk/rev file pairs per prune operation. However, the default dbcache (300MB) can fill up with larger prune targets; here, increasing the dbcache helps. Given the right configuration, the reduced number of flushes has a significant impact on IBD performance between block 500000 and 600000. Often, a performance improvement of over 15% can be measured.

    This PR is worth pursuing.


    I’m running full (genesis to 710k) IBD benchmarks now.

  29. unknown approved
  30. unknown commented at 3:38 pm on January 20, 2022: none

    utACK https://github.com/bitcoin/bitcoin/pull/20827/commits/24f3936337de3afb4fa56efc83009e2527d22df0

    Review by @0xB10C is helpful, although I was confused in some places whether MB stands for Master Branch or Mega Byte.

  31. 0xB10C commented at 3:40 pm on January 20, 2022: contributor

    Review by @0xB10C is helpful although I was confused at some places if MB is for Master Branch or Mega Byte.

    Hm, good point. MB actually stands for mergebase (the commit right before the changes from this PR) here.

  32. 0xB10C commented at 11:23 am on January 27, 2022: contributor

    I ran full IBD benchmarks (block 0 to 710k compared to 500k to 600k in #20827 (comment)) with these three configurations:

    1. -dbcache=300 -prune=550
    2. -dbcache=4000 -prune=4400
    3. -dbcache=4000 -prune=8800
    

    I ran each configuration two times (compared to three times in #20827 (comment)) for both the PR binary and the MB binary. That’s 3 configurations * 2 binaries * 2 runs = 12 full IBDs. The setup and hardware were the same as in the 500k-to-600k benchmark. However, I started with an empty datadir, compared to a datadir pre-synced to block 500k in the previous benchmark.

    tl;dr: I measured a 3.9% full-IBD speed-up with a default dbcache and the minimum pruning target and a 17.4% full-IBD speed-up with a dbcache of 4000MB and a prune target of 8800MB.

    Number of prunes, pruned files, dbcache flushes

    config | MB prunes | PR prunes | PR/MB prunes | MB pruned files/prune | PR pruned files/prune | PR/MB pruned files | MB flushes | PR flushes | PR/MB flushes
    0: prune=550 dbcache=300 | 2805 | 2403 | 85.67% | 1 | 1.17 | 1.17x | 2805 | 2403 | 86%
    1: prune=4400 dbcache=4000 | 695 | 101 | 14.53% | 4 | 27.75 | 6.94x | 695 | 101 | 15%
    2: prune=8800 dbcache=4000 | 393 | 48 | 12.21% | 6.99 | 58.17 | 8.32x | 393 | 48 | 12%

    Across the board, it’s visible that pruning more files in a single prune operation results in fewer prune operations in total, which in turn means fewer dbcache flushes. The PR binary in configuration 0 pruned 1.17 blk/rev file pairs per prune operation compared to 1 file pair with MB. This caused the PR binary to flush about 14% fewer times. With higher prune targets, we flush even less often: in configuration 1 we flushed 85% fewer times with PR than with MB, and in configuration 2 we flushed 88% fewer times. In configuration 2, PR pruned 58 blk/rev file pairs on average per prune operation compared to only about 7 file pairs with MB.

    full-IBD performance (between block 0 and 710k)

    Configuration | MB run 1 | MB run 2 | MB mean | PR run 1 | PR run 2 | PR mean | improvement
    0: prune=550 dbcache=300 | 40621s | 39884s | 40252s | 38127s | 39208s | 38667s | 3.9%
    1: prune=4400 dbcache=4000 | 33346s | 33395s | 33370s | 27597s | 28715s | 28156s | 15.6%
    2: prune=8800 dbcache=4000 | 30734s | 32040s | 31387s | 25656s | 26214s | 25935s | 17.4%

    image

    With configuration 0, I measured a 3.9% IBD speed-up with the PR binary. The MB binary averaged at 11:10:52 and the PR binary at 10:44:27 for a full IBD.

    image

    With configuration 1, I measured a 15.6% IBD speed-up with the PR binary. The MB binary averaged at 09:16:10 and the PR binary at 07:49:16 for a full IBD.

    image

    With configuration 2, I measured a 17.4% IBD speed-up with the PR binary. The MB binary averaged at 08:43:07 and the PR binary at 07:12:15 for a full IBD.

    dbcache usage on flush

    I’ve looked at the dbcache size right before it is flushed, to evaluate the chosen configurations. This gives a good indication of dbcache utilization.

    image

    For the three configurations, the blue box plot (for the MB binary) on the left and the orange box plot (for the PR binary) show the distributions of the dbcache size right before flushing it to disk.

    For configuration 0, with a default dbcache of 300MB, MB and PR utilize only about 100MB and 125MB of the dbcache, respectively, in the median. PR has some outliers just below 500MB, which are larger than the configured dbcache. This is likely the reserved mempool memory being used as dbcache during IBD (when the mempool is empty).
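
    If that is indeed the cause, the effective cache budget would roughly be the dbcache plus the unused -maxmempool allowance, which matters mostly during IBD when the mempool is empty. A rough, self-contained sketch of that budget (my paraphrase of the behaviour, not the exact Core code):

     #include <cstdint>

     // Approximate limit the coins cache may reach before a full flush is forced:
     // the configured dbcache plus whatever of the mempool allowance is unused.
     uint64_t EffectiveCoinsCacheLimit(uint64_t dbcache_bytes,
                                       uint64_t max_mempool_bytes,
                                       uint64_t mempool_usage_bytes)
     {
         const uint64_t unused_mempool =
             max_mempool_bytes > mempool_usage_bytes ? max_mempool_bytes - mempool_usage_bytes : 0;
         return dbcache_bytes + unused_mempool;
     }

     // With -dbcache=300, the default -maxmempool=300, and an empty mempool during
     // IBD, the cache could grow to roughly 600 MB, which would be consistent with
     // the flush sizes above the configured dbcache seen in configuration 0.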

    With configuration 1 with a 4000MB dbcache, the median utilization is about 275MB for the MB binary and about 1050MB for PR. PR has outliers up to 2100MB, but limiting it to 1600MB compared to 4000MB would probably be fine too.

    With configuration 2 with a 4000MB dbcache, the median utilization is about 450MB for the MB binary and about 1600MB for PR. PR has outliers up to 2700MB, but limiting it to 2600MB compared to 4000MB would probably be fine too.

    None of the configurations are limited by dbcache size.

    Hardware utilization

    image

    The 4c/8t i7 CPU of the benchmarking machine was under about 25% load without signature verification (below the assumevalid point) and 65% load with signature verification. The limiting factor was probably disk I/O on the HDD RAID.

    Again, benchmarks on a system with SSDs would be helpful too. As HDD space is usually cheaper than SSD space, more people might enable pruning on systems with an SSD. Additionally, systems with weaker processors might see smaller performance improvements, as they are also limited by their CPU and not only by disk I/O.

  33. luke-jr force-pushed on Mar 24, 2022
  34. luke-jr commented at 2:09 am on March 24, 2022: member
    Rebased
  35. DrahtBot removed the label Needs rebase on Mar 24, 2022
  36. DrahtBot added the label Needs rebase on Oct 19, 2022
  37. achow101 commented at 8:50 pm on February 3, 2023: member
    Are you still working on this?
  38. PastaPastaPasta referenced this in commit e57396299c on Apr 16, 2023
  39. PastaPastaPasta referenced this in commit ebbe540d3a on Apr 16, 2023
  40. PastaPastaPasta referenced this in commit b0592e1ab5 on Apr 17, 2023
  41. PastaPastaPasta referenced this in commit 8157dfcc60 on Apr 17, 2023
  42. achow101 marked this as a draft on Apr 25, 2023
  43. luke-jr force-pushed on Jun 26, 2023
  44. luke-jr force-pushed on Jul 19, 2023
  45. luke-jr marked this as ready for review on Jul 19, 2023
  46. luke-jr commented at 1:25 am on July 19, 2023: member
    Rebased
  47. jonatack commented at 1:33 am on July 19, 2023: contributor
    Concept ACK
  48. DrahtBot removed the label Needs rebase on Jul 19, 2023
  49. maflcko removed the label Validation on Jul 19, 2023
  50. maflcko added the label Block storage on Jul 19, 2023
  51. kristapsk commented at 9:42 am on July 19, 2023: contributor
    Concept ACK
  52. BenWestgate commented at 4:15 am on August 4, 2023: none
    I have a project that syncs Bitcoin Core on an external drive, and this would really help a lot of people testing it with smaller flash drives and more RAM. So much so that I’ve made scripts that do manual pruning (back to 550 MiB) only when disk space is getting low, to emulate this.
  53. DrahtBot added the label Needs rebase on Aug 21, 2023
  54. achow101 commented at 4:22 pm on September 20, 2023: member
    Are you still working on this?
  55. During IBD, prune as much as possible until we get close to where we will eventually keep blocks d298ff8b62
  56. luke-jr force-pushed on Dec 27, 2023
  57. DrahtBot removed the label Needs rebase on Dec 27, 2023
  58. in src/node/blockstorage.cpp:301 in d298ff8b62
    297@@ -298,6 +298,7 @@ void BlockManager::FindFilesToPrune(
    298     // Distribute our -prune budget over all chainstates.
    299     const auto target = std::max(
    300         MIN_DISK_SPACE_FOR_BLOCK_FILES, GetPruneTarget() / chainman.GetAll().size());
    301+    const uint64_t target_sync_height = chainman.m_best_header->nHeight;
    


    andrewtoth commented at 11:52 pm on December 27, 2023:
    nit: Any reason this declaration is up here and not directly above where it is used below?
  59. in src/node/blockstorage.cpp:325 in d298ff8b62
    320@@ -320,10 +321,13 @@ void BlockManager::FindFilesToPrune(
    321         // On a prune event, the chainstate DB is flushed.
    322         // To avoid excessive prune events negating the benefit of high dbcache
    323         // values, we should not prune too rapidly.
    324-        // So when pruning in IBD, increase the buffer a bit to avoid a re-prune too soon.
    325-        if (chainman.IsInitialBlockDownload()) {
    326-            // Since this is only relevant during IBD, we use a fixed 10%
    327-            nBuffer += target / 10;
    328+        // So when pruning in IBD, increase the buffer to avoid a re-prune too soon.
    329+        const auto chain_tip_height = chain.m_chain.Height();
    


    andrewtoth commented at 11:53 pm on December 27, 2023:

    nit: could remove the cast in the next line with

        const uint64_t chain_tip_height = chain.m_chain.Height();
    
  60. andrewtoth commented at 3:11 pm on December 30, 2023: contributor

    ACK d298ff8b62b2624ed390c8a2f905c888ffc956ff

    Ran benchmark with hyperfine:

     hyperfine --show-output --parameter-list commit 96ec3b67a7a7f968d002e13d6fc227f69b7f07d7,d298ff8b62b2624ed390c8a2f905c888ffc956ff --setup 'git checkout {commit} && make -j$(nproc) src/bitcoind' --prepare 'sync; sudo /sbin/sysctl vm.drop_caches=3; rm -r /home/user/.bitcoin/blocks /home/user/.bitcoin/chainstate' -M 3 './src/bitcoind -dbcache=2000 -prune=2000 -printtoconsole=0 -stopatheight=800000'
    

    Results were 21% faster on this branch :rocket:

     Summary
       ./src/bitcoind -dbcache=2000 -prune=2000 -printtoconsole=0 -stopatheight=800000 (commit = d298ff8b62b2624ed390c8a2f905c888ffc956ff) ran
         1.21 ± 0.00 times faster than ./src/bitcoind -dbcache=2000 -prune=2000 -printtoconsole=0 -stopatheight=800000 (commit = 96ec3b67a7a7f968d002e13d6fc227f69b7f07d7)
    
  61. DrahtBot requested review from jonatack on Dec 30, 2023
  62. DrahtBot requested review from fjahr on Dec 30, 2023
  63. DrahtBot requested review from 0xB10C on Dec 30, 2023
  64. DrahtBot requested review from theStack on Dec 30, 2023
  65. fjahr commented at 8:20 pm on January 12, 2024: contributor

    utACK d298ff8b62b2624ed390c8a2f905c888ffc956ff

    While there may be slightly better configurations possible, as @0xB10C pointed out above, I think the 1MB assumption is alright and this is a clear improvement already, so this can be merged and potentially improved later.

  66. DrahtBot removed review request from fjahr on Jan 12, 2024
  67. DrahtBot added the label CI failed on Jan 24, 2024
  68. achow101 commented at 8:07 pm on January 25, 2024: member
    ACK d298ff8b62b2624ed390c8a2f905c888ffc956ff
  69. achow101 merged this on Jan 25, 2024
  70. achow101 closed this on Jan 25, 2024

  71. vostrnad commented at 7:18 pm on February 28, 2024: none
    15% improvement in IBD performance in some configurations seems significant enough for a release note, no?
