bitcoind 29.0 much slower than 28.0 on my system: cause found #32455

issue hMsats opened this issue on May 9, 2025
  1. hMsats commented at 6:08 am on May 9, 2025: none

    I always compile the bitcoin software myself (Ubuntu 24.04.2 LTS). For bitcoin-29.0 I compiled it (of course) for the first time with cmake (with and without berkeley-db).

    For both (with and without berkeley-db), bitcoind-29.0 is much slower than bitcoin-28.0 which gives me problems on my server.

    To investigate I wrote a shell script that determines the time difference in seconds between “Saw new header” and “UpdateTip” (a sketch of such a script follows the timings below):

    2025-05-09T05:24:11Z Saw new header hash=0000000000000000000213e630619be8945d471d06b0395fb6adca797877527d height=895920
    2025-05-09T05:24:11Z Saw new cmpctblock header hash=0000000000000000000213e630619be8945d471d06b0395fb6adca797877527d peer=5197
    2025-05-09T05:24:12Z UpdateTip: new best=0000000000000000000213e630619be8945d471d06b0395fb6adca797877527d height=895920 version=0x2a4aa000 log2_work=95.598602 tx=1188522869 date='2025-05-09T05:23:59Z' progress=1.000000 cache=216.6MiB(1587712txo)

    which in this bitcoind-28.0 example is only 1 second. This is typical for bitcoind-28.0 on my system, where it is between 0 and 1 seconds, only very occasionally a bit longer. For bitcoind-29.0 it’s much longer:

    Block nr: delta t in seconds

    895579:   3    895580:   3    895581:   4    895582:   2    895583:  16
    895584:  65    895585:   8    895586: 280    895587:  81    895588: 133
    895589: 124    895590:   3    895591:  73    895592: 153    895593: 284
    895594: 528    895595:  17    895596:   8    895597:   4    895598:   2
    895599:   3    895600:   4    895601:   6    895602:   3    895603:   2
    895604:   5    895605:   5    895606:  34    895607:   2    895608:   6

    This is true whether it’s compiled with or without berkeley-db. So sometimes it takes several minutes!
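    A minimal sketch of such a measurement script, assuming GNU date and the debug.log format shown above (the log path is the one used elsewhere in this thread; adjust as needed):

    #!/bin/bash
    # For each "Saw new header" line, find the matching "UpdateTip" line and
    # print "height: seconds". The timestamp is the leading 20-character ISO field.
    LOG=/media/ssd/.bitcoin/debug.log

    grep 'Saw new header' "$LOG" | while read -r hdr; do
      hash=$(echo "$hdr" | sed 's/.*hash=//;s/ .*//')
      tip=$(grep "UpdateTip: new best=$hash" "$LOG" | head -1)
      [ -z "$tip" ] && continue
      t0=$(date -d "$(echo "$hdr" | cut -c1-20)" +%s)
      t1=$(date -d "$(echo "$tip" | cut -c1-20)" +%s)
      echo "$(echo "$tip" | sed 's/.*height=//;s/ .*//'): $((t1 - t0))"
    done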

    So I did some investigating. The first thing I found was that my bitcoind-29.0 and the other binaries are much bigger than the pre-compiled ones:

    Precompiled 29.0 (du in MB):

      3 bitcoin-cli
     16 bitcoind
     42 bitcoin-qt
      5 bitcoin-tx
      3 bitcoin-util
     10 bitcoin-wallet
     28 test_bitcoin

    My self-compiled 29.0:

     21 bitcoin-cli
    273 bitcoind
     47 bitcoin-tx
     21 bitcoin-util
    134 bitcoin-wallet
    511 test_bitcoin

    Although I normally use some compiler options, for this comparison I compiled according to the docs: (cmake -B build; cmake --build build; cmake --install build)

    Both test_bitcoin binaries (mine and the pre-compiled one) work fine. Note that my (not too slow) bitcoind-28.0 (etc.) is about the same oversized size (269 MB) as my bitcoind-29.0, so I had this before but never noticed.

    So my first question before I investigate any further is: how come my bitcoind is so big compared to the pre-compiled one?

  2. fanquake commented at 6:41 am on May 9, 2025: member

    So my first question before I investigate any further is: how come my bitcoind is so big compared to the pre-compiled one?

    This will be the debug symbols. If you run strip on the binaries, i.e. strip bitcoind, this debug information will be removed, and the size of your binaries should shrink by 80-90%.
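    For illustration, a minimal sketch of that step (the before/after sizes are the ones reported in this thread; the path is wherever your build put the binary):

    $ du -m bitcoind        # with debug info, as reported above
    273     bitcoind
    $ strip bitcoind        # drops the symbol/debug sections in place
    $ du -m bitcoind
    12      bitcoind

    Alternatively, configuring with cmake -B build -DCMAKE_BUILD_TYPE=Release should avoid embedding most of the debug info in the first place.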

  3. hMsats commented at 7:38 am on May 9, 2025: none

    @fanquake Thanks a lot! I expected something like this (debug info) but thought it was some compiler option. The strip worked and now my bitcoind is only 12 MB 👍. As I never used strip on bitcoin-28.0, I don’t expect this to make a performance difference, but I will try it on my server anyway. Or could it?

    This is very confusing. I’m a long-time bitcoin node runner and Linux user but would never have thought of this possibility. Shouldn’t this be mentioned in doc/build-unix.md?

  4. maflcko commented at 8:15 am on May 9, 2025: member

    What are your settings?

    Can you share the debug log?

    Can you run with the debug categories bench, blockstorage, lock, prune, validation, and possibly rpc, if there is a background caller?
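    On the command line that would be, for example (a sketch using the categories listed above):

    bitcoind -debug=bench -debug=blockstorage -debug=lock -debug=prune -debug=validation -debug=rpc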

  5. maflcko added the label Resource usage on May 9, 2025
  6. maflcko added the label Questions and Help on May 9, 2025
  7. hMsats commented at 8:25 am on May 9, 2025: none

    @maflcko I will, but it will take some time as I’m on vacation right now; I will investigate when I have time.

    Created bitcoind via:

    cmake -B build -DWITH_ZMQ=ON -DBUILD_GUI=ON

    Settings (same as for 28.0):

    txindex=1
    rpcthreads=4
    rpcworkqueue=64
    dbcache=2500
    maxconnections=100
    permitbaremultisig=0
    datacarrier=0
  8. sipa commented at 1:31 pm on May 9, 2025: member
    Unsure if this matters in your case, but the cache being warm or not impacts validation speed significantly. If you have a node that is running for a long time, and then stop and start it again, the speed will be slower for a while until the cache warms up again.
  9. hMsats commented at 5:16 am on May 10, 2025: none
    @sipa these were only the last 30 blocks. In total I let bitcoin-29.0 run for 56 blocks but the performance is so bad compared to when I restart bitcoin-28.0 that something else must be going on. Maybe the debug.log gives you more information?
  10. hMsats commented at 5:28 am on May 10, 2025: none

    @maflcko I found time to run 29.0 with the following debug categories:

    debug=bench
    debug=blockstorage
    debug=cmpctblock
    debug=tor
    debug=validation
    debug=rpc
    debug=zmq

    You can find it here and more system information on my website.

    There are background callers. Besides a full node I run: Fulcrum (public Electrum server), 2 CLN lightning nodes, and Electrum Personal Server. It’s maybe a lot, but 28.0 has no problems with it. At the beginning of my debug test I started these as well.

  11. hMsats commented at 5:30 pm on May 11, 2025: none

    Although the jury is still out, it might be related to what @sipa suggested: that the cache still hasn’t warmed up enough and validation speed is temporarily slow. I’m testing different options, like running with the pre-compiled bitcoind, but every test takes quite a long time (days) to reach a clear conclusion.

    At least I got an answer to my question of why my executables are so big. Closing for now, will reopen if this issue persists.

  12. hMsats closed this on May 11, 2025

  13. hMsats commented at 10:18 pm on May 21, 2025: none

    @sipa After long and systematic research into my problem, I have probably found the cause:

    In validation.cpp else if (!check()) was changed into else if (auto result = check(); result.has_value()). Shouldn’t that be else if (auto result = check(); !result.has_value())?

  14. hMsats reopened this on May 21, 2025

  15. achow101 commented at 10:29 pm on May 21, 2025: member

    In validation.cpp else if (!check()) was changed into else if (auto result = check(); result.has_value()). Shouldn’t that be else if (auto result = check(); !result.has_value())?

    Why do you think that’s the error?

    If this were incorrect, you wouldn’t be seeing slow validation; it would instead be failing on a bunch of blocks.

    The code on that line is correct. It was changed from a bool in 28.x to a std::optional<std::pair<ScriptError, std::string>> in 29.x. If the return value has a value, then that means an error occurred and the value contained is the error. On success, it returns no value.

  16. hMsats commented at 10:35 pm on May 21, 2025: none
    @achow101 somewhere near that commit I think my server becomes slow. I didn’t fully understand the code, sorry. I will investigate further until I do find a problematic commit. Thanks!
  17. hMsats commented at 10:45 pm on May 21, 2025: none

    For now I think something bad happens between commit:

    52fd1511a774d1ff9e747b2ce88aa1fcb778ced8

    and

    0a159f0914775756dcac2d9fa7fe4e4d4e70ba0c

    I’ll try to narrow it down.
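    One systematic way to narrow it down is git bisect (a sketch, using the two commits above as endpoints and the header-to-UpdateTip measurement as the pass/fail test):

    git bisect start
    git bisect bad  0a159f0914775756dcac2d9fa7fe4e4d4e70ba0c   # slow on this system
    git bisect good 52fd1511a774d1ff9e747b2ce88aa1fcb778ced8   # still fast
    # at each step suggested by bisect: build, run, measure
    cmake -B build && cmake --build build -j"$(nproc)"
    # ...run bitcoind, check the header-to-UpdateTip deltas, then:
    git bisect good    # or: git bisect bad
    # repeat until git prints the first bad commit; finish with:
    git bisect reset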

  18. hMsats commented at 11:06 pm on May 23, 2025: none

    Couldn’t find a faulty commit (luckily). Improved a few things on my server and now it seems to be working well:

    Removed permitbaremultisig=0 and datacarrier=0, increased my reduced CPU speed, and gave bitcoind more time before starting the public Electrum server Fulcrum and Core Lightning (CLN). Maybe 29.0 is a bit more demanding than 28.0. Sorry for the confusion and thanks for all the help!

  19. hMsats closed this on May 23, 2025

  20. hMsats renamed this:
    Self-compiled bitcoind 29.0 much slower than self-compiled 28.0 on my system
    bitcoind 29.0 much slower than 28.0 on my system: cause found
    on May 31, 2025
  21. hMsats commented at 7:14 am on May 31, 2025: none

    I was too forgiving in my search for the problematic commit and did the search again, holding on to my original low-CPU system settings. A (very) long story short: the problematic commit for my system is 097c66f, where the LevelDB max file size is increased to 32 MiB.

    I verified that my system ran perfectly again on (almost) master (v29.99.0-14c16e81598a) after removing that commit’s code changes (i.e. removing options.max_file_size = std::max(options.max_file_size, DBWRAPPER_MAX_FILE_SIZE); and static const size_t DBWRAPPER_MAX_FILE_SIZE = 32 << 20; // 32 MiB from dbwrapper.cpp and dbwrapper.h).

    So to keep things as short as possible:

    • running bitcoind (low cpu) without the LevelDB increase, block verification times are 0 or 1 seconds (except at the very beginning) and both Fulcrum (Electrum server) and Core Lightning (CLN) never complain
    • running bitcoind (low cpu) with the LevelDB increase, verification times are sometimes also 0 or 1 seconds but occasionally increase to much longer times (compaction?) and Fulcrum (see: Lost connection to bitcoind) and CLN (see: UNUSUAL plugin-bcli) complain a lot.
    • this situation didn’t improve (much) when I didn’t reduce my CPU speed. See bitcoin, times, Fulcrum, CLN

    So there seems to be more to this reasonable LevelDB file size increase than meets the eye, maybe due to compaction times, but maybe there’s more going on.

    I understand that this setting won’t be changed based on the complaints of one user, but maybe others will see similar issues. Should the LevelDB max file size become an optional parameter in bitcoin.conf?

    So I’m going to run bitcoind 29.0 without the LevelDB file size increase on my server.

    bitcoin.conf:

    txindex=1
    rpcthreads=4
    rpcworkqueue=64
    dbcache=2500
    maxconnections=100

    System info:

    Acer Aspire E1-572, 64-bit, 8 GB RAM, 4x Intel 1.6 GHz (2.6 GHz max) CPU, capped at 1.3 GHz to reduce fan usage

    2 TB external SSD

    Ubuntu 24.04.2 LTS, swapoff -a, vm.swappiness=0

    Settings for low cpu:

    sudo cpufreq-set -c 0 -u 1300000
    sudo cpufreq-set -c 1 -u 1300000
    sudo cpufreq-set -c 2 -u 1300000
    sudo cpufreq-set -c 3 -u 1300000

    sudo cpufreq-set -c 0 -g powersave
    sudo cpufreq-set -c 1 -g powersave
    sudo cpufreq-set -c 2 -g powersave
    sudo cpufreq-set -c 3 -g powersave

    Settings for high/normal cpu:

    sudo cpufreq-set -c 0 -u 2600000
    sudo cpufreq-set -c 1 -u 2600000
    sudo cpufreq-set -c 2 -u 2600000
    sudo cpufreq-set -c 3 -u 2600000

    sudo cpufreq-set -c 0 -g performance
    sudo cpufreq-set -c 1 -g performance
    sudo cpufreq-set -c 2 -g performance
    sudo cpufreq-set -c 3 -g performance

  22. hMsats reopened this on May 31, 2025

  23. TheCharlatan commented at 7:31 am on May 31, 2025: contributor
    Does your server use a SSD, or a spinning disk?
  24. fanquake added this to the milestone 29.1 on May 31, 2025
  25. hMsats commented at 8:23 am on May 31, 2025: none
    @TheCharlatan a 2 TB external SSD, see “System info”
  26. sipa commented at 12:08 pm on May 31, 2025: member
    What filesystem are you using?
  27. andrewtoth commented at 12:11 pm on May 31, 2025: contributor
    Not necessarily related to validation, but I would remove the rpcthreads and rpcworkqueue lines from your config. Fulcrum and clightning will make a lot of concurrent requests, and the default number of rpc threads was increased from 4 to 16 in v29. That should help with the errors, especially with Fulcrum.
  28. hMsats commented at 12:22 pm on May 31, 2025: none

    @sipa @andrewtoth

    Maybe the solution is as trivial as increasing:

    options.block_cache = leveldb::NewLRUCache(nCacheSize / 2);
    options.write_buffer_size = nCacheSize / 4; // up to two write buffers may be held in memory simultaneously

    in src/dbwrapper.cpp?

    I have naively set (after putting the LevelDB max file size increase to 32 MiB back):

    options.block_cache = leveldb::NewLRUCache(nCacheSize / 2 * 16);
    options.write_buffer_size = nCacheSize / 4 * 16; // up to two write buffers may be held in memory simultaneously

    and the first results look promising. I’ll report back tomorrow. @andrewtoth Fulcrum would retry every 5 seconds, which would fill up my rpcworkqueue quickly; I’ve set it to 60 seconds and later to 30 seconds and didn’t see a full rpcworkqueue again.

    So my question is: shouldn’t options.block_cache and options.write_buffer_size be increased?

  29. sipa commented at 12:31 pm on May 31, 2025: member

    @hMsats That’s worth looking into or benchmarking, but I would be surprised if that has such a dramatic effect.

    What filesystem are you using?

  30. hMsats commented at 12:35 pm on May 31, 2025: none

    @sipa

    /dev/sdb1 ext4 1.8T 935G 805G 54% /media/ssd

    it contains .bitcoin and the CLN and Fulcrum data

  31. andrewtoth commented at 12:38 pm on May 31, 2025: contributor

    Fulcrum would retry every 5 seconds which would fill up my rpcworkqueue quickly

    v29 also increased the default rpcworkqueue to 64, so that line is redundant. By keeping rpcthreads at 4 you are limiting the number of concurrent requests to 4, while the default can now service 16 at a time. That would also help keep the work queue shorter, since queued requests get processed faster.

  32. hMsats commented at 12:38 pm on May 31, 2025: none

    The following are the “Verifying last 3 blocks at level 3” times for my last three runs, plus my new run:

    debug.log_Fri_May_30_05:58:19_AM_CEST_2025:2025-05-29T11:26:33Z Verification progress: 0%
    debug.log_Fri_May_30_05:58:19_AM_CEST_2025-2025-05-29T11:26:38Z Verification progress: 33%
    debug.log_Fri_May_30_05:58:19_AM_CEST_2025-2025-05-29T11:26:53Z Verification progress: 66%
    debug.log_Fri_May_30_05:58:19_AM_CEST_2025-2025-05-29T11:26:58Z Verification progress: 99%
    25 s
    --
    debug.log_Fri_May_30_09:47:30_PM_CEST_2025:2025-05-30T11:35:37Z Verification progress: 0%
    debug.log_Fri_May_30_09:47:30_PM_CEST_2025-2025-05-30T11:36:16Z Verification progress: 33%
    debug.log_Fri_May_30_09:47:30_PM_CEST_2025-2025-05-30T11:36:21Z Verification progress: 66%
    debug.log_Fri_May_30_09:47:30_PM_CEST_2025-2025-05-30T11:36:26Z Verification progress: 99%
    49 s
    --
    debug.log_Fri_May_30_11:42:20_AM_CEST_2025:2025-05-30T03:59:02Z Verification progress: 0%
    debug.log_Fri_May_30_11:42:20_AM_CEST_2025-2025-05-30T03:59:22Z Verification progress: 33%
    debug.log_Fri_May_30_11:42:20_AM_CEST_2025-2025-05-30T03:59:36Z Verification progress: 66%
    debug.log_Fri_May_30_11:42:20_AM_CEST_2025-2025-05-30T03:59:40Z Verification progress: 99%
    38 s
    --
    /media/ssd/.bitcoin/debug.log:2025-05-31T11:42:11Z Verification progress: 0%
    /media/ssd/.bitcoin/debug.log-2025-05-31T11:42:13Z Verification progress: 33%
    /media/ssd/.bitcoin/debug.log-2025-05-31T11:42:18Z Verification progress: 66%
    /media/ssd/.bitcoin/debug.log-2025-05-31T11:42:19Z Verification progress: 99%
    8 s
  33. hMsats commented at 12:40 pm on May 31, 2025: none
    @andrewtoth but the nproc command gives the answer 4. Is it still worthwhile to set rpcthreads to 16?
  34. andrewtoth commented at 12:46 pm on May 31, 2025: contributor
    Yes, since the requests are io bound they will mostly be waiting concurrently and not using much CPU.
  35. hMsats commented at 12:48 pm on May 31, 2025: none
    OK, thanks a lot. I will remove the rpcthreads and rpcworkqueue settings sooner or later!
  36. hMsats commented at 2:49 pm on May 31, 2025: none
    Increasing options.block_cache and options.write_buffer_size didn’t (really) help. Back to removing the LevelDB max file size increase. The rpcthreads and rpcworkqueue settings in bitcoin.conf have been removed.
  37. l0rinc commented at 4:09 pm on May 31, 2025: contributor

    shouldn’t options.block_cache and options.write_buffer_size be increased?

    I have experimented a lot with these and they all just slowed down IBD for me. Definitely let us know if it’s different on your system.

    Compaction could theoretically be a problem, but my understanding is that it would just mean a few spikes are larger, while most other processing should be significantly faster. Is this your only complaint? On average this change made IBD and block processing in general ~30% faster - but there’s a tradeoff: some blocks will be significantly slower now (when compaction cannot be delayed anymore). So we would expect more variation between blocks in 29, but most of the processing should be faster on average - which is why I recommended bumping it from 2 to 8 or 16 in #30039 (comment), since that would likely have caused smaller spikes while retaining most of the speedup. Luckily we can theoretically still do that in a minor update: LevelDB allows changing this size dynamically (after compaction the new file size limit is enforced, regardless of the input file sizes).

    Thanks for double checking these, I will run a few measurements in the next weeks and plot the average new-header-to-UpdateTip times to understand this better.

  38. hMsats commented at 5:18 pm on May 31, 2025: none
    @l0rinc My experience is that it’s best to just leave the LevelDB max file size as it always was (never a spike or a complaint from Fulcrum or CLN), but I’ve never verified that for more than 110 blocks. So I will now let the whole system run 29.0 (without the LevelDB max file size increase to 32MiB) for a longer time (a week or so), see what happens, and report back. Thanks for the feedback!
  39. l0rinc commented at 12:30 pm on June 4, 2025: contributor

    I ran a few reindexes to try to reproduce your issue. Since we don’t have compact block announcements during IBD/reindexes (as in your example), after discussing this with @andrewtoth I tried reindex-chainstate up to block 888,888 with -debug=bench -debug=leveldb to plot the block connect times, comparing 32MiB (master @ 370c5926) with 2MiB (master with a lower LevelDB file size, a v28-like setup, @ c9417a59).


    A differential flame graph comparing the two leveldb file sizes suggests that the speedup observed from #30039 was at least partially due to better LevelDB table caching (and faster writes with the 32MiB files):

    • 32MiB: Image
    • 2MiB: Image

    I have run before/after benchmarks with both txindex=0 and txindex=1 - the plots are quite different, but in every single metric the current 32 MiB file size was dramatically better. I couldn’t even back up my previous statement that compactions will take longer now - they seem to be better and faster than before in every scenario I’ve measured (both average and worst case).

    • 32MiB: Image
    • 2MiB: Image

    Despite variations in individual compaction durations, both setups can experience significant block processing delays lasting several minutes - they will just likely occur at different times now. But there’s an obvious correlation between the block-connect spikes and the LevelDB events (the spikes are in near-perfect alignment):

    • 32MiB: Image
    • 2MiB: Image

    If such a compaction occurs while your node is processing new blocks, it could explain the multi-minute delays you’ve experienced. The older 2MiB file configuration also had long compactions, but the pattern and frequency might have been different. Besides compactions, flushes to disk also take a considerable amount of time. In your case (with compact blocks) an empty mempool would also introduce some variation. But to be clear, my measurements show that the situation should have improved considerably in v29. Let’s find out why we have differing views here.


    Analyzing the -debug=bench logs reveals that the block connect times are quite similar:

    • 32MiB: Image
    • 2MiB: Image

    It’s possible that the IBD phase behaves differently in this case from a synced node under load with compact blocks enabled (which wouldn’t be simulated by a reindex/IBD benchmark). Your Fulcrum/CLN setup, which relies heavily on txindex, could also introduce I/O patterns that interact with LevelDB compactions in ways not fully captured during an IBD-only benchmark.


    We appreciate you testing and finding these inconsistencies; could you please help us reproduce your concern more reliably? It would be really helpful if you could provide the raw (unprocessed) logs with the above debug categories enabled, running for a few thousand blocks (we need at least one flush caused by dbcache filling up; 100 blocks is not representative), so I can compare them with my results. I’m also interested in your compilation options (e.g. gcc or clang, version, etc.). I have tested with a powerful i9 for now; once I have a better understanding of the issue I can try to reproduce it on a Raspberry Pi or similar.

  40. hMsats commented at 2:35 pm on June 4, 2025: none

    That’s an impressive amount of data analysis that will take me some time to fully understand, but thanks a lot!

    I did some more testing, but for my system 2 MiB is a clear winner, and the good news is that my server then runs flawlessly, though I need to let it run for more blocks. On the other hand, there are other people running version 29.0 and (with Google) I haven’t seen any complaints yet.

    In your case (with compact blocks) an empty mempool would also introduce some variation.

    During testing I did remove the mempool.dat file at each new run. Could that introduce problems?

    For now I have two questions:

    1: shouldn’t static constexpr size_t MAX_BLOCK_DB_CACHE{2_MiB}; in src/kernel/cache.h also be increased to 32 MiB?

    2: The LevelDB documentation states “There isn’t much benefit in using blocks smaller than one kilobyte, or larger than a few megabytes.” (I think block size here corresponds to max_file_size from this issue but I’m not sure)

    I’ll see what I can do but it will take some time. Thanks again.

  41. l0rinc commented at 5:04 pm on June 4, 2025: contributor
    Re 1 and 2:

    MAX_BLOCK_DB_CACHE sets block_cache which is used to configure leveldb::Options.block_cache and .write_buffer_size. It’s not strictly related to max_file_size - see https://github.com/bitcoin/bitcoin/blob/master/src/leveldb/db/db_impl.cc#L104-L105.

    2 MiB is a clear winner

    @andrewtoth just had a brilliant realization that I hadn’t thought of: you’ve just switched to v29, so your 2 MiB files still need to be converted to 32 MiB files, which requires heavy compaction at first: basically your whole index has to be rewritten. It’s a temporary growing pain (literally). I’ll measure this properly, and if the effect is as obvious as we think it is, we may want to add a warning to 29.1 if we detect small LevelDB file sizes. So if you want an even clearer winner, you could probably leave v29 running overnight to let it compact - and tell us if that fixed the issue.

  42. hMsats commented at 9:21 pm on June 4, 2025: none
    @l0rinc thanks again. I will let v29 run for a longer time and have a look at the file lengths (in txindex), but it will take a few days, as something went wrong here and I’m restoring a backup of all the blockchain data …
  43. hMsats commented at 9:36 pm on June 4, 2025: none
    I thought the txindex files would become 32 MB only from the moment max_file_size is changed, with all the existing txindex files staying 2MB and becoming 32MB only after a -reindex. If it happens automatically in the background, then yes, that could easily explain my issue, and a warning would indeed be very necessary. I’ll report back.
  44. l0rinc commented at 2:53 pm on June 7, 2025: contributor

    I have measured the cost of switching from 2MiB max_file_size to 32MiB from 700k blocks to 888,888:

    Image

    COMMIT_2MIB="c9417a59ee7b65d9dd3352c55f0e414d5dbdb7af"; COMMIT_32MIB="370c59261269fd9043674e0f4fd782a89e724473"; \
    STOP1=700000; STOP2=888888; DBCACHE=2500; \
    BASE_DIR="/mnt/my_storage"; DATA_DIR="$BASE_DIR/BitcoinData"; LOG_DIR="$BASE_DIR/logs"; mkdir -p $LOG_DIR; \
    BUILD_AND_RUN() { \
      local COMMIT=$1 STOP=$2 EXTRA=$3; \
      git checkout $COMMIT && git clean -fxd && git reset --hard && \
      cmake -B build -G Ninja -DCMAKE_BUILD_TYPE=Release && ninja -C build bitcoind && \
      ./build/bin/bitcoind -datadir=$DATA_DIR -stopatheight=$STOP -dbcache=$DBCACHE -blocksonly -printtoconsole=0 -debug=bench -debug=leveldb $EXTRA; \
    } && \
    BUILD_AND_RUN $COMMIT_2MIB $STOP2 "" && \
    echo -e "\n-- Test 1: 2MiB to 2MiB --" && \
    BUILD_AND_RUN $COMMIT_2MIB $STOP1 "-connect=0 -reindex-chainstate" && BUILD_AND_RUN $COMMIT_2MIB $STOP2 "-connect=0" && \
    cp $DATA_DIR/debug.log $LOG_DIR/debug-2-2-${COMMIT_2MIB:0:8}-$(date +%s).log && \
    echo -e "\n-- Test 2: 2MiB to 32MiB (conversion) --" && \
    BUILD_AND_RUN $COMMIT_2MIB $STOP1 "-connect=0 -reindex-chainstate" && BUILD_AND_RUN $COMMIT_32MIB $STOP2 "-connect=0" && \
    cp $DATA_DIR/debug.log $LOG_DIR/debug-2-32-${COMMIT_32MIB:0:8}-$(date +%s).log && \
    echo -e "\nAll tests complete!" && ls -la $LOG_DIR/debug-2-*.log
    HEAD is now at c9417a59ee DBWRAPPER_MAX_FILE_SIZE = 2 << 20

    The resulting logs confirm that LevelDB is indeed compacting to bigger files after restarting with 32MiB starting from 700k blocks:

    2MiB -> 2MiB

    Image

    Image

    2MiB -> 32MiB

    Image

    Image

    This indicates that full compaction will likely take some time. It also indicates that even this conversion run is faster than staying on 2MiB files, even though there are more waits in the 2 -> 32 MiB scenario:

    $ cat debug-2-2-c9417a59-1749262598.log | grep 'Too many L0 files' | wc -l
       28327
    $ cat debug-2-32-370c5926-1749282484.log | grep 'Too many L0 files' | wc -l
       34930

    You can also see in the plots that compacting many 2MiB files into 32MiB ones can indeed cause temporarily bigger spikes:

    2MiB -> 2MiB

    Image

    2MiB -> 32MiB

    Image


    So @andrewtoth’s intuition seems to have been correct. I’m still waiting for your assessment, but otherwise I consider this mystery solved - I will try to find a way to add a warning if we detect that the user still has 2 MiB files, to avoid these surprises in the future.

  45. hMsats commented at 6:45 pm on June 7, 2025: none

    Thanks again for all this information, although it’s not always easy to understand what the precise consequences are for my particular system.

    I had to learn the hard way that the dd command is very dangerous, which caused quite a bit of delay, but I’m up and running again. My bitcoind, Fulcrum and 2 CLN nodes have now been running for 6 hours with max_file_size=32MiB for bitcoind (starting from max_file_size=2MiB). Here are some preliminary results for the different file sizes for 2MiB and 32MiB (unix diff command):

    diff_blocks_index_du.txt diff_blocks_index_ls-l.txt diff_chainstate_du.txt diff_chainstate_ls-l.txt diff_indexes_txindex_du.txt diff_indexes_txindex_ls-l.txt

    In roughly the first few hours, only about half the chainstate files were converted to 32 MiB.

    I have a method that informs me when (if ever) the complaints from Fulcrum and CLN stop. Tomorrow I will check whether that happened.

    Whatever the outcome, a warning will be very useful, because many users will assume that the changes described in the 29.0 release notes won’t affect them when they upgrade, but they might.

  46. hMsats commented at 11:04 am on June 8, 2025: none

    Not much has changed; Fulcrum and CLN keep complaining. I observe that Fulcrum reconnects quickly after it complains, but CLN has much more serious complaints about the rpc calls getmempoolinfo and getblockhash:

    2025-06-08T05:23:52.559Z UNUSUAL plugin-bcli: bitcoin-cli: finished bitcoin-cli -datadir=/media/ssd/.bitcoin/ -rpcclienttimeout=900 getmempoolinfo (29957 ms)
    2025-06-08T05:28:43.125Z UNUSUAL plugin-bcli: bitcoin-cli: finished bitcoin-cli -datadir=/media/ssd/.bitcoin/ -rpcclienttimeout=900 getmempoolinfo (15270 ms)
    2025-06-08T05:28:43.596Z UNUSUAL plugin-bcli: bitcoin-cli: finished bitcoin-cli -datadir=/media/ssd/.bitcoin/ -rpcclienttimeout=900 getblockhash 900289 (20714 ms)
    2025-06-08T05:30:27.159Z UNUSUAL plugin-bcli: bitcoin-cli: finished bitcoin-cli -datadir=/media/ssd/.bitcoin/ -rpcclienttimeout=900 getblockhash 900289 (12427 ms)
    2025-06-08T05:30:27.454Z UNUSUAL plugin-bcli: bitcoin-cli: finished bitcoin-cli -datadir=/media/ssd/.bitcoin/ -rpcclienttimeout=900 getmempoolinfo (13258 ms)
    2025-06-08T05:34:44.266Z UNUSUAL plugin-bcli: bitcoin-cli: finished bitcoin-cli -datadir=/media/ssd/.bitcoin/ -rpcclienttimeout=900 getblockhash 900291 (10295 ms)
    2025-06-08T05:34:44.806Z UNUSUAL plugin-bcli: bitcoin-cli: finished bitcoin-cli -datadir=/media/ssd/.bitcoin/ -rpcclienttimeout=900 getmempoolinfo (10289 ms)

    The minimum timeout after which CLN complains seems to be 10 seconds.

    To investigate further, I turned off Fulcrum, the CLN nodes (and even electrum-personal-server), so only Bitcoin Core was still running (with 32 MiB). I then wrote a bash shell script that requests getmempoolinfo and getblockhash every 5 seconds and prints the result to a file if it takes 10 seconds or more. This is the result:

    18 900310 Sun Jun  8 11:09:54 AM CEST 2025
    14 900311 Sun Jun  8 11:13:58 AM CEST 2025    4 minutes after
    16 900311 Sun Jun  8 11:18:56 AM CEST 2025    5 minutes after
    12 900312 Sun Jun  8 11:23:52 AM CEST 2025    5 minutes after
    11 900313 Sun Jun  8 11:28:52 AM CEST 2025    5 minutes after
    19 900313 Sun Jun  8 11:33:59 AM CEST 2025    5 minutes after
    17 900313 Sun Jun  8 11:39:07 AM CEST 2025    5 minutes after
    14 900314 Sun Jun  8 11:44:00 AM CEST 2025    5 minutes after
    16 900316 Sun Jun  8 11:50:03 AM CEST 2025    6 minutes after
    13 900316 Sun Jun  8 11:53:56 AM CEST 2025    4 minutes after
    14 900317 Sun Jun  8 11:58:59 AM CEST 2025    5 minutes after
    20 900317 Sun Jun  8 12:03:57 PM CEST 2025    5 minutes after

    So every 4 to 6 minutes, but usually 5 minutes, Bitcoin Core is doing “something”, and during that time rpc calls take longer than usual, which makes second-layer applications complain. If that “something” took a factor of 16 less time in the 2 MiB case (or a factor of 10, for example), the applications would stay silent.

    I was able to correlate the “something” with extra bitcoind activity using iotop: in between I only see b-msghand, but during the “something” there is also b-httpworker, b-scheduler and b-http activity.

    I don’t believe this problem goes away after some time but I will keep my server running a little longer with the 32 MiB. If the situation doesn’t improve, I will switch back to 2MiB or accept the warnings.

    Note that after some time has passed, I no longer see any spikes in block verification time, so that improved with a longer run time.

    So my conclusion for my setup is that Bitcoin Core itself runs just fine (after a little while), but rpc requests occasionally take longer due to the increased max_file_size - and for my system, too long.

    Another observation: the files in blocks/index and chainstate are about 32 MiB now but the files in indexes/txindex are still at 2 MiB. Does increasing the file sizes in indexes/txindex require a -reindex?

    This is my simple bash script:

    #!/bin/bash

    cd
    echo -n "" > getinfolatencies.txt

    while [ 1 ]; do
      start_t=$(date "+%s")

      height=$(tail -100 /media/ssd/.bitcoin/debug.log | grep "UpdateTip: new best=" | sed 's/.*height=//' | sed 's/ version=.*//' | tail -1)

      /home/user/bitcoin-29.0/build/bin/bitcoin-cli -datadir=/media/ssd/.bitcoin/ getmempoolinfo > /dev/null
      /home/user/bitcoin-29.0/build/bin/bitcoin-cli -datadir=/media/ssd/.bitcoin/ getblockhash $height > /dev/null

      end_t=$(date "+%s")

      getinfolatency=$(( $end_t - $start_t ))
      if [ "$getinfolatency" -ge 10 ]; then
        echo -n "$getinfolatency $height " >> getinfolatencies.txt
        date >> getinfolatencies.txt
      fi

      sleep 5
    done

    Note that I use debug.log to get the height because I don’t want to make another rpc call. This script can easily be adapted by anyone running a full node to try to reproduce my issue, perhaps with a threshold lower than 10 seconds.

  47. hMsats commented at 2:26 pm on June 8, 2025: none

    Another observation: the files in blocks/index and chainstate are about 32 MiB now but the files in indexes/txindex are still at 2 MiB. Does increasing the file sizes in indexes/txindex require a -reindex?

    Some of the files have now moved to 32MiB also in .bitcoin/indexes/txindex: diff_indexes_txindex_du.txt.
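    For anyone wanting to repeat this check, a minimal sketch (the datadir path is the one used in this thread; 16M is just an arbitrary cut-off between the ~2 MiB and ~32 MiB populations):

    # count .ldb files already converted to ~32 MiB vs. those still at ~2 MiB
    find /media/ssd/.bitcoin/indexes/txindex -name '*.ldb' -size +16M | wc -l
    find /media/ssd/.bitcoin/indexes/txindex -name '*.ldb' -size -16M | wc -l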

  48. andrewtoth commented at 3:13 pm on June 9, 2025: contributor

    Does increasing the file sizes in indexes/txindex require a -reindex?

    @hMsats -reindex is overkill for recreating the txindex. Shut down the node, delete .bitcoin/indexes and start up, and only the txindex will be recreated.
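    A sketch of that procedure with the datadir path used in this thread (note that this removes everything under indexes/, which in this setup is only the txindex):

    bitcoin-cli -datadir=/media/ssd/.bitcoin/ stop
    rm -r /media/ssd/.bitcoin/indexes
    bitcoind -datadir=/media/ssd/.bitcoin/ -daemon   # with txindex=1 set, the txindex is rebuilt in the background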

  49. hMsats commented at 3:33 pm on June 9, 2025: none
    @andrewtoth oh, that sounds scary to me but I have a fresh backup so I’ll try that. Thanks!
  50. hMsats commented at 5:44 pm on June 9, 2025: none
    @andrewtoth I just realized I shouldn’t remove indexes, because a user who switches from a lower version to 29.0 wouldn’t do that, and I want to see what happens in the most common use case.
  51. andrewtoth commented at 5:46 pm on June 9, 2025: contributor
    Ok, just wanted to point out you don’t need a -reindex just for recreating the txindex. -reindex will wipe all your block index and chainstate as well.
  52. l0rinc commented at 5:47 pm on June 9, 2025: contributor
    Thanks for testing it, please keep us in the loop.
  53. hMsats commented at 5:48 pm on June 9, 2025: none
    @andrewtoth Yes, thanks I also realized what the difference is.
  54. hMsats commented at 5:54 pm on June 9, 2025: none
    @l0rinc Thanks also. Well, I observe that the ‘.ldb’ files in .bitcoin/blocks/index and .bitcoin/chainstate were converted quite fast, but not much is happening in .bitcoin/indexes/txindex, where only a very low percentage of files have changed, and it seems that Fulcrum and CLN keep complaining …
  55. hMsats commented at 8:05 am on June 10, 2025: none

    This is my final conclusion:

    Because the situation didn’t improve, I switched back to 2 MiB using a backup and everything works fine again. After a while I turned off Fulcrum and CLN and ran the above bash script again. Now no spikes (rpc requests taking longer than 10 seconds) came up anymore. I then ran it again with a threshold of 5 seconds. Still nothing. Then ran it with a threshold of 2 seconds and the result was:

    3 900585 Tue Jun 10 08:58:46 AM CEST 2025
    2 900585 Tue Jun 10 09:01:44 AM CEST 2025
    2 900587 Tue Jun 10 09:16:39 AM CEST 2025

    So occasionally there is a “spike”, but they are much shorter (2 or 3 seconds) than in the 32 MiB case. It could be that once all the 2MiB files have been converted to 32 MiB files the higher (>= 10 seconds) spikes are over, but the conversion seems to take a very long time when switching from an earlier version of Bitcoin Core to 29.0.

    So my conclusion is that when switching from an earlier version of Bitcoin Core to 29.0, it is doing “something” for a long time, and rpc requests occasionally take longer. It’s possible that on better hardware this doesn’t matter much, but on my system I will continue with the 2 MiB max_file_size.

    The df -i command shows that the large number of files is not an issue at all:

    Filesystem     Inodes IUsed IFree IUse% Mounted on
    /dev/sdb1        117M   52K  117M    1% /media/ssd
  56. l0rinc commented at 10:40 am on June 10, 2025: contributor

    This is my final conclusion

    I’d say it’s too early to call this “final”.

    and everything works fine again

    We’d need some actual numbers, not just the conclusions.

    Then ran it with a threshold of 2 seconds and the result was:

    What was the effect after a few days of 32?

    but on my system I will continue with the 2 MiB max_file_size

    My measurements indicate that Bitcoin is struggling with 2MiB - can you do a full -reindex with v29, check if all index files are indeed 32 MiB and check again to see what your spikes indicate?

  57. hMsats commented at 11:01 am on June 10, 2025: none

    What was the effect after a few days of 32?

    Nothing really changed: all the blocks/index and chainstate files were near 32 MiB, but only a small percentage of the indexes/txindex files were near 32 MiB. Fulcrum and CLN kept complaining.

    A -reindex would take a lot of time on my system. I don’t feel comfortable doing that at the moment because I’m pretty exhausted. I’m happy to close the issue and wait to see if anyone else experiences something similar.

  58. l0rinc commented at 11:06 am on June 10, 2025: contributor
    A full reindex on my servers takes ~8 hours. On a lower-end machine it can take a week. It would be helpful for us to know whether this is something we should prioritize.
  59. hMsats commented at 11:23 am on June 10, 2025: none
    You could go back to 2 MiB and run my bash script (or an adapted version of it) and see whether you observe any spikes. Then go back to 32 MiB, see how long it takes to convert all the .ldb files (without an explicit -reindex), and check whether you get any spikes during or after the conversion.
  60. l0rinc commented at 11:27 am on June 10, 2025: contributor
    I have posted my findings above, can you pinpoint where you think I haven’t measured the same things that you have?
  61. hMsats commented at 11:46 am on June 10, 2025: none
    I probably don’t understand everything but have you also measured how long it takes for all the files in indexes/txindex to be converted to 32 MiB without a -reindex?
  62. hMsats commented at 1:46 pm on June 10, 2025: none
    @l0rinc in a few days a 4 TB SSD will arrive. I will copy the data and do a -reindex (with 32 MiB) using a different machine. That way my server will be up most of the time. It’s a good way to test the SSD and the blocks directory.
  63. maflcko removed the label Questions and Help on Jun 12, 2025
  64. hMsats commented at 5:13 pm on June 12, 2025: none
    @l0rinc I’m running -reindex and will come back with some results next week.
  65. hMsats commented at 6:11 am on June 16, 2025: none

    The -reindex has finished, but I had to put the chainstate and indexes directories on the internal disk of the laptop I used, because the sync became painfully slow after a while with those directories on the external SSD (USB 3.0). After putting those directories on the internal SSD, syncing became 16 times faster!

    I used a spare 2TB disk I had forgotten about, so I didn’t have to wait for the 4TB SSD to arrive. However, while copying data from the 2TB SSD to the 4TB SSD, I noticed that the 4TB one was faster by about a factor of 2.

    So now all LevelDB files were around 32MiB. Running my shell script again (2 rpc calls every 5 seconds) I got the following result with the 2TB SSD:

    35 901227 Sat Jun 14 05:14:48 PM CEST 2025
    10 901231 Sat Jun 14 05:51:09 PM CEST 2025    37 minutes after
    13 901231 Sat Jun 14 05:51:42 PM CEST 2025
    18 901237 Sat Jun 14 06:32:29 PM CEST 2025    41 minutes after

    while with the 4TB disk I got no spikes at all, and I stopped the script after an hour!

    Running Fulcrum and the 2 CLN nodes with the 4TB SSD gave no complaints from Fulcrum and 3 complaints from the 2 CLN nodes in 26 hours (because an rpc call to bitcoind took a little over 10 seconds), which I would say is negligible. By default, CLN only treats an rpc call as fatal if it takes more than 60 seconds, but it prints a warning when one takes more than 10 seconds.

    So the good news is that I will continue with the 4TB SSD and Bitcoin Core 29.0 unaltered (using the 32MiB LevelDB files obtained after the -reindex).

    I think the warning to users (added to debug.log and/or to the release notes of 29.0 or 29.1) should be that while the LevelDB files in the chainstate and blocks directories are being converted to 32MiB, rpc calls may occasionally become really slow, as in taking many minutes (as also reported in #32733), especially when using an external SSD combined with txindex=1. This may last about a day. For a longer period, while the LevelDB files in the indexes/txindex directory are being converted, rpc calls may occasionally take a little longer (10 to 15 seconds).

    EDIT: after the -reindex, calculated block verification times are now (rounded) 0 or 1 seconds without spikes, thus excellent.

    EDIT: my latency is also excellent.

  66. l0rinc commented at 6:19 am on June 16, 2025: contributor
    Thanks for following up. I’m not sure I fully understand the implications of all the changes (hardware and software), but my understanding is that the problem is solved. I’m not sure the warning is needed, but we can reconsider that if more people complain.
  67. hMsats commented at 7:02 am on June 16, 2025: none
    @l0rinc I agree, but I do think the 29.0 release notes should have mentioned that for users upgrading from an earlier version of Bitcoin Core, the LevelDB files are converted from (approximately) 2MiB to 32MiB. Maybe that can be added if there’s a 29.1 release.
  68. martinatime commented at 4:08 pm on June 16, 2025: none

    A full reindex on my servers takes ~8 hours. On a lower-end machine it can take a week. It would be helpful for us to know whether this is something we should prioritize.

    I’m running a Raspberry Pi 5 with 8GB of RAM and a 4TB SSD on RPi’s Debian Lite Bookworm. This is a fresh install of v29, and after more than two weeks I’m only at block height 829564 of 901531.

    I don’t know if this is a related problem, but I don’t recall the initial chainstate verification taking this long only to be at 92%. I should have captured daily stats, but I feel like the last 5% has taken the majority of the time. So I would be looking at probably two more weeks to complete.

    My other system (an RPi 4 with 4GB RAM and 2TB SSD, mentioned in #32733) seems to have only created 32MB txindex files since I upgraded it about two weeks ago. The older files are still 2MB.

  69. hMsats commented at 6:21 pm on June 16, 2025: none
    @martinatime I had exactly the same problem at about the same block height as you, and (as I wrote above) for me the (only) solution was to put the .bitcoin directory, without the blocks directory, on the internal SSD of my laptop and make a symbolic link to the blocks directory on the external SSD. After that, -reindex was 16x faster! Afterwards, I removed and copied the chainstate and indexes directories back to the external SSD. But I also thought: is -reindex with an external SSD really that slow …
  70. sipa commented at 6:23 pm on June 16, 2025: member
    External USB disk controllers support far fewer I/O operations per second than internal ones. Even if the sequential read/write speed seems great, they’re really not appropriate for heavy database loads.
  71. andrewtoth commented at 6:30 pm on June 16, 2025: contributor
    @hMsats are you aware of the -blocksdir argument for bitcoind? It stores the blocks under <blocksdir> while the rest of the data stays at <datadir>. So you don’t have to use symlinks; you can just do bitcoind -blocksdir=<external SSD>/.bitcoin -datadir=<internal SSD>/.bitcoin, or omit -datadir altogether since that would be the default location.
  72. martinatime commented at 6:30 pm on June 16, 2025: none
    I’ve been running this hardware/configuration for almost 5 years now without this issue.
  73. hMsats commented at 6:43 pm on June 16, 2025: none
    @andrewtoth I heard about it but never really look into that bitcoind argument. Sounds good and exactly what I needed. Thanks!
  74. whitslack commented at 9:05 pm on June 22, 2025: contributor

    I too had to increase CLN’s bitcoin-rpcclienttimeout and bitcoin-retry-timeout to 300 seconds after upgrading to Bitcoin Core 29.0 because CLN kept dying due to Bitcoin RPCs taking multiple minutes to complete.
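    For reference, the corresponding entries in the lightningd config file would be something like this (option names as mentioned in this thread, values as described above):

    bitcoin-rpcclienttimeout=300
    bitcoin-retry-timeout=300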

    A general question: Why does Bitcoin Core stop responding to RPCs while it is validating a new chain tip? Shouldn’t block validation be concurrent with handling RPCs? (The RPCs would simply query the chainstate as of the previous tip until the new tip finishes being validated.)

  75. sipa commented at 9:12 pm on June 22, 2025: member
    @whitslack It’d be great if it worked that way, but it doesn’t. Most of the data structures involved with transaction/block validation are protected by a single exclusive mutex, cs_main. So things touching those are effectively single-threaded, and slow validation results in significant latency across the whole application, sadly.
  76. whitslack commented at 9:52 pm on June 22, 2025: contributor

    It’d be great if it worked that way, but it doesn’t. Most of the data structures involved with transaction/block validation are protected by a single exclusive mutex, cs_main.

    @sipa: Well, okay, that’s a description of the way it was implemented, but surely that isn’t seen as the best feasible implementation, is it? Even an almost entirely naïve switch to a global read/write lock would yield some benefit, and that seems like very low-hanging fruit. (The block validator could hold the read lock while it validates a block, then upgrade to the write lock just in time to commit its change of the best chain tip. If the chosen RW-lock implementation doesn’t support atomically upgrading the lock, the validator would just need to re-check its preconditions after releasing the read lock and acquiring the write lock, and discard its work and try again if the preconditions changed.) Obviously this wouldn’t help RPCs that need to modify the state protected by the global lock, but it would completely eliminate the delay in processing read-only RPCs while validating a block.

  77. hMsats commented at 7:01 am on June 23, 2025: none
    @whitslack it would be really interesting to see how often you get an UNUSUAL plugin-bcli: bitcoin-cli: finished bitcoin-cli ... CLN warning if you just let version 29.0 run without a reindex. The expectation is that it should decrease over time. I now run without setting bitcoin-rpcclienttimeout in the CLN config file and (after a reindex) get about 1 warning a day; the delay is usually a little over 10 seconds (between 10 and 12 seconds). Do you know if you ever got this CLN warning before switching to 29.0? What’s your system setup, and are you running with txindex=1?
  78. whitslack commented at 2:58 pm on June 23, 2025: contributor

    it would be really interesting to see how often you get an UNUSUAL plugin-bcli: bitcoin-cli: finished bitcoin-cli ... CLN warning if you just let version 29.0 run without a reindex. The expectation is that it should decrease over time. I now run without setting bitcoin-rpcclienttimeout in the CLN config file and (after a reindex) get about 1 warning a day; the delay is usually a little over 10 seconds (between 10 and 12 seconds).

    @hMsats: I have never reïndexed.

    Do you know if you ever got this CLN warning before switching to 29.0?

    • I got the UNUSUAL message complaining about bitcoin-cli taking longer than 10 seconds…
      • …throughout 2020: 5093 times.
      • …throughout 2021: 2071 times.
      • …throughout 2022: 788 times.
      • …throughout 2023: 4129 times.
      • …throughout 2024: 10916 times.
      • …so far in 2025: 8656 times — so it appears to be on its worst pace ever.
    • I got the BROKEN message due to bitcoin-cli taking longer than 60 seconds…
      • …throughout 2020: 0 times.
      • …throughout 2021: 1 time.
      • …throughout 2022: 0 times.
      • …throughout 2023: 0 times.
      • …throughout 2024: 3 times.
      • …so far in 2025: 5 times.
    • I upgraded to Bitcoin Core 29.0 on 12 April 2025. After the upgrade…
      • I got the UNUSUAL message 1929 times.
      • I got the BROKEN message 3 times, but I upped my timeout to 300 seconds on 6 June (and didn’t get any more BROKEN messages after that).
      • The most recent UNUSUAL message reporting a bitcoin-cli latency greater than 60 seconds (which would have resulted in CLN death if not for my increased timeout setting) was only two days ago:
        2025-06-21T00:49:35.250Z UNUSUAL plugin-bcli: bitcoin-cli: finished bitcoin-cli -datadir=/var/lib/bitcoind -rpcclienttimeout=300 getblockhash 902113 (90895 ms)

    What’s your system setup

    An ancient Intel Core 2 Quad Q6600 (Kentsfield) with 8 GB RAM. Bitcoin Core’s chainstate LevelDB database resides on a Linux mdraid striped (“RAID0”) pair of SATA SSDs (a Crucial MX500 and a Samsung 860 EVO) attached to the motherboard’s Intel NM10/ICH7 SATA controller.

    and are you running with txindex=1?

    No.

    $ grep '^\w' /etc/bitcoin/bitcoin.conf
    blocknotify=/var/lib/bitcoind/blocknotify.sh %s
    dbcache=32
    maxmempool=64
    mempoolexpiry=72
    listenonion=0
    walletnotify=/var/lib/bitcoind/walletnotify.sh %s %w %b %h
    walletrbf=1
    whitelistforcerelay=1
    rpcthreads=16
  79. hMsats commented at 3:54 pm on June 23, 2025: none
    @whitslack thanks a lot for the elaborate answer! I get the impression that although CLN was already having some problems, things deteriorated after upgrading bitcoind to 29.0, because you got the CLN BROKEN message more often, even after running 29.0 for many weeks.
