Slow catchup for recent blocks on non-SSD drive #12058

issue Sjors opened this issue on December 30, 2017
  1. Sjors commented at 7:17 pm on December 30, 2017: member

    It often takes more than half an hour to catch up on less than a day of blocks. I suspect the bottleneck is disk I/O because I don’t have an SSD. Here’s a recent log.

    It’s a Fusion drive, which means part of it is SSD, but it’s up to the OS where to put things. Maybe there’s some way to give macOS a hint about which files should be on the SSD part? (Or maybe it eventually figures this out.)

    I’m not using txindex.

    I’ve noticed things tend to speed up a bit as the used dbcache grows. @sipa wrote in #10647:

    The database is stored in a special compact format on disk. When loaded into memory, it’s stored in a quickly-accessible way that is several times larger than the disk version. Loading the whole thing into memory needs around 8 GB.

    There could perhaps be an option to load the whole database into memory at startup to speed things up later if you actually have that much dbcache configured….

    Is there an experimental branch for this? Depending on the level of rocket science required, I might take a stab at it myself.

    I’m buying an external SSD drive, so I will be able to compare, although apparently that might still not perform nearly as well as an internal SSD drive. Then again, loading a whole bunch of data in one operation might help there as well.

  2. Varunram commented at 5:31 pm on January 1, 2018: contributor

    My 2 cents:

    I’ve noticed things tend to speed up a bit as the used dbcache grows.

    That might be because Fusion drives keep the most recently used data on the SSD rather than on the hard drive.

    It’s a Fusion drive, which means part of it is SSD, but it’s up to the OS where to put things. Maybe there’s some way to give macOS a hint about which files should be on the SSD part? (Or maybe it eventually figures this out.)

    I don’t think you can do that, although we could see which files are being stored on the SSD with some CLI tricks.

  3. fanquake added the label MacOSX on Jan 2, 2018
  4. fanquake added the label Resource usage on Jan 2, 2018
  5. TheBlueMatt commented at 7:16 pm on January 3, 2018: member
    I believe @pstratem had a branch to do that years ago. One simple thing to try: drop your OS-level disk caches via /proc/sys/vm/drop_caches before loading and benchmark a load; then drop them again, run cat ~/.bitcoin/chainstate/* > /dev/null before loading, and benchmark again to see whether having the DB in the OS cache helps (sketched below).
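
    A minimal sketch of that benchmark, assuming Linux (drop_caches is Linux-only; on macOS, sudo purge is the rough equivalent):

        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # flush the OS page cache
        bitcoind   # note how long the chainstate load and catch-up take, then: bitcoin-cli stop

        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # flush again
        cat ~/.bitcoin/chainstate/* > /dev/null              # pre-read the chainstate into the page cache
        bitcoind   # compare how long the same load takes now
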
  6. Sjors commented at 5:11 pm on January 6, 2018: member

    Just tried with a Glyph Atom RAID SSD drive connected via USB-C 3.1 Gen 2. About an order of magnitude faster on the same machine. Only downside is that my second monitor, which uses the other USB-C port, goes berserk. :-)

    @Varunram I meant that it accelerated within each session; it’s slow again in every subsequent session. So that suggests it gets faster as there are more dbcache hits.

    Would it make sense to just fetch the most recent UTXOs from disk up to some percentage of dbcache, or a function of the number of blocks left to sync? I’m assuming the most recent UTXOs are the most likely to be spent.

  7. Sjors commented at 5:13 pm on January 6, 2018: member
    Once my additional memory arrives I might try running a pruned node on a RAM disk to see how that performs :-)
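
    A hypothetical recipe for that experiment on macOS (the ram:// size is in 512-byte sectors, so this makes an 8 GB volume; everything on it vanishes on reboot):

        diskutil erasevolume HFS+ RAMDisk $(hdiutil attach -nomount ram://16777216)
        bitcoind -prune=550 -datadir=/Volumes/RAMDisk
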
  8. sipa commented at 5:23 pm on January 6, 2018: member

    @Sjors Not with the current design. Currently we just load spent/created UTXOs into RAM until the cache is full, then write the whole thing (all modified UTXO entries) to disk, wipe it, and start over.

    You might think that there’s a benefit in keeping some non-dirty recently created UTXOs around, but benchmarks show that this is actually not beneficial, as it reduces the number of updates that can be done before the cache is full again, leading to more frequent flushes.

    I would like to change this design to one where flushing happens asynchronously in a different thread, so that it is no longer on the critical path for validation.

    In such a new design, loading some percentage of recent UTXOs into memory may be worth it. Right now, I think it’s only worth it if you actually have enough memory for (nearly) the entire UTXO set.
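
    A rough way to observe this flush-and-wipe cycle from the outside, assuming (as in recent releases) that the UpdateTip lines in debug.log report the in-memory cache size:

        # cache= grows steadily during sync, then drops back near zero at each flush
        tail -f ~/.bitcoin/debug.log | grep -o 'cache=[^ ]*'
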

  9. bitsolemn commented at 5:00 am on January 15, 2019: none
    I was able to reduce an estimated 30+ day full blockchain sync with txindex=1 to a completed sync in ~3-4 days by bumping -dbcache=4000 on an HDD this week. Without that it seemed futile.
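
    For reference, those options as a command line (dbcache is in MiB; the default in releases of that era was a few hundred MiB). The same values can also go in bitcoin.conf:

        bitcoind -dbcache=4000 -txindex=1
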
  10. fanquake closed this on Feb 3, 2020

  11. DrahtBot locked this on Feb 15, 2022
