having dataworkdir and dataarchivedir to increase velocity #21645

issue StefT7 opened this issue on April 9, 2021
  1. StefT7 commented at 7:52 PM on April 9, 2021: none

    The initial sync is very slow; it takes many days to import all blocks. In my case, I had an index issue during download, and reindexing was a very long process (too long, in fact).

    SSD drives seem to increase processing speed. Based on forum posts and my personal tests, using an SSD drive speeds up the process. Unfortunately, large SSD drives are very expensive.

    Speed is only needed for the files currently being processed. One solution would be to have two data locations instead of one:

    • dataworkdir: path on the SSD drive to store the files currently being processed (blocks, index, chainstate)

    • archivedir: path where finished files are stored (those files are not modified often)

    This way, there would be no need for a large and expensive SSD drive to run a Bitcoin Core server.

  2. StefT7 added the label Feature on Apr 9, 2021
  3. MarcoFalke commented at 8:00 PM on April 9, 2021: member

    This is possible with -blocksdir

  4. MarcoFalke closed this on Apr 9, 2021

  5. StefT7 commented at 9:27 PM on April 9, 2021: none

    Hi,

    -blocksdir only stores the blocks and undo files in a specific directory. My request was for a temporary SSD working directory for the large files being worked on; when the work is done, the files would simply be moved to the large HDD. When I had to reset the indexes, it also recalculated the undo files on the HDD, and that was so slow that I decided to stop and restart from an empty chain; it is still running now.

  6. MarcoFalke commented at 6:12 AM on April 10, 2021: member

    You can set -datadir=ssd -blocksdir=hdd
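    For example (illustrative paths, not from the thread):

        bitcoind -datadir=/mnt/ssd/bitcoin -blocksdir=/mnt/hdd/bitcoin

    With this split, the chainstate and index databases live on the fast drive while the bulky blk*.dat and rev*.dat files go to the cheap drive.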

  7. StefT7 commented at 6:38 AM on April 10, 2021: none

    The problem is that the generated undo files (rev*.dat) are stored in the blocks directory (HDD), and those files take a lot of time to generate. It would be nice if they were generated just once, but it seems they are created again during reindexing.

  8. StefT7 commented at 8:44 AM on April 10, 2021: none

    I think the best approach would in fact be a mixed prune mode: when pruning, instead of removing files, archive them to a second storage location. I don't think it would take much code modification. In that case, it would become possible to put everything on the SSD drive with something like -arcprune=<n>, to run at the best speed while keeping the full block chain.
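    As a rough illustration of the idea (a minimal sketch, not Bitcoin Core code; the function name, the `keep_latest` parameter, and the file layout are assumptions for the example), "archive prune" would move old block and undo files to slower storage instead of deleting them the way -prune does:

    ```python
    import shutil
    from pathlib import Path

    def archive_prune(blocks_dir, archive_dir, keep_latest=2):
        """Hypothetical 'archive prune': move old blk*.dat and their matching
        rev*.dat undo files from fast storage to an archive directory,
        keeping only the newest `keep_latest` block files on the SSD."""
        blocks = Path(blocks_dir)
        archive = Path(archive_dir)
        archive.mkdir(parents=True, exist_ok=True)
        # Block files are named blk00000.dat, blk00001.dat, ... so a
        # lexical sort orders them oldest to newest.
        blk_files = sorted(blocks.glob("blk*.dat"))
        for blk in blk_files[:-keep_latest] if keep_latest else blk_files:
            rev = blocks / blk.name.replace("blk", "rev")  # matching undo file
            for f in (blk, rev):
                if f.exists():
                    shutil.move(str(f), str(archive / f.name))
    ```

    The real feature would of course need to teach the node to read archived files back on demand; this only shows the move-instead-of-delete step.
    
    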

  9. MarcoFalke commented at 9:13 AM on April 11, 2021: member

    -reindex shouldn't be needed at all in normal operation, so it seems odd to over-optimize it. There is also -reindex-chainstate, which may be faster, but covers less cases of data corruption.

  10. StefT7 commented at 3:37 PM on April 11, 2021: none

    I understand. In my case, the corruption was in an index file, so I had to reindex or load from an empty folder. In any case, the first sync (and any future repairs) is too slow. The speed keeps dropping and is now near 5 minutes per block file. That can be discouraging for many people. I think my suggestion would be a good experiment for improving speed. I started this way, so I will wait until the end (in a few days, hopefully without errors), but if you don't implement this, I will do it myself (though it would be better done by a pro).

  11. DrahtBot locked this on Aug 18, 2022

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-05-02 12:14 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me