Reindex seems really slow (at least with default dbcache) #8245

gmaxwell opened this issue on June 23, 2016
  1. gmaxwell commented at 10:33 am on June 23, 2016: contributor

    Doing some testing and seeing very slow reindex on my laptop:

    2016-06-22 09:51:04 Bitcoin version v0.12.99.0-9e45ef1-dirty
    2016-06-22 23:33:17 UpdateTip: new best=0000000000000000006becbd6a46552fb37225db2e34417352f04dc040f93652 height=328980 version=0x00000002 log2_work=81.374884 tx=50826071 date='2014-11-07 16:20:11' progress=0.377530 cache=51.0MiB(25600tx)

    Thought it might be my machine but sipa tested and saw similar behavior. Seems most time is going into flushing chainstate. Not sure if it’s a regression but it’s very slow.

    Opening an issue so we don’t lose track of it before release.

    [Please tag for 0.13]

  2. sipa added this to the milestone 0.13.0 on Jun 23, 2016
  3. sipa added the label Validation on Jun 23, 2016
  4. laanwj commented at 10:51 am on June 23, 2016: member

    Is it just reindex that is slower or also sync from a node?

    Can you try before and after #7917?

  5. MarcoFalke added the label Priority Medium on Jun 23, 2016
  6. MarcoFalke commented at 11:51 am on June 23, 2016: member
    I am pretty sure I tested #7917 back when the pull was open and I could not find any performance issues, even when switching back and forth versions of bitcoin core during the reindex.
  7. sipa commented at 1:46 pm on June 23, 2016: member
    I’m not convinced it’s actually a regression. Maybe we’re so used to always running with a large dbcache that we’re now surprised how slow it is with the default setting.
  8. laanwj commented at 1:57 pm on June 23, 2016: member

    I’ve been extensively testing on Windows 10 today and seeing slow synchronization (not reindex):

    2016-06-23 13:50:19 UpdateTip: new best=00000000000000000412bc1fa22a035f7f7f545586ce52ea1292883d65ef3f50 height=356958 version=0x00000002 log2_work=82.801074 tx=69031401 date='2015-05-18 07:51:33' progress=0.622482 cache=58.3MiB(15614tx)
    2016-06-23 13:50:23 UpdateTip: new best=0000000000000000020fa4b30a53a44debf3600ebb14d1a6561e2eebd575ef2b height=356959 version=0x00000002 log2_work=82.80111 tx=69031724 date='2015-05-18 07:56:13' progress=0.622486 cache=59.2MiB(16110tx)
    2016-06-23 13:50:29 UpdateTip: new best=000000000000000009bff6a73f74d1efd4c0945af0ed528e2492624ccc1044e5 height=356960 version=0x00000002 log2_work=82.801146 tx=69032017 date='2015-05-18 08:00:57' progress=0.622490 cache=61.7MiB(16665tx)
    2016-06-23 13:50:41 UpdateTip: new best=000000000000000002b15055f17bf9e876914bf9566abab8c6a165968277c45e height=356961 version=0x00000002 log2_work=82.801182 tx=69032559 date='2015-05-18 08:09:30' progress=0.622496 cache=62.7MiB(17738tx)
    2016-06-23 13:50:51 UpdateTip: new best=000000000000000008a24b93ffab49d39694083fef5f6b9d24b69cfe5e0280d2 height=356962 version=0x00000002 log2_work=82.801218 tx=69032746 date='2015-05-18 08:22:59' progress=0.622503 cache=63.3MiB(18316tx)
    2016-06-23 13:51:19 UpdateTip: new best=00000000000000000229d3b9297140d003994cde00e0a726f6e55c6b95561fe0 height=356963 version=0x00000002 log2_work=82.801253 tx=69033878 date='2015-05-18 08:30:59' progress=0.622512 cache=0.3MiB(0tx)
    2016-06-23 13:51:43 UpdateTip: new best=000000000000000015de74fe80f877a5aba32735b0d6841ff0d146ff758b81d3 height=356964 version=0x00000002 log2_work=82.801289 tx=69035054 date='2015-05-18 08:52:19' progress=0.622528 cache=11.2MiB(3029tx)
    2016-06-23 13:52:08 UpdateTip: new best=0000000000000000110f9a8d5ebca55b992e014a0084b35ad11f0ac91d1a4278 height=356965 version=0x00000003 log2_work=82.801325 tx=69036009 date='2015-05-18 09:09:32' progress=0.622540 cache=27.3MiB(5363tx)
    2016-06-23 13:52:28 UpdateTip: new best=0000000000000000086225a6554b82a687c93ed4abd8c37019c2f9e95ad9a25f height=356966 version=0x00000003 log2_work=82.801361 tx=69036664 date='2015-05-18 09:13:56' progress=0.622546 cache=33.6MiB(7050tx)
    2016-06-23 13:52:37 Pre-allocating up to position 0x400000 in rev00270.dat
    2016-06-23 13:52:37 UpdateTip: new best=000000000000000001de8968d633c5b0c3bc77d2fcd377d8ede0c5be42abcdea height=356967 version=0x00000002 log2_work=82.801397 tx=69037153 date='2015-05-18 09:21:13' progress=0.622552 cache=36.0MiB(7975tx)
    2016-06-23 13:53:25 UpdateTip: new best=00000000000000000293fce0cb9f6afc49b3f307ef5ca556617fd182a030db59 height=356968 version=0x00000003 log2_work=82.801433 tx=69038601 date='2015-05-18 09:45:21' progress=0.622570 cache=57.1MiB(11507tx)
    2016-06-23 13:53:25 UpdateTip: new best=000000000000000000555586c55daec4ae63c90a725e98dd76212d62e5b8ad52 height=356969 version=0x00000002 log2_work=82.801469 tx=69038602 date='2015-05-18 09:46:44' progress=0.622570 cache=57.1MiB(11508tx)
    

    Now, this is a slow laptop, and it’s never been really fast, but some blocks take more than 20 seconds. I’ve had faster sync times on crappy ARM boxes.

    Edit: it is possible that this is due to another process interfering with disk access, and unrelated to any change, see #8250
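    The slowdown above can be quantified directly from the UpdateTip timestamps. A quick, hypothetical helper (not part of Bitcoin Core or this thread) that parses a debug.log excerpt and reports the gap between consecutive blocks:

    ```python
    import re
    from datetime import datetime

    # A small excerpt of the UpdateTip lines quoted above (abbreviated hashes).
    LOG = """\
    2016-06-23 13:50:19 UpdateTip: new best=... height=356958
    2016-06-23 13:50:51 UpdateTip: new best=... height=356962
    2016-06-23 13:51:19 UpdateTip: new best=... height=356963
    """

    def block_gaps(log_text):
        """Return (height, seconds since the previous UpdateTip) pairs."""
        stamps = []
        for line in log_text.splitlines():
            m = re.match(r"\s*(\d{4}-\d\d-\d\d \d\d:\d\d:\d\d) UpdateTip: .*height=(\d+)", line)
            if m:
                stamps.append((int(m.group(2)),
                               datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")))
        return [(h, (t - prev_t).total_seconds())
                for (_, prev_t), (h, t) in zip(stamps, stamps[1:])]

    for height, secs in block_gaps(LOG):
        if secs > 20:
            print(f"block {height} took {secs:.0f}s to connect")
    ```

    On the excerpt above this flags the 28-second gap before block 356963 — the same block at which the cache is flushed to 0.3MiB, consistent with the observation that most time goes into flushing the chainstate.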

  9. laanwj commented at 2:11 pm on June 23, 2016: member

    I’m not convinced it’s actually a regression.

    I’m not sure that it is a regression either, but it would make sense to measure w/ recent changes, especially CB (I remember some worries about the effect on initial sync).

    Separately from that, increasing the default dbcache makes sense IMO. 100 is very small, and it’s also divided up inefficiently (the part allocated to the leveldb caches hardly counts for anything).

  10. grant-olson commented at 5:37 pm on July 5, 2016: none

    If you want some anecdotal evidence…

    I just filled up my old SSD and got a new SSD, onto which I synced the block chain from scratch. It seemed to run fine until there was about half a year, or maybe a year, left, at which point my disk started thrashing so hard it would end up freezing my Xubuntu 16.04 install and I’d need to hard cycle power on the computer. This happened repeatedly. Raising dbcache solved the problem for me. I think there are just so many transactions lately that the default setting needs to go up.

    EDIT: for clarification, this was while --reindexing blocks on disk.
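    Raising the cache as described works either on the command line or via bitcoin.conf; a sketch (the value 2000 is only an illustrative choice, not a recommendation from this thread):

    ```ini
    # bitcoin.conf: raise the database cache above the 100 MiB default
    # to reduce chainstate flushes during sync/reindex
    dbcache=2000
    ```

    The same setting can be passed as `bitcoind -dbcache=2000`, combined with `-reindex` when rebuilding from blocks already on disk.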

  11. MarcoFalke commented at 4:45 pm on July 8, 2016: member
    Anything left to do here?
  12. laanwj commented at 9:59 am on July 11, 2016: member
    Time to close this. I suppose there is still plenty that could be done to speed up the sync process (no more low-hanging fruit, though), but an intermediate step (#8273) has been taken for 0.13.0, and there doesn’t seem to be an issue here besides ’newer blocks are more expensive to verify’, so there’s no need to keep this issue open for it.
  13. laanwj closed this on Jul 11, 2016

  14. MarcoFalke locked this on Sep 8, 2021

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2025-01-21 21:12 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me