Crash possibly due to socket timeout #9187

GSPP opened this issue on November 18, 2016
  1. GSPP commented at 3:21 pm on November 18, 2016: none

    I just experienced a crash in v0.13:

    [screenshot of the crash dialog]

    Log tail:

    2016-11-18 14:09:03 Pre-allocating up to position 0x600000 in rev00636.dat
    2016-11-18 14:09:03 UpdateTip: new best=000000000000000000d16634d927aba9e8bb99873463c4a32e2a05280d3b8a30 height=432015 version=0x20000000 log2_work=85.336859 tx=159293242 date='2016-09-29 03:24:16' progress=0.977256 cache=6491.3MiB(13736081tx)
    2016-11-18 14:09:03 UpdateTip: new best=0000000000000000040faf93fae6d2208e224d3e3908dfdd2aef9c7ff4337e50 height=432016 version=0x20000000 log2_work=85.336889 tx=159293884 date='2016-09-29 03:28:31' progress=0.977257 cache=6491.3MiB(13736079tx)
    2016-11-18 14:09:03 UpdateTip: new best=0000000000000000023550eb973ac673f9a4c98d019467aa3a923357f8b5a0b5 height=432017 version=0x20000000 log2_work=85.33692 tx=159293915 date='2016-09-29 03:28:50' progress=0.977258 cache=6491.3MiB(13736090tx)
    2016-11-18 14:09:04 UpdateTip: new best=000000000000000000b445e34bb9211a415ab2e7b5235a753eadeea29be727e3 height=432018 version=0x20000000 log2_work=85.336951 tx=159296763 date='2016-09-29 04:07:04' progress=0.977270 cache=6491.4MiB(13736939tx)
    2016-11-18 14:09:04 UpdateTip: new best=0000000000000000007517b1c7799a87ccc578a84d15c503ff986082a7334fff height=432019 version=0x20000000 log2_work=85.336981 tx=159299585 date='2016-09-29 04:32:23' progress=0.977278 cache=6491.6MiB(13737666tx)
    2016-11-18 14:09:04 UpdateTip: new best=000000000000000001994d82b4955e4922ee56182394a1445165eb8832247c39 height=432020 version=0x20000000 log2_work=85.337012 tx=159301759 date='2016-09-29 04:34:13' progress=0.977279 cache=6491.7MiB(13738037tx)
    2016-11-18 14:09:05 UpdateTip: new best=00000000000000000120fbf36091d18ee0af7a0b25e06f71a8cda9ff7f2992e9 height=432021 version=0x20000000 log2_work=85.337042 tx=159302930 date='2016-09-29 04:35:52' progress=0.977280 cache=6491.7MiB(13737798tx)
    2016-11-18 14:24:08 ping timeout: 1200.000907s
    2016-11-18 14:27:20 socket sending timeout: 1201s
    2016-11-18 14:27:37 ping timeout: 1200.032108s
    2016-11-18 14:28:00 socket sending timeout: 1201s
    2016-11-18 14:29:04 socket sending timeout: 1201s

    Seems similar to part 1 of #8074.

    My command line was bitcoin-qt.exe -reindex.

  2. GSPP commented at 4:00 pm on November 18, 2016: none

    After restarting the software the GUI says “Loading block index…”, but that does not appear to be an accurate status string: it has already read 16 GB, and debug.log contains entries like:

    2016-11-18 15:57:08 UpdateTip: new best=000000000000008e8378a4d194f40ffe0e73d6275e6b8c4916427348df3aa0d6 height=243438 version=0x00000002 log2_work=70.487434 tx=19972477 date=‘2013-06-26 14:15:02’ progress=0.061897 cache=755.6MiB(2135653tx)

    Apparently the reindex process restarted (which is OK, I guess), but the GUI does not show this. I think that should be changed.

  3. GSPP commented at 5:19 pm on November 19, 2016: none

    Now, the process crashed:

    [screenshot of the crash dialog]

    Last log entry is:

    2016-11-19 16:54:49 UpdateTip: new best=00000000000000000120fbf36091d18ee0af7a0b25e06f71a8cda9ff7f2992e9 height=432021 version=0x20000000 log2_work=85.337042 tx=159302930 date=‘2016-09-29 04:35:52’ progress=0.976789 cache=6491.7MiB(13737798tx)

    I don’t know what’s going on here. Maybe a block is corrupt at 97%. I’m not saying this is a bug in the software; it would just be nice to have some insight into why processing aborted here.

    After clicking OK I get:

    2016-11-19 17:15:46 Aborted block database rebuild. Exiting.
    2016-11-19 17:15:46 scheduler thread interrupt
    2016-11-19 17:15:46 Shutdown: In progress…
    2016-11-19 17:15:46 StopNode()

    Maybe more is coming but I’m not waiting for the shutdown. Restoring a working VM backup now.

    I find the process of obtaining a synchronized copy of the blockchain to be very brittle. For some reason I constantly run into issues that corrupt some data store and force me to restart the process. I really hope it’s not something about my machine or the way I’m doing it, but I really don’t do more than run the official software in a clean VM. The hardware appears to work correctly for everything else.

    (Possibly this comment is totally unrelated to the main issue. Posting it anyway to add as much information as possible.)

  4. rebroad commented at 9:10 am on November 20, 2016: contributor
    @GSPP I doubt it’s anything to do with the timeouts. To get a better picture of what happened before the crash you probably need to enable additional debug logging, or perhaps use a debugger; the former is the less technical option.
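    For reference, a minimal sketch of what enabling additional debug output could look like in bitcoin.conf. The option names (debug, logtimemicros) existed in the v0.13 era, but which categories would actually be most useful for this crash is an assumption:

```
# bitcoin.conf sketch: hedged guess at useful categories for this report;
# debug=1 would enable all categories instead.
debug=net
debug=db
logtimemicros=1
```

    The same options can also be passed on the command line, e.g. appended to the -reindex invocation used here.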
  5. laanwj added the label Windows on Nov 21, 2016
  6. laanwj added the label Data corruption on Nov 21, 2016
  7. laanwj commented at 8:03 am on November 21, 2016: member

    The log message is almost certainly unrelated.

    To be able to troubleshoot this we need a stack trace of the crash and the exact release you used, if you used a binary from bitcoin.org. We have tooling to convert memory offsets to symbols. A screenshot of a dialog without a specific error message is not going to help, unfortunately.

    I find the process of obtaining a synchronized copy of the blockchain to be very brittle. For some reason I constantly run into issues that corrupt some data store and force me to restart the process. I really hope it’s not something about my machine or the way I’m doing it.

    Usually issues such as this, if you don’t do anything special, are related to hardware failures. That it works fine for other applications isn’t much of a reassurance: verifying the block chain is very I/O- and CPU-intensive and tends to bring up issues that nothing else does.

  8. GSPP commented at 6:59 pm on November 27, 2016: none

    Since the system this happened on is gone, I cannot provide further evidence. I hope this will somehow help you in the future in case someone else experiences a similar issue. Feel free to close if nothing further can be done.

    The hardware is running some intense SQL loads, and disk pages are checksummed by the RDBMS. I’ll see if I can run a CPU and memory stress tool.

  9. laanwj closed this on Dec 10, 2016

  10. MarcoFalke locked this on Sep 8, 2021

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-12-30 15:12 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me