It often takes more than half an hour to catch up on less than a day of blocks. I suspect the bottleneck is disk I/O, because I don’t have an SSD. Here’s a recent log.
It’s a Fusion Drive, which means part of it is an SSD, but the OS decides where to put things. Maybe there’s some way to hint to macOS which files should live on the SSD part? (Or maybe it eventually figures this out on its own.)
I’m not using txindex.
I’ve noticed things tend to speed up a bit as the used dbcache grows. @sipa wrote in #10647:
> The database is stored in a special compact format on disk. When loaded into memory it’s done in a quickly-accessible way that is several times larger than the disk version. Loading the whole thing in memory needs around 8 GB.
>
> There could perhaps be an option to load the whole database into memory at startup to speed things up later, if you actually have that much dbcache configured…
Is there an experimental branch for this? Depending on the level of rocket science required, I might take a stab at it myself.
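To make the idea concrete, here’s a minimal sketch of what “load the whole database into memory at startup” means, using Python’s stdlib `dbm` as a stand-in for the chainstate LevelDB (the keys, values, and sizes here are made up for illustration): one sequential pass over the on-disk store fills an in-memory cache, so later random lookups never touch the disk.

```python
import dbm
import os
import tempfile

# Create a small on-disk key-value store standing in for the chainstate DB.
path = os.path.join(tempfile.mkdtemp(), "chainstate")
with dbm.open(path, "c") as db:
    for i in range(1000):
        db[f"utxo-{i}".encode()] = f"coin-{i}".encode()

# "Pre-warm": one sequential pass loads every entry into an in-memory
# cache, analogous to filling dbcache eagerly at startup instead of
# on demand during block validation.
cache = {}
with dbm.open(path, "r") as db:
    for key in db.keys():
        cache[key] = db[key]

# Later random lookups are pure memory hits, no disk seeks.
assert cache[b"utxo-42"] == b"coin-42"
print(len(cache))  # 1000
```

Sequential bulk reads like this are also the access pattern spinning disks (and Fusion Drives) handle best, which is part of why an eager load could beat thousands of scattered on-demand reads.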
I’m buying an external SSD, so I’ll be able to compare. Although apparently that might still not perform nearly as well as an internal SSD. Then again, loading a whole bunch of data in one operation might help there as well.