optimization: increase default LevelDB write batch size to 64 MiB #31645

l0rinc wants to merge 1 commit into bitcoin:master from l0rinc:l0rinc/utxo-dump-batching, changing 1 file (+1 −1)
  1. l0rinc commented at 8:08 pm on January 12, 2025: contributor

    The UTXO set has grown significantly since 2017, and flushing it from memory to LevelDB often takes over 20 minutes after a successful IBD with large dbcache values. The final UTXO set is written to disk in batches, which LevelDB sorts into SST files. By increasing the default batch size, we can reduce overhead from repeated compaction cycles, minimize constant overhead per batch, and achieve more sequential writes.
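
    (For context: the change itself is a one-liner, bumping the default `-dbbatchsize` from 16 MiB to 64 MiB.) The flush loop in `CCoinsViewDB::BatchWrite` accumulates dirty coins into a `CDBBatch` and commits a partial batch whenever the estimated batch size crosses `-dbbatchsize`; a minimal Python sketch of that control flow (names invented for illustration; the real loop is visible in the instrumented diff further down):

      # Minimal sketch (not the actual implementation) of the partial-batch flush
      # loop in CCoinsViewDB::BatchWrite; names here are invented for illustration.
      def batch_write(entries, write_batch, batch_write_bytes=64 << 20):
          batch, batch_bytes = [], 0
          for key, value in entries:
              batch.append((key, value))
              batch_bytes += len(key) + len(value)  # crude stand-in for CDBBatch::SizeEstimate()
              if batch_bytes > batch_write_bytes:
                  write_batch(batch)  # one LevelDB WriteBatch commit; peak memory grows with batch size
                  batch, batch_bytes = [], 0
          write_batch(batch)  # final (possibly partial) batch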

    Experiments with different batch sizes (loading the UTXO set via assumeutxo at block 840k, then measuring the final flush time) show that 64 MiB batches significantly reduce flush time without notably increasing memory usage (per-size means are summarized right after the table):

    | dbbatchsize | flush_sum (ms) |
    |-------------|----------------|
    | 8 << 20     | 236993.73      |
    | 8 << 20     | 239557.79      |
    | 8 << 20     | 244149.25      |
    | 8 << 20     | 246116.93      |
    | 8 << 20     | 243496.98      |
    | 16 << 20    | 209673.01      |
    | 16 << 20    | 225029.97      |
    | 16 << 20    | 230826.61      |
    | 16 << 20    | 230312.84      |
    | 16 << 20    | 235912.83      |
    | 32 << 20    | 201898.77      |
    | 32 << 20    | 196676.18      |
    | 32 << 20    | 198958.81      |
    | 32 << 20    | 196230.08      |
    | 32 << 20    | 199105.84      |
    | 64 << 20    | 150691.51      |
    | 64 << 20    | 151072.18      |
    | 64 << 20    | 151465.16      |
    | 64 << 20    | 150403.59      |
    | 64 << 20    | 150342.34      |
    | 128 << 20   | 155917.81      |
    | 128 << 20   | 156121.83      |
    | 128 << 20   | 156514.6       |
    | 128 << 20   | 155616.36      |
    | 128 << 20   | 156398.24      |
    | 256 << 20   | 166843.39      |
    | 256 << 20   | 166226.37      |
    | 256 << 20   | 166351.75      |
    | 256 << 20   | 166197.15      |
    | 256 << 20   | 166755.22      |
    | 512 << 20   | 186020.24      |
    | 512 << 20   | 186689.18      |
    | 512 << 20   | 186895.21      |
    | 512 << 20   | 185427.1       |
    | 512 << 20   | 186105.48      |
    | 1 << 30     | 185488.98      |
    | 1 << 30     | 185963.51      |
    | 1 << 30     | 185754.25      |
    | 1 << 30     | 186993.17      |
    | 1 << 30     | 186145.73      |
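
    Averaging the runs above (a quick Python summary over the raw table values): 64 MiB comes out roughly a third faster than the current 16 MiB default, and the curve turns back up past 64 MiB, so bigger is not monotonically better.

      # Summarize the raw flush_sum runs from the table above (values copied verbatim).
      from statistics import mean

      runs = {
          "8 << 20": [236993.73, 239557.79, 244149.25, 246116.93, 243496.98],
          "16 << 20": [209673.01, 225029.97, 230826.61, 230312.84, 235912.83],
          "32 << 20": [201898.77, 196676.18, 198958.81, 196230.08, 199105.84],
          "64 << 20": [150691.51, 151072.18, 151465.16, 150403.59, 150342.34],
          "128 << 20": [155917.81, 156121.83, 156514.6, 155616.36, 156398.24],
          "256 << 20": [166843.39, 166226.37, 166351.75, 166197.15, 166755.22],
          "512 << 20": [186020.24, 186689.18, 186895.21, 185427.1, 186105.48],
          "1 << 30": [185488.98, 185963.51, 185754.25, 186993.17, 186145.73],
      }
      baseline = mean(runs["16 << 20"])
      for size, times in runs.items():
          print(f"{size:>9}: mean {mean(times) / 1000:6.1f} s ({mean(times) / baseline:.0%} of the 16 MiB default)")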

    Checking the impact of a -reindex-chainstate with -stopatheight=878000 and -dbcache=30000 gives:

    On SSD:

    16 << 20

      2025-01-12T07:31:05Z [warning] Flushing large (26 GiB) UTXO set to disk, it may take several minutes
      2025-01-12T07:53:51Z Shutdown: done

    Flush time before: 22 minutes and 46 seconds

    64 << 20

      2025-01-12T18:30:00Z [warning] Flushing large (26 GiB) UTXO set to disk, it may take several minutes
      2025-01-12T18:44:43Z Shutdown: done

    Flush time after: 14 minutes and 43 seconds

    On HDD:

    16 << 20

      2025-01-12T04:31:40Z [warning] Flushing large (26 GiB) UTXO set to disk, it may take several minutes
      2025-01-12T05:02:39Z Shutdown: done

    Flush time before: 30 minutes and 59 seconds

    64 << 20

      2025-01-12T20:22:24Z [warning] Flushing large (26 GiB) UTXO set to disk, it may take several minutes
      2025-01-12T20:42:57Z Shutdown: done

    Flush time after: 20 minutes and 33 seconds


    Reproducer:

    You can either do a full IBD or a reindex(-chainstate) to block 840k and check the final flush logs, or load the UTXO set from the assumeUTXO torrent and measure that:

      # Build Bitcoin Core
      cmake -B build -DCMAKE_BUILD_TYPE=Release && cmake --build build -j$(nproc)

      # Set up a clean demo environment
      mkdir -p demo && rm -rf demo/chainstate demo/chainstate_snapshot demo/debug.log

      # Download the UTXO set until 840k
      # See: [#28553](/bitcoin-bitcoin/28553/)

      # Start bitcoind with minimal settings, without mempool or internet connection
      build/src/bitcoind -datadir=demo -stopatheight=1
      build/src/bitcoind -datadir=demo -daemon -blocksonly=1 -connect=0

      # Load the AssumeUTXO snapshot, making sure the path is correct
      # Expected output includes `"coins_loaded": 176948713`
      build/src/bitcoin-cli -datadir=demo loadtxoutset ~/utxo-840000.dat

      # Stop the daemon and verify snapshot flushes in the logs
      build/src/bitcoin-cli -datadir=demo stop
      grep "FlushSnapshotToDisk: completed" demo/debug.log
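
    Each `FlushSnapshotToDisk: completed (…ms)` log line marks one completed flush during snapshot loading; the parsing script in the next comment sums these per run.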
    
  2. l0rinc commented at 8:11 pm on January 12, 2025: contributor

    Visual representation of the assumeUTXO measurements (16 MiB is the current default, 64 MiB the proposed one):

      import re
      import sys


      def parse_bitcoin_debug_log(file_path):
          results = []

          flush_sum = 0.0
          flush_count = 0

          version_pattern = re.compile(r"Bitcoin Core version")
          flush_pattern = re.compile(r'FlushSnapshotToDisk: completed \(([\d.]+)ms\)')

          def finalize_current_block():
              nonlocal flush_sum, flush_count
              if flush_count > 0:
                  results.append((flush_sum, flush_count))
              flush_sum = 0.0
              flush_count = 0

          try:
              with open(file_path, 'r') as file:
                  for line in file:
                      if version_pattern.search(line):
                          finalize_current_block()
                          continue

                      match_flush = flush_pattern.search(line)
                      if match_flush:
                          flush_ms = float(match_flush.group(1))
                          flush_sum += flush_ms
                          flush_count += 1
          except Exception as e:
              print(f"Error reading file: {e}")
              sys.exit(1)

          finalize_current_block()

          return results


      if __name__ == "__main__":
          if len(sys.argv) < 2:
              print("Usage: python3 script.py <path_to_debug_log>")
              sys.exit(1)

          file_path = sys.argv[1]
          parsed_results = parse_bitcoin_debug_log(file_path)

          for total_flush_time, total_flush_calls in parsed_results:
              print(f"{total_flush_time:.2f},{total_flush_calls}")
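
    Usage, per the script's own help text: `python3 script.py demo/debug.log`. It prints one `flush_sum,flush_count` pair per bitcoind run found in the log (runs are delimited by the "Bitcoin Core version" startup line).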
    
  3. DrahtBot commented at 8:11 pm on January 12, 2025: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage & Benchmarks

    For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/31645.

    Reviews

    See the guideline for information on the review process. A summary of reviews will appear here.

    Conflicts

    No conflicts as of last run.

  4. laanwj added the label UTXO Db and Indexes on Jan 13, 2025
  5. sipa commented at 3:52 pm on January 13, 2025: member

    FWIW, the reason the batch-size behavior exists (as opposed to just writing everything at once) is that writing everything at once causes a memory usage spike at flush time. If that spike exceeds the memory the process can allocate, it causes a crash at a particularly bad time (it may require a replay to fix, which may be slower than just reprocessing the blocks).

    Given that changing this appears to improve performance it's worth considering, of course, but it is essentially a trade-off between speed and peak memory usage.

  6. 1440000bytes commented at 6:00 pm on January 13, 2025: none

    If there are tradeoffs (speed, memory usage, etc.) involved in changing the default batch size, then it could remain the same.

    Maybe a config option can be provided to change it.

  7. sipa commented at 6:02 pm on January 13, 2025: member
    There is a config option. This is about changing the default.
  8. 1440000bytes commented at 6:12 pm on January 13, 2025: none

    There is a config option. This is about changing the default.

    Just realized dbbatchsize already exists.

  9. l0rinc commented at 6:58 pm on January 13, 2025: contributor

    If that spike exceeds the memory the process can allocate it causes a crash

    Thanks for the context, @sipa. On the positive side, the extra allocation is constant (or at least not proportional to usage), and it narrows the window for other crashes during flushing (https://github.com/bitcoin/bitcoin/pull/30611 will also likely help here). This change may also enable another one (which I'm currently re-measuring to be sure) that seems to halve the remaining flush time again, by sorting the values in descending order before adding them to the batch: e.g. from 30 minutes (on master) to 10 (with this change included).

  10. luke-jr commented at 9:55 pm on January 14, 2025: member
    Can we predict the memory usage spike size? Presumably as we flush, that releases memory, which allows for a larger and larger batch size?
  11. l0rinc commented at 10:32 am on January 16, 2025: contributor

    Since profilers may not catch these short-lived spikes, I've instrumented the code, loaded the UTXO set (as described in the PR), parsed the logged flush times and memory usage, and plotted them against each other to see the effect of the batch size increase.

      diff --git a/src/txdb.cpp b/src/txdb.cpp
      --- a/src/txdb.cpp	(revision d249a353be58868d41d2a7c57357038ffd779eba)
      +++ b/src/txdb.cpp	(revision bae884969d35469320ed9967736eb15b5d87edff)
      @@ -90,7 +90,81 @@
           return vhashHeadBlocks;
       }
       
      +/*
      + * Author:  David Robert Nadeau
      + * Site:    http://NadeauSoftware.com/
      + * License: Creative Commons Attribution 3.0 Unported License
      + *          http://creativecommons.org/licenses/by/3.0/deed.en_US
      + */
      +#if defined(_WIN32)
      +#include <windows.h>
      +#include <psapi.h>
      +
      +#elif defined(__unix__) || defined(__unix) || defined(unix) || (defined(__APPLE__) && defined(__MACH__))
      +#include <unistd.h>
      +#include <sys/resource.h>
      +
      +#if defined(__APPLE__) && defined(__MACH__)
      +#include <mach/mach.h>
      +
      +#elif (defined(_AIX) || defined(__TOS__AIX__)) || (defined(__sun__) || defined(__sun) || defined(sun) && (defined(__SVR4) || defined(__svr4__)))
      +#include <fcntl.h>
      +#include <procfs.h>
      +
      +#elif defined(__linux__) || defined(__linux) || defined(linux) || defined(__gnu_linux__)
      +#include <stdio.h>
      +
      +#endif
      +
      +#else
      +#error "Cannot define getCurrentRSS() for an unknown OS."
      +#endif
      +
      +/**
      + * Returns the current resident set size (physical memory use) measured
      + * in bytes, or zero if the value cannot be determined on this OS.
      + */
      +size_t getCurrentRSS()
      +{
      +#if defined(_WIN32)
      +    /* Windows -------------------------------------------------- */
      +    PROCESS_MEMORY_COUNTERS info;
      +    GetProcessMemoryInfo(GetCurrentProcess(), &info, sizeof(info));
      +    return (size_t)info.WorkingSetSize;
      +
      +#elif defined(__APPLE__) && defined(__MACH__)
      +    /* OSX ------------------------------------------------------ */
      +    struct mach_task_basic_info info;
      +    mach_msg_type_number_t infoCount = MACH_TASK_BASIC_INFO_COUNT;
      +    if (task_info(mach_task_self(), MACH_TASK_BASIC_INFO,
      +        (task_info_t)&info, &infoCount) != KERN_SUCCESS)
      +        return (size_t)0L;      /* Can't access? */
      +    return (size_t)info.resident_size;
      +
      +#elif defined(__linux__) || defined(__linux) || defined(linux) || defined(__gnu_linux__)
      +    /* Linux ---------------------------------------------------- */
      +    long rss = 0L;
      +    FILE* fp = NULL;
      +    if ((fp = fopen("/proc/self/statm", "r")) == NULL)
      +        return (size_t)0L;      /* Can't open? */
      +    if (fscanf(fp, "%*s%ld", &rss) != 1)
      +    {
      +        fclose(fp);
      +        return (size_t)0L;      /* Can't read? */
      +    }
      +    fclose(fp);
      +    return (size_t)rss * (size_t)sysconf(_SC_PAGESIZE);
      +
      +#else
      +    /* AIX, BSD, Solaris, and Unknown OS ------------------------ */
      +    return (size_t)0L;          /* Unsupported. */
      +#endif
      +}
      +
       bool CCoinsViewDB::BatchWrite(CoinsViewCacheCursor& cursor, const uint256 &hashBlock) {
      +    const auto start = std::chrono::steady_clock::now();
      +    size_t max_mem{getCurrentRSS()};
      +
           CDBBatch batch(*m_db);
           size_t count = 0;
           size_t changed = 0;
      @@ -129,7 +203,11 @@
               it = cursor.NextAndMaybeErase(*it);
               if (batch.SizeEstimate() > m_options.batch_write_bytes) {
                   LogDebug(BCLog::COINDB, "Writing partial batch of %.2f MiB\n", batch.SizeEstimate() * (1.0 / 1048576.0));
      +
      +            max_mem = std::max(max_mem, getCurrentRSS());
                   m_db->WriteBatch(batch);
      +            max_mem = std::max(max_mem, getCurrentRSS());
      +
                   batch.Clear();
                   if (m_options.simulate_crash_ratio) {
                       static FastRandomContext rng;
      @@ -146,8 +224,16 @@
           batch.Write(DB_BEST_BLOCK, hashBlock);
       
           LogDebug(BCLog::COINDB, "Writing final batch of %.2f MiB\n", batch.SizeEstimate() * (1.0 / 1048576.0));
      +
      +    max_mem = std::max(max_mem, getCurrentRSS());
           bool ret = m_db->WriteBatch(batch);
      +    max_mem = std::max(max_mem, getCurrentRSS());
      +
           LogDebug(BCLog::COINDB, "Committed %u changed transaction outputs (out of %u) to coin database...\n", (unsigned int)changed, (unsigned int)count);
      +    if (changed > 0) {
      +        const auto end{std::chrono::steady_clock::now()};
      +        LogInfo("BatchWrite took=%dms, maxMem=%dMiB", duration_cast<std::chrono::milliseconds>(end - start).count(), max_mem >> 20);
      +    }
           return ret;
       }
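
    The harness below drives a `loadtxoutset` run for each `-dbbatchsize` value (16 MiB and 64 MiB), parses the `BatchWrite took=…ms, maxMem=…MiB` lines from the archived logs, and plots flush times and memory usage for comparison: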
    
      import os
      import re
      import shutil
      import statistics
      import subprocess
      import time
      import datetime
      import argparse
      import matplotlib.pyplot as plt  # python3.12 -m pip install matplotlib --break-system-packages

      # Regex to parse logs
      BATCHWRITE_REGEX = re.compile(r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z) BatchWrite took=(\d+)ms, maxMem=(\d+)MiB")


      def parse_log(archive):
          """Parse the log file to extract elapsed times, flush times, and memory usage."""
          start_time = None
          elapsed, batchwrite_times, usage_snapshots = [], [], []
          with open(archive, "r") as f:
              for line in f:
                  if m := BATCHWRITE_REGEX.search(line):
                      dt = datetime.datetime.strptime(m.group(1), "%Y-%m-%dT%H:%M:%SZ")
                      if start_time is None:
                          start_time = dt
                      elapsed.append((dt - start_time).total_seconds())
                      batchwrite_times.append(int(m.group(2)))
                      usage_snapshots.append(int(m.group(3)))
          return elapsed, batchwrite_times, usage_snapshots


      def plot_results(results, output_dir):
          """Create separate plots for flush times and memory usage."""
          if len(results) != 2:
              print("plot_results() requires exactly 2 runs for comparison.")
              return

          (dbbatch0, elapsed0, flush0, mem0) = results[0]
          (dbbatch1, elapsed1, flush1, mem1) = results[1]

          # Compute percentage differences
          avg_flush0, avg_flush1 = statistics.mean(flush0), statistics.mean(flush1)
          max_mem0, max_mem1 = max(mem0), max(mem1)
          flush_improvement = round(((avg_flush0 - avg_flush1) / avg_flush0) * 100, 1)
          mem_increase = round(((max_mem1 - max_mem0) / max_mem0) * 100, 1)

          # Plot flush times
          plt.figure(figsize=(16, 8))
          plt.plot(elapsed0, flush0, color="red", linestyle="-", label=f"Flush Times (dbbatch={dbbatch0})")
          plt.axhline(y=avg_flush0, color="red", linestyle="--", alpha=0.5, label=f"Mean ({dbbatch0})={avg_flush0:.1f}ms")
          plt.plot(elapsed1, flush1, color="orange", linestyle="-", label=f"Flush Times (dbbatch={dbbatch1})")
          plt.axhline(y=avg_flush1, color="orange", linestyle="--", alpha=0.5, label=f"Mean ({dbbatch1})={avg_flush1:.1f}ms")
          plt.title(f"Flush Times (dbbatch {dbbatch0} vs {dbbatch1}) — {abs(flush_improvement)}% {'faster' if flush_improvement > 0 else 'slower'}")
          plt.xlabel("Elapsed Time (seconds)")
          plt.ylabel("Flush Times (ms)")
          plt.legend()
          plt.grid(True)
          plt.tight_layout()
          flush_out_file = os.path.join(output_dir, "plot_flush_times.png")
          plt.savefig(flush_out_file)
          print(f"Flush Times plot saved as {flush_out_file}")
          plt.close()

          # Plot memory usage
          plt.figure(figsize=(16, 8))
          plt.plot(elapsed0, mem0, color="blue", linestyle="-", label=f"Memory (dbbatch={dbbatch0})")
          plt.axhline(y=max_mem0, color="blue", linestyle="--", alpha=0.5, label=f"Max Mem ({dbbatch0})={max_mem0}MiB")
          plt.plot(elapsed1, mem1, color="green", linestyle="-", label=f"Memory (dbbatch={dbbatch1})")
          plt.axhline(y=max_mem1, color="green", linestyle="--", alpha=0.5, label=f"Max Mem ({dbbatch1})={max_mem1}MiB")
          plt.title(f"Memory Usage (dbbatch {dbbatch0} vs {dbbatch1}) — {abs(mem_increase)}% {'higher' if mem_increase > 0 else 'lower'}")
          plt.xlabel("Elapsed Time (seconds)")
          plt.ylabel("Memory Usage (MiB)")
          plt.legend()
          plt.grid(True)
          plt.tight_layout()
          mem_out_file = os.path.join(output_dir, "plot_memory_usage.png")
          plt.savefig(mem_out_file)
          print(f"Memory Usage plot saved as {mem_out_file}")
          plt.close()


      def loadtxoutset(dbbatchsize, datadir, bitcoin_cli, bitcoind, utxo_file):
          """Load the UTXO set and run the Bitcoin node."""
          archive = os.path.join(datadir, f"results_dbbatch-{dbbatchsize}.log")

          # Skip if logs already exist
          if os.path.exists(archive):
              print(f"Log file {archive} already exists. Skipping loadtxoutset for dbbatchsize={dbbatchsize}.")
              return

          os.makedirs(datadir, exist_ok=True)
          debug_log = os.path.join(datadir, "debug.log")

          try:
              print("Cleaning up previous run")
              for subdir in ["chainstate", "chainstate_snapshot"]:
                  shutil.rmtree(os.path.join(datadir, subdir), ignore_errors=True)

              print("Preparing UTXO load")
              subprocess.run([bitcoind, f"-datadir={datadir}", "-stopatheight=1"], cwd=bitcoin_core_path)
              os.remove(debug_log)

              print(f"Starting bitcoind with dbbatchsize={dbbatchsize}")
              subprocess.run([bitcoind, f"-datadir={datadir}", "-daemon", "-blocksonly=1", "-connect=0", f"-dbbatchsize={dbbatchsize}", f"-dbcache={440}"], cwd=bitcoin_core_path)
              time.sleep(5)

              print("Loading UTXO set")
              subprocess.run([bitcoin_cli, f"-datadir={datadir}", "loadtxoutset", utxo_file], cwd=bitcoin_core_path)
          except Exception as e:
              print(f"Error during loadtxoutset for dbbatchsize={dbbatchsize}: {e}")
              raise
          finally:
              print("Stopping bitcoind...")
              subprocess.run([bitcoin_cli, f"-datadir={datadir}", "stop"], cwd=bitcoin_core_path, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
              time.sleep(5)

          shutil.copy2(debug_log, archive)
          print(f"Archived logs to {archive}")


      if __name__ == "__main__":
          # Parse script arguments
          parser = argparse.ArgumentParser(description="Benchmark Bitcoin dbbatchsize configurations.")
          parser.add_argument("--utxo-file", required=True, help="Path to the UTXO snapshot file.")
          parser.add_argument("--bitcoin-core-path", required=True, help="Path to the Bitcoin Core project directory.")
          args = parser.parse_args()

          utxo_file = args.utxo_file
          bitcoin_core_path = args.bitcoin_core_path
          datadir = os.path.join(bitcoin_core_path, "demo")
          debug_log = os.path.join(datadir, "debug.log")
          bitcoin_cli = os.path.join(bitcoin_core_path, "build/src/bitcoin-cli")
          bitcoind = os.path.join(bitcoin_core_path, "build/src/bitcoind")

          # Build Bitcoin Core
          print("Building Bitcoin Core...")
          subprocess.run(["cmake", "-B", "build", "-DCMAKE_BUILD_TYPE=Release"], cwd=bitcoin_core_path, check=True)
          subprocess.run(["cmake", "--build", "build", "-j", str(os.cpu_count())], cwd=bitcoin_core_path, check=True)

          # Run tests for each dbbatchsize
          results = []
          for dbbatchsize in [16777216, 67108864]:  # Original and proposed
              loadtxoutset(dbbatchsize, datadir, bitcoin_cli, bitcoind, utxo_file)
              archive = os.path.join(datadir, f"results_dbbatch-{dbbatchsize}.log")
              elapsed, batchwrite_times, usage_snapshots = parse_log(archive)
              results.append((dbbatchsize, elapsed, batchwrite_times, usage_snapshots))

          # Plot results
          plot_results(results, bitcoin_core_path)
          print("All configurations processed.")
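
    To reproduce (the script filename is arbitrary): `python3 bench_dbbatchsize.py --utxo-file ~/utxo-840000.dat --bitcoin-core-path ~/bitcoin`; the comparison plots are written to the Bitcoin Core directory as `plot_flush_times.png` and `plot_memory_usage.png`.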
    

    For standard dbcache values the results are very close. The memory measurements aren't as scientific as I'd like (probably because there is still enough free memory; some runs even indicate that 16 MiB consumes a bit more memory than the 64 MiB version), but the trend in the produced plots seems clear: the batch writes are faster (and appear more predictable) with bigger batches, while memory usage is only slightly higher.

    [plots: plot_flush_times.png, plot_memory_usage.png]

    Is there any other way that you’d like me to test this @sipa, @luke-jr, @1440000bytes?

  12. DrahtBot added the label Needs rebase on Jan 16, 2025
  13. coins: bump default LevelDB write batch size to 64 MiB
    The UTXO set has grown significantly, and flushing it from memory to LevelDB often takes over 20 minutes after a successful IBD with large dbcache values.
    The final UTXO set is written to disk in batches, which LevelDB sorts into SST files.
    By increasing the default batch size, we can reduce overhead from repeated compaction cycles, minimize constant overhead per batch, and achieve more sequential writes.
    
    Experiments with different batch sizes (loaded via assumeutxo at block 840k, then measuring final flush time) show that 64 MiB batches significantly reduce flush time without notably increasing memory usage:
    
    | dbbatchsize | flush_sum (ms) |
    |-------------|----------------|
    | 8 MiB       | ~240,000       |
    | 16 MiB      | ~220,000       |
    | 32 MiB      | ~200,000       |
    | *64 MiB*    | *~150,000*     |
    | 128 MiB     | ~156,000       |
    | 256 MiB     | ~166,000       |
    | 512 MiB     | ~186,000       |
    | 1 GiB       | ~186,000       |
    
    Checking the impact of a `-reindex-chainstate` with `-stopatheight=878000` and `-dbcache=30000` gives:
    16 << 20
    ```
    2025-01-12T07:31:05Z Flushed fee estimates to fee_estimates.dat.
    2025-01-12T07:31:05Z [warning] Flushing large (26 GiB) UTXO set to disk, it may take several minutes
    2025-01-12T07:53:51Z Shutdown: done
    ```
    Flush time: 22 minutes and 46 seconds
    
    64 << 20
    ```
    2025-01-12T18:30:00Z Flushed fee estimates to fee_estimates.dat.
    2025-01-12T18:30:00Z [warning] Flushing large (26 GiB) UTXO set to disk, it may take several minutes
    2025-01-12T18:44:43Z Shutdown: done
    ```
    Flush time: 14 minutes and 43 seconds
    868413340f
  14. l0rinc force-pushed on Jan 16, 2025
  15. luke-jr commented at 10:56 pm on January 16, 2025: member

    I think those graphs need to be on height rather than seconds. The larger dbbatchsize making it faster means it gets further in the chain, leading to the higher max at the end…

    I would expect both lines to be essentially overlapping except during flushes.

  16. l0rinc commented at 10:14 am on January 17, 2025: contributor

    I would expect both lines to be essentially overlapping except during flushes.

    I was only measuring the memory here during flushes. There is no direct height available there, but if we instrument UpdateTipLog instead (and fetch some data from the assumeUTXO height), we’d get:

    dbbatchsize=16MiB:

    [image]

    dbbatchsize=64MiB (+ experimental sorting):

    [image]

    overlapped (blue is 16 MiB, green is 64 MiB): [image]

  17. DrahtBot removed the label Needs rebase on Jan 17, 2025
