index: Force database compaction in coinstatsindex #33306

pull fjahr wants to merge 2 commits into bitcoin:master from fjahr:2025-09-csi-compaction changing 6 files +21 −1
  1. fjahr commented at 1:56 pm on September 4, 2025: contributor

    This PR forces full database compaction on coinstatsindex every 10,000 blocks. This does not seem to come with any considerable performance impact, though I did not test on many different systems.

    Motivation: Without forced compaction, coinstatsindex currently ends up with a lot of 55kb files. This is likely related to very fast writes occurring during sync, but so far I could not pin down where exactly the difference in behavior comes from. It seems to be a LevelDB-internal thing though, since I cannot make out any difference in configuration between coinstatsindex and the other indexes.

    This was first reported here: #30469 (comment)

  2. DrahtBot added the label UTXO Db and Indexes on Sep 4, 2025
  3. DrahtBot commented at 1:56 pm on September 4, 2025: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage & Benchmarks

    For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33306.

    Reviews

    See the guideline for information on the review process.

    Type Reviewers
    User requested bot ignore l0rinc

    If your review is incorrectly listed, please copy-paste <!--meta-tag:bot-skip--> into the comment that the bot should ignore.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #30342 (kernel, logging: Pass Logger instances to kernel objects by ryanofsky)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

    LLM Linter (✨ experimental)

    Possible typos and grammar issues:

    • synce -> sync [typo: “synce” is misspelled; should be “sync” to read “upon finishing sync”]

    2026-02-01 21:13:26

  4. mzumsande commented at 3:48 pm on September 4, 2025: contributor

    Do you know if the current state with many small files and one large file has actual downsides, or does it just look weird?

    I am not an expert on LevelDB, just mentioning that two AIs I consulted say that the benefits of regular compaction would only be cosmetic and suggest just letting LevelDB do its thing.

  5. fjahr commented at 4:56 pm on September 4, 2025: contributor

    Good question and I am definitely not a LevelDB expert either. I think that, at the size and usage pattern of the coinstatsindex, we are not really facing any significant problems on a modern system, since the amount of data we are storing is very small relative to what LevelDB is capable of handling. But I think the impact is probably also non-zero and might increase over time as the blockchain grows. Very high-level rationale: there must be a reason LevelDB usually does these compactions.

    As far as I can tell, the best resource on this topic is this docs page: https://chromium.googlesource.com/external/leveldb/%2B/HEAD/doc/impl.md#timing From my understanding, these small files are L0 files, and having many of them means we potentially have overlapping keys among them, i.e. we are wasting disk space, and, per the document, “This may significantly increase the cost of reads due to the overhead of merging more files together on every read.”

  6. fjahr commented at 5:01 pm on September 4, 2025: contributor
    FWIW, from the patterns I have observed so far it seems likely to me that the problem only occurs during sync; afterwards the data of new blocks is actually compacted as it should be, and just those 55kb files are left over from the sync phase. So I am thinking about changing the approach to only compact once when the index is being set to m_synced = true, though this would make it a more complex change and I need to do some more testing to check whether my anecdotal observation is correct.
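    That compact-once-at-sync-completion idea could look roughly like the following sketch (hypothetical names throughout; the real base-index hook and CDBWrapper API may differ):

```cpp
#include <cassert>

// Illustrative stand-in for the index database; the real CompactFull() would
// call LevelDB's CompactRange(nullptr, nullptr) over the whole key range.
struct IndexDB {
    bool compacted = false;
    void CompactFull() { compacted = true; }
};

// Sketch of a coinstatsindex that compacts exactly once when sync finishes.
struct CoinStatsIndexSketch {
    IndexDB db;
    bool m_synced = false;

    // Hypothetical hook the base index would invoke upon finishing sync.
    void CustomFinishSync()
    {
        // One full compaction cleans up the small L0 files left over from
        // the fast writes of the initial sync phase.
        db.CompactFull();
        m_synced = true;
    }
};
```

    After such a hook runs, steady-state writes should be slow enough for LevelDB's own background compaction to keep up, so no further manual compaction would be needed.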
  7. in src/dbwrapper.cpp: in 62d3806127 outdated
    347@@ -348,6 +348,11 @@ bool CDBWrapper::IsEmpty()
    348     return !(it->Valid());
    349 }
    350 
    351+void CDBWrapper::CompactFull() const
    


    l0rinc commented at 7:03 pm on September 4, 2025:
    could you please separate the low risk changes from the riskier ones? This extraction seems like a simple refactor, would be great to see this in a very simple first commit, so that we can focus on the new call in src/index/coinstatsindex.cpp

    fjahr commented at 2:31 pm on September 8, 2025:
    Done
  8. in src/index/coinstatsindex.cpp: in 62d3806127 outdated
    223@@ -224,6 +224,14 @@ bool CoinStatsIndex::CustomAppend(const interfaces::BlockInfo& block)
    224     m_muhash.Finalize(out);
    225     value.second.muhash = out;
    226 
    227+    // Force compaction of the database from time to time since LevelDB doesn't
    228+    // seem to be handling this well itself at the speed coinstats index is
    229+    // syncing.
    


    l0rinc commented at 7:06 pm on September 4, 2025:
    as others have also mentioned, compaction is already a background process; it is triggered regularly and should already take care of these as far as I can tell. Is it urgent to do the expensive compaction regularly? Doing more compactions should take more time overall, can you please share your benchmarks in this regard?

    fjahr commented at 2:31 pm on September 8, 2025:

    As I and others have mentioned (see the linked PR), this doesn’t work for coinstatsindex and we end up with hundreds of 55kb files that seem to stay around forever or until we trigger a full compaction manually.

    Benchmarking didn’t seem to make sense to me because each compaction completes almost instantly, so comparing two runs, with and without compaction, may not have given an accurate answer. But I figured that if I time the runtime of each compaction to the millisecond and then add them up, that gives me a good indicator of how much time compaction takes. I have been using Signet for this test and the individual runtimes of the compactions range from 12ms to 220ms. Overall, the coinstatsindex sync (time between thread start and thread exit messages) took 5 minutes and 6 seconds. The cumulative time spent on compaction during that sync was 1,367ms. So the performance impact made the full sync of the index ~0.45% slower.

    Here is the patch I used for this, in case you want to reproduce the result: https://github.com/fjahr/bitcoin/commit/817990e515c1b64de59b2838c91032069651da5d
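    The timing approach described above can be sketched as follows (plain C++; CompactFullStub is a hypothetical stand-in for the CDBWrapper::CompactFull() call that the linked patch actually times):

```cpp
#include <chrono>

// Hypothetical stand-in for CDBWrapper::CompactFull(); the linked patch
// times the real LevelDB compaction call instead.
static void CompactFullStub() {}

// Time a single compaction to the millisecond.
long long TimedCompactionMs()
{
    const auto start = std::chrono::steady_clock::now();
    CompactFullStub();
    const auto end = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
}

// Sanity check on the numbers quoted above: 1,367ms of cumulative compaction
// over a 5m6s (306,000ms) sync is 1367.0 / 306000.0, roughly 0.45% overhead.
double CompactionOverhead() { return 1367.0 / 306000.0; }
```

    Summing TimedCompactionMs() over every compaction during sync yields the cumulative figure quoted above.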


    l0rinc commented at 6:05 pm on September 8, 2025:

    I haven’t checked the linked PR in detail (I have to have some stopping condition, and I think the PR should explain the important context) but I didn’t get the impression until now that we weren’t doing any compaction here before, I just thought you meant that “it’s struggling to keep up”.

    Have a few questions here:

    • could we rather add this to the other background compaction thread? Do we understand why there’s no automatic compaction?
    • does it make sense to do the compaction so often? There’s a reason that’s usually not done for every change, garbage collection is usually a rare event. Wouldn’t it suffice to do it once at the very end only? Maybe 10k isn’t that often, I’ll see if there’s a way to measure it.
    • We’re doing a compaction and logging for genesis (height == 0) as well, right? Is it deliberate?
    • We seem to be writing each entry separately to LevelDB in a single-value batch and fsync-ing immediately. Could we use some kind of batching like we do with the UTXO set? See https://github.com/bitcoin/bitcoin/blob/e419b0e17f8acfe577c35c62a8a71a19aad249f3/src/txdb.cpp#L94; TxIndex::CustomAppend and BaseIndex::Commit already write in batches. Wouldn’t that fix the coinstatsindex small files better than doing compactions manually (it should also be a lot faster)?
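    The batching idea from the last bullet can be illustrated in isolation like this (self-contained C++, with FakeBatch and FakeDB standing in for CDBBatch and CDBWrapper; the names and flush interval are illustrative, not the real API):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// Illustrative stand-in for CDBBatch: entries accumulate until flushed.
struct FakeBatch { std::vector<std::pair<int, std::string>> entries; };

// Illustrative stand-in for CDBWrapper: counts synced LevelDB writes.
struct FakeDB {
    std::size_t writes = 0;  // each WriteBatch is one synced write
    std::size_t entries = 0; // total entries persisted
    void WriteBatch(FakeBatch& batch) {
        ++writes;
        entries += batch.entries.size();
        batch.entries.clear();
    }
};

// Append one entry per block, but only flush the shared batch every
// `flush_interval` blocks instead of fsync-ing a single-value batch per block.
void AppendBlocks(FakeDB& db, int num_blocks, int flush_interval)
{
    FakeBatch batch;
    for (int height = 0; height < num_blocks; ++height) {
        batch.entries.emplace_back(height, "dbval");
        if ((height + 1) % flush_interval == 0) db.WriteBatch(batch);
    }
    if (!batch.entries.empty()) db.WriteBatch(batch); // final partial flush
}
```

    With 1,000 blocks and a flush interval of 100, this performs 10 synced writes instead of 1,000; larger writes should let LevelDB produce fewer tiny L0 files in the first place.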

    I want to see a global, macro benchmark as well, since the continuous compaction most likely slows it down - but I haven’t worked with this area of the code yet so it may take a while for me to set it up


    Here’s what I did to benchmark the performance of only the indexing - created 3 commits:

    For simplicity the application terminates after the indexes are successfully created and we’re measuring the time it took to iterate and index them.

    I’ve started the application with -connect=0 -coinstatsindex=1 to show the progress without any additional activity for stability and measure the output folder’s size and number of files.

     COMMITS="62d3ad137f70803b210623c51ea3b8a20996b39b 7779473a6e2f3b9fa5b8b97800e29d6fb3299aad abab616a2ba4701816c582dab24593f3f8200b0a 7d3fa1c8e99828751187b13fb1e2de11052c6215";\
     DATA_DIR="/mnt/my_storage/BitcoinData";\
     for COMMIT in $COMMITS; do\
         killall bitcoind 2>&1; rm -rf "$DATA_DIR/indexes" "$DATA_DIR/debug.log" &&\
         git fetch -q origin "$COMMIT" >/dev/null 2>&1 || true && git checkout -q "$COMMIT" && git log -1 --pretty='%h %s' "$COMMIT" &&\
         cmake -B build -G Ninja -DCMAKE_BUILD_TYPE=Release >/dev/null 2>&1 && ninja -C build >/dev/null 2>&1 &&\
         ./build/bin/bitcoind -datadir="$DATA_DIR" -connect=0 -coinstatsindex=1 -printtoconsole=0 2>&1;\
         grep 'Indexing finished migrating' "$DATA_DIR/debug.log";\
         du -h "$DATA_DIR/indexes/coinstatsindex/db";\
         find "$DATA_DIR/indexes/coinstatsindex/db" -type f -name '*.ldb' | wc -l;\
     done
    

    It seems the indexing takes several hours, I’ll post the results here after it finishes. Edit:

    • no compaction: 33869s, 211MB, 112 files
    • compact at every 10k blocks: 33998s, 211MB, 16 files
    • compact at every 100k blocks: 34055s, 211M, 36 files
    • compact at every block: 56403s, 301M, 131 files (seems suspicious)

    l0rinc commented at 2:31 am on September 11, 2025:

    Instead of the current single-entry LevelDB batching (creating a batch, serializing the DBHeightKey and std::pair<uint256, DBVal> and writing the batch, containing a single entry), I have extracted the CDBBatch outside of the loops and am committing and writing it regularly, see https://github.com/l0rinc/bitcoin/pull/37/files for a draft (I will need to break it into commits and maybe adjust some tests if needed).

    I will run a follow-up benchmark to see how well this performs, since the original 9.5 hour sync time (with or without compaction) is at least an order of magnitude slower than I expected it to be.

    I expect this to solve the small-file fragmentation issue as well - will get back on both. Edit:

    • indexes: batch index writes: 34188s, 211M, 8 files

    It solved the fragmentation, but didn’t speed up anything. I still think this is a better direction than adding manual compactions.


    fjahr commented at 9:16 pm on February 1, 2026:

    could we rather add this to the other background compaction thread? Do we understand why there’s no automatic compaction?

    I am not aware of a compaction thread that we control, can you elaborate on what you mean? And no, as I have tried to explain, there seems to be no easy answer, but the evidence points to LevelDB not being able to keep up at a certain write speed.

    does it make sense to do the compaction so often? There’s a reason that’s usually not done for every change, garbage collection is usually a rare event. Wouldn’t it suffice to to it once at the very end only? Maybe 10k isn’t that often, I’ll see if there’s a way to measure it.

    I had thought about doing it once at the end of sync and I am doing this now in the latest push. It just necessitates adding a hook for this.

    Instead of the current single-entry LevelDB batching (creating a batch, serializing the DBHeightKey and std::pair<uint256, DBVal> and writing the batch, containing a single entry), I have extracted the CDBBatch outside of the loops and committing and writing that regularly,

    Seems like a pretty big refactor and IMO not worth it only for the issue that we are dealing with here, given the edge cases and inconsistencies we have fixed in the indexes over the last few years.

  9. in src/index/coinstatsindex.cpp: in 62d3806127 outdated
    223@@ -224,6 +224,14 @@ bool CoinStatsIndex::CustomAppend(const interfaces::BlockInfo& block)
    224     m_muhash.Finalize(out);
    225     value.second.muhash = out;
    226 
    227+    // Force compaction of the database from time to time since LevelDB doesn't
    228+    // seem to be handling this well itself at the speed coinstats index is
    229+    // syncing.
    230+    if (block.height % 10000 == 0) {
    231+        LogInfo("Compacting database of coinstatsindex");
    


    l0rinc commented at 7:08 pm on September 4, 2025:
    shouldn’t this be a debug? Do we need 91 instances of this? (haven’t checked, but will this be triggered for genesis?)

    fjahr commented at 2:31 pm on September 8, 2025:
    I am pretty indifferent, so making it debug. I had to add a new category since none of the existing seemed to fit.
  10. in src/dbwrapper.h: in 62d3806127 outdated
    283@@ -284,6 +284,8 @@ class CDBWrapper
    284         ssKey2 << key_end;
    285         return EstimateSizeImpl(ssKey1, ssKey2);
    286     }
    287+
    288+    void CompactFull() const;
    


    l0rinc commented at 7:09 pm on September 4, 2025:
    is const the right thing to advertise here? And is it a noexcept?

    fjahr commented at 2:31 pm on September 8, 2025:
    It doesn’t modify the data, but it does modify the storage on disk, so since there are changes I think you are right that this shouldn’t be const. I don’t think it’s noexcept because we are doing stuff on disk and there are a lot of issues that could come from that.
  11. in src/index/coinstatsindex.cpp: in 62d3806127 outdated
    223@@ -224,6 +224,14 @@ bool CoinStatsIndex::CustomAppend(const interfaces::BlockInfo& block)
    224     m_muhash.Finalize(out);
    225     value.second.muhash = out;
    226 
    227+    // Force compaction of the database from time to time since LevelDB doesn't
    228+    // seem to be handling this well itself at the speed coinstats index is
    229+    // syncing.
    230+    if (block.height % 10000 == 0) {
    


    l0rinc commented at 7:13 pm on September 4, 2025:

    nit: to avoid counting the 0s manually:

    if (block.height % 10'000 == 0) {
    

    or

    if (!(block.height % 10'000)) {
    

    fjahr commented at 2:31 pm on September 8, 2025:
    Done
  12. fjahr force-pushed on Sep 8, 2025
  13. fjahr force-pushed on Sep 8, 2025
  14. in src/logging.h:97 in bf99e7fbc6 outdated
    93@@ -94,6 +94,7 @@ namespace BCLog {
    94         TXRECONCILIATION = (CategoryMask{1} << 26),
    95         SCAN        = (CategoryMask{1} << 27),
    96         TXPACKAGES  = (CategoryMask{1} << 28),
    97+        INDEX       = (CategoryMask{1} << 29),
    


    l0rinc commented at 5:46 pm on September 8, 2025:
    do you think this is better than using the existing LEVELDB category? Seems these categories are in short supply and compaction seems like a LevelDB thing to me.

    fjahr commented at 9:13 pm on February 1, 2026:
    Ok, I was thinking an INDEX category would be useful anyway, but I am using the LEVELDB one now and dropped the commit adding the new category.
  15. DrahtBot added the label Needs rebase on Sep 9, 2025
  16. in src/dbwrapper.cpp:353 in 7bc77c5c37
    347@@ -348,6 +348,11 @@ bool CDBWrapper::IsEmpty()
    348     return !(it->Valid());
    349 }
    350 
    351+void CDBWrapper::CompactFull()
    352+{
    353+    DBContext().pdb->CompactRange(nullptr, nullptr);
    


    l0rinc commented at 2:58 am on September 9, 2025:

    nit: since we’re touching it:

    DBContext().pdb->CompactRange(/*begin=*/nullptr, /*end=*/nullptr);
    

    fjahr commented at 9:13 pm on February 1, 2026:
    Done
  17. l0rinc changes_requested
  18. l0rinc commented at 5:06 am on September 9, 2025: contributor

    I’m missing benchmark results to see how this change affects the already very slow performance. I will post my own results as soon as they finish.

    I would also like to understand why database compaction isn’t enabled for this LevelDB instance.

  19. db, refactor: Add full compaction helper method 8fd285b24f
  20. index: Force database compaction in coinstatsindex
    The coinstatsindex ends up using a lot of 55kb files without forced compaction. This is likely related to very fast writes occurring during sync.
    cb7f981018
  21. fjahr force-pushed on Feb 1, 2026
  22. fjahr commented at 9:25 pm on February 1, 2026: contributor

    I am giving this another chance to receive some concept acks and see if the pain is big enough to add a fix.

    Addressed comments by @l0rinc; in particular, we now only compact once upon completion of sync. This required adding a hook for this to the base index but it’s still a very minimal change.

    There is still virtually zero performance impact from this change. I synced the coinstatsindex on signet on my machine; at some point there were 2500 of these 55kb files, and after compaction this was reduced to 4 files. That compaction process took less than a second (I didn’t have more granular logging in place).

  23. l0rinc commented at 10:26 pm on February 1, 2026: contributor

    Approach NACK

    I think we’re just treating the symptoms here; the problem is that we write in batches of a single element. The solution would be to batch the writes instead of writing them one-by-one and compacting, please see my comments:

    We seem to be writing each entry separately to LevelDB in a single-value batch and fsync-ing immediately? Could use some kind of batching like we do with the UTXO set? https://github.com/bitcoin/bitcoin/blob/e419b0e17f8acfe577c35c62a8a71a19aad249f3/src/txdb.cpp#L94 and TxIndex::CustomAppend and BaseIndex::Commit already write in batches - wouldn’t that fix the coinstatsindex small files better than doing compactions manually (it should also be a lot faster)

  24. fjahr commented at 10:38 pm on February 1, 2026: contributor

    the problem is that we write in batches of a single element

    How can this be the problem when we do the exact same thing in BlockFilterIndex and we don’t see the problem there? The problem is the write speed of coinstatsindex.

  25. l0rinc commented at 10:51 pm on February 1, 2026: contributor

    exact same thing in BlockFilterIndex and we don’t see the problem there

    I will investigate if you don’t think it works; I remembered that I have tried it before. Removed the NACK in the meantime.

  26. DrahtBot removed the label Needs rebase on Feb 1, 2026
  27. furszy commented at 11:22 pm on February 1, 2026: member
    The second commit of #26966 already allows us to process batches of blocks (5eaf553d047aa720924c05b2fb8580fe744d940f). It includes some extra code because I implemented it with parallelization in mind, not just batching. But it shouldn’t take me much effort to decouple it (if needed). Happy to reduce the size of that PR too.
  28. fjahr commented at 11:38 pm on February 17, 2026: contributor
    Closing for now since this will likely be resolved with #34489
  29. fjahr closed this on Feb 17, 2026


github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-02-27 09:13 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me