node: allocate index caches proportional to usage patterns #34636

pull: svanstaa wants to merge 1 commit into bitcoin:master from svanstaa:improve-index-cache-allocation, changing 1 file (+14 −5)
  1. svanstaa commented at 1:14 PM on February 20, 2026: none

    The current cache allocation for optional indexes (txindex, txospenderindex, blockfilterindex) uses a sequential total_cache / 8 approach where each index gets 1/8 of the remaining budget after the previous index has been allocated. This means the order in which indexes appear in the code silently determines how much cache each one gets.

    Index Current share of total
    txindex ~12%
    txospenderindex ~11%
    blockfilterindex ~10%

    This is unintuitive, undocumented, and probably doesn't reflect actual usage patterns. This PR replaces the sequential 1/8 allocation with explicit percentages based on how the indexes are typically used. The current values are an educated guess and subject to further benchmarking and research into typical client usage patterns.

    Index Allocation Rationale
    txindex 10% Serves getrawtransaction RPCs with mostly unique lookups across the entire blockchain: low cache reuse
    txospenderindex 5% Serves gettxspendingprevout RPCs with very specific outpoint queries: likely the least repetitive access pattern
    blockfilterindex 5% Serves BIP 157 light clients that repeatedly query the same recent blocks: highest cache benefit

    UPDATE: blockfilterindex allocation changed from 15% to 5% in the course of the discussion

    This is a continuation of the related discussion: #24539 (review) and #31483.

    Further feedback and input is very much appreciated.

  2. DrahtBot commented at 1:14 PM on February 20, 2026: contributor


    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.


    Reviews

    See the guideline for information on the review process.

    Type Reviewers
    Concept ACK fjahr, hodlinator

    If your review is incorrectly listed, please copy-paste `<!--meta-tag:bot-skip-->` into the comment that the bot should ignore.


    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #31260 (scripted-diff: Type-safe settings retrieval by ryanofsky)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.


  3. in src/node/caches.cpp:60 in 223f4ee0f1 outdated
      58 |      total_cache -= index_sizes.tx_index;
      59 | -    index_sizes.txospender_index = std::min(total_cache / 8, args.GetBoolArg("-txospenderindex", DEFAULT_TXOSPENDERINDEX) ? MAX_TXOSPENDER_INDEX_CACHE : 0);
      60 |      total_cache -= index_sizes.txospender_index;
      61 |      if (n_indexes > 0) {
      62 | -        size_t max_cache = std::min(total_cache / 8, MAX_FILTER_INDEX_CACHE);
      63 | +        size_t max_cache = std::min(total_cache * 15 / 100, MAX_FILTER_INDEX_CACHE);
    


    fjahr commented at 1:48 PM on February 20, 2026:

    Since the txindex and txospenderindex sizes are subtracted above, this isn't a real 15 percent anymore; it's only relative, like before. The txospenderindex above, by contrast, does get a real 5% now because of the reordering. I am not sure what the right behavior is yet, but I have a feeling that using the actual number rather than a relative one makes more sense. Either way it should definitely be consistent: either all are relative or none.


  4. in src/node/caches.cpp:53 in 223f4ee0f1 outdated
      48 | +    // - blockfilterindex (15%): serves BIP 157 light clients that repeatedly
      49 | +    //   query recent blocks, benefiting most from LevelDB cache.
      50 | +    // - txindex (10%): serves getrawtransaction RPCs with mostly unique,
      51 | +    //   non-repetitive lookups across the entire blockchain.
      52 | +    // - txospenderindex (5%): serves gettxspendingprevout RPCs with very
      53 | +    //   specific, rarely repeated outpoint queries.
    


    fjahr commented at 1:49 PM on February 20, 2026:

    Please add a line about coinstatsindex missing here being a conscious decision since the usage pattern doesn't seem to suggest it would be necessary


  5. fjahr commented at 1:54 PM on February 20, 2026: contributor

    Concept ACK

    Thanks for kicking this off, I think we can find a distribution logic that makes more sense than what we currently have and better documentation is needed.

  6. svanstaa force-pushed on Feb 20, 2026
  7. svanstaa force-pushed on Feb 20, 2026
  8. svanstaa marked this as a draft on Feb 21, 2026
  9. DrahtBot added the label CI failed on Feb 22, 2026
  10. DrahtBot removed the label CI failed on Feb 22, 2026
  11. svanstaa force-pushed on Mar 13, 2026
  12. fjahr commented at 8:32 PM on March 15, 2026: contributor

    Finally found the time to think about this in a bit more detail.

    First, what I learned about the LevelDB cache: it keeps uncompressed blocks of data. The block size is ~4 KB. Caching works particularly well when there is sequential reading, i.e. multiple values in one block are used, in addition of course to individual values being read more than once.

    Blockfilterindex

    I am purposefully ignoring here that blockfilter-based light client adoption is probably not where people expected it to be by now when the index was introduced. I think there is still a chance that adoption will increase with new types of wallets and light clients being built on this tech. The usage pattern from the client side is: they come online after some time and sync by requesting the filters accumulated since the last time they were awake. Potentially a single client queries multiple nodes so it can compare whether it got the correct filters. So the index is used with sequential requests, and there is a great likelihood that these recent blocks will be re-requested many times, so this is perfect for the cache.

    Since the cache should work really well here, if possible we should give it as much as we can, up until the point where we don't expect it would be utilized anymore. To estimate what data would typically be queried, let's assume clients come online every two weeks. Somehow that is my mental model for this kind of light client, probably shaped by the default LN channel-closing dispute period. It also feels a bit weird to optimize for some hypothetical user that only comes online every couple of months, since they would likely not have much overall impact on the network, but happy to hear if I am overlooking something. We may also occasionally see a client that is just syncing the chain, but that should not guide our decisions either. The sizes stored in the index are key: 1-byte type prefix + 4-byte height, and value: 4-byte file number + 4-byte offset + 32-byte filter hash + 32-byte filter header => 77 bytes total. For two weeks: 2016 blocks * 77 bytes = ~155 KB. Let's assume there is some overhead from LevelDB and I probably missed something else, and go with 200 KB. The actual filters are stored as flat files anyway, so the LevelDB cache doesn't help us there.

    Compared to the ~200 KB we would likely need to serve >80% of cache hits, the current setting actually seems more reasonable than the larger one suggested here. Even with our minimal 450 MB cache, 10% (45 MB) would allow us to store 225 times that. And the actual LevelDB on mainnet for the basic blockfilter index is just 60 MB on my node. So nodes that increased their DB cache from the old default have already been configuring this value too high for a long time, and with the new default of 2 GB we would give it 200 MB, which is way too much.

    My suggestion here would be: give it 5% and cap it at 80 MB. It could also be 10%; given the cap, it doesn't matter that much I think. The cap would need to be raised in the future if the whole DB exceeds it.

    Txospenderindex

    This only helps gettxspendingprevout find on-chain prevouts and targets LN and other L2s as users. It's a bit hard to say because the index is new, but it seems the cache effectiveness should be really low here. The queried values will likely be pretty randomly distributed across the blockchain, though of course not before the L2 existed. I also don't see how the values would usually be requested multiple times. When a relevant spender is found, the L2 implementation will likely store it in its own DB and not query Core for it again. The records are also tiny, since only the key is used.

    I would suggest setting this to 1%, or maybe 2-3% if we don't want to be that aggressive for now since the index is new. Maybe I am overlooking something and a single node is shared between multiple L2 instances? But that does not seem like something we should be worrying about.

    Txindex

    I found this the hardest to judge because it's involved in a few things via GetTransaction. The queried values are likely very random as well; sequential access will probably only happen with one-time syncs of some wallet or block explorer. There may be other applications and use cases where the cache could be useful that I am missing, but the best I could come up with is a block explorer where a specific transaction is getting hyped (an interesting message in an OP_RETURN or so) and is then requested by many users in a short period of time.

    I think a decrease to 5% would be reasonable given our default increase to 2 GB. A node that doesn't have this available should hopefully not power a block explorer. If we want to be extra careful we could keep 10%, but for this one I am hoping other reviewers have somewhat clearer ideas of what it should be.

    EDIT: I seem to have misremembered and our new default is actually at 1 GB, so then maybe let's stay with 10%.

    Benchmarking

    I would suggest benchmarking IBD with all indexes enabled before and after the change to see if this slows it down or speeds it up. If there is a difference maybe drill down where the indexes have the biggest impact.

    I got this idea from the comment at the top of the file linking to an old gmax comment. While it's interesting to check that, the comment could be clarified, because it doesn't say what there is a "meaningful difference" of, and even the linked comment alone doesn't actually clarify this.

    Scattered thoughts

    Not sure if we need to touch the MAX_ values at the top. They feel kind of arbitrary too, but I guess let's focus on getting agreement on the allocation; the max values can follow.

    You can take this PR out of draft status unless you are still running tests; it's ready to be reviewed. The bikeshedding on the exact allocation may take a while, but there is nothing blocking this, so I think it should be marked ready for review. Otherwise people may hold off posting their opinions because they think you are working on more changes.

  13. svanstaa marked this as ready for review on Mar 17, 2026
  14. svanstaa commented at 11:16 PM on March 17, 2026: none

    Thanks for the input! To check for potential impact on IBD, I ran some benchmarks. It seems the overall impact on IBD times is marginal: between the two cache configurations (v31.0rc1 / PR34636), the difference is less than 2 minutes. Only with the optional indexes disabled is there a drastic speed improvement.

    IBD Benchmark Results

    All runs: mainnet, default dbcache (1024 MiB), assumevalid default (height 938343), blocks served locally, CPU: AMD Ryzen 9 9950X3D

    Run Binary Indexes Coins cache Duration Final height
    1 v31.0rc1 txindex + blockfilterindex + txospenderindex 676.0 MiB 5h 45m 45s 940,911
    2 PR #34636 txindex + blockfilterindex + txospenderindex 706.8 MiB 5h 47m 00s 940,944
    3 PR #34636 none ~1014 MiB 3h 36m 44s 940,994

    Cache allocation breakdown (1024 MiB dbcache, all indexes enabled)

    Component v31.0rc1 PR #34636
    txindex 128.0 MiB 102.4 MiB
    txospenderindex 112.0 MiB 51.2 MiB
    blockfilterindex 98.0 MiB 153.6 MiB
    coins (UTXO set) 676.0 MiB 706.8 MiB

    The only thing measurable so far is the IBD time, which in theory depends on the coins cache size, that is, the space not allocated to the optional indexes - but the impact seems to be negligible. So unless someone comes up with a clever way to benchmark the optional cache sizes for benefit relative to size, we should use the above argumentation by @fjahr for the cache sizes:

    • txindex: leave at 10% (hard to find arguments, leave as it is)
    • txospenderindex: lower to 2% (targets LN and L2 users, sequential lookups improbable)
    • blockfilterindex: lower to 5% (a light client coming online every other week would need only up to 200 KB)

    With the discussed usage patterns, the last two indexes could be allocated even less, but as the impact on IBD is negligible, it could just as well be left like this.

    Currently running more benchmarks to check that the proposed values do not hurt IBD time either, even though that is not expected.

    UPDATE: the smaller allocations to txospenderindex and blockfilterindex actually do hurt IBD time. This was unexpected, and the run was repeated to confirm.

    Run Binary Indexes Coins cache Duration Final height
    1 v31.0rc1 (old /8) all 676.0 MiB 5h 45m 45s 940,911
    2 PR #34636 (10%/5%/15%) all 706.8 MiB 5h 47m 00s 940,944
    3 PR #34636 none ~1014 MiB 3h 36m 44s 940,994
    4 fjahr (10%/2%/5%) all 839.9 MiB 6h 11m 43s 941,019
    5 fjahr (10%/2%/5%) all 839.9 MiB 6h 21m 01s 941,110

    UPDATE:

    Run Binary Indexes Coins cache Duration Final height
    6 10%/5%/5% all ~787 MiB 5h 39m 17s 941,187

    After bumping the txospenderindex cache up to 5%, the IBD time returned to the normal baseline close to 5:45. So with the above arguments by @fjahr that these sizes are more than sufficient for the envisioned use cases, and the benchmark confirming that IBD is not impacted, the final proposal is 10%/5%/5% for txindex/txospenderindex/blockfilterindex.

  15. svanstaa force-pushed on Mar 19, 2026
  16. in src/node/caches.cpp:1 in dca43fa7b8 outdated


    hodlinator commented at 11:49 AM on April 7, 2026:

    commit message in dca43fa7b8ad85a57df765eba51ef326581898b6:

    - allocate index caches proportional to usage patterns
    + node: allocate index caches proportional to usage patterns
    
      add comment explaining coinstatsindex cache exclusion
    
      update cache allocations to 10%/5%/5%
    -
    - node: allocate index caches proportional to usage patterns
    

    hodlinator commented at 12:07 PM on April 7, 2026:

    PR description currently specifies 15% for the blockfilterindex, dca43fa7b8ad85a57df765eba51ef326581898b6 has 5% in the code & comment.


    hodlinator commented at 12:45 PM on April 7, 2026:

    kernel::CacheSizes still uses the same sequentially dependent style of calculating its sizes, should we change that too? https://github.com/bitcoin/bitcoin/blob/04480c255832359364a6114ca1d7d023eaef8041/src/kernel/caches.h#L28-L34


    svanstaa commented at 7:21 PM on April 7, 2026:

    cleaned up, thanks


    svanstaa commented at 7:23 PM on April 7, 2026:

    Clarified and changed to 5% in the initial proposal to avoid confusion


    svanstaa commented at 7:34 PM on April 7, 2026:

    For all practical purposes (plausible system memory sizes), MAX_BLOCK_DB_CACHE and MAX_COINS_DB_CACHE will be capped at 2 MiB and 8 MiB anyway, so changes here would be cosmetic. I'd rather not touch this now.


    hodlinator commented at 8:20 PM on April 7, 2026:

    Oh, didn't mean to suggest adding a 2-space indent for the body, was meant to compensate for the +- diff symbols.


    svanstaa commented at 10:04 AM on April 8, 2026:

    Still looks way better than before :) . Fixed the indents too.

  17. in src/node/caches.cpp:73 in dca43fa7b8 outdated
      72 | -    total_cache -= index_sizes.tx_index;
      73 | -    index_sizes.txospender_index = std::min(total_cache / 8, args.GetBoolArg("-txospenderindex", DEFAULT_TXOSPENDERINDEX) ? MAX_TXOSPENDER_INDEX_CACHE : 0);
      74 | -    total_cache -= index_sizes.txospender_index;
      75 | +    index_sizes.tx_index = std::min(total_cache * 10 / 100, args.GetBoolArg("-txindex", DEFAULT_TXINDEX) ? MAX_TX_INDEX_CACHE : 0);
      76 | +    index_sizes.txospender_index = std::min(total_cache * 5 / 100, args.GetBoolArg("-txospenderindex", DEFAULT_TXOSPENDERINDEX) ? MAX_TXOSPENDER_INDEX_CACHE : 0);
      77 |      if (n_indexes > 0) {
    


    hodlinator commented at 12:27 PM on April 7, 2026:

    nit: I can understand wanting forwards compatibility of block filters at the protocol level, but this code is annoying. We have only had one type since 2018, AFAIK.

    n_indexes should more appropriately be named block_filter_indexes, and maybe we could scale back the forwards compatibility to just bool block_filter_index. But let's skip that aspect for this PR.

    Maybe we could add an assert to nail down the logic a bit?

        if (n_indexes > 0) {
            Assert(n_indexes == 1); // Currently only support one type of block filter index
    

    svanstaa commented at 7:43 PM on April 7, 2026:

    Yes, ACK. Assertions do not hurt, and they make it clearer what is going on here.


    fjahr commented at 8:48 PM on April 7, 2026:

    I would prefer if changes to blockfilterindex flexibility were kept out of scope for this PR. As far as I remember, there were originally two indexes in the PR but only one was merged. The idea was that protocols could still implement their custom block filter indexes. While that hasn't played out yet AFAICT, I wouldn't be surprised if the recent sharp increase in Layer 2 concepts leads to some additional demand for this feature. So I think it would do this PR a disservice to open this discussion here.


    hodlinator commented at 7:04 AM on April 8, 2026:

    The only way I can interpret the current code under a multi-blockfilter-index future is that index_sizes.filter_index specifies an equal allocation size for all blockfilter index types. With the current 5% allocation for all blockfilter types, adding a second one would halve the allocation. Furthermore, this assumes an equal need for cache across speculative block filter types. It seems more reasonable to add a separate, independent IndexCacheSizes field for the new block filter index if we ever add one.

    But I'd be okay with dropping the assert as it's controversial.


    fjahr commented at 10:07 AM on April 8, 2026:

    Furthermore, this assumes an equal need for cache across speculative block filter types.

    I think it's reasonable to assume the future/custom filters would also be stored in flat files, and the indexing code that is relevant for caching should be constant across indexes. Maybe usage would be a bit different (clients coming more or less frequently), but I think at least that part of the assumptions here is reasonable.

    The only way I can interpret the current code under a multi-blockfilter-index future is that index_sizes.filter_index specifies an equal allocation size for all blockfilter index types. With the current 5% allocation for all blockfilter types, adding a second one would result in a halving of the allocation.

    Yeah, that was not my intention. I think each blockfilterindex should probably get 5% (up to a limited number of indexes, of course). But it seems questionable whether multiple of these would even be running at the same time on the same node when someone has a use case for another blockfilterindex. Presumably they would build their custom index so that it fits all the needs of their protocol.

    On the other hand, even if somebody added a second custom index and didn't change this number, it probably wouldn't be the end of the world. Hopefully they would change MAX_FILTER_INDEX_CACHE if they are smart enough to build their own index.

    Seems more reasonable to add a separate independent IndexCacheSizes-field for the new block filter index if we ever add one.

    Going more explicit/expressive and less flexible is reasonable looking at the state of this code at the moment. But another way would be to increase the whole blockfilterindex allocation to 10% but cap the individual indexes at 5%. Then each index gets 5% unless there are >2 blockfilterindexes, which then have to share the 10% (it could also be 15% and >3; I haven't thought about what's reasonable here). This keeps the same level of flexibility, and I think I would prefer it since it better preserves the current behavior of the code. If we are making the code less flexible in terms of introducing a separate blockfilterindex, as I said, I think it's better to keep that in a separate PR so it can be discussed independently. I am not against exploring the idea; it's rather that I would like this PR to get as much feedback as possible, and potentially, if the feedback is positive, we may want to simplify the code in other places in a similar way.

    For the PR here I would prefer something like this I think (with the necessary documentation change of course):

        index_sizes.tx_index = std::min(total_cache * 10 / 100, args.GetBoolArg("-txindex", DEFAULT_TXINDEX) ? MAX_TX_INDEX_CACHE : 0);
        index_sizes.txospender_index = std::min(total_cache * 5 / 100, args.GetBoolArg("-txospenderindex", DEFAULT_TXOSPENDERINDEX) ? MAX_TXOSPENDER_INDEX_CACHE : 0);
        if (n_indexes > 0) {
    -       Assert(n_indexes == 1); // Currently only support one type of block filter index
    -       size_t max_cache = std::min(total_cache * 5 / 100, MAX_FILTER_INDEX_CACHE);
    +       size_t pct = std::min(n_indexes, size_t{2}) * 5;
    +       size_t max_cache = std::min(total_cache * pct / 100, MAX_FILTER_INDEX_CACHE);
            index_sizes.filter_index = max_cache / n_indexes;
            total_cache -= index_sizes.filter_index * n_indexes;
        }
    

    hodlinator commented at 12:27 PM on April 8, 2026:

    The fact that we only have one type of blockfilter index in the current code, and not even an open PR to add one, makes the case for Assert(n_indexes == 1) stronger than std::min(n_indexes, size_t{2}) or std::min(n_indexes, size_t{3}) IMO. The min() expression leaves the code fuzzy and less well defined for a hypothetical future. Can't we wait until we need to cross that bridge?


    fjahr commented at 1:53 PM on April 8, 2026:

    We could leave the code here as is. Like I said in my initial response in this thread, this is a tricky topic that should not hold up this PR, which is clearly an improvement overall.

    The fact that we only have one type of blockfilter index in the current code, and not even an open PR to add one makes the case for Assert(n_indexes == 1) stronger than std::min(n_indexes, size_t{2}) or std::min(n_indexes, size_t{3}) IMO.

    I disagree, because the assert works against the intention of the code as it was originally written, while my suggestion tries to preserve it. This is not only about whether new filter types are on the horizon. The code as it stands was written with the intention of making it easy to add a new blockfilter index type; the assert makes that a little bit harder. As far as I can remember, the rejected index types contributed to the code being kept flexible, so that other protocols can add their custom filter type more easily. This is what I mean by having a broader discussion on it. If we just look at the state of the PRs and conclude there is no interest in new filter types, we may rather want to remove the ~200 LoC that make it possible to add new filter types easily, instead of adding the assert.

    The min()-expression leaves the code fuzzy and less well defined for a hypothetical future. Can't we wait until we need to cross that bridge?

    I don't understand why it's fuzzy and less well defined. The assert renders the parts of the code below it useless, so we are not getting rid of the "hypothetical future" with it alone, and I think that inconsistency could be pretty confusing.

    FWIW, it's not completely hypothetical. Here is a fork of Bitcoin Core that adds a custom filter type: https://github.com/bitcoinknots/bitcoin/blob/29.x-knots/src/blockfilter.cpp#L24 Wasabi used the same filter in production for some time. I don't know if there are other projects that do something similar but this question should be asked before making it harder to maintain custom filter types. Usually we don't care much about what forks do but in this case I think the code was intentionally left like this to leave the option open for making these changes easily.


    hodlinator commented at 6:37 PM on April 8, 2026:

    Going min(..., 2) kicks the can down the road for a hypothetical future of 2 filters, requiring further kicking once we have 3. The assert accepts and documents what we implement - but is easy to remove/change if we ever add a second filter type.

    Interesting regarding Knots having 2 filters and being used by Wasabi Wallet. I scanned through the Core PR which originated the segwit v0 filter type. The justification for its existence seems to approach 0 as non-segwit transactions become rarer. #18223 (comment)

    If we just look at the state of the PRs and conclude there is no interest in new filter types we may rather just want to remove ~200 LoCs that make it possible to add new filter types easily instead of adding the assert.

    I would be in mild agreement with reducing the line count until a long-term credibly useful second filter type is proposed.


    fjahr commented at 6:52 PM on April 8, 2026:

    Going min(..., 2) kicks the can down the road for a hypothetical future of 2 filters, requiring further kicking once we have 3.

    No, we need to set a maximum somewhere for the theoretical case where someone runs a lot of those, and I would say 10% is reasonable. If someone wants to run 3, for example, then they have to work with less memory (~3.3% each). So IMO this is future-proof as far as we can reasonably foresee.


    hodlinator commented at 7:09 PM on April 8, 2026:

    #18223 specifies that disk usage by index types:

    Total size of serialized filters as of block 619361:

    Basic = 4.76 GB, v0 = 252 MB

    Although they would converge towards equal usage and content as segwit transactions become more ubiquitous, those numbers are far from equal. It's unknown whether a future filter type would compress the data in the same way. Giving them equal allocations is the least-bad choice given that we have no idea what a second filter would be. It would be fairly correct for Knots as long as they keep their second filter, I guess. But I disagree with feeding the hypotheticals as far as our codebase is concerned.


    fjahr commented at 7:19 PM on April 8, 2026:

    Although they would converge towards equal usage and content given that segwit transactions become more ubiquitous, those numbers are far from equal.

    Is this disk size of flat files, ldb or both? Because only ldb size is relevant for the ldb caching of course and the ldb size should be equal. I mentioned this above here: #34636 (review), but maybe I am misunderstanding something.


    hodlinator commented at 7:45 PM on April 8, 2026:

    Is this disk size of flat files, ldb or both?

    My guess is that they were specifying the combination.

    maybe I am misunderstanding something.

    I was incorrectly assuming both keys and all values live in LevelDB. You are right - as LevelDB just serves as an index of block height + hash + header + FlatFilePos, that aspect would be the same unless the second index chooses to diverge from that design. (Access patterns might differ, as you say.)

  18. hodlinator commented at 12:59 PM on April 7, 2026: contributor

    Concept ACK dca43fa7b8ad85a57df765eba51ef326581898b6

    Not sure if/when I will find disk space to do benchmarks on. Given that #34692 bumped the default total from 450 MiB to 1 GiB (assuming the memory query succeeds), I think it might be useful to have a table such as:

    Quantity Pre-#34692 Post-dca43fa7 (this PR)
    Total cache 450.00MiB 1024.00MiB
    txindex 56.25MiB 102.40MiB
    txospenderindex 49.22MiB 51.20MiB
    blockfilterindex 43.07MiB 51.20MiB

    These absolute values show that with the current 10%/5%/5% allocations, the indexes actually see net increases compared to before #34692.

    Tip: use -stopatheight to exclude any variability stemming from differing heights between IBD runs.

  19. svanstaa force-pushed on Apr 7, 2026
  20. svanstaa commented at 7:56 PM on April 7, 2026: none

    Thanks for the review! Much appreciated. I have applied all the suggestions/fixes. (I should probably squash the 2 commits, but will leave them for now, in case there are more opinions on the assert.)

  21. node: allocate index caches proportional to usage patterns
    add comment explaining coinstatsindex cache exclusion
    
    update cache allocations to 10%/5%/5%
    d06dabf26b
  22. svanstaa force-pushed on Apr 8, 2026
  23. DrahtBot added the label CI failed on Apr 9, 2026
  24. DrahtBot removed the label CI failed on Apr 9, 2026

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-04-17 06:12 UTC
