improve MallocUsage() accuracy #28531

pull LarryRuane wants to merge 1 commit into bitcoin:master from LarryRuane:2023-09-MallocUsage changing 3 files +119 −102
  1. LarryRuane commented at 10:17 pm on September 25, 2023: contributor

    The MallocUsage() function takes an allocation size as an argument and returns the amount of physical memory consumed, which is greater than the allocation size due to memory allocator overhead and alignment. It was first added in 2015 (first commit of #6102), but its accuracy has degraded as memory allocation libraries have evolved. It’s used where it’s important that large data structures, such as the coins cache and mempool, use a predictable, configurable (limited) amount of physical memory (see the -dbcache and -maxmempool configuration options), as well as in a few other places.

    sipa figured out a concise, efficient expression that this function can use, and that’s what’s implemented here.

    Also add a unit test, which is more helpful than usual in this case since platforms, operating systems, and libraries vary significantly in this area.

  2. DrahtBot commented at 10:17 pm on September 25, 2023: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage & Benchmarks

    For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/28531.

    Reviews

    See the guideline for information on the review process. A summary of reviews will appear here.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #32279 ([IBD] prevector: store P2WSH/P2TR/P2PK scripts inline by l0rinc)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

    LLM Linter (✨ experimental)

    Possible typos and grammar issues:

    • that size -> than size [incorrect preposition]
    • beyojnd -> beyond [typographical error]
  3. LarryRuane commented at 10:17 pm on September 25, 2023: contributor
    The results of the old and new versions of this function should differ only slightly; it would be bad if the new one gave very different results, because node operators might find too much memory being consumed, or (not as bad) not enough. To manually test this, I ran -reindex-chainstate on Ubuntu, and watched memory usage by printing /proc/pid/statm and watching the data (6th) field. On master, the value settled to 315140 (units are 4k pages); with this PR it was 315606, which indicates that the new version produces slightly smaller results, but they’re close.
  4. DrahtBot added the label CI failed on Sep 25, 2023
  5. LarryRuane commented at 3:11 pm on September 27, 2023: contributor
    Changing to draft because CI on a couple of the platforms failed the new test, so the physical memory calculation will need to be different on those platforms; I’ll try to figure that out.
  6. LarryRuane marked this as a draft on Sep 27, 2023
  7. martinus commented at 8:24 am on November 20, 2023: contributor

    I can reproduce the same results with glibc 2.31 and 2.38, in 32bit and 64bit. I don’t think it needs to be perfect on all platforms, I think your new calculation is fine.

    I played a bit with a reproducer, here is mine: https://godbolt.org/z/s971sbnhG I refactored your MallocUsage formula a bit, but it’s basically the same

  8. DrahtBot commented at 11:58 am on February 29, 2024: contributor

    🤔 There hasn’t been much activity lately and the CI seems to be failing.

    If no one reviewed the current pull request by commit hash, a rebase can be considered. While the CI failure may be a false positive, the CI hasn’t been running for some time, so there may be a real issue hiding as well. A rebase triggers the latest CI and makes sure that no silent merge conflicts have snuck in.

  9. LarryRuane marked this as ready for review on Feb 29, 2024
  10. LarryRuane force-pushed on Feb 29, 2024
  11. LarryRuane commented at 8:28 pm on February 29, 2024: contributor
    Force-pushed to 04f3fbe308728d2fb18fad418603cab97f0b9f59 - rebase only. (CI is still expected to fail, I’m working on that now.)
  12. LarryRuane force-pushed on Mar 1, 2024
  13. LarryRuane force-pushed on Mar 1, 2024
  14. LarryRuane force-pushed on Mar 14, 2024
  15. LarryRuane force-pushed on Mar 15, 2024
  16. LarryRuane force-pushed on Mar 15, 2024
  17. LarryRuane commented at 10:34 pm on March 15, 2024: contributor

    @martinus Can you think of why macOS and Win64 are failing (I have only Ubuntu, which passes)? As an experiment, I relaxed the test’s passing condition to require only that the resource map use just less memory than std_map, but not even that passes (that is, the resource map uses more space). Both of those platforms fail the same way (you can see this in the CI results; this is Win64):

    D:/a/bitcoin/bitcoin/src/test/pool_tests.cpp(183): error: in "pool_tests/memusage_test": 
      check memusage::DynamicUsage(resource_map) <= memusage::DynamicUsage(std_map) * 100 / 100 
      has failed [466624 > 451088]
    

    On earlier runs, some other tests have failed, but I’m not sure if those are spurious failures.

    I’m not sure what to do now, any thoughts or suggestions are welcome!

  18. sipa commented at 6:57 pm on March 16, 2024: member

    @LarryRuane There is no reason to assume that the memory allocation overhead on those platforms, with substantially different C libraries, would match the formula we inferred on x86_64 and arm Linux.

    If our goal is actually accurate memory usage calculations on all systems, we will need to derive a formula for each supported system separately.

  19. LarryRuane force-pushed on Mar 19, 2024
  20. LarryRuane commented at 5:40 pm on March 19, 2024: contributor

    @sipa - I don’t think the test failures could be due to differences in memory allocation overhead across platforms. The actual memory allocation overhead isn’t being tested at all. To do that would probably require a test that uses /proc/meminfo (which doesn’t exist on all platforms) or similar.

    I don’t mind giving up and closing this PR, but as @martinus said in an earlier comment, this calculation doesn’t need to be perfect, just better.

    I think this test failure must be due to differences in std::unordered_map implementations. That container does various allocations for different reasons, and it must be that with this PR’s change in memory usage calculation, the std::unordered_map implementations on macOS and Win64 (at least) are doing more of a kind of allocation that the new calculation treats as having higher overhead. Something like that.

    [UPDATE: the following discussion about std::unordered_map::reserve() is incorrect; calling reserve() doesn’t prevent the bucket array from being reallocated multiple times within the pool resource – rather, it’s reallocated using regular memory. Only allocations up to 128 bytes come from the pool allocator. (I’ll probably remove these comments later.)]

    I thought I had it figured out; I verified with the debugger that as the test adds the 10k entries to the two maps, it has to repeatedly grow the bucket array. The allocation sizes (in bytes) for the bucket array that I saw here on Ubuntu were: 104, 232, 472, 1026, 2056, 4328, 8872, 18856, 40696, 82184…. The tradeoff one makes when using the pool resource allocator is that if allocations are freed but those same sizes are never allocated again, those freed blocks are, in effect, leaked (until the resource is destroyed). The pool resource works great when freeing and allocating the same sizes (rounded up to the alignment, usually 8 bytes) repeatedly. This is, of course, the case for the individual map entries.

    So the latest force-push simply calls reserve() initially on both maps (this is only really needed for the pool resource map). Debugging verifies that this does prevent the bucket array from being reallocated, here on Ubuntu. But it didn’t fix the problem on at least macOS and Win64. Maybe those platforms’ implementations of std::unordered_map ignore the reserve() hint?

    It’s difficult when CI fails on platforms that I don’t have access to. Do others have a solution to that? Can I install macOS on Ubuntu or Windows 10 (I have both) as a VM? Maybe I can set up my Windows laptop to build and run the debugger, but that seems like a lot of work.

  21. martinus commented at 9:32 am on March 20, 2024: contributor

    @LarryRuane The problem here is that DynamicUsage(const std::unordered_map<X, Y, Z>& m) just estimates the memory usage by assuming that internally the container uses a node with a single pointer (see struct unordered_node : private X in memusage.h). This estimation is probably not correct on all platforms; e.g. Windows seems to use a doubly linked list, so this underestimates actual memory usage.

    The inconsistency now comes when using DynamicUsage with a pool resource. Here the actual real-world memory usage is counted (because it uses a custom allocator, there’s no need to estimate node size), and here the different implementations actually affect the result.

    So, with your new MallocUsage the numbers are most likely more correct, but this seems to trigger the incorrect underestimation for DynamicUsage(const std::unordered_map<X, Y, Z>& m)

  22. LarryRuane commented at 5:37 pm on March 21, 2024: contributor

    @martinus - thanks, that makes perfect sense! I hadn’t thought of the possibility of the nodes having two pointers instead of one (doubly instead of singly linked list). Seems like a bad design, but anyway.

    I don’t think there would be much benefit to fixing DynamicUsage(const std::unordered_map<X, Y, Z>& m) to account for this (make it dependent on platform), because this function is no longer important since we’re using the pool resource. It’s probably really used only in the test. So it’s really just a test problem. I’ll see what I can do to fix this.

  23. LarryRuane force-pushed on Mar 22, 2024
  24. LarryRuane force-pushed on Apr 8, 2024
  25. DrahtBot added the label Needs rebase on Apr 11, 2024
  26. LarryRuane force-pushed on May 3, 2024
  27. DrahtBot removed the label Needs rebase on May 3, 2024
  28. DrahtBot marked this as a draft on May 13, 2024
  29. DrahtBot commented at 8:02 am on May 13, 2024: contributor
    Turned into draft for now, due to failing CI. If this is no longer a WIP, you can move it out of draft.
  30. LarryRuane marked this as ready for review on May 21, 2024
  31. LarryRuane force-pushed on May 21, 2024
  32. DrahtBot added the label Needs rebase on May 21, 2024
  33. LarryRuane force-pushed on May 22, 2024
  34. DrahtBot removed the label Needs rebase on May 22, 2024
  35. DrahtBot removed the label CI failed on May 22, 2024
  36. LarryRuane commented at 7:12 pm on May 22, 2024: contributor

    NOTE, this PR isn’t ready to merge; at the very least it has some debug prints that need to be removed.

    CI was failing on some platforms (Windows and macOS), in test/functional/mempool_limit.py: test_mid_package_eviction(). I think this test is fragile; this very-unrelated PR shouldn’t be able to cause it to fail IMO. The problem turned out to be the following:

    This test calls the test utility function fill_mempool(). This function creates independent, physically large (around 65 kB) transactions at varying fee rates until the one with the lowest feerate is evicted due to the mempool becoming full. The functional tests run with -maxmempool=5 (MB), so that requires about 72 transactions.

    After fill_mempool() returns, the mempool likely won’t be completely full, because these were large transactions, so there’s some space left but not enough for another transaction. The test, test_mid_package_eviction, then submits a smaller transaction (weight around 8k) called mempool_evicted_tx with a fee just barely high enough to be accepted, and expects that transaction to be accepted. (It has this name, mempool_evicted_tx, because the test expects it to be evicted later.)

    With this PR, that transaction was not accepted on Windows and macOS. Some debug prints show that on those platforms, when fill_mempool() returns, there are only 1200 bytes of free space in the mempool. This is because the std::unordered_map library implementations are different on these platforms, and this causes the memory usage calculation (for the entire map) to be slightly different in this PR. So it’s really just bad luck. This is why I think the test is fragile.

    After trying a few fixes to the test that didn’t work, what finally worked (the current branch) is, to have fill_mempool(), just before it returns, submit another transaction that replaces (RBF) an existing one in the mempool, but is slightly smaller. This frees up about 10kb of additional space, which allows room for the 8k transaction to be reliably accepted. This doesn’t break the other tests that call fill_mempool().

    This attempt didn’t work the first time, however, because the mempools wouldn’t reliably sync in test/functional/p2p_1p1c_network.py. This test starts 4 nodes, and calls fill_mempool() on node 0 then waits for all 4 mempools to become synced. Sometimes they do but other times this never happens. I wonder if there might be an unrelated bug (I can investigate later), but what often happens (it’s timing dependent and thus random) is that the replacement tx (added by this PR) does appear on all nodes, but a low-feerate tx that was evicted on node 0 remains on nodes 1-3. I can explain what I’m seeing in more detail later if needed.

    What ended up fixing this is to delay a few seconds before submitting the RBF transaction. It’s ugly, maybe there’s a better way, but syncing mempools is kind of non-deterministic anyway, so I don’t think it’s terribly bad. @glozow and @theStack, can you let me know your opinion? You two have worked a lot in this area.

  37. theStack commented at 0:24 am on May 23, 2024: contributor

    @LarryRuane: Interesting problem (and very creative solution with the RBF tx :)). Fwiw, on one of my machines running OpenBSD 7.5, the same behaviour as you described occurs on commit 663e5012d318a7e80a42385ac4e6f71dec46814e: after fill_mempool, mempool usage is very close to the max ('usage': 4998768), and the subtest test_mid_package_eviction fails on the first MiniWallet tx submission with a "mempool full" error.

    One idea I had to solve this at the fill_mempool side is to submit an extra tx at the end of fill_mempool that is smaller (I chose half the size) than the filler txs. Given that this causes eviction, a large filler tx is removed, and the smaller-sized tx is getting in, freeing up some space (around ~32.5KB in terms of bytes). It’s still very ugly, especially since this code is only ever executed for edge cases (note that we can’t unconditionally submit the smaller tx, as this could cause the problem rather than solve it, if no eviction happens), but passes all the tests that use fill_mempool for me: https://github.com/theStack/bitcoin/commit/f758e73a943b8e45acac1ab8834305f7f0bfbf14

    However, is this even necessary to be solved in fill_mempool? Haven’t looked closer at the failing test yet, but maybe it’s just a bug in the fee calculation (or MiniWallet’s treatment of target_weight).

  38. LarryRuane force-pushed on May 23, 2024
  39. LarryRuane commented at 8:58 pm on May 23, 2024: contributor
    Your patch is much better, @theStack, thanks! It’s simpler and doesn’t need that weird sleep. Force-pushed your patch. If it passes CI, I think this is the way to go. I had tried something similar, but I didn’t think to make the sending of that extra (smaller) transaction conditional based on available mempool space.
  40. LarryRuane force-pushed on May 24, 2024
  41. LarryRuane commented at 6:31 pm on May 24, 2024: contributor
    CI passed, thanks @theStack. I just force-pushed to swap the two commits; the test commit has to be first for each commit to pass CI.
  42. DrahtBot added the label CI failed on Sep 12, 2024
  43. DrahtBot removed the label CI failed on Sep 15, 2024
  44. achow101 requested review from sipa on Oct 15, 2024
  45. DrahtBot added the label CI failed on Mar 14, 2025
  46. DrahtBot commented at 11:33 am on March 15, 2025: contributor
    Needs rebase, if still relevant
  47. LarryRuane force-pushed on Mar 17, 2025
  48. in test/functional/test_framework/mempool_util.py:101 in 28dc105626 outdated
     92@@ -92,6 +93,14 @@ def send_batch(fee):
     93         send_batch(fee)
     94     tx_sync_fun() if tx_sync_fun else test_framework.sync_mempools()  # sync after all evictions
     95 
     96+    # if mempool is almost full (<10k usage bytes left), submit one extra middle-sized tx,
     97+    # in order to evict a large filler tx and leave some room; this will enable tests to submit
     98+    # txs with just the mempoolminfee without immediate eviction ("mempool full" error)
     99+    if node.getmempoolinfo()['usage'] >= 4_990_000:
    100+        ephemeral_miniwallet.send_self_transfer(from_node=node, fee=num_of_batches * (base_fee / 2),
    101+                                                utxo_to_spend=confirmed_utxos.pop(0), target_weight = 32500 * 4)
    


    l0rinc commented at 8:37 am on March 19, 2025:

    After #30718 this should likely be:

                                                    utxo_to_spend=confirmed_utxos.pop(0), target_vsize=32500)
    

    l0rinc commented at 9:44 am on April 10, 2025:
    This is introduced here incorrectly and fixed in a later commit - as discussed, could you please add it in the same or after the fix commit?
  49. in src/memusage.h:51 in 28dc105626 outdated
    47@@ -48,19 +48,31 @@ template<typename X> static inline size_t DynamicUsage(const X * const &v) { ret
    48  *  application data structures require more accurate inner accounting, they should
    49  *  iterate themselves, or use more efficient caching + updating on modification.
    50  */
    51-
    52 static inline size_t MallocUsage(size_t alloc)
    


    l0rinc commented at 9:27 am on March 19, 2025:

    ‘inline’ may be redundant on a global static function, but we should be able to make it constexpr regardless:

    static constexpr size_t MallocUsage(size_t alloc)
    ...
        constexpr size_t overhead{sizeof(size_t)};
    
  50. in src/memusage.h:73 in 28dc105626 outdated
    79+    static constexpr size_t step = 16U;
    80+    static constexpr size_t min_alloc = sizeof(void*) == 8 ? 9 : 0;
    81+#endif
    82+    // step should be nonzero and a power of 2
    83+    static_assert(step > 0);
    84+    static_assert((step & (step-1)) == 0);
    


    l0rinc commented at 9:35 am on March 19, 2025:

    we could use the C++20 https://en.cppreference.com/w/cpp/numeric/has_single_bit here:

        static_assert(std::has_single_bit(step));
    
  51. in src/memusage.h:62 in 28dc105626 outdated
    81+#endif
    82+    // step should be nonzero and a power of 2
    83+    static_assert(step > 0);
    84+    static_assert((step & (step-1)) == 0);
    85+    // tested with Linux glibc 2.31 and 2.38, 32bit and 64bit
    86+    return (std::max(min_alloc, alloc) + overhead + (step - 1)) & ~(step - 1);
    


    l0rinc commented at 9:50 am on March 19, 2025:

    Since step is guaranteed to be a power of 2, the expression ~(step - 1) simplifies to -step.

    BOOST_AUTO_TEST_CASE(BitwiseStepSimplificationTest)
    {
        for (uint64_t step{1}; step != 0; step <<= 1) {
            std::cout << step << std::endl;
            BOOST_CHECK(std::has_single_bit(step));
            BOOST_CHECK_EQUAL((~(step - 1)), -step);
        }
    }
    

    so we should be able to write:

        return (std::max(min_alloc, alloc) + overhead + (step - 1)) & -step;
    

    LarryRuane commented at 9:39 pm on March 28, 2025:
    I had to change this back to ~(step - 1) because CI complained; some compilers don’t accept the negation of an unsigned type.

    l0rinc commented at 1:51 pm on May 27, 2025:
    Indeed, please resolve the comment
  52. in src/memusage.h:56 in 28dc105626 outdated
    76+    static constexpr size_t step = sizeof(void*) * 2;
    77+    static constexpr size_t min_alloc = 9;
    78+#else
    79+    static constexpr size_t step = 16U;
    80+    static constexpr size_t min_alloc = sizeof(void*) == 8 ? 9 : 0;
    81+#endif
    


    l0rinc commented at 10:11 am on March 19, 2025:

    I’m not exactly sure how we got to the min_alloc of 9 on these platforms, but we can probably simplify the platform dependent code to:

    #if defined(__arm__) || SIZE_MAX == UINT64_MAX
        constexpr size_t min_alloc{9};
    #else
        constexpr size_t min_alloc{0};
    #endif
    
  53. in src/memusage.h:72 in 28dc105626 outdated
    78+#else
    79+    static constexpr size_t step = 16U;
    80+    static constexpr size_t min_alloc = sizeof(void*) == 8 ? 9 : 0;
    81+#endif
    82+    // step should be nonzero and a power of 2
    83+    static_assert(step > 0);
    


    l0rinc commented at 10:27 am on March 19, 2025:
    what’s the purpose of checking positivity for a size_t value? Is this some weird compiler trick or a refactoring leftover? Or if its purpose is to check that the two instantiations have the same type, we could rather create a boolean with the necessary conditions and use that to initialize step once.

    LarryRuane commented at 7:08 pm on March 19, 2025:
    I wrote this because the following line (static_assert((step & (step-1)) == 0)) doesn’t actually assert that step has a single bit set (which is what I wanted to check); it also succeeds if step is zero. But using std::has_single_bit() is better, so we can remove static_assert(step > 0), good catch.
  54. in src/memusage.h:65 in 28dc105626 outdated
    71+    if (alloc == 0) return 0;
    72+
    73+    static constexpr size_t overhead = sizeof(void*);
    74+#if defined(__arm__)
    75+    // tested with ARM 32bit
    76+    static constexpr size_t step = sizeof(void*) * 2;
    


    l0rinc commented at 10:29 am on March 19, 2025:

    step is basically the alignment, right? We could probably do it in a platform-independent way (and get rid of static_assert(step > 0); while we’re here):

    constexpr size_t alignment{alignof(std::max_align_t)};
    

    l0rinc commented at 8:39 pm on March 19, 2025:
    I forgot to include this in the full patch, my bad
  55. in src/memusage.h:50 in 28dc105626 outdated
    66+    // for example, there has not been a zero-byte allocation (which would require
    67+    // some physical space). std::vector has optimized this case. Experimental
    68+    // evidence indicates the same is true of other data structures -- there are
    69+    // no actual zero-length allocations observed, although of course this is
    70+    // library-dependent.
    71+    if (alloc == 0) return 0;
    


    l0rinc commented at 10:50 am on March 19, 2025:

    Adding the current implementation to the reproducer of @martinus shows this is a behavior change compared to his version - https://godbolt.org/z/cPWz6YnWd

    0: actual=16, estimate=0
    
  56. in src/memusage.h:194 in 28dc105626 outdated
    188@@ -177,11 +189,16 @@ static inline size_t DynamicUsage(const std::list<X>& l)
    189     return MallocUsage(sizeof(list_node<X>)) * l.size();
    190 }
    191 
    192+// struct unordered_node adds the container overhead to the given structure X,
    193+// although this varies across library container implementations (platforms).
    194+// It is believed that some containers use two pointers per node, so this
    


    l0rinc commented at 11:03 am on March 19, 2025:
    can we add some references to avoid having to rely on beliefs?

    LarryRuane commented at 7:42 pm on March 19, 2025:

    Good point, I am going to back this change out until we see evidence for needing it. I think I “deduced” this based on macOS CI failing while the others were passing. But it’s been too long.

    Just now, after searching for a while with no luck, I asked X’s Grok (I know, we shouldn’t trust it too much), and it said that in LLVM’s implementation of std::unordered_map the node includes a next pointer but no previous pointer. You can indeed see that in the source code: https://github.com/llvm/llvm-project/blob/main/libcxx/include/__hash_table#L95

  57. l0rinc changes_requested
  58. l0rinc commented at 11:06 am on March 19, 2025: contributor

    I think this is an important change, I’m running a full reindex-chainstate to check for any noticeable behavior changes.

    I left a few recommendations:

     0diff --git a/src/memusage.h b/src/memusage.h
     1index 501f7068ca..5855e9d18b 100644
     2--- a/src/memusage.h
     3+++ b/src/memusage.h
     4@@ -24,9 +24,6 @@
     5 namespace memusage
     6 {
     7 
     8-/** Compute the total memory used by allocating alloc bytes. */
     9-static size_t MallocUsage(size_t alloc);
    10-
    11 /** Dynamic memory usage for built-in types is zero. */
    12 static inline size_t DynamicUsage(const int8_t& v) { return 0; }
    13 static inline size_t DynamicUsage(const uint8_t& v) { return 0; }
    14@@ -48,7 +45,7 @@ template<typename X> static inline size_t DynamicUsage(const X * const &v) { ret
    15  *  application data structures require more accurate inner accounting, they should
    16  *  iterate themselves, or use more efficient caching + updating on modification.
    17  */
    18-static inline size_t MallocUsage(size_t alloc)
    19+static constexpr size_t MallocUsage(size_t alloc)
    20 {
    21     // There are few if any actual zero-length allocations; when
    22     // DynamicUsage(std::vector<X>& v) calls this function, and v.capacity() == 0,
    23@@ -59,20 +56,16 @@ static inline size_t MallocUsage(size_t alloc)
    24     // library-dependent.
    25     if (alloc == 0) return 0;
    26 
    27-    static constexpr size_t overhead = sizeof(void*);
    28-#if defined(__arm__)
    29-    // tested with ARM 32bit
    30-    static constexpr size_t step = sizeof(void*) * 2;
    31-    static constexpr size_t min_alloc = 9;
    32+#if defined(__arm__) || SIZE_MAX == UINT64_MAX
    33+    constexpr size_t min_alloc{9};
    34 #else
    35-    static constexpr size_t step = 16U;
    36-    static constexpr size_t min_alloc = sizeof(void*) == 8 ? 9 : 0;
    37+    constexpr size_t min_alloc{0};
    38 #endif
    39-    // step should be nonzero and a power of 2
    40-    static_assert(step > 0);
    41-    static_assert((step & (step-1)) == 0);
    42-    // tested with Linux glibc 2.31 and 2.38, 32bit and 64bit
    43-    return (std::max(min_alloc, alloc) + overhead + (step - 1)) & ~(step - 1);
    44+    constexpr size_t overhead{sizeof(size_t)};
    45+    constexpr size_t step{alignof(std::max_align_t)};
    46+    static_assert(std::has_single_bit(step));
    47+
    48+    return (std::max(min_alloc, alloc) + overhead + (step - 1)) & -step;
    49 }
    50 
    51 // STL data structures
    52diff --git a/src/test/mempool_tests.cpp b/src/test/mempool_tests.cpp
    53index f0094dce59..9e9ae4d73c 100644
    54--- a/src/test/mempool_tests.cpp
    55+++ b/src/test/mempool_tests.cpp
    56@@ -804,4 +804,13 @@ BOOST_AUTO_TEST_CASE(MempoolAncestryTestsDiamond)
    57     BOOST_CHECK_EQUAL(descendants, 4ULL);
    58 }
    59 
    60+BOOST_AUTO_TEST_CASE(BitwiseStepSimplificationTest)
    61+{
    62+    for (uint64_t step{1}; step != 0; step <<= 1) {
    63+        std::cout << step << std::endl;
    64+        BOOST_CHECK(std::has_single_bit(step));
    65+        BOOST_CHECK_EQUAL((~(step - 1)), -step);
    66+    }
    67+}
    68+
    69 BOOST_AUTO_TEST_SUITE_END()
    

    the updated https://godbolt.org/z/45EnoP16T shows that the output is the same as currently

  59. in src/memusage.h:64 in 28dc105626 outdated
    70+    // library-dependent.
    71+    if (alloc == 0) return 0;
    72+
    73+    static constexpr size_t overhead = sizeof(void*);
    74+#if defined(__arm__)
    75+    // tested with ARM 32bit
    


    l0rinc commented at 11:20 am on March 19, 2025:
    __arm__ only applies to 32 bits, no need for the comments (I’d prefer having them in the commit messages). Before merging we have to make sure we’re not just testing ARM on 32 bits
  60. in test/functional/test_framework/mempool_util.py:99 in 18a2bd4b04 outdated
    92@@ -92,6 +93,14 @@ def send_batch(fee):
    93         send_batch(fee)
    94     tx_sync_fun() if tx_sync_fun else test_framework.sync_mempools()  # sync after all evictions
    95 
    96+    # if mempool is almost full (<10k usage bytes left), submit one extra middle-sized tx,
    97+    # in order to evict a large filler tx and leave some room; this will enable tests to submit
    98+    # txs with just the mempoolminfee without immediate eviction ("mempool full" error)
    99+    if node.getmempoolinfo()['usage'] >= 4_990_000:
    


    l0rinc commented at 11:23 am on March 19, 2025:

    I see you’ve put the test before the fix after the recent rebase - but it never enters this condition before the fix, so basically dead code was added.

        if node.getmempoolinfo()['usage'] >= 4_990_000:
            raise ""
    

    Could we either make sure we enter, and update it in the next commit accordingly, or add it after the fix commit? Also, the commit message claims that “fill mempool leaves room” - I think this needs some explanation.


    LarryRuane commented at 8:04 pm on March 19, 2025:

    I think it’s okay that it’s initially dead code because it makes the test less fragile in general. In theory, a different project could cherry-pick just this test-only commit, and later make some other changes that would have caused this test to break (when it’s really a false-positive failure), but because of taking this commit, that doesn’t happen (the test continues to pass). Or, a similar way to think of it is, the test could have been written this way to begin with (and that would not have seemed strange). But I don’t feel strongly about it; I can make that change if you’d like.

    I’ll update the commit comment, good catch.


    l0rinc commented at 8:36 pm on March 19, 2025:
    I would like to avoid having a false sense of security by putting this before the fix, which usually indicates that the desired behavior is reproduced and is retained after this fix - which isn’t the case here exactly. And the dead code part also bothers me for a similar reason.

    LarryRuane commented at 10:24 pm on March 28, 2025:
    I just realized that I can’t add this test commit after the functional commit, because then the functional commit doesn’t pass on some platforms (macOS is one for sure). So it would have to be part of the functional commit. Does that sound okay to you?

    l0rinc commented at 9:39 am on April 10, 2025:
    yes, it’s fine if it’s in the same commit, of course.

    LarryRuane commented at 11:26 pm on May 15, 2025:
    Fixed.
  61. in src/memusage.h:62 in 28dc105626 outdated
    68+    // evidence indicates the same is true of other data structures -- there are
    69+    // no actual zero-length allocations observed, although of course this is
    70+    // library-dependent.
    71+    if (alloc == 0) return 0;
    72+
    73+    static constexpr size_t overhead = sizeof(void*);
    


    l0rinc commented at 11:49 am on March 19, 2025:
    If it’s simpler, we might as well use the size of size_t here: https://github.com/bitcoin/bitcoin/blob/master/src/compat/assumptions.h#L37
  62. l0rinc commented at 12:05 pm on March 19, 2025: contributor

    Some of the usages seem to contain some magical values, tailored to the previous behavior:

    Do any of them need adjustments now?

    Edit: finished reindexing, it doesn’t seem to result in any performance regression or speedup

    COMMITS="3171d7465c237cb752eb500285b272f00027d97d 7ff32e40735c3379da4a14fca1af029d67d43488"; \
    STOP_HEIGHT=888000; DBCACHE=5000; \
    C_COMPILER=gcc; CXX_COMPILER=g++; \
    git fetch -aq && (for c in $COMMITS; do git log -1 --oneline $c || exit 1; done) && \
    hyperfine \
      --export-json "/mnt/my_storage/rdx-${COMMITS// /-}-${STOP_HEIGHT}-${DBCACHE}-${C_COMPILER}.json" \
      --runs 1 \
      --parameter-list COMMIT ${COMMITS// /,} \
      --prepare "killall bitcoind || true; rm -f /mnt/my_storage/BitcoinData/debug.log; git checkout {COMMIT}; git clean -fxd; git reset --hard; cmake -B build -DCMAKE_BUILD_TYPE=Release -DENABLE_WALLET=OFF -DCMAKE_C_COMPILER=$C_COMPILER -DCMAKE_CXX_COMPILER=$CXX_COMPILER && cmake --build build -j$(nproc) --target bitcoind && ./build/bin/bitcoind -datadir=/mnt/my_storage/BitcoinData -stopatheight=$STOP_HEIGHT -dbcache=10000 -printtoconsole=0 || true" \
      --cleanup "cp /mnt/my_storage/BitcoinData/debug.log /mnt/my_storage/logs/debug-{COMMIT}-$(date +%s).log || true" \
      "COMPILER=$C_COMPILER COMMIT={COMMIT} ./build/bin/bitcoind -datadir=/mnt/my_storage/BitcoinData -stopatheight=$STOP_HEIGHT -dbcache=$DBCACHE -blocksonly -reindex-chainstate -connect=0 -printtoconsole=0"

    3171d7465c test: ensure that `fill_mempool` leaves some room in mempool
    7ff32e4073 Test fix

    Benchmark 1: COMPILER=gcc COMMIT=3171d7465c237cb752eb500285b272f00027d97d ./build/bin/bitcoind -datadir=/mnt/my_storage/BitcoinData -stopatheight=888000 -dbcache=5000 -blocksonly -reindex-chainstate -connect=0 -printtoconsole=0
      Time (abs ):        13876.186 s               [User: 17593.222 s, System: 731.343 s]

    Benchmark 2: COMPILER=gcc COMMIT=7ff32e40735c3379da4a14fca1af029d67d43488 ./build/bin/bitcoind -datadir=/mnt/my_storage/BitcoinData -stopatheight=888000 -dbcache=5000 -blocksonly -reindex-chainstate -connect=0 -printtoconsole=0
      Time (abs ):        13912.878 s               [User: 17549.791 s, System: 709.007 s]

    Summary
      COMPILER=gcc COMMIT=3171d7465c237cb752eb500285b272f00027d97d ./build/bin/bitcoind -datadir=/mnt/my_storage/BitcoinData -stopatheight=888000 -dbcache=5000 -blocksonly -reindex-chainstate -connect=0 -printtoconsole=0 ran
    23    1.00 times faster than COMPILER=gcc COMMIT=7ff32e40735c3379da4a14fca1af029d67d43488 ./build/bin/bitcoind -datadir=/mnt/my_storage/BitcoinData -stopatheight=888000 -dbcache=5000 -blocksonly -reindex-chainstate -connect=0 -printtoconsole=0
    
  63. LarryRuane commented at 8:24 pm on March 19, 2025: contributor

    @l0rinc - thank you for the excellent review and great suggestions. I especially like making MallocUsage() a constexpr function.

    I’ll force-push a commit taking all of them except moving the first (test-only) commit (I left a comment explaining my reasoning).

  64. LarryRuane force-pushed on Mar 19, 2025
  65. in src/test/mempool_tests.cpp:810 in d4b18c5f12 outdated
    803@@ -804,4 +804,13 @@ BOOST_AUTO_TEST_CASE(MempoolAncestryTestsDiamond)
    804     BOOST_CHECK_EQUAL(descendants, 4ULL);
    805 }
    806 
    807+BOOST_AUTO_TEST_CASE(BitwiseStepSimplificationTest)
    808+{
    809+    for (uint64_t step{1}; step != 0; step <<= 1) {
    810+        std::cout << step << std::endl;
    


    l0rinc commented at 8:40 pm on March 19, 2025:
    if you decide to keep this test (which was meant to demonstrate the safety of the change), please remove the logging part
  66. LarryRuane force-pushed on Mar 19, 2025
  67. LarryRuane force-pushed on Mar 21, 2025
  68. LarryRuane force-pushed on Mar 21, 2025
  69. LarryRuane force-pushed on Mar 21, 2025
  70. LarryRuane force-pushed on Mar 21, 2025
  71. LarryRuane force-pushed on Mar 22, 2025
  72. LarryRuane force-pushed on Mar 22, 2025
  73. LarryRuane force-pushed on Mar 24, 2025
  74. LarryRuane force-pushed on Mar 24, 2025
  75. DrahtBot removed the label CI failed on Mar 25, 2025
  76. LarryRuane force-pushed on Mar 26, 2025
  77. LarryRuane force-pushed on Mar 26, 2025
  78. LarryRuane force-pushed on Mar 27, 2025
  79. in src/memusage.h:188 in 77ebad9651 outdated
    183 template<typename X>
    184 struct unordered_node : private X
    185 {
    186 private:
    187     void* ptr;
    188+    void* ptr2;
    


    l0rinc commented at 7:57 pm on March 27, 2025:
    was this added back by accident?

    LarryRuane commented at 8:14 pm on March 27, 2025:
    No, I’ll write a detailed comment later today or tomorrow, but it turns out the lack of this second pointer caused the pool test to fail, because on macOS and Windows the nodes really do have two pointers each. I was able to determine this with certainty with debug printing (you can see it in some of the force-pushes above). I added a comment just before the start of this struct unordered_node definition to explain this second pointer, did you see that? I didn’t call them forward and previous because I really don’t know that; I only know that there are 16 bytes in this part of the struct (but I am going to assume that they are two pointers).
  80. l0rinc changes_requested
  81. in src/memusage.h:224 in 77ebad9651 outdated
    220@@ -212,7 +221,7 @@ static inline size_t DynamicUsage(const std::unordered_map<Key,
    221     size_t estimated_list_node_size = MallocUsage(sizeof(void*) * 3);
    222     size_t usage_resource = estimated_list_node_size * pool_resource->NumAllocatedChunks();
    223     size_t usage_chunks = MallocUsage(pool_resource->ChunkSizeBytes()) * pool_resource->NumAllocatedChunks();
    224-    return usage_resource + usage_chunks + MallocUsage(sizeof(void*) * m.bucket_count());
    225+    return usage_resource + usage_chunks + MallocUsage(2 * sizeof(void*) * m.bucket_count());
    


    LarryRuane commented at 9:48 pm on March 28, 2025:

    I’m making this and two similar changes above (for std::unordered_set and std::unordered_map) because CI debug prints show that on some platforms (macOS and Win64), each element of the bucket array uses 16 bytes (likely two pointers) rather than 8 bytes (one pointer). This is probably because the bucket lists are doubly-linked. (This gives a performance advantage for removal; it’s not necessary to hash the key to find the bucket, then walk that bucket’s list looking for the element being removed.)

    There is no way to tell at compile time if the bucket lists should be assumed to be one or two pointers, so it’s better to be conservative and assume it’s two. (This may result in actual memory usage being slightly less than what the user configured for large data structures like the mempool and the coins cache.)


    l0rinc commented at 9:47 am on April 10, 2025:

    likely two pointers

    Is there a way to verify and document that explicitly (pointing to doc or code)?
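
    One way to check this empirically (an illustrative sketch, not part of the PR) is to plug a counting allocator into the container and record the exact byte sizes it requests; the first allocation sizes reveal how large the node and the bucket array really are on a given platform. The `CountingAlloc` name and the specific assertions below are my own, hypothetical additions:

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <cstddef>
    #include <iostream>
    #include <new>
    #include <unordered_map>
    #include <vector>

    // Records the byte size of every allocation the container makes.
    static std::vector<std::size_t> g_sizes;

    template <typename T>
    struct CountingAlloc {
        using value_type = T;
        CountingAlloc() = default;
        template <typename U> CountingAlloc(const CountingAlloc<U>&) {}
        T* allocate(std::size_t n)
        {
            g_sizes.push_back(n * sizeof(T)); // record the exact request size
            return static_cast<T*>(::operator new(n * sizeof(T)));
        }
        void deallocate(T* p, std::size_t) noexcept { ::operator delete(p); }
    };
    template <typename T, typename U>
    bool operator==(const CountingAlloc<T>&, const CountingAlloc<U>&) { return true; }
    template <typename T, typename U>
    bool operator!=(const CountingAlloc<T>&, const CountingAlloc<U>&) { return false; }

    int main()
    {
        using Pair = std::pair<const int, int>;
        std::unordered_map<int, int, std::hash<int>, std::equal_to<int>, CountingAlloc<Pair>> m;
        m[1] = 2; // first insert: one node allocation plus a bucket-array (re)allocation

        // At least the node and the bucket array should have been allocated, and
        // the node must be at least as large as the stored pair; the exact sizes
        // (one vs. two pointers of overhead) vary by standard library.
        assert(g_sizes.size() >= 2);
        assert(*std::max_element(g_sizes.begin(), g_sizes.end()) >= sizeof(Pair));
        std::cout << "ok\n";
    }
    ```

    Running this under libstdc++, libc++, and MSVC's STL and comparing the recorded node size against `sizeof(Pair)` shows how many pointers of per-node overhead each implementation actually adds.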

  82. in src/memusage.h:199 in 77ebad9651 outdated
    195+// on some platforms (Windows and macOS), so be conservative.
    196 template<typename X, typename Y>
    197 static inline size_t DynamicUsage(const std::unordered_set<X, Y>& s)
    198 {
    199-    return MallocUsage(sizeof(unordered_node<X>)) * s.size() + MallocUsage(sizeof(void*) * s.bucket_count());
    200+    return MallocUsage(sizeof(unordered_node<X>)) * s.size() + MallocUsage(2 * sizeof(void*) * s.bucket_count());
    


    LarryRuane commented at 9:48 pm on March 28, 2025:
    see review comment below
  83. in src/memusage.h:205 in 77ebad9651 outdated
    202 
    203 template<typename X, typename Y, typename Z>
    204 static inline size_t DynamicUsage(const std::unordered_map<X, Y, Z>& m)
    205 {
    206-    return MallocUsage(sizeof(unordered_node<std::pair<const X, Y> >)) * m.size() + MallocUsage(sizeof(void*) * m.bucket_count());
    207+    return MallocUsage(sizeof(unordered_node<std::pair<const X, Y> >)) * m.size() + MallocUsage(2 * sizeof(void*) * m.bucket_count());
    


    LarryRuane commented at 9:48 pm on March 28, 2025:
    see review comment below

    l0rinc commented at 9:46 am on April 10, 2025:
    2 * sizeof(void*) refers to ptr + ptr2, right? Can we extract that to an explanatory constant?

    LarryRuane commented at 11:24 pm on May 15, 2025:
    Yes, and there is a fairly detailed comment on lines 194-195 (just before the unordered_set version of DynamicUsage), so I think that’s sufficient. But if you feel strongly, I’ll try to come up with something.

    l0rinc commented at 1:52 pm on May 27, 2025:

    I really dislike comments when code could be used, i.e. when I see sizeof(void*), it doesn’t tell me anything about what we’re actually measuring … 2 * sizeof(void*) is even worse.

    I was thinking of showing exactly what we’re measuring here, something like:

     0
     1// Empirically, an std::unordered_map node has two pointers (likely
     2// forward and backward pointers) on some platforms (Windows and macOS),
     3// so be conservative in estimating memory usage by assuming this is
     4// the case for all platforms.
     5template <typename X>
     6struct unordered_node : private X
     7{
     8    void* next_ptr;
     9    void* prev_ptr;
    10};
    11
    12// The memory used by an unordered_set or unordered_map is the sum of the
    13// sizes of the individual nodes (which are separately allocated) plus
    14// the size of the bucket array (which is a single allocation).
    15// Empirically, each element of the bucket array consists of two pointers
    16// on some platforms (Windows and macOS), so be conservative.
    17template <typename X, typename Y>
    18static size_t DynamicUsage(const std::unordered_set<X, Y>& s)
    19{
    20    return MallocUsage(sizeof(unordered_node<X>)) * s.size() +
    21           MallocUsage((sizeof(unordered_node<X>::next_ptr) + sizeof(unordered_node<X>::prev_ptr)) * s.bucket_count());
    22}
    23
    24template <typename X, typename Y, typename Z>
    25static size_t DynamicUsage(const std::unordered_map<X, Y, Z>& m)
    26{
    27    return MallocUsage(sizeof(unordered_node<std::pair<const X, Y>>)) * m.size() +
    28           MallocUsage((sizeof(unordered_node<std::pair<const X, Y>>::next_ptr) + sizeof(unordered_node<std::pair<const X, Y>>::prev_ptr)) * m.bucket_count());
    29}
    

    Also note that the current test doesn’t exercise these modified lines at all, so we’re kinda in the dark here…

  84. in src/test/validation_flush_tests.cpp:33 in 77ebad9651 outdated
    29@@ -30,7 +30,7 @@ BOOST_AUTO_TEST_CASE(getcoinscachesizestate)
    30     // (prevector<28, unsigned char>) when assigned 56 bytes of data per above.
    31     //
    32     // See also: Coin::DynamicMemoryUsage().
    33-    constexpr unsigned int COIN_SIZE = is_64_bit ? 80 : 64;
    34+    constexpr size_t COIN_SIZE{64};
    


    l0rinc commented at 9:41 am on April 10, 2025:
    This is a surprising change, can you please explain it? Do we even get to BOOST_CHECK_EQUAL(view.DynamicMemoryUsage(), is_64_bit ? 32U : 16U) here - and if so, how come none of the other bitness-related values need changing?

    l0rinc commented at 8:45 am on April 15, 2025:

    I’ve checked and most of the test is dead for the past 3 years, cc @martinus.

    Could you update the modified test all the way to revive it and define the desired behavior?

  85. in src/memusage.h:53 in 77ebad9651 outdated
    60-        assert(0);
    61-    }
    62+    if (alloc == 0) return 0;
    63+
    64+#if defined(__arm__) || SIZE_MAX == UINT64_MAX
    65+    constexpr size_t min_alloc{9};
    


    l0rinc commented at 9:45 am on April 10, 2025:
    could we document this number here, it’s definitely not obvious

    LarryRuane commented at 11:22 pm on May 15, 2025:
    That constant came from your comment (which I’ve adopted into this PR), so I really don’t know how to document this value. Where did you get it from? This stuff tends to have magic numbers; I don’t think this PR makes it any worse.
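
    For illustration only (this is not the PR’s actual expression, and the constants are a sketch of glibc-like 64-bit behavior, not authoritative): a MallocUsage-style estimate typically adds a pointer-sized chunk header, rounds up to the allocator’s alignment, and enforces a minimum chunk size, which is where magic numbers like these tend to originate:

    ```cpp
    #include <cassert>
    #include <cstddef>
    #include <iostream>

    // Hypothetical sketch of glibc-style 64-bit malloc accounting: the request
    // gains an 8-byte chunk header, is rounded up to 16-byte alignment, and can
    // never be smaller than a 32-byte minimum chunk. Real allocators differ.
    constexpr std::size_t MallocUsageSketch(std::size_t alloc)
    {
        if (alloc == 0) return 0;
        constexpr std::size_t overhead = sizeof(void*); // chunk header
        constexpr std::size_t alignment = 16;
        constexpr std::size_t min_chunk = 32;
        const std::size_t usage = ((alloc + overhead + alignment - 1) / alignment) * alignment;
        return usage < min_chunk ? min_chunk : usage;
    }

    int main()
    {
        std::cout << MallocUsageSketch(1) << '\n';  // 32 (minimum chunk)
        std::cout << MallocUsageSketch(24) << '\n'; // 32
        std::cout << MallocUsageSketch(25) << '\n'; // 48 (crosses an alignment boundary)
        std::cout << MallocUsageSketch(64) << '\n'; // 80
    }
    ```

    The point is that any such formula bakes in library-specific constants, which is why the unit test (rather than documentation) is the practical way to keep the estimate honest per platform.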
  86. in test/functional/test_framework/mempool_util.py:62 in 77ebad9651 outdated
    56@@ -57,16 +57,17 @@ def fill_mempool(test_framework, node, *, tx_sync_fun=None):
    57     # Generate UTXOs to flood the mempool
    58     # 1 to create a tx initially that will be evicted from the mempool later
    59     # 75 transactions each with a fee rate higher than the previous one
    60+    # 1 to create a final middle-sized tx to evict a filler tx and create room (if needed)
    61     ephemeral_miniwallet = MiniWallet(node, tag_name="fill_mempool_ephemeral_wallet")
    62-    test_framework.generate(ephemeral_miniwallet, 1 + num_of_batches * tx_batch_size)
    63+    test_framework.generate(ephemeral_miniwallet, 2 + num_of_batches * tx_batch_size)
    


    l0rinc commented at 9:49 am on April 10, 2025:
    same, can we extract the 2 to a named constant?

    LarryRuane commented at 11:26 pm on May 15, 2025:
    I think the comment immediately above explains it sufficiently, and the “1” was not a named constant either (so this pull is not making it worse). But if you feel strongly, I can do something. (As someone once said, the hardest part of programming is coming up with good names!)
  87. LarryRuane force-pushed on May 15, 2025
  88. LarryRuane force-pushed on May 15, 2025
  89. DrahtBot added the label CI failed on May 15, 2025
  90. LarryRuane commented at 11:15 pm on May 15, 2025: contributor

    The main change in the latest force push is to address the review comment about validation_flush_tests.cpp. I ended up almost completely rewriting the getcoinscachesizestate test case; it was kind of a mess. It was far too fragile because it required equality of various values related to memory usage (which is why this test started failing with this PR, even though nothing had actually broken). This is especially difficult when a test must run in both the 32-bit and 64-bit environments. A test like this should be more of a sanity check.

    This test case also tested things outside of the GetCoinsCacheSizeState() function, which is what the test case name implies. The new version tests exactly that function, driving it to return its various possible results. It’s more focused and easier to understand now, in my opinion.

    Also, I squashed the three commits, two of them test-only, down to a single commit, as suggested above; it’s still a pretty small commit.

    Reviewers, please let me know if the commit message is too wordy. It’s almost entirely about the test changes (I preserved the text from what used to be separate commits), because those were the trickiest parts of this PR. But maybe it would be better for some of that explanation to be here in the PR instead of the commit message. @l0rinc, thank you for your excellent review comments. There were a few suggestions I didn’t take (so far, at least); I’ll leave comments as replies to yours.

  91. DrahtBot commented at 11:15 pm on May 15, 2025: contributor

    🚧 At least one of the CI tasks failed. Task multiprocess, i686, DEBUG: https://github.com/bitcoin/bitcoin/runs/42323610465 LLM reason (✨ experimental): (empty)

    Try to run the tests locally, according to the documentation. However, a CI failure may still happen due to a number of reasons, for example:

    • Possibly due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.

    • A sanitizer issue, which can only be found by compiling with the sanitizer and running the affected test.

    • An intermittent issue.

    Leave a comment here, if you need help tracking down a confusing failure.

  92. improve MallocUsage() accuracy
    Co-authored-by: Sebastian Falbesoner <sebastian.falbesoner@gmail.com>
    Co-authored-by: Pieter Wuille <pieter@wuille.net>
    Co-authored-by: Martin Leitner-Ankerl <martin.ankerl@gmail.com>
    Co-authored-by: l0rinc <pap.lorinc@gmail.com>
    
    # Changes to test/functional/test_framework/mempool_util.py:
    
    ## fill_mempool() should call sync_mempools() before returning
    
    We saw a case where a test (p2p_1p1c_network.py) called
    raise_network_minfee(), which called fill_mempool() using node 0.
    
    Then raise_network_minfee() returned, and the test called rescan_utxos(),
    which called getrawmempool() using a different node (node 1) followed by
    getrawtransaction() on each returned transaction, and the test asserted
    because a transaction was not found.
    
    This was caused by the timing window between the call to getrawmempool()
    and fetching the individual transactions; the transactions were still
    being propagated on the P2P network. During this window, a transaction
    (returned by getrawmempool()) was evicted (the mempool is close to full
    during this test), and did not exist in the mempool by the time it was
    attempted to be fetched.
    
    It might make more sense for rescan_utxos() to call sync_mempools() just
    before calling getrawmempool(), but it can't because rescan_utxos() is
    part of the MiniWallet class, which doesn't have access to test_framework
    (but that could probably be changed).
    
    ## ensure that `fill_mempool` leaves some room in mempool
    
    Without this change, fill_mempool() may leave the mempool very close
    to its memory size limit (-maxmempool). This can cause tests to
    incorrectly fail when they submit another transaction expecting it
    to succeed. Note that without this change, the same test that fails on
    one platform may succeed on another, because their memory allocation
    accounting algorithms (how they calculate memory usage, that is,
    MallocUsage()) may be slightly different.
    4228018ace
  93. LarryRuane force-pushed on May 16, 2025
  94. LarryRuane commented at 5:02 pm on May 16, 2025: contributor
    Rebased to see if that fixes the CI failure (which seems unrelated); the branch was previously rebased on Mar 17, 2025.
  95. DrahtBot removed the label CI failed on May 16, 2025
  96. DrahtBot added the label CI failed on May 24, 2025
  97. in test/functional/test_framework/mempool_util.py:113 in 4228018ace
    109@@ -101,6 +110,7 @@ def send_batch(fee):
    110     test_framework.log.debug("Check that mempoolminfee is larger than minrelaytxfee")
    111     assert_equal(node.getmempoolinfo()['minrelaytxfee'], Decimal('0.00001000'))
    112     assert_greater_than(node.getmempoolinfo()['mempoolminfee'], Decimal('0.00001000'))
    113+    test_framework.sync_mempools()
    


    maflcko commented at 1:01 pm on May 27, 2025:

    https://cirrus-ci.com/task/4825274484785152?logs=ci#L3624

    0in sync_mempools
    1[23:01:19.556]                                        raise AssertionError("Mempool sync timed out after {}s:{}".format(
    2[23:01:19.556]                                    AssertionError: Mempool sync timed out after 2400s:
    3[23:01:19.556] 
    
  98. in src/test/validation_flush_tests.cpp:82 in 4228018ace
    136-            CoinsCacheSizeState::CRITICAL) {
    137-            break;
    138-        }
    139-    }
    140 
    141+        // Fill to just beyojnd the cache size limit.
    


    maflcko commented at 1:02 pm on May 27, 2025:
    that size -> than size [incorrect preposition]
    beyojnd -> beyond [typographical error]
    
  99. in src/memusage.h:1 in 4228018ace


    l0rinc commented at 1:40 pm on May 27, 2025:
    @LarryRuane I think you should probably be the author of the commit, given that you’ve already added @theStack as a co-author
  100. in src/memusage.h:191 in 4228018ace
    186 private:
    187     void* ptr;
    188+    void* ptr2;
    189 };
    190 
    191+// The memory used by an unordered_set or unordered map is the sum of the
    


    l0rinc commented at 2:08 pm on May 27, 2025:
    0// The memory used by an unordered_set or unordered_map is the sum of the
    
  101. in src/memusage.h:224 in 4228018ace
    220@@ -212,7 +221,9 @@ static inline size_t DynamicUsage(const std::unordered_map<Key,
    221     size_t estimated_list_node_size = MallocUsage(sizeof(void*) * 3);
    222     size_t usage_resource = estimated_list_node_size * pool_resource->NumAllocatedChunks();
    223     size_t usage_chunks = MallocUsage(pool_resource->ChunkSizeBytes()) * pool_resource->NumAllocatedChunks();
    224-    return usage_resource + usage_chunks + MallocUsage(sizeof(void*) * m.bucket_count());
    225+    // Empirically, each element of the bucket array has two pointers on some platforms (Windows and macOS).
    


    l0rinc commented at 2:09 pm on May 27, 2025:
    Instead of empirically, can we add documentation or source code links here?
  102. in src/test/validation_flush_tests.cpp:49 in 4228018ace
    71 
    72-        BOOST_CHECK_EQUAL(
    73-            chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes=*/0),
    74-            CoinsCacheSizeState::CRITICAL);
    75+    // There should be plenty of space.
    76+    BOOST_CHECK_EQUAL(empty_cache, CoinsCacheSizeState::OK);
    


    l0rinc commented at 2:15 pm on May 27, 2025:
    If we’re already checking equality, what’s the point of checking that it’s not equal to something else? This should also enable us inlining empty_cache
  103. in src/test/validation_flush_tests.cpp:64 in 4228018ace
    101-        CoinsCacheSizeState::OK);
    102+        BOOST_REQUIRE_EQUAL(
    103+            chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes=*/0),
    104+            CoinsCacheSizeState::OK);
    105+    }
    106+    BOOST_CHECK_LT(i, 80'000);
    


    l0rinc commented at 2:17 pm on May 27, 2025:

    I really dislike that we’re exposing the loop variable just to see if we finished without transitioning. Locally we seem to exit at worst around iteration 32415; maybe we can reduce the attempt count and extract it to a named constant:

    0constexpr size_t MAX_ATTEMPTS{50'000};               //  runaway-loop safety cap
    

    And we can add a worst-case condition for the very last iteration which should always be LARGE, e.g.:

     0// OK → LARGE
     1for (size_t i{1}; i <= MAX_ATTEMPTS; ++i) {
     2    AddTestCoin(m_rng, view);
     3
     4    auto state{chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, max_mempool_size_bytes)};
     5    if (i == MAX_ATTEMPTS || view.DynamicMemoryUsage() >= full_cap * 90 / 100) {
     6        BOOST_CHECK_EQUAL(state, CoinsCacheSizeState::LARGE);
     7        break;
     8    }
     9    BOOST_CHECK_EQUAL(state, CoinsCacheSizeState::OK);
    10}
    
  104. in src/test/validation_flush_tests.cpp:29 in 4228018ace
    31-    //
    32-    // See also: Coin::DynamicMemoryUsage().
    33-    constexpr unsigned int COIN_SIZE = is_64_bit ? 80 : 64;
    34+    // An empty coins cache still allocates one PoolResource 'chunk', which is 256 KiB, and there
    35+    // is some overhead. As a sanity check, an empty coins cache should be only slightly larger.
    36+    BOOST_CHECK_LT(view.DynamicMemoryUsage(), 256 * 1024 * 100 / 98);
    


    l0rinc commented at 2:20 pm on May 27, 2025:

    I find the 100/98 a weird way to express a percentage - if we simply want to check that it’s roughly 256 KiB, we could check their ratio:

    0    BOOST_CHECK_LT(view.DynamicMemoryUsage() / (256 * 1024.0), 1.1);
    
  105. in src/test/validation_flush_tests.cpp:93 in 4228018ace
    149-    // Passing non-zero max mempool usage (512 KiB) should allow us more headroom.
    150+    // Repeat (continuing with the existing cache) but with a non-zero max mempool;
    151+    // this allows more headroom.
    152     BOOST_CHECK_EQUAL(
    153-        chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes=*/ 1 << 19),
    154+        chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes=*/MAX_MEMPOOL_CACHE_BYTES),
    


    l0rinc commented at 2:37 pm on May 27, 2025:

    it makes sense to add the /*max_mempool_size_bytes=* for primitives - but we don’t usually do it for self-explanatory variables:

    0        chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, MAX_MEMPOOL_CACHE_BYTES),
    
  106. in src/test/validation_flush_tests.cpp:90 in 4228018ace
    145     BOOST_CHECK_EQUAL(
    146         chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*max_mempool_size_bytes=*/0),
    147         CoinsCacheSizeState::CRITICAL);
    148 
    149-    // Passing non-zero max mempool usage (512 KiB) should allow us more headroom.
    150+    // Repeat (continuing with the existing cache) but with a non-zero max mempool;
    


    l0rinc commented at 2:50 pm on May 27, 2025:

    hmmm, we’re repeating the same steps but with different data? Can we separate the data from the algorithm and do the iteration in something like:

    0for (size_t max_mempool_size_bytes : {size_t{0}, MAX_MEMPOOL_CACHE_BYTES}) {
    1  // The steps from OK to CRITICAL
    2}
    
     0// Copyright (c) 2019-present The Bitcoin Core developers
     1// Distributed under the MIT software license, see the accompanying
     2// file COPYING or http://www.opensource.org/licenses/mit-license.php.
     3
     4#include <sync.h>
     5#include <test/util/coins.h>
     6#include <test/util/random.h>
     7#include <test/util/setup_common.h>
     8#include <validation.h>
     9
    10#include <boost/test/unit_test.hpp>
    11
    12BOOST_FIXTURE_TEST_SUITE(validation_flush_tests, TestingSetup)
    13
    14//! Verify that Chainstate::GetCoinsCacheSizeState() switches from OK→LARGE→CRITICAL
    15//! at the expected utilization thresholds, first with *no* mempool head-room,
    16//! then with additional mempool head-room.
    17BOOST_AUTO_TEST_CASE(getcoinscachesizestate)
    18{
    19    Chainstate& chainstate{m_node.chainman->ActiveChainstate()};
    20
    21    LOCK(::cs_main);
    22    CCoinsViewCache& view = chainstate.CoinsTip();
    23    BOOST_CHECK_LT(view.DynamicMemoryUsage() / (256 * 1024.0), 1.1);
    24
    25    constexpr size_t MAX_COINS_CACHE_BYTES{8'000'000};   //  ~8 MB cache size for the test
    26    constexpr size_t MAX_MEMPOOL_CACHE_BYTES{4'000'000}; //  ~4 MB extra head-room
    27    constexpr size_t MAX_ATTEMPTS{50'000};               //  runaway-loop safety cap
    28
    29    // Run the same growth-path twice: first with 0 head-room, then with extra head-room
    30    for (size_t max_mempool_size_bytes : {size_t{0}, MAX_MEMPOOL_CACHE_BYTES}) {
    31        BOOST_CHECK_EQUAL(chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, max_mempool_size_bytes), CoinsCacheSizeState::OK);
    32
    33        const size_t full_cap = MAX_COINS_CACHE_BYTES + max_mempool_size_bytes;
    34
    35        // OK → LARGE
    36        for (size_t i{1}; i <= MAX_ATTEMPTS; ++i) {
    37            AddTestCoin(m_rng, view);
    38
    39            auto state{chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, max_mempool_size_bytes)};
    40            if (i == MAX_ATTEMPTS || view.DynamicMemoryUsage() >= full_cap * 90 / 100) {
    41                BOOST_CHECK_EQUAL(state, CoinsCacheSizeState::LARGE);
    42                break;
    43            }
    44            BOOST_CHECK_EQUAL(state, CoinsCacheSizeState::OK);
    45        }
    46
    47        // LARGE → CRITICAL
    48        for (size_t i{1}; i <= MAX_ATTEMPTS; ++i) {
    49            AddTestCoin(m_rng, view);
    50
    51            auto state{chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, max_mempool_size_bytes)};
    52            if (i == MAX_ATTEMPTS || view.DynamicMemoryUsage() > full_cap) {
    53                BOOST_CHECK_EQUAL(state, CoinsCacheSizeState::CRITICAL);
    54                break;
    55            }
    56            BOOST_CHECK_EQUAL(state, CoinsCacheSizeState::LARGE);
    57        }
    58    }
    59
    60    for (int i{0}; i < 1'000; ++i) {
    61        AddTestCoin(m_rng, view);
    62        BOOST_CHECK_EQUAL(chainstate.GetCoinsCacheSizeState(), CoinsCacheSizeState::OK);
    63    }
    64
    65    // CRITICAL → OK via Flush
    66    BOOST_CHECK_EQUAL(chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*mempool=*/0), CoinsCacheSizeState::CRITICAL);
    67    view.SetBestBlock(m_rng.rand256());
    68    BOOST_CHECK(view.Flush());
    69    BOOST_CHECK_EQUAL(chainstate.GetCoinsCacheSizeState(MAX_COINS_CACHE_BYTES, /*mempool=*/0), CoinsCacheSizeState::OK);
    70}
    71
    72BOOST_AUTO_TEST_SUITE_END()
    
  107. in src/memusage.h:225 in 4228018ace
    220@@ -212,7 +221,9 @@ static inline size_t DynamicUsage(const std::unordered_map<Key,
    221     size_t estimated_list_node_size = MallocUsage(sizeof(void*) * 3);
    222     size_t usage_resource = estimated_list_node_size * pool_resource->NumAllocatedChunks();
    223     size_t usage_chunks = MallocUsage(pool_resource->ChunkSizeBytes()) * pool_resource->NumAllocatedChunks();
    224-    return usage_resource + usage_chunks + MallocUsage(sizeof(void*) * m.bucket_count());
    225+    // Empirically, each element of the bucket array has two pointers on some platforms (Windows and macOS).
    226+    size_t usage_bucket_array = MallocUsage(2 * sizeof(void*) * m.bucket_count());
    


    l0rinc commented at 3:30 pm on May 27, 2025:
    https://github.com/bitcoin/bitcoin/blob/master/src/coins.h#L219 already claims to account for 1-2 pointers, do we need any other change to synchronize it with the current PR?
  108. l0rinc changes_requested
  109. l0rinc commented at 3:34 pm on May 27, 2025: contributor
    Thanks for getting back to this change. We should synchronize it with CCoinsMap, which already claims to account for an “overhead of 1 or 2 pointers”. Also, the test should pass before the change as well, so it would be helpful to add it as a separate commit before the fix, making it easy to see how it behaves both before and after the change. The test still contains a lot of repetition; please see my suggestion on how to compact it a bit more. There are still uncovered parts of the code; I’d feel more comfortable if all modified code parts were exercised by tests.
  110. DrahtBot removed the label CI failed on May 28, 2025

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2025-06-01 12:13 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me