LarryRuane
commented at 10:17 pm on September 25, 2023:
contributor
The MallocUsage() function takes an allocation size as an argument and returns the amount of physical memory consumed, which is greater due to memory allocator overhead and alignment. It was first added in 2015 (first commit of #6102), but its accuracy has degraded as memory allocation libraries have evolved. It's used where it's important that large data structures, such as the coins cache and mempool, use a predictable, configurable (limited) amount of physical memory (see the -dbcache and -maxmempool configuration options), as well as in a few other places.
sipa figured out a concise, efficient expression that this function can use, and that’s what’s implemented here.
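(To give a rough sense of the shape of that expression, here is a minimal sketch, not the exact code in this PR; overhead, step, and min_alloc are illustrative assumptions, and the values actually used are platform-dependent, as discussed later in this thread:)

#include <algorithm>
#include <cstddef>

// Model the allocator as prepending a small header ("overhead") and rounding
// the total up to its alignment ("step"), with a minimum chunk size
// ("min_alloc"). All three constants here are assumptions for illustration.
size_t MallocUsageSketch(size_t alloc)
{
    constexpr size_t overhead = sizeof(void*);          // assumed per-allocation header
    constexpr size_t step = alignof(std::max_align_t);  // assumed allocator alignment
    constexpr size_t min_alloc = 1;                     // assumed smallest real allocation
    if (alloc == 0) return 0;                           // assume zero-byte requests cost nothing
    return (std::max(min_alloc, alloc) + overhead + (step - 1)) & ~(step - 1);
}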
Also add a unit test, which is more helpful than usual in this case since platforms, operating systems, and libraries vary significantly in this area.
DrahtBot
commented at 10:17 pm on September 25, 2023:
contributor
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.
See the guideline for information on the review process.
A summary of reviews will appear here.
Conflicts
No conflicts as of last run.
LarryRuane
commented at 10:17 pm on September 25, 2023:
contributor
The results of the old and new versions of this function should differ only slightly; it would be bad if the new one gave very different results, because node operators might find too much memory being consumed, or (not as bad) not enough. To manually test this, I ran -reindex-chainstate on Ubuntu and watched memory usage by printing /proc/pid/statm and watching the data (6th) field. On master, the value settled to 315140 (units are 4k pages); with this PR it was 315606. Since a smaller computed usage lets the caches hold slightly more before reaching their configured limits, this indicates that the new version produces slightly smaller results, but they're close.
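(For reference, a minimal sketch of reading that field programmatically, assuming Linux's /proc layout; proc(5) documents statm's seven fields as size, resident, shared, text, lib, data, and dt, all in pages:)

#include <fstream>
#include <iostream>

int main()
{
    // Read /proc/self/statm; all fields are page counts (typically 4 KiB pages).
    std::ifstream statm("/proc/self/statm");
    long size, resident, shared, text, lib, data, dt;
    statm >> size >> resident >> shared >> text >> lib >> data >> dt;
    std::cout << "data (6th field): " << data << " pages\n";
}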
DrahtBot added the label
CI failed
on Sep 25, 2023
LarryRuane
commented at 3:11 pm on September 27, 2023:
contributor
Changing to draft because CI on a couple of the platforms failed the new test, so the physical memory calculation will need to be different on those platforms; I'll try to figure that out.
LarryRuane marked this as a draft
on Sep 27, 2023
martinus
commented at 8:24 am on November 20, 2023:
contributor
I can reproduce the same results with glibc 2.31 and 2.38, in 32bit and 64bit. I don't think it needs to be perfect on all platforms; I think your new calculation is fine.
I played a bit with a reproducer; here is mine: https://godbolt.org/z/s971sbnhG I refactored your MallocUsage formula a bit, but it's basically the same.
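(Not the exact godbolt code, but a sketch of the same kind of probe, using glibc's malloc_usable_size() to observe the allocator's rounding; results are allocator-specific, and the function is a glibc extension:)

#include <cstdio>
#include <cstdlib>
#include <malloc.h>  // malloc_usable_size (glibc extension)

int main()
{
    // Print the usable size glibc actually reserves for small requests; the
    // plateaus and jumps reveal the minimum chunk size and the rounding step.
    for (size_t alloc = 1; alloc <= 64; ++alloc) {
        void* p = std::malloc(alloc);
        std::printf("alloc=%2zu usable=%zu\n", alloc, malloc_usable_size(p));
        std::free(p);
    }
}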
DrahtBot
commented at 11:58 am on February 29, 2024:
contributor
🤔 There hasn’t been much activity lately and the CI seems to be failing.
If no one reviewed the current pull request by commit hash, a rebase can be considered. While the CI failure may be a false positive, the CI hasn’t been running for some time, so there may be a real issue hiding as well. A rebase triggers the latest CI and makes sure that no silent merge conflicts have snuck in.
LarryRuane marked this as ready for review
on Feb 29, 2024
LarryRuane force-pushed
on Feb 29, 2024
LarryRuane
commented at 8:28 pm on February 29, 2024:
contributor
Force-pushed to 04f3fbe308728d2fb18fad418603cab97f0b9f59 - rebase only. (CI is still expected to fail, I’m working on that now.)
LarryRuane force-pushed
on Mar 1, 2024
LarryRuane force-pushed
on Mar 1, 2024
LarryRuane force-pushed
on Mar 14, 2024
LarryRuane force-pushed
on Mar 15, 2024
LarryRuane force-pushed
on Mar 15, 2024
LarryRuane
commented at 10:34 pm on March 15, 2024:
contributor
@martinus
Can you think of why the macOS and Win64 CI runs are failing (I have only Ubuntu, which passes)? As an experiment, I loosened the memory usage ratio needed to pass so that the resource map only has to use no more space than std_map, but not even that passes (that is, the resource map uses more space). Both of those platforms fail the same way (you can see this in the CI results; this is Win64):
D:/a/bitcoin/bitcoin/src/test/pool_tests.cpp(183): error: in "pool_tests/memusage_test":
check memusage::DynamicUsage(resource_map) <= memusage::DynamicUsage(std_map) * 100 / 100
has failed [466624 > 451088]
On earlier runs, some other tests have failed, but I’m not sure if those are spurious failures.
I’m not sure what to do now, any thoughts or suggestions are welcome!
sipa
commented at 6:57 pm on March 16, 2024:
member
@LarryRuane There is no reason to assume that the memory allocation overhead on those platforms, with substantially different C libraries, would match the formula we inferred on x86_64 and arm Linux.
If our goal is actually accurate memory usage calculations on all systems, we will need to derive a formula for each supported system separately.
LarryRuane force-pushed
on Mar 19, 2024
LarryRuane
commented at 5:40 pm on March 19, 2024:
contributor
@sipa - I don’t think the test failures could be due to differences in memory allocation overhead across platforms. The actual memory allocation overhead isn’t being tested at all. To do that would probably require a test that uses /proc/meminfo (which doesn’t exist on all platforms) or similar.
I don’t mind giving up and closing this PR, but as @martinus said in an earlier comment, this calculation doesn’t need to be perfect, just better.
I think this test failure must be due to differences in std::unordered_map implementations. That container does various allocations for different reasons, and it must be that, with this PR's change in the memory usage calculation, the std::unordered_map implementations on macOS and Win64 (at least) do more of a kind of allocation that the new calculation thinks has higher overhead. Something like that.
[UPDATE: the following discussion about std::unordered_map::reserve() is incorrect; calling reserve() doesn't prevent the bucket array from being reallocated multiple times within the pool resource. Rather, it's reallocated using regular memory; only allocations up to 128 bytes come from the pool allocator. (I'll probably remove these comments later.)]
I thought I had it figured out; I verified with the debugger that as the test adds the 10k entries to the two maps, it has to repeatedly grow the bucket array. The allocation sizes (in bytes) for the bucket array that I saw here on Ubuntu were: 104, 232, 472, 1026, 2056, 4328, 8872, 18856, 40696, 82184…. The tradeoff one makes when using the pool resource allocator is that allocations that are freed, but whose sizes are never requested again, are in effect leaked (until the resource is destroyed). The pool resource works great when the same sizes (rounded up to the alignment, usually 8 bytes) are freed and allocated repeatedly, which is, of course, the case for the individual map entries.
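(To illustrate the tradeoff just described, here is a hypothetical toy pool with one freelist per allocation size; it is much simpler than Bitcoin Core's actual PoolResource, which also rounds sizes to an alignment and only pools small allocations:)

#include <cstddef>
#include <unordered_map>
#include <vector>

class ToyPool {
    std::unordered_map<std::size_t, std::vector<void*>> m_freelists; // per-size freelists
public:
    void* Allocate(std::size_t bytes) {
        auto& list = m_freelists[bytes];
        if (!list.empty()) { // reuse a freed chunk of exactly this size
            void* p = list.back();
            list.pop_back();
            return p;
        }
        return ::operator new(bytes); // otherwise grow
    }
    void Deallocate(void* p, std::size_t bytes) {
        // Freed memory goes on this size's freelist. If no later request has
        // this size (like each outgrown bucket-array size), the chunk is in
        // effect leaked until the pool itself is destroyed.
        m_freelists[bytes].push_back(p);
    }
    ~ToyPool() {
        for (auto& [size, list] : m_freelists) {
            for (void* p : list) ::operator delete(p);
        }
    }
};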
So the latest force-push simply calls reserve() initially on both maps (this is only really needed for the pool resource map). Debugging verifies that this does prevent the bucket array from being reallocated, here on Ubuntu. But it didn’t fix the problem on at least macOS and Win64. Maybe those platforms’ implementations of std::unordered_map ignore the reserve() hint?
It’s difficult when CI fails on platforms that I don’t have access to. Do others have a solution to that? Can I install macOS on Ubuntu or Windows 10 (I have both) as a VM? Maybe I can set up my Windows laptop to build and run the debugger, but that seems like a lot of work.
martinus
commented at 9:32 am on March 20, 2024:
contributor
@LarryRuane The problem here is that DynamicUsage(const std::unordered_map<X, Y, Z>& m) just estimates the memory usage by assuming that internally the container uses a node with a single pointer (see struct unordered_node : private X in memusage.h). This estimate is probably not correct on all platforms; e.g., Windows seems to use a doubly linked list, so this underestimates actual memory usage.
The inconsistency now comes in when using DynamicUsage with a pool resource. There the actual real-world memory usage is counted (because it uses a custom allocator, there's no need to estimate the node size), so the different implementations actually affect the result.
So, with your new MallocUsage the numbers are most likely more correct, but this seems to trigger the incorrect underestimation in DynamicUsage(const std::unordered_map<X, Y, Z>& m).
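(Concretely, the estimate being discussed is essentially this code from memusage.h, abridged; the per-node accounting assumes one pointer of container overhead per element:)

// The element type X plus one assumed "next" pointer per hash-table node.
template<typename X>
struct unordered_node : private X
{
private:
    void* ptr;
};

// N estimated nodes plus the bucket array (assumed one pointer per bucket).
// If the platform's node really holds two pointers (doubly linked), the
// first term undercounts each element by a MallocUsage-rounded sizeof(void*).
template<typename X, typename Y, typename Z>
static inline size_t DynamicUsage(const std::unordered_map<X, Y, Z>& m)
{
    return MallocUsage(sizeof(unordered_node<std::pair<const X, Y>>)) * m.size() +
           MallocUsage(sizeof(void*) * m.bucket_count());
}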
LarryRuane
commented at 5:37 pm on March 21, 2024:
contributor
@martinus - thanks, that makes perfect sense! I hadn't thought of the possibility of the nodes having two pointers instead of one (a doubly instead of singly linked list). Seems like a bad design, but anyway.
I don’t think there would be much benefit to fixing DynamicUsage(const std::unordered_map<X, Y, Z>& m) to account for this (make it dependent on platform), because this function is no longer important since we’re using the pool resource. It’s probably really used only in the test. So it’s really just a test problem. I’ll see what I can do to fix this.
LarryRuane force-pushed
on Mar 22, 2024
LarryRuane force-pushed
on Apr 8, 2024
DrahtBot added the label
Needs rebase
on Apr 11, 2024
LarryRuane force-pushed
on May 3, 2024
DrahtBot removed the label
Needs rebase
on May 3, 2024
DrahtBot marked this as a draft
on May 13, 2024
DrahtBot
commented at 8:02 am on May 13, 2024:
contributor
Turned into draft for now, due to failing CI. If this is no longer a WIP, you can move it out of draft.
LarryRuane marked this as ready for review
on May 21, 2024
LarryRuane force-pushed
on May 21, 2024
DrahtBot added the label
Needs rebase
on May 21, 2024
LarryRuane force-pushed
on May 22, 2024
DrahtBot removed the label
Needs rebase
on May 22, 2024
DrahtBot removed the label
CI failed
on May 22, 2024
LarryRuane
commented at 7:12 pm on May 22, 2024:
contributor
NOTE, this PR isn’t ready to merge; at the very least it has some debug prints that need to be removed.
CI was failing on some platforms (Windows and macOS), in test/functional/mempool_limit.py: test_mid_package_eviction(). I think this test is fragile; this very-unrelated PR shouldn’t be able to cause it to fail IMO. The problem turned out to be the following:
This test calls the test utility function fill_mempool(). This function creates independent physically-large (around 65kb) transactions at varying fee rates until the one with the lowest feerate is evicted due to the mempool becoming full. The functional tests run with -maxmempool=5 (MB), so that requires about 72 transactions.
After fill_mempool() returns, the mempool likely won’t be completely full, because these were large transactions, so there’s some space left but not enough for another transaction. The test, test_mid_package_eviction, then submits a smaller transaction (weight around 8k) called mempool_evicted_tx with a fee just barely high enough to be accepted, and expects that transaction to be accepted. (It has this name, mempool_evicted_tx, because the test expects it to be evicted later.)
With this PR, that transaction was not accepted on Windows and macOS. Some debug prints show that on those platforms, when fill_mempool() returns, there are only 1200 bytes of free space in the mempool. This is because the std::unordered_map library implementations are different on these platforms, and this causes the memory usage calculation (for the entire map) to be slightly different in this PR. So it’s really just bad luck. This is why I think the test is fragile.
After trying a few fixes to the test that didn't work, what finally worked (the current branch) is to have fill_mempool(), just before it returns, submit another transaction that replaces (RBF) an existing one in the mempool but is slightly smaller. This frees up about 10kb of additional space, which allows room for the 8k transaction to be reliably accepted. This doesn't break the other tests that call fill_mempool().
This attempt didn't work the first time, however, because the mempools wouldn't reliably sync in test/functional/p2p_1p1c_network.py. This test starts 4 nodes and calls fill_mempool() on node 0, then waits for all 4 mempools to become synced. Sometimes they do, but other times this never happens. I wonder if there might be an unrelated bug (I can investigate later), but what often happens (it's timing-dependent and thus random) is that the replacement tx (added by this PR) does appear on all nodes, but a low-feerate tx that was evicted on node 0 remains on nodes 1-3. I can explain what I'm seeing in more detail later if needed. What ended up fixing this is to delay a few seconds before submitting the RBF transaction. It's ugly, and maybe there's a better way, but syncing mempools is kind of non-deterministic anyway, so I don't think it's terribly bad.
@glozow and @theStack, can you let me know your opinion? You two have worked a lot in this area.
theStack
commented at 0:24 am on May 23, 2024:
contributor
@LarryRuane: Interesting problem (and very creative solution with the RBF tx :)). Fwiw, on one of my machines running OpenBSD 7.5, the same behaviour as you described occurs on commit 663e5012d318a7e80a42385ac4e6f71dec46814e: after fill_mempool, mempool usage is very close to the max ('usage': 4998768), and the subtest test_mid_package_eviction fails on the first MiniWallet tx submission with a "mempool full" error.
One idea I had to solve this on the fill_mempool side is to submit an extra tx at the end of fill_mempool that is smaller (I chose half the size) than the filler txs. Given that this causes an eviction, a large filler tx is removed and the smaller-sized tx gets in, freeing up some space (around 32.5 KB). It's still very ugly, especially since this code is only ever executed for edge cases (note that we can't unconditionally submit the smaller tx, as this could cause the problem rather than solve it if no eviction happens), but it passes all the tests that use fill_mempool for me: https://github.com/theStack/bitcoin/commit/f758e73a943b8e45acac1ab8834305f7f0bfbf14
However, does this even need to be solved in fill_mempool? I haven't looked closer at the failing test yet, but maybe it's just a bug in the fee calculation (or MiniWallet's treatment of target_weight).
LarryRuane force-pushed
on May 23, 2024
LarryRuane
commented at 8:58 pm on May 23, 2024:
contributor
Your patch is much better, @theStack, thanks! It’s simpler and doesn’t need that weird sleep. Force-pushed your patch. If it passes CI, I think this is the way to go. I had tried something similar, but I didn’t think to make the sending of that extra (smaller) transaction conditional based on available mempool space.
LarryRuane force-pushed
on May 24, 2024
LarryRuane
commented at 6:31 pm on May 24, 2024:
contributor
CI passed, thanks @theStack. I just force-pushed to swap the order of the two commits; the test commit has to come first so that each commit passes CI.
DrahtBot added the label
CI failed
on Sep 12, 2024
DrahtBot removed the label
CI failed
on Sep 15, 2024
achow101 requested review from sipa
on Oct 15, 2024
DrahtBot added the label
CI failed
on Mar 14, 2025
DrahtBot
commented at 11:33 am on March 15, 2025:
contributor
Needs rebase, if still relevant
LarryRuane force-pushed
on Mar 17, 2025
in test/functional/test_framework/mempool_util.py:101 in 28dc105626 (outdated)
@@ -92,6 +93,14 @@ def send_batch(fee):
         send_batch(fee)
     tx_sync_fun() if tx_sync_fun else test_framework.sync_mempools() # sync after all evictions

+    # if mempool is almost full (<10k usage bytes left), submit one extra middle-sized tx,
+    # in order to evict a large filler tx and leave some room; this will enable tests to submit
+    # txs with just the mempoolminfee without immediate eviction ("mempool full" error)
+    if node.getmempoolinfo()['usage'] >= 4_990_000:
+        ephemeral_miniwallet.send_self_transfer(from_node=node, fee=num_of_batches * (base_fee / 2),
+                                                utxo_to_spend=confirmed_utxos.pop(0), target_weight = 32500 * 4)
@@ -48,19 +48,31 @@ template<typename X> static inline size_t DynamicUsage(const X * const &v) { ret
  * application data structures require more accurate inner accounting, they should
  * iterate themselves, or use more efficient caching + updating on modification.
  */
-
 static inline size_t MallocUsage(size_t alloc)
[...]
+#endif
+    // step should be nonzero and a power of 2
+    static_assert(step > 0);
+    static_assert((step & (step-1)) == 0);
+    // tested with Linux glibc 2.31 and 2.38, 32bit and 64bit
+    return (std::max(min_alloc, alloc) + overhead + (step - 1)) & ~(step - 1);
what's the purpose of checking positivity for a size_t value? Is this some weird compiler trick or a refactoring leftover? Or if its purpose is to check that the two instantiations have the same type, we could rather create a boolean with the necessary conditions and use that to initialize step once.
LarryRuane
commented at 7:08 pm on March 19, 2025:
I wrote this because the following line (static_assert((step & (step-1)) == 0)) doesn’t actually assert that step has a single bit set (which is what I wanted to check); it also succeeds if step is zero. But using std::has_single_bit() is better, so we can remove static_assert(step > 0), good catch.
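(A minimal illustration of the difference, assuming C++20 for <bit>:)

#include <bit>

// The bit trick alone accepts zero; std::has_single_bit() does not.
static_assert((0u & (0u - 1u)) == 0);     // 0 passes the trick despite not being a power of 2
static_assert(!std::has_single_bit(0u));  // but is correctly rejected here
static_assert(std::has_single_bit(16u));  // true powers of 2 pass either way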
step is basically the alignment, right?
We could probably do it in a platform-independent way (and get rid of static_assert(step > 0); while we’re here):
I forgot to include this in the full patch, my bad
in src/memusage.h:50 in 28dc105626 (outdated)
+    // for example, there has not been a zero-byte allocation (which would require
+    // some physical space). std::vector has optimized this case. Experimental
+    // evidence indicates the same is true of other data structures -- there are
+    // no actual zero-length allocations observed, although of course this is
+    // library-dependent.
+    if (alloc == 0) return 0;
@@ -177,11 +189,16 @@ static inline size_t DynamicUsage(const std::list<X>& l)
     return MallocUsage(sizeof(list_node<X>)) * l.size();
 }

+// struct unordered_node adds the container overhead to the given structure X,
+// although this varies across library container implementations (platforms).
+// It is believed that some containers use two pointers per node, so this
can we add some references to avoid having to rely on beliefs?
LarryRuane
commented at 7:42 pm on March 19, 2025:
Good point, I am going to back this change out until we see evidence for needing it. I think I “deduced” this based on macOS CI failing while the others were passing. But it’s been too long.
Just now, after searching for a while with no luck, I asked X's Grok (I know, we shouldn't trust it too much), and it said that in LLVM's implementation of std::unordered_map the node includes a next pointer but no previous pointer. You can indeed see that in the source code: https://github.com/llvm/llvm-project/blob/main/libcxx/include/__hash_table#L95
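(A simplified paraphrase of the libc++ layout at that link; the real code uses different names and pointer-rebinding machinery, but the point is the single link pointer:)

#include <cstddef>

// Paraphrase of libcxx/include/__hash_table: nodes are singly linked.
struct HashNodeBaseSketch {
    HashNodeBaseSketch* next_;  // the only link pointer; no "previous"
};

template <class T>
struct HashNodeSketch : HashNodeBaseSketch {
    std::size_t hash_;  // cached hash of the value
    T value_;
};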
l0rinc changes_requested
l0rinc
commented at 11:06 am on March 19, 2025:
contributor
I think this is an important change; I'm running a full reindex-chainstate to check for any noticeable behavior changes.
I left a few recommendations:
diff --git a/src/memusage.h b/src/memusage.h
index 501f7068ca..5855e9d18b 100644
--- a/src/memusage.h
+++ b/src/memusage.h
@@ -24,9 +24,6 @@
 namespace memusage
 {

-/** Compute the total memory used by allocating alloc bytes. */
-static size_t MallocUsage(size_t alloc);
-
 /** Dynamic memory usage for built-in types is zero. */
 static inline size_t DynamicUsage(const int8_t& v) { return 0; }
 static inline size_t DynamicUsage(const uint8_t& v) { return 0; }
@@ -48,7 +45,7 @@ template<typename X> static inline size_t DynamicUsage(const X * const &v) { ret
  * application data structures require more accurate inner accounting, they should
  * iterate themselves, or use more efficient caching + updating on modification.
  */
-static inline size_t MallocUsage(size_t alloc)
+static constexpr size_t MallocUsage(size_t alloc)
 {
     // There are few if any actual zero-length allocations; when
     // DynamicUsage(std::vector<X>& v) calls this function, and v.capacity() == 0,
@@ -59,20 +56,16 @@ static inline size_t MallocUsage(size_t alloc)
     // library-dependent.
     if (alloc == 0) return 0;

-    static constexpr size_t overhead = sizeof(void*);
-#if defined(__arm__)
-    // tested with ARM 32bit
-    static constexpr size_t step = sizeof(void*) * 2;
-    static constexpr size_t min_alloc = 9;
+#if defined(__arm__) || SIZE_MAX == UINT64_MAX
+    constexpr size_t min_alloc{9};
 #else
-    static constexpr size_t step = 16U;
-    static constexpr size_t min_alloc = sizeof(void*) == 8 ? 9 : 0;
+    constexpr size_t min_alloc{0};
 #endif
-    // step should be nonzero and a power of 2
-    static_assert(step > 0);
-    static_assert((step & (step-1)) == 0);
-    // tested with Linux glibc 2.31 and 2.38, 32bit and 64bit
-    return (std::max(min_alloc, alloc) + overhead + (step - 1)) & ~(step - 1);
+    constexpr size_t overhead{sizeof(size_t)};
+    constexpr size_t step{alignof(std::max_align_t)};
+    static_assert(std::has_single_bit(step));
+
+    return (std::max(min_alloc, alloc) + overhead + (step - 1)) & -step;
 }

 // STL data structures
diff --git a/src/test/mempool_tests.cpp b/src/test/mempool_tests.cpp
index f0094dce59..9e9ae4d73c 100644
--- a/src/test/mempool_tests.cpp
+++ b/src/test/mempool_tests.cpp
@@ -804,4 +804,13 @@ BOOST_AUTO_TEST_CASE(MempoolAncestryTestsDiamond)
     BOOST_CHECK_EQUAL(descendants, 4ULL);
 }

+BOOST_AUTO_TEST_CASE(BitwiseStepSimplificationTest)
+{
+    for (uint64_t step{1}; step != 0; step <<= 1) {
+        std::cout << step << std::endl;
+        BOOST_CHECK(std::has_single_bit(step));
+        BOOST_CHECK_EQUAL((~(step - 1)), -step);
+    }
+}
+
 BOOST_AUTO_TEST_SUITE_END()
__arm__ only applies to 32 bits, no need for the comments (I’d prefer having them in the commit messages). Before merging we have to make sure we’re not just testing ARM on 32 bits
in test/functional/test_framework/mempool_util.py:99 in 18a2bd4b04 (outdated)
@@ -92,6 +93,14 @@ def send_batch(fee):
         send_batch(fee)
     tx_sync_fun() if tx_sync_fun else test_framework.sync_mempools() # sync after all evictions

+    # if mempool is almost full (<10k usage bytes left), submit one extra middle-sized tx,
+    # in order to evict a large filler tx and leave some room; this will enable tests to submit
+    # txs with just the mempoolminfee without immediate eviction ("mempool full" error)
+    if node.getmempoolinfo()['usage'] >= 4_990_000:
I see you’ve put the test before the fix after the recent rebase - but it never enters this condition before the fix, so basically dead code was added.
Could we either make sure we enter it, updating it in the next commit accordingly, or add it after the fix commit?
Also, the commit message claims that “fill mempool leaves room” - I think this needs some explanation.
LarryRuane
commented at 8:04 pm on March 19, 2025:
I think it's okay that it's initially dead code, because it makes the test less fragile in general. In theory, a different project could cherry-pick just this test-only commit and later make some other change that would have caused this test to break (when it's really a false-positive failure), but because of having taken this commit, that doesn't happen (the test continues to pass). Or, a similar way to think of it: the test could have been written this way to begin with, and that would not have seemed strange. But I don't feel strongly about it; I can make that change if you'd like.
I would like to avoid having a false sense of security by putting this before the fix, which usually indicates that the desired behavior is reproduced and is retained after this fix - which isn’t the case here exactly. And the dead code part also bothers me for a similar reason.
LarryRuane
commented at 10:24 pm on March 28, 2025:
I just realized that I can’t add this test commit after the functional commit, because then the functional commit doesn’t pass on some platforms (macOS is one for sure). So it would have to be part of the functional commit. Does that sound okay to you?
in src/memusage.h:62 in 28dc105626 (outdated)
+    // evidence indicates the same is true of other data structures -- there are
+    // no actual zero-length allocations observed, although of course this is
+    // library-dependent.
+    if (alloc == 0) return 0;
+
+    static constexpr size_t overhead = sizeof(void*);
if you decide to keep this test (which was meant to demonstrate the safety of the change), please remove the logging part
LarryRuane force-pushed
on Mar 19, 2025
LarryRuane force-pushed
on Mar 21, 2025
LarryRuane force-pushed
on Mar 21, 2025
LarryRuane force-pushed
on Mar 21, 2025
LarryRuane force-pushed
on Mar 21, 2025
LarryRuane force-pushed
on Mar 22, 2025
LarryRuane force-pushed
on Mar 22, 2025
LarryRuane force-pushed
on Mar 24, 2025
LarryRuane force-pushed
on Mar 24, 2025
DrahtBot removed the label
CI failed
on Mar 25, 2025
LarryRuane force-pushed
on Mar 26, 2025
LarryRuane force-pushed
on Mar 26, 2025
test: ensure that `fill_mempool` leaves some room in mempool
Without this change, fill_mempool() may leave the mempool very close
to its memory size limit (-maxmempool). This can cause tests to
incorrectly fail when they submit another transaction expecting it
to succeed. Note that without this change, the same test that fails on
one platform may succeed on another, because their memory allocation
accounting algorithms (how they calculate memory usage, that is,
MallocUsage()) may be slightly different.
a6f50e791e
test: fill_mempool() should call sync_mempools() before returning
We saw a case where a test (p2p_1p1c_network.py) called
raise_network_minfee(), which called fill_mempool() using node 0.
Then raise_network_minfee() returned, and the test called rescan_utxos(),
which called getrawmempool() using a different node (node 1) followed by
getrawtransaction() on each returned transaction, and the test asserted
because a transaction was not found.
This was caused by the timing window between the call to getrawmempool()
and fetching the individual transactions; the transactions were still
being propagated on the P2P network. During this window, a transaction
(returned by getrawmempool()) was evicted (the mempool is close to full
during this test), and did not exist in the mempool by the time it was
attempted to be fetched.
It might make more sense for rescan_utxos() to call sync_mempools() just
before calling getrawmempool(), but it can't because rescan_utxos() is
part of the MiniWallet class, which doesn't have access to test_framework
(but that could probably be changed).
LarryRuane
commented at 8:14 pm on March 27, 2025:
No, I'll write a detailed comment later today or tomorrow, but it turns out the lack of this second pointer caused the pool test to fail on macOS and Windows, because they really do have two pointers per node. I was able to determine this with certainty with debug printing (you can see it in some of the force-pushes above). I added a comment just before the start of this struct unordered_node definition to explain this second pointer, did you see that? I didn't call them forward and previous because I really don't know that; I only know that there are 16 bytes in this part of the struct (but I am going to assume that there are two pointers).
LarryRuane
commented at 9:48 pm on March 28, 2025:
I'm making this and two similar changes above (for std::unordered_set and std::unordered_map) because CI debug prints show that on some platforms (macOS and Win64), each element of the bucket array uses 16 bytes (likely two pointers) rather than 8 bytes (one pointer). This is probably because the bucket lists are doubly linked. (This gives a performance advantage for removal; it's not necessary to hash the key to find the bucket, then walk that bucket's list looking for the element being removed.)
There is no way to tell at compile time if the bucket lists should be assumed to be one or two pointers, so it’s better to be conservative and assume it’s two. (This may result in actual memory usage being slightly less than what the user configured for large data structures like the mempool and the coins cache.)
in src/memusage.h:199 in 77ebad9651
+// on some platforms (Windows and macOS), so be conservative.
 template<typename X, typename Y>
 static inline size_t DynamicUsage(const std::unordered_set<X, Y>& s)
 {
-    return MallocUsage(sizeof(unordered_node<X>)) * s.size() + MallocUsage(sizeof(void*) * s.bucket_count());
+    return MallocUsage(sizeof(unordered_node<X>)) * s.size() + MallocUsage(2 * sizeof(void*) * s.bucket_count());