sdaftuar
commented at 6:59 pm on October 18, 2023:
member
This is a draft implementation of the cluster mempool design described in #27677. I’m opening this as a draft PR now to share the branch I’m working on with others, so that we can start to think about in-progress projects (like package relay, package validation, and package rbf) in the context of this design. Also, I can use some help from others for parts of this work, including the interaction between the mempool and the wallet, and also reworking some of our existing test cases to fit a cluster-mempool world.
Note that the design of this implementation is subject to change as I continue to iterate on the code (to make the code more hygienic and robust, in particular). At this point though I think the performance is pretty reasonable and I’m not currently aware of any bugs. There are some microbenchmarks added here, and some improved fuzz tests; it would be great if others ran both of those on their own hardware as well and reported back on any findings.
This branch implements the following observable behavior changes:
Maintains a partitioning of the mempool into connected clusters
Each cluster is sorted (“linearized”) either using an optimal sort, or an ancestor-feerate-based one, depending on the size of the cluster (thanks to @sipa for this logic)
Transaction selection for mining is updated to use the cluster linearizations
Mempool eviction is updated to use the cluster linearizations
The RBF rules are updated to drop the requirement that no new inputs are introduced, and to change the feerate requirement to instead check that the mining score of a replacement transaction exceeds the mining score of the conflicted transactions
The CPFP carveout rule is eliminated (it doesn’t make sense in a cluster-limited mempool)
The ancestor and descendant limits are no longer enforced.
New cluster count/cluster vsize limits are now enforced instead.
Some less observable behavior changes:
The cached ancestor and descendant data are dropped from the mempool, along with the multi_index indices that were maintained to sort the mempool by ancestor and descendant feerates. For compatibility (e.g. with wallet behavior or RPCs exposing this), this information is now calculated dynamically instead.
The ancestor and descendant walking algorithms are now implemented using epochs (resulting in a significant performance improvement, according to the benchmarks I’ve looked at)
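The epoch technique mentioned above replaces per-traversal `std::set` bookkeeping with a stamped counter: bump a global epoch once per traversal, and an entry counts as "visited" if its stamp equals the current epoch. A minimal sketch of the idea (illustrative names, not the actual CTxMemPool code):

```cpp
#include <cstdint>
#include <vector>

// Each entry carries the epoch at which it was last visited. A traversal
// bumps the global epoch once; "visited" is then a single integer compare,
// with no per-traversal set allocation and O(1) marking.
struct Entry {
    uint64_t m_epoch{0};
    std::vector<Entry*> parents;
};

struct EpochGuard {
    uint64_t& current;
    explicit EpochGuard(uint64_t& e) : current(e) { ++current; }
    // Returns true the first time an entry is seen in this traversal.
    bool Visit(Entry& e) const {
        if (e.m_epoch == current) return false; // already seen
        e.m_epoch = current;
        return true;
    }
};

// Count all distinct ancestors of `start` (excluding itself).
int CountAncestors(Entry& start, uint64_t& epoch)
{
    EpochGuard guard(epoch);
    guard.Visit(start); // mark self so it is never counted
    std::vector<Entry*> work(start.parents.begin(), start.parents.end());
    int count = 0;
    while (!work.empty()) {
        Entry* e = work.back();
        work.pop_back();
        if (!guard.Visit(*e)) continue;
        ++count;
        work.insert(work.end(), e->parents.begin(), e->parents.end());
    }
    return count;
}
```

Compared with collecting visited entries in a set, this avoids allocations and hashing entirely, which is where the benchmark win comes from.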
Still to do:
More comparisons between this branch and master on historical data to compare validation speed (accepting loose transactions, processing RBF transactions, validating a block/postprocessing, updating the mempool for a reorg).
More historical data analysis to try to evaluate the likely impact of setting the cluster size limits to varying values (to motivate what values we should ultimately pick). [DONE, see this post]
Updating wallet code to be cluster-aware (including mini_miner and coin selection)
Rework many of our functional tests to be cluster-aware
Figure out what package validation and package RBF rules should be in this design
Rework the partially_downloaded_block fuzz target to not add duplicate transactions to the mempool (#29990).
Update RBF logic to ensure that replacements always strictly improve the mempool.
Figure out how we want to document our RBF policy (preserve historical references to BIP 125 or previous Bitcoin Core behaviors vs clean slate documentation?)
For discussion/feedback:
How significant is it to be dropping the CPFP carveout rule? Does that affect how we will ultimately want to stage new mempool deployment?
How well do the proposed RBF rules meet everyone’s use cases?
What design improvements can we make to the cluster tracking implementation?
The ZMQ callbacks that occur when a block is found will happen in a slightly different order, because we now will fully remove all transactions occurring in a block from the mempool before removing any conflicts. Is this a problem?
DrahtBot
commented at 6:59 pm on October 18, 2023:
contributor
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.
See the guideline for information on the review process.
A summary of reviews will appear here.
Conflicts
Reviewers, this pull request conflicts with the following ones:
#31240 (doc: Fix dead links to mailing list archives by edilmedeiros)
#30882 (wip: Split fuzz binary (take 2) by dergoegge)
#29954 (RPC: Return permitbaremultisig and maxdatacarriersize in getmempoolinfo by kristapsk)
#29641 (scripted-diff: Use LogInfo over LogPrintf [WIP, NOMERGE, DRAFT] by maflcko)
#28121 (include verbose “reject-details” field in testmempoolaccept response by pinheadmz)
If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
in doc/policy/mempool-replacements.md:22, commit b0f771bb55 (outdated)
23- currently-unconfirmed transaction.
24-
25- *Rationale*: When RBF was originally implemented, the mempool did not keep track of
26- ancestor feerates yet. This rule was suggested as a temporary restriction.
27-
28-3. The replacement transaction pays an absolute fee of at least the sum paid by the original
instagibbs
commented at 7:11 pm on October 18, 2023:
might be easier to just “strike out” this rule and mark it deprecated instead of renumbering, since the rule numbers have become engineer lingo
sdaftuar
commented at 4:52 pm on December 7, 2023:
I’m not sure the best way to document things, but I added a to-do item to the OP so that we don’t lose track of figuring out how we want the docs to look. (Will have to conform all the code comments as well.)
DrahtBot added the label
CI failed
on Oct 18, 2023
instagibbs
commented at 3:13 pm on October 19, 2023:
shouldn’t this be 0?
in src/policy/rbf.cpp:65, commit 3a42ba255a (outdated)
88- std::set<uint256> parents_of_conflicts;
89- for (const auto& mi : iters_conflicting) {
90- for (const CTxIn& txin : mi->GetTx().vin) {
91- parents_of_conflicts.insert(txin.prevout.hash);
92+ // Exit early if we're going to fail (see below)
93+ if (all_conflicts.size() > MAX_REPLACEMENT_CANDIDATES) {
instagibbs
commented at 3:40 pm on October 19, 2023:
Note: this is sticking with the same rule #5 instead of the number of affected clusters. It would be better if it were the number of clusters, to allow better usage of adversarial-ish batched CPFPs
sdaftuar
commented at 12:48 pm on October 25, 2023:
There is room to relax this rule some, so if this is important we can do so. I think the requirement is a bound on the number of clusters that would have to be re-sorted in order to accept the new transaction. We can approximate that as the number of clusters that would be non-empty as a result of removing all the conflicting transactions from the mempool, and only process replacements for which that is below some target.
That would be a more complex logic though, so before implementing it I wanted to have some sense of whether we need to. Has the historical 100-transaction-conflict limit been problematic for use cases in the past? Note also that in the new code, we are calculating the number of conflicts exactly (the old code used an approximation, which could be gamed by an adversary).
instagibbs
commented at 1:52 pm on October 25, 2023:
Ah! I wrote a huge response to this, then looked up our previous discussions, and realized I didn’t actually read the code: #27677 (comment)
IIUC now, this is only counting direct conflicts, and not the descendants that are booted.
I think that’s fine.
Actually no, the existing code comments were just misleading, looks like the issue still exists, see: #27677 (comment)
sdaftuar
commented at 1:59 pm on October 25, 2023:
Ah yes, in the new code I’m only counting direct conflicts right now, because every descendant of a direct conflict must be in the same cluster as that conflict. So this is already a relaxation of the existing rule.
instagibbs
commented at 2:01 pm on October 25, 2023:
I think the requirement is a bound on the number of clusters that would have to be re-sorted in order to accept the new transaction.
As an alternative, we drop the replacement limit to like, 10 or something, and then only count the direct conflicts, not the direct conflicts and all the descendants?
sdaftuar
commented at 5:00 pm on December 7, 2023:
I believe I (finally) actually fixed this behavior to count the number of direct conflicts.
sdaftuar
commented at 5:15 pm on September 5, 2024:
Will comment elsewhere as well, but I just updated this limit to be a limit on the number of distinct clusters that are conflicted.
This is still needed for the same reason as before: a transaction can be above minrelay but land in a chunk below minrelay. We could, for example, immediately evict below-minrelay chunks after re-linearization, which would allow 0-fee parents; then we could maybe relax this.
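The updated limit described above, a bound on the number of distinct clusters touched by the direct conflicts, can be sketched as follows (illustrative types and function name, not the PR's code; it relies on the property that every descendant of a direct conflict lives in the same cluster as that conflict):

```cpp
#include <cstddef>
#include <set>
#include <vector>

struct Cluster {};
struct TxEntry { Cluster* m_cluster; };

// Rule-5-style check: bound the number of distinct clusters that would
// need re-linearization, rather than the raw number of conflicting txs.
// Looking only at the direct conflicts' clusters suffices, since their
// descendants are by definition in the same clusters.
bool WithinReplacementClusterLimit(const std::vector<const TxEntry*>& direct_conflicts,
                                   size_t max_clusters)
{
    std::set<const Cluster*> clusters;
    for (const TxEntry* e : direct_conflicts) clusters.insert(e->m_cluster);
    return clusters.size() <= max_clusters;
}
```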
in src/bench/mempool_stress.cpp:107, commit c5d3c42da4 (outdated)
95+ }
96+ const auto testing_setup = MakeNoLogFileContext<const TestingSetup>(ChainType::MAIN);
97+ CTxMemPool& pool = *testing_setup.get()->m_node.mempool;
98+
99+ std::vector<CTransactionRef> transactions;
100+ // Create 1000 clusters of 100 transactions each
instagibbs
commented at 9:01 pm on October 19, 2023:
numbers in comments are off
shower thought: Should we/can we bound the number of clusters in addition to total memory in TrimToSize? I can’t think of a good way to do that that doesn’t complicate things quite a bit, and perhaps practical mempool sizes make this moot. Just something to consider in case I missed something obvious.
sdaftuar
commented at 7:24 pm on October 24, 2023:
The immediate downside to a cap on number of clusters is that singleton, high-feerate transactions would not be accepted. And I don’t think we need to – the only places where having more clusters makes us slower is in eviction and mining, and for both of those use cases we could improve performance (if we need to) by maintaining the relevant heap data structures (or something equivalent) as chunks are modified, rather than all at once.
For now in this branch I’ve created these from scratch each time, but if it turns out that performance is meaningfully impacted when the mempool is busy, then I can optimize this further by just using a bit more memory.
sdaftuar force-pushed
on Oct 19, 2023
glozow added the label
Mempool
on Oct 20, 2023
in src/txmempool.cpp:1481, commit 26e831d5f2 (outdated)
1477+ // TODO: branch on size of fee to do this as 32-bit calculation
1478+ // instead? etc
1479+ return a.first->fee*b.first->size > b.first->fee*a.first->size;
1480+ };
1481+
1482+ std::make_heap(heap_chunks.begin(), heap_chunks.end(), cmp);
instagibbs
commented at 7:39 pm on October 20, 2023:
You probably didn’t mean to call make_heap inside the loop, which does 3N work each time. FWIW I don’t see any performance difference between push_heap-ing all the elements on one at a time vs a single make_heap.
once this is changed, add+trimming seems to be faster than in master regardless of topology tested
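The fix being suggested is to build the heap once before the selection loop and then re-heapify only on pop. A simplified sketch with plain fee/size pairs (illustrative, not the PR's code):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct ChunkRef { int64_t fee; int64_t size; };

// "Less" by feerate, via cross-multiplication to avoid division:
// fee_a/size_a < fee_b/size_b  <=>  fee_a*size_b < fee_b*size_a.
bool LowerFeerate(const ChunkRef& a, const ChunkRef& b)
{
    return a.fee * b.size < b.fee * a.size;
}

// Select chunks in descending feerate order: make_heap once (O(n)), then
// each extraction is O(log n) via pop_heap, instead of re-running the
// O(n) make_heap on every loop iteration.
std::vector<ChunkRef> SelectByFeerate(std::vector<ChunkRef> heap)
{
    std::make_heap(heap.begin(), heap.end(), LowerFeerate); // once, not per iteration
    std::vector<ChunkRef> out;
    while (!heap.empty()) {
        std::pop_heap(heap.begin(), heap.end(), LowerFeerate);
        out.push_back(heap.back());
        heap.pop_back();
    }
    return out;
}
```

Pushing elements one at a time with push_heap is O(n log n) in the worst case versus O(n) for make_heap, but as noted above the difference is rarely measurable in practice.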
sdaftuar
commented at 5:03 pm on December 7, 2023:
Should be fixed now.
in src/node/miner.cpp:248, commit 26e831d5f2 (outdated)
381+ // TODO: branch on size of fee to do this as 32-bit calculation
382+ // instead? etc
383+ return a.first->fee*b.first->size < b.first->fee*a.first->size;
384+ };
385+ // TODO: replace the heap with a priority queue
386+ std::make_heap(heap_chunks.begin(), heap_chunks.end(), cmp);
instagibbs
commented at 7:46 pm on October 20, 2023:
don’t ask why, but I’m getting a significant performance improvement (>10%) just push_heap-ing everything from scratch, and similarly with priority_queue
DrahtBot added the label
Needs rebase
on Oct 30, 2023
in src/bench/mempool_stress.cpp:269, commit dd6684ab66 (outdated)
dd6684ab665bc8ae76b76fdd2e578fc77d562a52: While you’re touching this, can you rename MempoolCheck to MemPoolCheck, MempoolEviction to MemPoolEviction and ComplexMemPool to MemPoolComplex? That makes -filter=MemPool.* work
As a workaround, -filter=.*Mem.* does work.
Sjors
commented at 10:29 am on November 14, 2023:
member
It would be useful to add a mempool_backwards_compatibility.py test to illustrate how the new rules interact with older nodes. It could have two modern nodes and one v25 (or v26) node. Some of the tests you deleted in this branch could be moved there. E.g. the test could demonstrate how RBF rule 2 is not enforced when relaying to the new node, but it is when relaying to the v25 node.
Benchmarks on a 2019 MacBook Pro (2,3 GHz 8-Core Intel Core i9), plugged in:
573@@ -574,6 +574,8 @@ void SetupServerArgs(ArgsManager& argsman)
574 argsman.AddArg("-limitancestorsize=<n>", strprintf("Do not accept transactions whose size with all in-mempool ancestors exceeds <n> kilobytes (default: %u)", DEFAULT_ANCESTOR_SIZE_LIMIT_KVB), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
575 argsman.AddArg("-limitdescendantcount=<n>", strprintf("Do not accept transactions if any ancestor would have <n> or more in-mempool descendants (default: %u)", DEFAULT_DESCENDANT_LIMIT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
576 argsman.AddArg("-limitdescendantsize=<n>", strprintf("Do not accept transactions if any ancestor would have more than <n> kilobytes of in-mempool descendants (default: %u).", DEFAULT_DESCENDANT_SIZE_LIMIT_KVB), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
577+ argsman.AddArg("-limitclustercount=<n>", strprintf("Do not accept transactions connected to <n> or more existing in-mempool transactions (default: %u)", DEFAULT_CLUSTER_LIMIT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
I (naively) replaced ancestor and descendant limits in coin selection with the new cluster limit. At least the tests pass *.
When we drop these settings anyone who uses them will get an error when starting the node. That’s probably a good thing, since they should read about this change.
* = well, wallet_basic.py fails with:
Internal bug detected: Shared UTXOs among selection results
wallet/coinselection.h:340 (InsertInputs)
in src/txmempool.cpp:1857, commit 54f39ca8f1 (outdated)
1549+
1550+void Cluster::RemoveTransaction(const CTxMemPoolEntry& entry)
1551+{
1552+ m_chunks[entry.m_loc.first].txs.erase(entry.m_loc.second);
1553+
1554+ // Chunk (or cluster) may now be empty, but this will get cleaned up
54f39ca8f101483f5f82707689ca49431d4091e5: what if the deleted transaction makes it so there are now two clusters? This is also safe to ignore?
sdaftuar
commented at 1:33 am on December 10, 2023:
I wouldn’t say it’s safe to ignore, but the idea is that we often want to be able to batch deletions, and then clean things up in one pass. So the call sites should all be dealing with this issue and ensuring that we always clean up at some point.
(This is definitely an area where I expect that we’ll be re-engineering all this logic and trying to come up with a better abstraction layer so that this is more robust and easier to think about!)
54f39ca8f101483f5f82707689ca49431d4091e5: So you’re creating a chunk for each new transaction and then erasing it if the fee rate goes down. Why not the other way around?
in src/txmempool.cpp:1877, commit 54f39ca8f1 (outdated)
1569+ m_chunks.emplace_back(txentry.get().GetModifiedFee(), txentry.get().GetTxSize());
1570+ m_chunks.back().txs.emplace_back(txentry);
1571+ while (m_chunks.size() >= 2) {
1572+ auto cur_iter = std::prev(m_chunks.end());
1573+ auto prev_iter = std::prev(cur_iter);
1574+ double feerate_prev = prev_iter->fee*cur_iter->size;
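To Sjors's question above: the new transaction starts out as its own trailing chunk, and the loop then merges it backwards into the previous chunk for as long as the combined feerate would rise. A self-contained sketch of that invariant (illustrative names; integer cross-multiplication in place of the doubles in the snippet):

```cpp
#include <cstdint>
#include <vector>

struct Chunk { int64_t fee; int64_t size; };

// Append a (fee, size) entry as its own chunk, then merge it into the
// previous chunk as long as doing so would raise the previous chunk's
// feerate. On equal feerates, keep the chunks separate: smaller chunks
// are generally better for both mining and eviction.
void AppendAndMerge(std::vector<Chunk>& chunks, int64_t fee, int64_t size)
{
    chunks.push_back({fee, size});
    while (chunks.size() >= 2) {
        Chunk& cur = chunks[chunks.size() - 1];
        Chunk& prev = chunks[chunks.size() - 2];
        // cur.fee/cur.size > prev.fee/prev.size
        //   <=> cur.fee*prev.size > prev.fee*cur.size (sizes positive)
        if (cur.fee * prev.size <= prev.fee * cur.size) break;
        prev.fee += cur.fee;
        prev.size += cur.size;
        chunks.pop_back();
    }
}
```

Starting from an empty vector and appending entries in topological order, this leaves the chunk list in non-increasing feerate order, which is the property the mining and eviction code relies on.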
Sjors
commented at 2:27 pm on November 14, 2023:
member
Couple more comments / questions. To be continued…
in src/txmempool.h:427, commit 065e18fff3 (outdated)
633@@ -632,6 +634,22 @@ class CTxMemPool
634 */
635 void UpdateTransactionsFromBlock(const std::vector<uint256>& vHashesToUpdate) EXCLUSIVE_LOCKS_REQUIRED(cs, cs_main) LOCKS_EXCLUDED(m_epoch);
636
637+ /**
638+ * Calculate whether cluster size limits would be exceeded if a new tx were
639+ * added to the mempool (assuming no conflicts).
065e18fff30a91c94c47d5c2fc65b13ddc38aa47: do you plan to relax this assumption so that a transaction in (a) full cluster(s) can be replaced?
Update: AcceptSingleTransaction skips ClusterSizeChecks if there’s a replacement, in which case ReplacementChecks checks the new cluster size. So this is not an issue.
0b284b5fd29c06d46c1ec60ea7e1bcd07f36feb1: can you order chunks by mining score?
sdaftuar
commented at 4:55 pm on December 7, 2023:
Chunks for a given cluster are already sorted in descending feerate (ie mining score) order, is that what you’re asking about or is there another issue I’m overlooking?
It would be good to explain the rationale for the miner score somewhere.
Why a * b and b * a?
And why is the feerate not a_fee / a_size? (inverse score?)
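On the a * b / b * a question: since sizes are positive, fee_a/size_a < fee_b/size_b exactly when fee_a*size_b < fee_b*size_a, so feerates can be compared in integers with no division and no floating-point rounding. A minimal illustration (hypothetical helper name):

```cpp
#include <cstdint>

// True if (fee_a/size_a) < (fee_b/size_b), computed without division.
// Requires positive sizes, and products that fit in int64_t.
bool FeerateLower(int64_t fee_a, int64_t size_a, int64_t fee_b, int64_t size_b)
{
    return fee_a * size_b < fee_b * size_a;
}
```

For example, 1/3 sat/vB vs 334/1000 sat/vB compare correctly (1000 < 1002), whereas naive integer division would truncate both sides.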
sipa
commented at 1:53 pm on November 20, 2023:
member
@Sjors I believe most if not all of this PR will be rewritten, and split up into several components. The goal here is just to give an idea of the high-level interactions with other changes (wallet behavior, package validation/relay/RBF, …). I don’t think a detailed line-by-line code review at this stage is a good use of your time.
0b284b5fd29c06d46c1ec60ea7e1bcd07f36feb1 : Could this use CompareMiningScore or does it have a different goal?
in doc/policy/mempool-replacements.md:49, commit bf467f8286 (outdated)
45@@ -53,12 +46,10 @@ other consensus and policy rules, each of the following conditions are met:
46 significant portions of the node's mempool using replacements with multiple directly conflicting
47 transactions, each with large descendant sets.
48
49-6. The replacement transaction's feerate is greater than the feerates of all directly conflicting
50+5. The replacement transaction's mining score is greater than the mining score of all directly conflicting
bf467f8286425b692a1736ab6d417d0ba6074658 Maybe make this rule 7. That seems a bit more clear than saying “previously this rule referred to fee rate”
in doc/policy/mempool-replacements.md:20, commit bf467f8286 (outdated)
27-
28-3. The replacement transaction pays an absolute fee of at least the sum paid by the original
29+2. The replacement transaction pays an absolute fee of at least the sum paid by the original
30 transactions.
31
32 *Rationale*: Only requiring the replacement transaction to have a higher feerate could allow an
bf467f8286425b692a1736ab6d417d0ba6074658: “a higher feerate or (using clusters) mining score”
Is the “Additionally” reasoning still valid?
in doc/policy/mempool-replacements.md:46, commit bf467f8286 (outdated)
54- preferable for block-inclusion, compared to what would be removed from the mempool. This rule
55- predates ancestor feerate-based transaction selection.
56+ *Rationale*: Ensure that the new transaction is more appealing to mine than those being evicted.
57
58 This set of rules is similar but distinct from BIP125.
59
bf467f8286425b692a1736ab6d417d0ba6074658: history can be expanded:
* Cluster mempool introduced, dropping rule 2 and introducing rule 7.
  As of **v27.0** ([PR 28676](https://github.com/bitcoin/bitcoin/pull/28676)).
in src/policy/rbf.cpp:116, commit bf467f8286 (outdated)
116+ // directly replaced, because descendant transactions which pay for the
117+ // parent will be reflected in the parent's chunk feerate.
118+ Cluster::Chunk &chunk = mi->m_cluster->m_chunks[mi->m_loc.first];
119+ CFeeRate original_feerate(chunk.fee, chunk.size);
120 if (replacement_feerate <= original_feerate) {
121 return strprintf("rejecting replacement %s; new feerate %s <= old feerate %s",
bf467f8286425b692a1736ab6d417d0ba6074658: ; new chunk feerate %s <= old chunk feerate
Sjors
commented at 3:00 pm on November 20, 2023:
member
I was wondering if we can drop RBF rule 5 (in a followup), but I’m guessing not really. My initial thinking was the cluster limit could be used instead. But CalculateMiningScoreOfReplacementTx only checks that the new cluster doesn’t get too big.
But does the new cluster system make it less of a burden to have large numbers of transactions appear and disappear from the mempool?
—
I’m also trying to understand if dropping the CPFP carveout is fine. The original scenario described on the mailing list:
TX_important
 * output Alice <- child_a_1 <- child_a_2 <- … <- child_a_25
   (^ intentionally low fees so it doesn't confirm before timeout)
 * output Bob: child_b_1
   ^ high fee
The carveout rule allows Bob’s child_b_1, despite Alice having used up the 25-descendant limit for TX_important. And Alice can’t use the carveout to just add child_a_26, because child_a_26 has more than one unconfirmed ancestor.
So what happens in the cluster world?
For simplicity, let’s set -limitclustercount to 26 (the default is 100 in this PR). Without the CPFP carveout, can Bob still insert child_b_1?
My understanding is that he can’t, because ClusterSizeChecks will fail.
Instead of failing, can we check if it’s possible to evict the lowest-value chunk? Though it introduces an implicit RBF-like mechanism… If it were possible, then child_a_25 would be evicted from the cluster as long as it has a lower feerate than child_b_1.
Alice might make sure that the fee rates of child_a_1 … child_a_25 increase such that there’s only one chunk. That just requires Bob to use a higher fee rate.
Can you clarify what txs, orig_txs and cluster are for, and what the general strategy is here?
Sjors
commented at 5:16 pm on November 20, 2023:
member
I don’t think a detailed line-by-line code review at this stage is a good use of your time.
That’s not what I’m trying to do. I strategically / haphazardly picked a dozen lines to ask questions about to get a better understanding of the whole thing. Based on IRC chat today, I’ll wait now for the upcoming update.
DrahtBot removed the label
Needs rebase
on Dec 7, 2023
sdaftuar force-pushed
on Dec 8, 2023
sdaftuar force-pushed
on Dec 10, 2023
sdaftuar force-pushed
on Dec 10, 2023
sdaftuar force-pushed
on Dec 10, 2023
DrahtBot removed the label
CI failed
on Dec 10, 2023
DrahtBot added the label
Needs rebase
on Dec 18, 2023
achow101 referenced this in commit
7143d43884
on Feb 10, 2024
ariard
commented at 3:45 am on February 19, 2024:
contributor
Do you have a simulation environment or script, like the one you made for #25717, to observe the computational-complexity difference compared to today’s mempool under various chain-of-transactions payloads?
It would be interesting to know the lowest-performance Linux host assumed for decentralization of the tx-relay network.
Assume always-on 24/7 internet and a vCPU instance, not bare metal.
Sjors
commented at 3:05 pm on February 19, 2024:
member
In the context of Stratum v2 #29432 I’m looking for a more efficient way to decide when to generate a new block template.
The current implementation looks at the mempool’s GetTransactionsUpdated() every -sv2interval seconds. If there was any update, it calls CreateNewBlock() on BlockAssembler. It then checks if fees increased by at least -sv2feedelta since the last template and if so, pushes it to connected clients.
GetTransactionsUpdated() changes all the time, especially with a big mempool, so not that useful of a filter. I’d like something like GetFeeEstimateAtTop(size vbytes=4000000). Would it be easy to keep track of that (approximate) number every time something is added or removed from the mempool?
Ideally I’d like to change the meaning of -sv2interval from “check every n seconds and push update if needed” to “push better templates immediately, but wait least n seconds between them”.
sipa
commented at 3:10 pm on February 19, 2024:
member
Would it be easy to keep track of that (approximate) number every time something is added or removed from the mempool?
After clustermempool, yes. Today doing that inherently requires running the mining algorithm to figure out what transactions to put there.
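To make sipa's point concrete: once every cluster keeps its chunks sorted by feerate, "fee at the top N vbytes" becomes a greedy pass over already-sorted chunk feerates rather than a full run of the mining algorithm. A rough sketch under that assumption (GetFeeEstimateAtTop is Sjors's hypothetical name; here the chunks are assumed pre-collected from all clusters):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Chunk { int64_t fee; int64_t size; };

// Total fee of the best chunks fitting in `max_size` vbytes, greedily by
// feerate. Because chunk feerates within a cluster are non-increasing,
// taking chunks globally in feerate order respects per-cluster ordering.
// A real implementation could maintain this incrementally as chunks change.
int64_t FeeAtTop(std::vector<Chunk> chunks, int64_t max_size)
{
    std::sort(chunks.begin(), chunks.end(), [](const Chunk& a, const Chunk& b) {
        return a.fee * b.size > b.fee * a.size; // higher feerate first
    });
    int64_t fee = 0;
    for (const Chunk& c : chunks) {
        if (c.size > max_size) break; // simplification: stop at first non-fit
        max_size -= c.size;
        fee += c.fee;
    }
    return fee;
}
```

This is only a sketch of why the query becomes cheap; it ignores the block-packing boundary effects a real template builder has to handle.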
Sjors
commented at 3:12 pm on February 19, 2024:
member
Indeed, I meant on top of this PR (or a later incarnation).
ariard
commented at 5:56 pm on February 22, 2024:
contributor
It would be interesting to know the lowest-performance Linux host assumed for decentralization of the tx-relay network.
Assume always-on 24/7 internet and a vCPU instance, not bare metal.
Running this branch on mainnet on a 2-vCPU instance, with the following performance characteristics:
It would be very interesting to feed a cluster-supporting test node all kinds of chains of transactions, and check for any substantial performance difference compared to a non-cluster test node, i.e. confirm there is no substantial performance regression.
sdaftuar force-pushed
on Mar 7, 2024
sdaftuar force-pushed
on Mar 7, 2024
DrahtBot removed the label
Needs rebase
on Mar 7, 2024
in src/txmempool.h:839, commit 96df5acec3 (outdated)
833@@ -764,6 +834,9 @@ class CTxMemPool
834 */
835 void UpdateForDescendants(txiter updateIt, cacheMap& cachedDescendants,
836 const std::set<uint256>& setExclude, std::set<uint256>& descendants_to_remove) EXCLUSIVE_LOCKS_REQUIRED(cs);
837+ // During reorg we add transactions back to mempool, must reconnect
838+ // clusters with in-mempool descendants.
839+ void UpdateClusterForDescendants(txiter updateIt) EXCLUSIVE_LOCKS_REQUIRED(cs);
I think the introduction of Cluster alters the worst-case computational cost during reorg, when we reconnect the disconnectpool to the current mempool. Before, we were strictly bounded by DEFAULT_ANCESTOR_LIMIT / DEFAULT_DESCENDANT_LIMIT. Now we have DEFAULT_CLUSTER_LIMIT, where the worst-case graph-traversal algorithm might have to visit more elements than the previous max of 25.
in src/txmempool.cpp:122, commit 96df5acec3 (outdated)
117+ }
118+ if (clusters_to_merge.size() > 1) {
119+ // Merge the other clusters into this one, but keep this cluster as
120+ // first so that it's topologically sound.
121+ clusters_to_merge[0]->Merge(clusters_to_merge.begin()+1, clusters_to_merge.end(), true);
122+ // TODO: limit the size of the cluster, in case it got too big.
I think this introduces a transaction-censorship vector for time-sensitive second layers. If you can re-org one block and construct a cluster such that a target descendant is not reconnected with other clusters, the non-cluster-connected target descendant might have a stale feerate. This descendant could be pinned at the bottom of the local mempool.
in src/txmempool.cpp:559, commit 96df5acec3 (outdated)
554+ // Only one parent cluster: add to it.
555+ clusters_to_merge[0]->AddTransaction(*newit, true);
556+ cachedInnerUsage += clusters_to_merge[0]->GetMemoryUsage();
557+ } else {
558+ cachedInnerUsage -= clusters_to_merge[0]->GetMemoryUsage();
559+ clusters_to_merge[0]->Merge(clusters_to_merge.begin()+1, clusters_to_merge.end(), false);
I think a counter should be introduced here to avoid iterating over more than DEFAULT_CLUSTER_LIMIT entries in this work_queue. It’s in RemoveStaged, itself called from Finalize; I think you might temporarily have unbounded numbers of conflicting descendant transactions.
in src/txmempool.cpp:1545, commit 96df5acec3 (outdated)
1540+void Cluster::RemoveTransaction(const CTxMemPoolEntry& entry)
1541+{
1542+ m_chunks[entry.m_loc.first].txs.erase(entry.m_loc.second);
1543+
1544+ // Chunk (or cluster) may now be empty, but this will get cleaned up
1545+ // when the cluster is re-sorted (or when the cluster is deleted) Note:
I think you will have computational threshold effects with whatever limits you pick for max cluster size and chunk-ordering parameters. Namely: what is the minimal chunk modification that can provoke the maximum re-ordering of all in-cluster chunks?
in src/txmempool.cpp:1569, commit 96df5acec3 (outdated)
1564+ double feerate_prev = prev_iter->fee*cur_iter->size;
1565+ double feerate_cur = cur_iter->fee*prev_iter->size;
1566+ // We only combine chunks if the feerate would go up; if two
1567+ // chunks have equal feerate, we prefer to keep the smaller
1568+ // chunksize (which is generally better for both mining and
1569+ // eviction).
I think this is a broken assumption. You’re using virtual bytes (GetTxSize()); however, of two clusters with equal vbyte feerate, the one with smaller total weight units has a higher feerate w.r.t. the consensus limit (MAX_BLOCK_WEIGHT) and should be the one selected for a mining block template.
in src/txmempool.cpp:1605, commit 96df5acec3 (outdated)
1600+ txs.push_back(chunk_tx.get());
1601+ }
1602+ }
1603+ // Sorting by ancestor count is equivalent to topological sort.
1604+ std::sort(txs.begin(), txs.end(), [](const CTxMemPoolEntry::CTxMemPoolEntryRef& a, const CTxMemPoolEntry::CTxMemPoolEntryRef& b) {
1605+ return a.get().GetCountWithAncestors() < b.get().GetCountWithAncestors();
Note that the topological sort is not uniquely defined. At the very least, a and b can both be the first child of one ancestor while also being the 2nd and 3rd child of a common ancestor; a strict inequality here cannot be said to define a unique order.
in src/rpc/mempool.cpp:262, commit ce41413592 (outdated)
246@@ -247,34 +247,81 @@ static RPCHelpMan testmempoolaccept()
247 };
248 }
249250+static std::vector<RPCResult> ClusterDescription()
251+{
252+ return {
253+ RPCResult{RPCResult::Type::NUM, "vsize", "virtual transaction size as defined in BIP 141. This is different from actual serialized size for witness transactions as witness data is discounted."},
254+ RPCResult{RPCResult::Type::NUM, "txcount", "number of transactions (including this one)"},
255+ RPCResult{RPCResult::Type::NUM, "clusterid", "id of the cluster containing this tx"},
About the clusterid (m_next_cluster_id): I don’t think it’s a good idea to use it as a tie-breaker in CompareMiningScore, as it’s just a monotonic counter and carries no information relevant to a mining score. You could at least use weight, or the reception timestamp (the oldest tx-relay reception is better propagated on the network and already in every miner’s mempool).
in src/rpc/mempool.cpp:426, commit ce41413592 (outdated)
419@@ -370,6 +420,76 @@ UniValue MempoolToJSON(const CTxMemPool& pool, bool verbose, bool include_mempoo
420 }
421 }
422
423+static RPCHelpMan getmempoolfeesize()
424+{
425+ return RPCHelpMan{"getmempoolfeesize",
426+ "Returns fee/size data for the whole mempool.",
11+/** Data structure storing a fee and size, ordered by increasing fee/size.
12+ *
13+ * The size of a FeeFrac cannot be zero unless the fee is also zero.
14+ *
15+ * FeeFracs have a total ordering, first by increasing feerate (ratio of fee over size), and then
16+ * by decreasing size. The empty FeeFrac (fee and size both 0) sorts last. So for example, the
This is not perfect, as you could use weight units, and you even have to consider what the minimal satisfying witness is under the given consensus rules for a transaction, especially when there are multiple candidate witnesses for a spend of a given witnessScript. Such transactions already exist, e.g. second-stage LN transactions (either the revocation path or the preimage/timeout path).
in src/util/feefrac.h:36, commit bb1bc54a7c (outdated)
31+ * The >> and << operators only compare feerate and treat equal feerate but different size as
32+ * equivalent. The empty FeeFrac is neither lower or higher in feerate than any other.
33+ *
34+ * These comparisons are only guaranteed to be correct when the product of the highest fee and
35+ * highest size does not exceed 2^64-1. If the fee is a number in sats, and size in bytes, then
36+ * this allows up to 46116.86 BTC at size 4M, and 1844674.4 BTC at size 100k).
I think this should be MAX_MONEY: ReplacementChecks() in PreChecks() runs before PolicyScriptChecks() and ConsensusScriptChecks(), so an adversary does not have to own the solution to the scriptPubKey to fake 46116.86 or 1,844,674.4 BTC of value in your local mempool, exceed 2^64-1, and mess up linearization.
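For reference, the 46116.86 / 1,844,674.4 BTC figures quoted in the feefrac.h comment follow directly from the 2^64-1 product bound (fee in sats, 1e8 sats per BTC); a quick arithmetic check:

```cpp
#include <cstdint>

// Largest fee (in sats) whose product with the given size still fits in
// an unsigned 64-bit value, per the comment's stated bound.
uint64_t MaxFeeSats(uint64_t max_size)
{
    return UINT64_MAX / max_size;
}
```

At size 4,000,000 this gives about 4.61e12 sats (~46116.86 BTC); at size 100,000, about 1.84e14 sats (~1,844,674.4 BTC), matching the comment.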
ariard
commented at 3:55 am on March 8, 2024:
contributor
I’ll spend time reviewing the current CTxMemPool code paths w.r.t. DoS resistance before pursuing review further. It is very unclear that the introduction of TxGraph improves on this front, as the design focuses on mining incentive-compatibility improvements first.
ariard
commented at 4:01 am on March 8, 2024:
contributor
Running this branch on mainnet on a 2 vcpu instance, with the following results:
Got a disk space issue at block 501691 while syncing from genesis.
2024-02-23T10:53:41Z UpdateTip: new best=000000000000000000753b6b9821750d271e1730d7403a0658c507c88092bdf0 height=501691 version=0x20000000 log2_work=87.760876 tx=287312998 date='2017-12-30T07:48:59Z' progress=0.295839 cache=16.3MiB(151170txo)
2024-02-23T10:53:41Z New outbound-full-relay v1 peer connected: version: 70016, blocks=831673, peer=196
2024-02-23T10:53:47Z *** Disk space is too low!
2024-02-23T10:53:47Z Error: Disk space is too low!
Was configured as a prune node with default pruning settings.
2024-02-22T17:49:33Z Prune configured to target 550 MiB on disk for block and undo files.
DrahtBot added the label
Needs rebase
on Mar 9, 2024
DrahtBot added the label
CI failed
on Apr 3, 2024
DrahtBot
commented at 7:04 pm on April 3, 2024:
contributor
🚧 At least one of the CI tasks failed. Make sure to run all tests locally, according to the
documentation.
Possibly this is due to a silent merge conflict (the changes in this pull request being
incompatible with the current code in the target branch). If so, make sure to rebase on the latest
commit of the target branch.
Leave a comment here, if you need help tracking down a confusing failure.
DrahtBot removed the label
Needs rebase
on Apr 26, 2024
sdaftuar force-pushed
on Apr 26, 2024
sdaftuar force-pushed
on Apr 27, 2024
Dianagram009 approved
sdaftuar force-pushed
on Apr 28, 2024
DrahtBot removed the label
CI failed
on Apr 28, 2024
DrahtBot added the label
Needs rebase
on Apr 30, 2024
in
src/kernel/txgraph.cpp:522
in
c49e0444e5 (outdated)
502+ cluster->Clear();
503+
504+ // The first transaction gets to stay in the existing cluster.
505+ bool first = true;
506+ for (auto& txentry : txs) {
507+ if (txentry.get().m_cluster == nullptr) {
Note also that I expect this implementation of txgraph to be rewritten entirely, so the most relevant thing to review right now relating to this code is the header file, as the interface is the most important part (and the rest of the PR and the mempool code builds on top of that interface).
in
src/rpc/mempool.cpp:659
in
c49e0444e5 (outdated)
644@@ -547,6 +645,40 @@ static RPCHelpMan getmempooldescendants()
645 };
646 }
647
648+static RPCHelpMan getmempoolcluster()
649+{
650+ return RPCHelpMan{"getmempoolcluster",
651+ "\nReturns mempool data for given cluster\n",
652+ {
653+ {"id", RPCArg::Type::NUM, RPCArg::Optional::NO, "The cluster id (must be in mempool)"},
Sjors suggested here #28676 (review) that I add a way to do it by transaction hash, which would be easy. Let me know if you have other suggestions…
sdaftuar force-pushed
on May 8, 2024
sdaftuar force-pushed
on Jun 5, 2024
DrahtBot removed the label
Needs rebase
on Jun 5, 2024
sdaftuar force-pushed
on Jun 5, 2024
DrahtBot added the label
CI failed
on Jun 5, 2024
sdaftuar force-pushed
on Jun 5, 2024
DrahtBot removed the label
CI failed
on Jun 5, 2024
sdaftuar force-pushed
on Jun 5, 2024
DrahtBot added the label
Needs rebase
on Jun 7, 2024
sdaftuar force-pushed
on Jun 10, 2024
DrahtBot removed the label
Needs rebase
on Jun 10, 2024
DrahtBot added the label
CI failed
on Jun 10, 2024
DrahtBot added the label
Needs rebase
on Jun 11, 2024
sdaftuar force-pushed
on Jun 12, 2024
sdaftuar force-pushed
on Jun 12, 2024
sdaftuar force-pushed
on Jun 14, 2024
DrahtBot removed the label
Needs rebase
on Jun 14, 2024
DrahtBot removed the label
CI failed
on Jun 14, 2024
DrahtBot added the label
Needs rebase
on Jun 17, 2024
sdaftuar force-pushed
on Jun 18, 2024
DrahtBot removed the label
Needs rebase
on Jun 18, 2024
DrahtBot added the label
CI failed
on Jun 18, 2024
DrahtBot added the label
Needs rebase
on Jun 19, 2024
sdaftuar force-pushed
on Jun 26, 2024
DrahtBot removed the label
Needs rebase
on Jun 26, 2024
sdaftuar force-pushed
on Jun 28, 2024
DrahtBot removed the label
CI failed
on Jun 28, 2024
sdaftuar force-pushed
on Jul 1, 2024
DrahtBot added the label
Needs rebase
on Jul 3, 2024
sdaftuar force-pushed
on Jul 11, 2024
DrahtBot
commented at 3:38 am on July 11, 2024:
contributor
🚧 At least one of the CI tasks failed. Make sure to run all tests locally, according to the
documentation.
Possibly this is due to a silent merge conflict (the changes in this pull request being
incompatible with the current code in the target branch). If so, make sure to rebase on the latest
commit of the target branch.
Leave a comment here, if you need help tracking down a confusing failure.
Fixed this to use result.second, which indicates whether the linearization is optimal.
Also added a wip commit to track iterations done, so that the benchmark logging is more useful for now.
instagibbs
commented at 3:24 pm on July 23, 2024:
member
(after fixing the reported iterations done) Seeing a lot of linearizations being done that are kind of odd-looking repeats with the same number of iterations?
DrahtBot removed the label
Needs rebase
on Jul 26, 2024
sdaftuar force-pushed
on Jul 26, 2024
DrahtBot removed the label
CI failed
on Jul 27, 2024
sdaftuar
commented at 6:14 pm on July 29, 2024:
member
(after fixing the reported iterations done) Seeing a lot of linearizations being done that are kind of odd-looking repeats with the same number of iterations?
I observed some similar behavior on my node as well. Without knowing exactly what was happening here: in the current draft of the PR, the number of iterations in an optimal sort is related to the number of topologically valid orderings of the cluster (since we look at all topologically valid subsets each time we try to find the best candidate), so if there are clusters with similar topologies, you might expect to see the same number of iterations needed.
Additionally, in the current draft, successful RBF candidates cause us to relinearize things twice: once to construct the feerate diagram, and once again when the new transaction is added to the mempool. The intention is to eliminate the linearizations upon acceptance to the mempool from taking place if they were already done as part of the RBF calculation (the design of the TxGraph was done with this in mind), but that is not yet implemented. (In the example you pasted, I noticed there are 4 linearizations happening at the same millisecond, so I wonder if RBF may also be involved.)
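To illustrate the first point, a brute-force count of the topologically valid subsets of a small cluster (a hypothetical sketch, not the candidate search the PR actually performs) shows why clusters with the same shape require the same number of iterations in an exhaustive search:

```python
from itertools import combinations

def valid_subsets(txs, parents):
    """Count topologically valid subsets of a cluster: subsets that contain
    each member's in-cluster parents. Brute force, toy scale only.
    `parents` maps tx -> set of its in-cluster parents."""
    count = 0
    for r in range(len(txs) + 1):
        for subset in combinations(txs, r):
            s = set(subset)
            if all(parents.get(tx, set()) <= s for tx in s):
                count += 1
    return count

# Two clusters with the same topology (here, 3-transaction chains) have the
# same number of valid subsets, so a search that inspects all topologically
# valid subsets does identical work on each -- matching iteration counts.
chain_a = valid_subsets(["a", "b", "c"], {"b": {"a"}, "c": {"b"}})
chain_b = valid_subsets(["x", "y", "z"], {"y": {"x"}, "z": {"y"}})
```

For a 3-chain only the ancestor-closed prefixes qualify, while 3 independent transactions admit every subset, so shape (not just size) drives the work.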
glozow referenced this in commit
bba01ba18d
on Aug 5, 2024
DrahtBot added the label
Needs rebase
on Aug 5, 2024
sdaftuar force-pushed
on Aug 29, 2024
DrahtBot removed the label
Needs rebase
on Aug 29, 2024
sdaftuar force-pushed
on Aug 29, 2024
DrahtBot added the label
CI failed
on Aug 30, 2024
DrahtBot added the label
Needs rebase
on Sep 2, 2024
sdaftuar force-pushed
on Sep 3, 2024
DrahtBot removed the label
Needs rebase
on Sep 3, 2024
in
test/functional/mining_prioritisetransaction.py:119
in
76b4f7ded7 (outdated)
sdaftuar
commented at 5:09 pm on September 5, 2024:
Taken, thanks.
in
test/functional/mempool_persist.py:102
in
76b4f7ded7 (outdated)
96@@ -97,6 +97,9 @@ def run_test(self):
97 assert_equal(total_fee_old, sum(v['fees']['base'] for k, v in self.nodes[0].getrawmempool(verbose=True).items()))
98 99 last_entry = self.nodes[0].getmempoolentry(txid=last_txid)
100+ del last_entry["fees"]["chunk"]
101+ del last_entry["clusterid"]
102+ del last_entry["chunksize"]
sdaftuar
commented at 5:17 pm on September 5, 2024:
Taken
in
test/functional/mempool_sigoplimit.py:175
in
76b4f7ded7 (outdated)
173 # When we actually try to submit, the parent makes it into the mempool, but the child would exceed ancestor vsize limits
174 res = self.nodes[0].submitpackage([tx_parent.serialize().hex(), tx_child.serialize().hex()])
175- assert "too-long-mempool-chain" in res["tx-results"][tx_child.getwtxid()]["error"]
176+ assert "too-large-cluster" in res["tx-results"][tx_child.getwtxid()]["error"]
177+ # When we actually try to submit, the parent makes it into the mempool, but the child would exceed cluster vsize limits
178+ #assert_raises_rpc_error(-26, "too-large-cluster", self.nodes[0].submitpackage, [tx_parent.serialize().hex(), tx_child.serialize().hex()])
sdaftuar
commented at 5:18 pm on September 5, 2024:
Removed
in
test/functional/mempool_truc.py:548
in
76b4f7ded7 (outdated)
544@@ -557,10 +545,11 @@ def test_truc_sibling_eviction(self):
545 # Override maxfeerate - it costs a lot to replace these 100 transactions.
546 assert node.testmempoolaccept([tx_v3_replacement_only["hex"]], maxfeerate=0)[0]["allowed"]
547 # Adding another one exceeds the limit.
548- utxos_for_conflict.append(tx_v3_parent["new_utxos"][1])
549- tx_v3_child_2_rule5 = self.wallet.create_self_transfer_multi(utxos_to_spend=utxos_for_conflict, fee_per_output=4000000, version=3)
550- rule5_str = f"too many potential replacements (including sibling eviction), rejecting replacement {tx_v3_child_2_rule5['txid']}; too many potential replacements (101 > 100)"
551- assert_raises_rpc_error(-26, rule5_str, node.sendrawtransaction, tx_v3_child_2_rule5["hex"])
552+ # TODO: rewrite this test given the new RBF rules
d0576c134a5
I don’t think it’s possible to adapt this test so that the tx has 100 direct conflicts, given that it needs to be within 1000vB (the only transactions allowed to have sibling eviction are TRUC children). Having 100 inputs makes it at least 3000-something vB. This would be the case for any new rule that requires 100 inputs.
So this test could just be deleted until we allow sibling eviction for transactions that don’t have this size restriction.
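The sizing argument can be sanity-checked with quick arithmetic (the per-input size here is a rough assumption, ignoring witness data, outputs, and overhead):

```python
# A TRUC (v3) child transaction is capped at 1000 vB by policy.
TRUC_CHILD_MAX_VSIZE = 1000

# A bare input is ~41 vB: 36-byte outpoint + 1-byte script length +
# 4-byte sequence, before any witness data. Real inputs are at least this.
MIN_INPUT_VSIZE = 41
inputs_needed = 100  # one input per direct conflict

# Lower bound on the replacement's vsize from inputs alone.
min_tx_vsize = inputs_needed * MIN_INPUT_VSIZE
```

Even this lower bound is several times the 1000 vB cap, so a TRUC child can never carry 100 direct conflicts.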
sdaftuar
commented at 5:19 pm on September 5, 2024:
Deleted.
in
test/functional/feature_rbf.py:69
in
76b4f7ded7 (outdated)
71- self.test_too_many_replacements()
72-
73- self.log.info("Running test too many replacements using default mempool params...")
74- self.test_too_many_replacements_with_default_mempool_params()
75+ #self.log.info("Running test too many replacements using default mempool params...")
76+ #self.test_too_many_replacements_with_default_mempool_params()
d0576c134a5
I think we only need to adapt 1 of these 2 tests and it should be quite easy to do. Currently, test_too_many_replacements requires a high descendant limit because it’s 1 parent with 100 children, but we can just add a self.generate(node, 1) in between and it can use node 1 (the default one). Then, we can just delete test_too_many_replacements_with_default_mempool_params
```diff
diff --git a/test/functional/feature_rbf.py b/test/functional/feature_rbf.py
index f1de04008c4..57817380c50 100755
--- a/test/functional/feature_rbf.py
+++ b/test/functional/feature_rbf.py
@@ -61,12 +61,8 @@ class ReplaceByFeeTest(BitcoinTestFramework):
         self.log.info("Running test spends of conflicting outputs...")
         self.test_spends_of_conflicting_outputs()
 
-        # TODO: rework too many replacements test to use direct conflicts only
-        #self.log.info("Running test too many replacements...")
-        #self.test_too_many_replacements()
-
-        #self.log.info("Running test too many replacements using default mempool params...")
-        #self.test_too_many_replacements_with_default_mempool_params()
+        self.log.info("Running test too many replacements...")
+        self.test_too_many_replacements()
 
         self.log.info("Running test opt-in...")
         self.test_opt_in()
@@ -331,17 +327,20 @@ class ReplaceByFeeTest(BitcoinTestFramework):
         split_value = int((initial_nValue - fee) / (MAX_REPLACEMENT_LIMIT + 1))
 
         splitting_tx_utxos = self.wallet.send_self_transfer_multi(
-            from_node=self.nodes[0],
+            from_node=self.nodes[1],
             utxos_to_spend=[utxo],
             sequence=0,
             num_outputs=MAX_REPLACEMENT_LIMIT + 1,
             amount_per_output=split_value,
         )["new_utxos"]
 
+        # Let this tx confirm so that its children are not connected
+        self.generate(self.nodes[1], 1)
+
         # Now spend each of those outputs individually
         for utxo in splitting_tx_utxos:
             self.wallet.send_self_transfer(
-                from_node=self.nodes[0],
+                from_node=self.nodes[1],
                 utxo_to_spend=utxo,
                 sequence=0,
                 fee=Decimal(fee) / COIN,
@@ -359,98 +358,13 @@ class ReplaceByFeeTest(BitcoinTestFramework):
         double_tx_hex = double_tx.serialize().hex()
 
         # This will raise an exception
-        assert_raises_rpc_error(-26, "too many potential replacements", self.nodes[0].sendrawtransaction, double_tx_hex, 0)
+        assert_raises_rpc_error(-26, "too many potential replacements", self.nodes[1].sendrawtransaction, double_tx_hex, 0)
 
         # If we remove an input, it should pass
         double_tx.vin.pop()
         double_tx_hex = double_tx.serialize().hex()
         self.nodes[0].sendrawtransaction(double_tx_hex, 0)
 
-    def test_too_many_replacements_with_default_mempool_params(self):
-        """
-        Test rule 5 (do not allow replacements that cause more than 100
-        evictions) without having to rely on non-default mempool parameters.
-
-        In order to do this, create a number of "root" UTXOs, and then hang
-        enough transactions off of each root UTXO to exceed the MAX_REPLACEMENT_LIMIT.
-        Then create a conflicting RBF replacement transaction.
-        """
-        # Clear mempools to avoid cross-node sync failure.
-        for node in self.nodes:
-            self.generate(node, 1)
-        normal_node = self.nodes[1]
-        wallet = MiniWallet(normal_node)
-
-        # This has to be chosen so that the total number of transactions can exceed
-        # MAX_REPLACEMENT_LIMIT without having any one tx graph run into the descendant
-        # limit; 10 works.
-        num_tx_graphs = 10
-
-        # (Number of transactions per graph, rule 5 failure expected)
-        cases = [
-            # Test the base case of evicting fewer than MAX_REPLACEMENT_LIMIT
-            # transactions.
-            ((MAX_REPLACEMENT_LIMIT // num_tx_graphs) - 1, False),
-
-            # Test hitting the rule 5 eviction limit.
-            (MAX_REPLACEMENT_LIMIT // num_tx_graphs, True),
-        ]
-
-        for (txs_per_graph, failure_expected) in cases:
-            self.log.debug(f"txs_per_graph: {txs_per_graph}, failure: {failure_expected}")
-            # "Root" utxos of each txn graph that we will attempt to double-spend with
-            # an RBF replacement.
-            root_utxos = []
-
-            # For each root UTXO, create a package that contains the spend of that
-            # UTXO and `txs_per_graph` children tx.
-            for graph_num in range(num_tx_graphs):
-                root_utxos.append(wallet.get_utxo())
-
-                optin_parent_tx = wallet.send_self_transfer_multi(
-                    from_node=normal_node,
-                    sequence=MAX_BIP125_RBF_SEQUENCE,
-                    utxos_to_spend=[root_utxos[graph_num]],
-                    num_outputs=txs_per_graph,
-                )
-                assert_equal(True, normal_node.getmempoolentry(optin_parent_tx['txid'])['bip125-replaceable'])
-                new_utxos = optin_parent_tx['new_utxos']
-
-                for utxo in new_utxos:
-                    # Create spends for each output from the "root" of this graph.
-                    child_tx = wallet.send_self_transfer(
-                        from_node=normal_node,
-                        utxo_to_spend=utxo,
-                    )
-
-                    assert normal_node.getmempoolentry(child_tx['txid'])
-
-            num_txs_invalidated = len(root_utxos) + (num_tx_graphs * txs_per_graph)
-
-            if failure_expected:
-                assert num_txs_invalidated > MAX_REPLACEMENT_LIMIT
-            else:
-                assert num_txs_invalidated <= MAX_REPLACEMENT_LIMIT
-
-            # Now attempt to submit a tx that double-spends all the root tx inputs, which
-            # would invalidate `num_txs_invalidated` transactions.
-            tx_hex = wallet.create_self_transfer_multi(
-                utxos_to_spend=root_utxos,
-                fee_per_output=10_000_000,  # absurdly high feerate
-            )["hex"]
-
-            if failure_expected:
-                assert_raises_rpc_error(
-                    -26, "too many potential replacements", normal_node.sendrawtransaction, tx_hex, 0)
-            else:
-                txid = normal_node.sendrawtransaction(tx_hex, 0)
-                assert normal_node.getmempoolentry(txid)
-
-        # Clear the mempool once finished, and rescan the other nodes' wallet
-        # to account for the spends we've made on `normal_node`.
-        self.generate(normal_node, 1)
-        self.wallet.rescan_utxos()
-
     def test_opt_in(self):
         """Replacing should only work if orig tx opted in"""
         tx0_outpoint = self.make_utxo(self.nodes[0], int(1.1 * COIN))
```
sdaftuar
commented at 5:23 pm on September 5, 2024:
Thanks, took this.
glozow
commented at 5:33 pm on September 3, 2024:
member
had a look at some tests that are disabled or have TODOs
DrahtBot removed the label
CI failed
on Sep 3, 2024
sdaftuar force-pushed
on Sep 4, 2024
DrahtBot added the label
CI failed
on Sep 4, 2024
DrahtBot
commented at 6:29 pm on September 4, 2024:
contributor
Make sure to run all tests locally, according to the documentation.
The failure may happen due to a number of reasons, for example:
Possibly due to a silent merge conflict (the changes in this pull request being
incompatible with the current code in the target branch). If so, make sure to rebase on the latest
commit of the target branch.
A sanitizer issue, which can only be found by compiling with the sanitizer and running the
affected test.
An intermittent issue.
Leave a comment here, if you need help tracking down a confusing failure.
sdaftuar force-pushed
on Sep 4, 2024
DrahtBot added the label
Needs rebase
on Sep 4, 2024
sdaftuar force-pushed
on Sep 5, 2024
sdaftuar force-pushed
on Sep 5, 2024
DrahtBot removed the label
Needs rebase
on Sep 5, 2024
sdaftuar force-pushed
on Sep 5, 2024
sdaftuar
commented at 5:28 pm on September 5, 2024:
member
Thanks @glozow for the test improvements. I’ve also made a change to an RBF limit that I think is worth mentioning. Rather than limit the number of direct conflicts that a transaction can have, I’ve now implemented the limit to be on the number of distinct clusters that the conflicts of a transaction belong to.
The idea is still to create some bound on the amount of CPU we might spend linearizing clusters and doing feerate diagram checks when processing a single transaction. Limiting the number of conflicting clusters (at 100 for now) should be sufficient to achieve that. I want to point out that we might linearize more than 100 clusters as a result of a replacement (because removing conflicts might cause clusters to split), but since linearizing small clusters is generally much less work than linearizing big clusters, I don’t think this distinction materially affects the way we understand the implied CPU bound.
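The limit described above can be sketched in a few lines (a hypothetical illustration: the function names and the txid-to-cluster-id mapping are stand-ins for the mempool's partition, not the PR's actual API):

```python
def count_conflicting_clusters(conflict_txids, cluster_of):
    """Return the number of distinct clusters that a replacement's direct
    conflicts belong to. `cluster_of` maps txid -> cluster id."""
    return len({cluster_of[txid] for txid in conflict_txids})

# The bound described above: at most 100 distinct conflicting clusters,
# limiting the linearization / feerate-diagram work per replacement.
MAX_CONFLICTING_CLUSTERS = 100

def replacement_within_limit(conflict_txids, cluster_of):
    return count_conflicting_clusters(conflict_txids, cluster_of) <= MAX_CONFLICTING_CLUSTERS
```

Note how many direct conflicts in one cluster count only once, which is why this bounds linearization work better than counting conflicts directly.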
in
test/functional/feature_rbf.py:33
in
ab35bca097 (outdated)
instagibbs
commented at 6:23 pm on September 5, 2024:
nothing fails when I remove this
sdaftuar
commented at 2:13 pm on September 17, 2024:
Removed (it was needed in an intermediate commit).
in
test/functional/feature_rbf.py:310
in
ab35bca097 (outdated)
314@@ -317,27 +315,6 @@ def test_spends_of_conflicting_outputs(self):
315 # This will raise an exception
316 assert_raises_rpc_error(-26, "bad-txns-spends-conflicting-tx", self.nodes[0].sendrawtransaction, tx2_hex, 0)
317318- def test_new_unconfirmed_inputs(self):
instagibbs
commented at 6:25 pm on September 5, 2024:
do we have functional test coverage that it is allowed elsewhere? Just check that it doesn’t fail rather than rip out?
sdaftuar
commented at 9:40 pm on September 16, 2024:
Done.
in
test/functional/feature_rbf.py:269
in
ab35bca097 (outdated)
273@@ -276,7 +274,7 @@ def test_replacement_feeperkb(self):
274 )["hex"]
275276 # This will raise an exception due to insufficient fee
277- assert_raises_rpc_error(-26, "insufficient fee", self.nodes[0].sendrawtransaction, tx1b_hex, 0)
278+ assert_raises_rpc_error(-26, "does not improve feerate diagram", self.nodes[0].sendrawtransaction, tx1b_hex, 0)
instagibbs
commented at 6:33 pm on September 5, 2024:
I wonder if we want to change this string to “insufficient fee, does not improve feerate diagram” to give a hint at what to do to user
sdaftuar
commented at 9:40 pm on September 16, 2024:
good point. i wonder if we should just drop the feerate diagram language altogether and leave it at “insufficient fee”?
in
test/functional/mempool_limit.py:48
in
ab35bca097 (outdated)
44@@ -45,7 +45,7 @@ def test_rbf_carveout_disallowed(self):
45 # B: First transaction in package, RBFs A by itself under individual evaluation, which would give it +1 descendant limit
46 # C: Second transaction in package, spends B. If the +1 descendant limit persisted, would make it into mempool
4748- self.restart_node(0, extra_args=self.extra_args[0] + ["-limitancestorcount=2", "-limitdescendantcount=1"])
49+ self.restart_node(0, extra_args=self.extra_args[0] + ["-limitclustercount=1"])
instagibbs
commented at 6:36 pm on September 5, 2024:
I think test_rbf_carveout_disallowed is meaningless post-cluster mempool, we should be relying on other tests to ensure we never go above cluster count limit
sdaftuar
commented at 3:24 pm on September 17, 2024:
Dropped this test for now, and left a todo to add more tests to exercise different methods of mempool submission and ensure we never violate the cluster limits.
in
test/functional/mempool_package_rbf.py:199
in
ab35bca097 (outdated)
instagibbs
commented at 8:00 pm on September 5, 2024:
this test is pretty ancient and tbh not sure worth holding around. But in case we want to hold onto it, modifications to make things more sensible in cluster mempool world
```diff
diff --git a/test/functional/mempool_packages.py b/test/functional/mempool_packages.py
index f7fe381f43..9cf1aa01f5 100755
--- a/test/functional/mempool_packages.py
+++ b/test/functional/mempool_packages.py
@@ -1,92 +1,91 @@
 #!/usr/bin/env python3
 # Copyright (c) 2014-2022 The Bitcoin Core developers
 # Distributed under the MIT software license, see the accompanying
 # file COPYING or http://www.opensource.org/licenses/mit-license.php.
 """Test descendant package tracking code."""
 
 from decimal import Decimal
 
 from test_framework.messages import (
-    DEFAULT_ANCESTOR_LIMIT,
-    DEFAULT_DESCENDANT_LIMIT,
+    DEFAULT_CLUSTER_LIMIT,
 )
 from test_framework.p2p import P2PTxInvStore
 from test_framework.test_framework import BitcoinTestFramework
 from test_framework.util import (
     assert_equal,
 )
 from test_framework.wallet import MiniWallet
 
 # custom limits for node1
-CUSTOM_DESCENDANT_LIMIT = 10
+CUSTOM_CLUSTER_LIMIT = 10
+assert CUSTOM_CLUSTER_LIMIT < DEFAULT_CLUSTER_LIMIT
 
 class MempoolPackagesTest(BitcoinTestFramework):
     def set_test_params(self):
         self.num_nodes = 2
         # whitelist peers to speed up tx relay / mempool sync
         self.noban_tx_relay = True
         self.extra_args = [
             [
             ],
             [
-                "-limitdescendantcount={}".format(CUSTOM_DESCENDANT_LIMIT),
-                "-limitclustercount={}".format(CUSTOM_DESCENDANT_LIMIT),
+                "-limitclustercount={}".format(CUSTOM_CLUSTER_LIMIT),
             ],
         ]
 
     def run_test(self):
         self.wallet = MiniWallet(self.nodes[0])
         self.wallet.rescan_utxos()
 
         peer_inv_store = self.nodes[0].add_p2p_connection(P2PTxInvStore())  # keep track of invs
 
-        # DEFAULT_ANCESTOR_LIMIT transactions off a confirmed tx should be fine
-        chain = self.wallet.create_self_transfer_chain(chain_length=DEFAULT_ANCESTOR_LIMIT)
+        # DEFAULT_CLUSTER_LIMIT transactions off a confirmed tx should be fine for default node
+        chain = self.wallet.create_self_transfer_chain(chain_length=DEFAULT_CLUSTER_LIMIT)
         witness_chain = [t["wtxid"] for t in chain]
         ancestor_vsize = 0
         ancestor_fees = Decimal(0)
 
         for i, t in enumerate(chain):
             ancestor_vsize += t["tx"].get_vsize()
             ancestor_fees += t["fee"]
             self.wallet.sendrawtransaction(from_node=self.nodes[0], tx_hex=t["hex"])
 
         # Wait until mempool transactions have passed initial broadcast (sent inv and received getdata)
         # Otherwise, getrawmempool may be inconsistent with getmempoolentry if unbroadcast changes in between
         peer_inv_store.wait_for_broadcast(witness_chain)
 
-        # Check mempool has DEFAULT_ANCESTOR_LIMIT transactions in it, and descendant and ancestor
+        # Check mempool has DEFAULT_CLUSTER_LIMIT transactions in it, and descendant and ancestor
         # count and fees should look correct
         mempool = self.nodes[0].getrawmempool(True)
-        assert_equal(len(mempool), DEFAULT_ANCESTOR_LIMIT)
+        assert_equal(len(mempool), DEFAULT_CLUSTER_LIMIT)
         descendant_count = 1
         descendant_fees = 0
         descendant_vsize = 0
 
         assert_equal(ancestor_vsize, sum([mempool[tx]['vsize'] for tx in mempool]))
-        ancestor_count = DEFAULT_ANCESTOR_LIMIT
+        ancestor_count = DEFAULT_CLUSTER_LIMIT
         assert_equal(ancestor_fees, sum([mempool[tx]['fees']['base'] for tx in mempool]))
 
         descendants = []
         ancestors = [t["txid"] for t in chain]
         chain = [t["txid"] for t in chain]
         for x in reversed(chain):
             # Check that getmempoolentry is consistent with getrawmempool
             entry = self.nodes[0].getmempoolentry(x)
             assert_equal(entry, mempool[x])
 
             # Check that gettxspendingprevout is consistent with getrawmempool
             witnesstx = self.nodes[0].getrawtransaction(txid=x, verbose=True)
             for tx_in in witnesstx["vin"]:
                 spending_result = self.nodes[0].gettxspendingprevout([ {'txid' : tx_in["txid"], 'vout' : tx_in["vout"]} ])
                 assert_equal(spending_result, [ {'txid' : tx_in["txid"], 'vout' : tx_in["vout"], 'spendingtxid' : x} ])
 
             # Check that the descendant calculations are correct
             assert_equal(entry['descendantcount'], descendant_count)
             descendant_fees += entry['fees']['base']
             assert_equal(entry['fees']['modified'], entry['fees']['base'])
             assert_equal(entry['fees']['descendant'], descendant_fees)
             descendant_vsize += entry['vsize']
             assert_equal(entry['descendantsize'], descendant_vsize)
             descendant_count += 1
@@ -169,81 +168,81 @@ class MempoolPackagesTest(BitcoinTestFramework):
         # Prioritise a transaction that has been mined, then add it back to the
         # mempool by using invalidateblock.
         self.nodes[0].prioritisetransaction(txid=chain[-1], fee_delta=2000)
         self.nodes[0].invalidateblock(self.nodes[0].getbestblockhash())
         # Keep node1's tip synced with node0
         self.nodes[1].invalidateblock(self.nodes[1].getbestblockhash())
 
         # Now check that the transaction is in the mempool, with the right modified fee
         descendant_fees = 0
         for x in reversed(chain):
             entry = self.nodes[0].getmempoolentry(x)
             descendant_fees += entry['fees']['base']
             if (x == chain[-1]):
                 assert_equal(entry['fees']['modified'], entry['fees']['base'] + Decimal("0.00002"))
                 assert_equal(entry['fees']['descendant'], descendant_fees + Decimal("0.00002"))
 
         # Now test descendant chain limits
         tx_children = []
         # First create one parent tx with 10 children
         tx_with_children = self.wallet.send_self_transfer_multi(from_node=self.nodes[0], num_outputs=10)
         parent_transaction = tx_with_children["txid"]
         transaction_package = tx_with_children["new_utxos"]
 
         # Sign and send up to MAX_DESCENDANT transactions chained off the parent tx
         chain = []  # save sent txs for the purpose of checking node1's mempool later (see below)
-        for _ in range(DEFAULT_DESCENDANT_LIMIT - 1):
+        for _ in range(DEFAULT_CLUSTER_LIMIT - 1):
             utxo = transaction_package.pop(0)
             new_tx = self.wallet.send_self_transfer_multi(from_node=self.nodes[0], num_outputs=10, utxos_to_spend=[utxo])
             txid = new_tx["txid"]
             chain.append(txid)
             if utxo['txid'] is parent_transaction:
                 tx_children.append(txid)
             transaction_package.extend(new_tx["new_utxos"])
 
         mempool = self.nodes[0].getrawmempool(True)
-        assert_equal(mempool[parent_transaction]['descendantcount'], DEFAULT_DESCENDANT_LIMIT)
+        assert_equal(mempool[parent_transaction]['descendantcount'], DEFAULT_CLUSTER_LIMIT)
         assert_equal(sorted(mempool[parent_transaction]['spentby']), sorted(tx_children))
 
         for child in tx_children:
             assert_equal(mempool[child]['depends'], [parent_transaction])
 
         # Check that node1's mempool is as expected, containing:
         # - parent tx for descendant test
         # - txs chained off parent tx (-> custom descendant limit)
-        self.wait_until(lambda: len(self.nodes[1].getrawmempool()) == 2 * CUSTOM_DESCENDANT_LIMIT, timeout=10)
+        self.wait_until(lambda: len(self.nodes[1].getrawmempool()) == 2 * CUSTOM_CLUSTER_LIMIT, timeout=10)
         mempool0 = self.nodes[0].getrawmempool(False)
         mempool1 = self.nodes[1].getrawmempool(False)
         assert set(mempool1).issubset(set(mempool0))
         assert parent_transaction in mempool1
         for tx in chain:
             if tx in mempool1:
                 entry0 = self.nodes[0].getmempoolentry(tx)
                 entry1 = self.nodes[1].getmempoolentry(tx)
                 assert not entry0['unbroadcast']
                 assert not entry1['unbroadcast']
-                assert entry1["descendantcount"] <= CUSTOM_DESCENDANT_LIMIT
+                assert entry1["descendantcount"] <= CUSTOM_CLUSTER_LIMIT
                 assert_equal(entry1['fees']['base'], entry0['fees']['base'])
                 assert_equal(entry1['vsize'], entry0['vsize'])
                 assert_equal(entry1['depends'], entry0['depends'])
 
         # Test reorg handling
         # First, the basics:
         self.generate(self.nodes[0], 1)
         self.nodes[1].invalidateblock(self.nodes[0].getbestblockhash())
         self.nodes[1].reconsiderblock(self.nodes[0].getbestblockhash())
 
         # Now test the case where node1 has a transaction T in its mempool that
         # depends on transactions A and B which are in a mined block, and the
         # block containing A and B is disconnected, AND B is not accepted back
         # into node1's mempool because its ancestor count is too high.
 
         # Create 8 transactions, like so:
         # Tx0 -> Tx1 (vout0)
         #   \--> Tx2 (vout1) -> Tx3 -> Tx4 -> Tx5 -> Tx6 -> Tx7
         #
         # Mine them in the next block, then generate a new tx8 that spends
         # Tx1 and Tx7, and add to node1's mempool, then disconnect the
         # last block.
 
         # Create tx0 with 2 outputs
         tx0 = self.wallet.send_self_transfer_multi(from_node=self.nodes[0], num_outputs=2)
```
sdaftuar
commented at 4:20 pm on September 17, 2024:
Thanks! Taken.
in
test/functional/mempool_persist.py:137
in
ab35bca097 (outdated)
instagibbs
commented at 8:06 pm on September 5, 2024:
can obviously ignore this but instead of deleting keys in two spots, could just do:
assert_equal({**last_entry, "clusterid": None}, {**new_entry, "clusterid": None})
sdaftuar
commented at 4:23 pm on September 17, 2024:
Taken.
in
test/functional/mempool_truc.py:532
in
ab35bca097 (outdated)
543@@ -556,11 +544,6 @@ def test_truc_sibling_eviction(self):
544 tx_v3_replacement_only = self.wallet.create_self_transfer_multi(utxos_to_spend=utxos_for_conflict, fee_per_output=4000000)
545 # Override maxfeerate - it costs a lot to replace these 100 transactions.
546 assert node.testmempoolaccept([tx_v3_replacement_only["hex"]], maxfeerate=0)[0]["allowed"]
547- # Adding another one exceeds the limit.
instagibbs
commented at 9:58 pm on September 5, 2024:
heh, if you try 99 cluster conflicts + sibling eviction, that implies the new transaction has a vsize cap of 1kvB since it’s a child, and since it will be north of 5kvB it will be rejected. I don’t think we can have sibling eviction count meaningfully towards exceeding max cluster conflicts. Worth the coverage anyways?
diff --git a/test/functional/mempool_truc.py b/test/functional/mempool_truc.py
index e3fcc45059..4e8e5943fc 100755
--- a/test/functional/mempool_truc.py
+++ b/test/functional/mempool_truc.py
@@ -480,92 +480,101 @@ class MempoolTRUC(BitcoinTestFramework):
 
         child_1 = self.wallet.send_self_transfer(from_node=node, version=3, utxo_to_spend=ancestor_tx["new_utxos"][0])
         child_2 = self.wallet.send_self_transfer(from_node=node, version=3, utxo_to_spend=ancestor_tx["new_utxos"][1])
         self.check_mempool([child_1["txid"], child_2["txid"]])
 
         self.generate(node, 1)
         self.check_mempool([])
 
         # Create a reorg, causing ancestor_tx to exceed the 1-child limit
         node.invalidateblock(block)
         self.check_mempool([ancestor_tx["txid"], child_1["txid"], child_2["txid"]])
         assert_equal(node.getmempoolentry(ancestor_tx["txid"])["descendantcount"], 3)
 
         # Create a replacement of child_1. It does not conflict with child_2.
         child_1_conflict = self.wallet.send_self_transfer(from_node=node, version=3, utxo_to_spend=ancestor_tx["new_utxos"][0], fee_rate=Decimal("0.01"))
 
         # Ensure child_1 and child_1_conflict are different transactions
         assert (child_1_conflict["txid"] != child_1["txid"])
         self.check_mempool([ancestor_tx["txid"], child_1_conflict["txid"], child_2["txid"]])
         assert_equal(node.getmempoolentry(ancestor_tx["txid"])["descendantcount"], 3)
 
     @cleanup(extra_args=None)
     def test_truc_sibling_eviction(self):
         self.log.info("Test sibling eviction for TRUC")
         node = self.nodes[0]
+
+        # Seed sufficient utxos for the test
+        self.generate(node, 100)
+        self.wallet.rescan_utxos()
+
         tx_v3_parent = self.wallet.send_self_transfer_multi(from_node=node, num_outputs=2, version=3)
         # This is the sibling to replace
         tx_v3_child_1 = self.wallet.send_self_transfer(
             from_node=node, utxo_to_spend=tx_v3_parent["new_utxos"][0], fee_rate=DEFAULT_FEE * 2, version=3
         )
         assert tx_v3_child_1["txid"] in node.getrawmempool()
 
         self.log.info("Test tx must be higher feerate than sibling to evict it")
         tx_v3_child_2_rule6 = self.wallet.create_self_transfer(
             utxo_to_spend=tx_v3_parent["new_utxos"][1], fee_rate=DEFAULT_FEE, version=3
         )
         rule6_str = f"insufficient fee (including sibling eviction), rejecting replacement {tx_v3_child_2_rule6['txid']}"
         assert_raises_rpc_error(-26, rule6_str, node.sendrawtransaction, tx_v3_child_2_rule6["hex"])
         self.check_mempool([tx_v3_parent['txid'], tx_v3_child_1['txid']])
 
         self.log.info("Test tx must meet absolute fee rules to evict sibling")
         tx_v3_child_2_rule4 = self.wallet.create_self_transfer(
             utxo_to_spend=tx_v3_parent["new_utxos"][1], fee_rate=2 * DEFAULT_FEE + Decimal("0.00000001"), version=3
         )
         rule4_str = f"insufficient fee (including sibling eviction), rejecting replacement {tx_v3_child_2_rule4['txid']}, not enough additional fees to relay"
         assert_raises_rpc_error(-26, rule4_str, node.sendrawtransaction, tx_v3_child_2_rule4["hex"])
         self.check_mempool([tx_v3_parent['txid'], tx_v3_child_1['txid']])
 
-        self.log.info("Test tx cannot cause more than 100 evictions including RBF and sibling eviction")
-        # First add 4 groups of 25 transactions.
+        self.log.info("Test tx cannot cause more than 100 cluster conflicts including both explicit conflicts and sibling eviction")
+
+        # First make 100 clusters, so sibling eviction counts as 101st conflicted cluster
         utxos_for_conflict = []
         txids_v2_100 = []
-        for _ in range(4):
+        for _ in range(100):
             confirmed_utxo = self.wallet.get_utxo(confirmed_only=True)
             utxos_for_conflict.append(confirmed_utxo)
-            # 25 is within descendant limits
-            chain_length = int(MAX_REPLACEMENT_CANDIDATES / 4)
-            chain = self.wallet.create_self_transfer_chain(chain_length=chain_length, utxo_to_spend=confirmed_utxo)
-            for item in chain:
-                txids_v2_100.append(item["txid"])
-                node.sendrawtransaction(item["hex"])
+            tx = self.wallet.create_self_transfer(utxo_to_spend=confirmed_utxo)
+            txids_v2_100.append(tx["txid"])
+            node.sendrawtransaction(tx["hex"])
         self.check_mempool(txids_v2_100 + [tx_v3_parent["txid"], tx_v3_child_1["txid"]])
 
         # Replacing 100 transactions is fine
         tx_v3_replacement_only = self.wallet.create_self_transfer_multi(utxos_to_spend=utxos_for_conflict, fee_per_output=4000000)
         # Override maxfeerate - it costs a lot to replace these 100 transactions.
         assert node.testmempoolaccept([tx_v3_replacement_only["hex"]], maxfeerate=0)[0]["allowed"]
+
+        # Adding the sibling eviction exceeds the child TRUC vsize limit which is reported, not max clusters conflicted which is checked later.
+        utxos_for_conflict.append(tx_v3_parent["new_utxos"][1])
+        tx_v3_child_2_rule5 = self.wallet.create_self_transfer_multi(utxos_to_spend=utxos_for_conflict, fee_per_output=4000000, version=3)
+        tx_v3_violation_str = f"TRUC-violation, version=3 child tx {tx_v3_child_2_rule5['txid']} (wtxid={tx_v3_child_2_rule5['wtxid']}) is too big: {tx_v3_child_2_rule5['tx'].get_vsize()} > 1000 virtual bytes"
+        assert_raises_rpc_error(-26, tx_v3_violation_str, node.sendrawtransaction, tx_v3_child_2_rule5["hex"])
         self.check_mempool(txids_v2_100 + [tx_v3_parent["txid"], tx_v3_child_1["txid"]])
 
         self.log.info("Test sibling eviction is successful if it meets all RBF rules")
         tx_v3_child_2 = self.wallet.create_self_transfer(
             utxo_to_spend=tx_v3_parent["new_utxos"][1], fee_rate=DEFAULT_FEE*10, version=3
         )
         node.sendrawtransaction(tx_v3_child_2["hex"])
         self.check_mempool(txids_v2_100 + [tx_v3_parent["txid"], tx_v3_child_2["txid"]])
 
         self.log.info("Test that it's possible to do a sibling eviction and RBF at the same time")
         utxo_unrelated_conflict = self.wallet.get_utxo(confirmed_only=True)
         tx_unrelated_replacee = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=utxo_unrelated_conflict)
         assert tx_unrelated_replacee["txid"] in node.getrawmempool()
 
         fee_to_beat = max(int(tx_v3_child_2["fee"] * COIN), int(tx_unrelated_replacee["fee"]*COIN))
 
         tx_v3_child_3 = self.wallet.create_self_transfer_multi(
             utxos_to_spend=[tx_v3_parent["new_utxos"][0], utxo_unrelated_conflict], fee_per_output=fee_to_beat*4, version=3
         )
         node.sendrawtransaction(tx_v3_child_3["hex"])
         self.check_mempool(txids_v2_100 + [tx_v3_parent["txid"], tx_v3_child_3["txid"]])
 
     @cleanup(extra_args=None)
     def test_reorg_sibling_eviction_1p2c(self):
         node = self.nodes[0]
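The vsize reasoning in the comment above can be sanity-checked with rough arithmetic. The per-input figure below is an assumed approximation for a P2WPKH spend (not stated in the thread); the 1000 vB cap comes from the TRUC-violation error string in the patch:

```python
# Rough arithmetic behind the "north of 5kvB" observation. The input size
# is an assumed approximation; the 1000 vB child cap is from the error
# string above.
TRUC_CHILD_MAX_VSIZE = 1000   # virtual bytes, per the TRUC-violation message
APPROX_INPUT_VSIZE = 68       # ~vB per P2WPKH input (assumption)

num_conflicts = 99
approx_vsize = num_conflicts * APPROX_INPUT_VSIZE  # inputs alone, no outputs
print(approx_vsize)  # 6732: well north of 5 kvB

# So the transaction hits the TRUC child vsize cap long before the
# 100-cluster-conflict limit can be exercised.
assert approx_vsize > 5000 > TRUC_CHILD_MAX_VSIZE
```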
sdaftuar
commented at 4:30 pm on September 17, 2024:
Ah, right. Given that this falls back to the vsize limit being exercised, I think this isn’t really worth including? Will leave this for now.
in test/functional/mempool_updatefromblock.py:21 in ab35bca097 (outdated)
Try to run the tests locally, according to the documentation. However, a CI failure may still happen due to a number of reasons, for example:
- Possibly due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.
- A sanitizer issue, which can only be found by compiling with the sanitizer and running the affected test.
- An intermittent issue.
Leave a comment here, if you need help tracking down a confusing failure.
sdaftuar force-pushed
on Oct 19, 2024
sdaftuar force-pushed
on Oct 19, 2024
sdaftuar force-pushed
on Oct 20, 2024
sdaftuar force-pushed
on Oct 20, 2024
DrahtBot removed the label
CI failed
on Oct 20, 2024
sdaftuar force-pushed
on Oct 21, 2024
sdaftuar force-pushed
on Oct 21, 2024
sdaftuar force-pushed
on Oct 22, 2024
sdaftuar force-pushed
on Oct 25, 2024
DrahtBot added the label
Needs rebase
on Oct 26, 2024
sdaftuar force-pushed
on Oct 26, 2024
DrahtBot removed the label
Needs rebase
on Oct 26, 2024
DrahtBot added the label
CI failed
on Oct 26, 2024
sdaftuar force-pushed
on Oct 27, 2024
DrahtBot removed the label
CI failed
on Oct 27, 2024
DrahtBot added the label
Needs rebase
on Oct 29, 2024
sdaftuar force-pushed
on Nov 1, 2024
sdaftuar force-pushed
on Nov 14, 2024
DrahtBot removed the label
Needs rebase
on Nov 14, 2024
DrahtBot added the label
Needs rebase
on Nov 14, 2024
glozow referenced this in commit
f34fe0806a
on Nov 20, 2024
Add is_visited helper to epochguard (0c3a50cbd5)
Add txgraph module (a5e242cb34)
wip: track iterations used in linearization (24967be6d2)
add fuzz test for txgraph (1505707d0f)
Make CTxMemPoolEntry derive from TxEntry (82a4969755)
Track clusters in mempool with TxGraph (b3b9b62a55)
Limit mempool size based on chunk feerate
Rather than evicting the transactions with the lowest descendant feerate,
instead evict transactions that have the lowest chunk feerate.
Once mining is implemented based on choosing transactions with highest chunk
feerate (see next commit), mining and eviction will be opposites, so that we
will evict the transactions that would be mined last.
cd69a536ad
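The eviction rule in the commit message above can be sketched abstractly. Everything below is a hypothetical illustration, not the actual mempool code: each cluster is assumed to be pre-linearized into (fee, vsize) chunks whose feerates are non-increasing within the cluster, and eviction repeatedly removes the lowest-feerate tail chunk across all clusters, the reverse of mining order:

```python
# Hypothetical sketch: eviction removes the globally worst tail chunk;
# mining would take the globally best head chunk, so the orders are
# opposites, as the commit message describes.
clusters = {
    "A": [(5000, 1000), (1000, 1000)],  # chunk feerates 5, then 1 sat/vB
    "B": [(3000, 1000)],                # 3 sat/vB
}

def eviction_order(clusters):
    work = {name: list(chunks) for name, chunks in clusters.items()}
    order = []
    while any(work.values()):
        # Pick the cluster whose last (worst) chunk has the lowest feerate.
        name = min(
            (n for n in work if work[n]),
            key=lambda n: work[n][-1][0] / work[n][-1][1],
        )
        order.append(work[name].pop())
    return order

print(eviction_order(clusters))  # [(1000, 1000), (3000, 1000), (5000, 1000)]
```

Evicting from a cluster's tail keeps the remaining mempool topologically valid, since a chunk's descendants within its cluster always linearize after it.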
Select transactions for blocks based on chunk feerate (4e6feda313)
Add new (unused) limits for cluster size/count (82c0a26053)
Do not allow mempool clusters to exceed configured limits (61fb28bf1f)
policy: Remove CPFP carveout rule
The addition of a cluster size limit makes the CPFP carveout rule useless,
because carveout cannot be used to bypass the cluster size limit. Remove this
policy rule and update tests to no longer rely on the behavior.
46b2e81357
Implement new RBF logic for cluster mempool
With a total ordering on mempool transactions, we are now able to calculate a
transaction's mining score at all times. Use this to improve the RBF logic:
- we no longer enforce a "no new unconfirmed parents" rule
- we now require that the mempool's feerate diagram must improve in order
to accept a replacement
TODO: update functional test feature_rbf.py to cover all our new scenarios.
Co-authored-by: Gregory Sanders <gsanders87@gmail.com>, glozow <gloriajzhao@gmail.com>
8db4af8822
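A rough sketch of the feerate-diagram requirement mentioned in the commit message above (function names and interpolation details are illustrative, not the actual implementation): build the cumulative (vsize, fee) curve from chunks in mining order, and accept a replacement only if the new curve is nowhere below the old one and strictly above it somewhere:

```python
# Hypothetical sketch of a feerate-diagram comparison. A "diagram" is the
# cumulative (vsize, fee) curve obtained by accumulating chunks in mining
# order (highest chunk feerate first).
def diagram(chunks):
    pts = [(0, 0)]
    for fee, size in sorted(chunks, key=lambda c: c[0] / c[1], reverse=True):
        last_size, last_fee = pts[-1]
        pts.append((last_size + size, last_fee + fee))
    return pts

def fee_at(pts, size):
    # Piecewise-linear interpolation; flat beyond the diagram's end.
    for (s0, f0), (s1, f1) in zip(pts, pts[1:]):
        if size <= s1:
            return f0 + (f1 - f0) * (size - s0) / (s1 - s0)
    return pts[-1][1]

def improves(old_chunks, new_chunks):
    old_pts, new_pts = diagram(old_chunks), diagram(new_chunks)
    sizes = sorted({s for s, _ in old_pts + new_pts})
    at = [(fee_at(new_pts, s), fee_at(old_pts, s)) for s in sizes]
    # New diagram must be >= everywhere and > somewhere.
    return all(n >= o for n, o in at) and any(n > o for n, o in at)

old = [(1000, 500), (200, 500)]  # chunks at 2 and 0.4 sat/vB
new = [(1500, 500)]              # single 3 sat/vB replacement
print(improves(old, new))        # True
```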
==== END CLUSTER IMPLEMENTATION ==== (75fb1a2832)
==== BEGIN MEMPOOL CLEANUP ==== (0e239fee7e)
Remove the ancestor and descendant indices from the mempool (23a5fd51db)
Use cluster linearization for transaction relay sort order
Previously, transaction batches were first sorted by ancestor count and then
feerate, to ensure transactions are announced in a topologically valid order,
while prioritizing higher feerate transactions. Ancestor count is a crude
topological sort criteria, so replace this with linearization order so that the
highest feerate transactions (as would be observed by the mining algorithm) are
relayed before lower feerate ones, in a topologically valid way.
This also fixes a test that only worked due to the ancestor-count-based sort
order.
b0219ef825
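The old relay ordering this commit replaces can be illustrated on a toy example (all data hypothetical): sorting by ancestor count first is topologically valid, but it can announce a middling unrelated transaction ahead of a high-feerate package:

```python
# Sketch of the old relay ordering: ancestor count ascending (so parents
# precede children), then feerate descending. Transaction data is made up.
txs = {
    "p": {"ancestors": 1, "feerate": 1.0},   # low-fee parent
    "c": {"ancestors": 2, "feerate": 10.0},  # high-fee child of p
    "x": {"ancestors": 1, "feerate": 5.0},   # unrelated transaction
}

old_order = sorted(txs, key=lambda t: (txs[t]["ancestors"], -txs[t]["feerate"]))
print(old_order)  # ['x', 'p', 'c']

# x is announced before p, even though p+c together form a package with a
# combined feerate around 5.5 (assuming equal sizes) that a miner would
# take first. Linearization order would announce p, then c, then x.
```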
Remove CTxMemPool::GetSortedDepthAndScore
The mempool clusters and linearization permit sorting the mempool topologically
without making use of ancestor counts.
e9541038fa
Reimplement GetTransactionAncestry() to not rely on cached data
In preparation for removing ancestor data from CTxMemPoolEntry, recalculate the
ancestor statistics on demand wherever needed.
2905227ff3
rpc: Calculate ancestor data from scratch for mempool rpc calls (35953b28d5)
Remove dependency on cached ancestor data in mini-miner (74b1ec4ad3)
Stop enforcing ancestor size/count limits
The cluster limits should be sufficient.
Co-Authored-By: Gregory Sanders <gsanders87@gmail.com>
c067eced07
Add test case for cluster size limits to v3 logic (113a0d1561)
Use mempool/txgraph to determine if a tx has descendants
Remove a reference to GetCountWithDescendants() in preparation for removing
this function and the associated cached state from the mempool.
e3e9836f2b
Calculate descendant information for mempool RPC output on-the-fly
This is in preparation for removing the cached descendant state from the
mempool.
4b51e97609
test: remove rbf carveout test from mempool_limit.py (4e17b1dddb)
Stop enforcing descendant size/count limits
Cluster size limits should be enough.
9a541866d1
Eliminate RBF workaround for CPFP carveout transactions
The new cluster mempool RBF rules take into account clusters sizes exactly, so
with the removal of descendant count enforcement this idea is obsolete.
56ae5eb295
wallet: Replace max descendantsize with clustersize
With the descendant size limits removed, replace the concept of "max number of
descendants of any ancestor of a given tx" with the cluster count of the cluster
that the transaction belongs to.
4872530f1e
mempool: Remove unused function CalculateDescendantMaximum (5599f2df0a)
Eliminate use of cached ancestor data in miniminer_tests and v3_policy (9e897d7d24)
mempool: eliminate accessors to mempool entry ancestor/descendant cached state (d6b45f840f)
Remove unused members from CTxMemPoolEntry (1d3a2a41be)
Remove mempool logic designed to maintain ancestor/descendant state (36d775a7e0)
mempool: addUnchecked no longer needs ancestors (90ab627ebc)
Remove unused limits from CalculateMemPoolAncestors (f18c2bf01d)
Make removeConflicts private (91979c72fc)
==== END MEMPOOL CLEANUP ==== (769d3a8e2a)
==== BEGIN OPTIMIZATIONS ==== (3df438152e)
Rework removeForBlock so that clusters are only touched once
Also remove extra linearization that was happening and some logging
Update interface_zmq.py for new block connection behavior
d9d3cf9c6b
Simplify ancestor calculation functions
Now that ancestor calculation never fails (due to ancestor/descendant limits
being eliminated), we can eliminate the error handling from
CalculateMemPoolAncestors.
interface_zmq test is broken
b0bd896238
Use txgraph to calculate ancestors (d58f5309b4)
Use txgraph to calculate descendants (4fcdc51f65)
Move GetNumChildren() to be a mempool function (6ff3d80d2a)
Make getting parents/children a function of the mempool, not a mempool entry (0aa65ac069)
Rewrite GatherClusters to use the txgraph implementation (2b8b07de80)
Switch to using txgraph parents/children in public accessors (e5830fd687)
Stop tracking parents/children outside of txgraph (94a000a8eb)
Remove unused argument to RemoveStaged (08c4296717)
Switch to using the faster CalculateDescendants
The only place we still use the older interface is in policy/rbf.cpp, where
it's helpful to incrementally calculate descendants to avoid calculating too
many at once (or cluttering the CalculateDescendants interface with a
calculation limit).
cfbfedcfe3
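The incremental descendant walk described above can be sketched with a vector worklist and an optional calculation limit, the pattern the commit message attributes to policy/rbf.cpp. This is hypothetical standalone code, not the actual CTxMemPool implementation:

```python
# Hypothetical sketch: walk descendants via a vector worklist, bailing out
# early once an optional limit is exceeded so we never "calculate too many
# at once". The child map below is made-up example data.
children = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": [],
    "d": [],
}

def calculate_descendants(txid, limit=None):
    result, work = [], [txid]   # result includes txid itself
    seen = {txid}
    while work:
        cur = work.pop()
        result.append(cur)
        if limit is not None and len(result) > limit:
            return None         # too many descendants; caller aborts
        for child in children.get(cur, []):
            if child not in seen:
                seen.add(child)
                work.append(child)
    return result

print(sorted(calculate_descendants("a")))   # ['a', 'b', 'c', 'd']
print(calculate_descendants("a", limit=2))  # None
```

Using vectors for the worklist and result avoids the allocation and rebalancing cost of the set-based interfaces that the surrounding commits remove.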
Rewrite removeRecursive to use vectors instead of sets (5c35f2e3de)
Rewrite Expire to only invoke CalculateDescendants once (a398784f18)
Rewrite removeForReorg to use a vector instead of a set (91d75d9497)
Drop unnecessary set in TrimToSize (f1b821b690)
RemoveStaged takes a vector, not a set (a11ed9c780)
Only use CalculateDescendants() with vectors, not sets (fb28b5204b)
Use faster CalculateMemPoolAncestors in rbf (62cd5e6a34)
Use faster CMPA in rpc/mempool (30cfa3db5f)
Eliminate need for ancestors in SingleV3Checks (f6609a625f)
Eliminate need for ancestors in PackageTRUCChecks
TO DO: Rewrite unit tests for this function to not lie about mempool parents, so that we
can push down the parent calculation into truc_policy from validation.
84c24464c0
Don't calculate ancestors except for RBF transactions (7f866481ac)
==== END OPTIMIZATIONS ==== (9eb82cb02e)
==== BEGIN TESTS ==== (39356bf27a)
bench: add more mempool benchmarks
Add benchmarks for:
- mempool update time when blocks are found
- adding a transaction
- performing the mempool's RBF calculation
- calculating mempool ancestors/descendants
0d7267a6ad
fuzz: try to add more code coverage for mempool fuzzing
Including test coverage for mempool eviction and expiry
f81d43b792
Pass through cluster size limits to TxGraph::check() (789219cb33)
Expose cluster information via rpc
Co-authored-by: glozow <gloriajzhao@gmail.com>
ff49e0c5cf
doc: Update mempool_replacements.md to reflect feerate diagram checks (095f098eab)
test: add functional test for new cluster mempool RPCs (ac9c55a7ab)
fuzz: remove comparison between mini_miner block construction and miner
This is in preparation for eliminating the block template building happening in
mini_miner, in favor of directly using the linearizations done in the mempool.
f26dfef626
sdaftuar force-pushed
on Nov 20, 2024
DrahtBot removed the label
Needs rebase
on Nov 20, 2024
This is a metadata mirror of the GitHub repository
bitcoin/bitcoin.
This site is not affiliated with GitHub.
Content is generated from a GitHub metadata backup.
generated: 2024-11-21 18:12 UTC
This site is hosted by @0xB10C. More mirrored repositories can be found on mirror.b10c.me.