Motivation
Now that we have the full linearization of mempool clusters, whenever a cluster in the mempool is updated — due to replacement, eviction for various reasons, or block connection — both the previous linearization of the affected cluster and the new linearization of the resulting clusters are available.
This brings two advantages — the ability for outside observers to make decisions based on fee rate diagram updates without direct access to the mempool, and reduced lock contention.
It can be used primarily for:

- Making `CBlockPolicyEstimator` package-aware.
- Preventing redundant block template builds in the proposed block template manager (BlockTemplateManager proposal).
These use cases were discussed during the development of cluster mempool. Note: this is not the motivation behind cluster mempool, just a part of it.
See discussion in Package aware fee estimator and Determining BlockTemplate Fee Increase Using Fee Rate Diagram.
Currently the fee estimator is asynchronous but not package-aware. After this PR it becomes possible to make it package-aware — a follow-up branch on top of this does exactly that (see commit).
For the mining interface, the node rebuilds a block template every second regardless of whether any significant mempool change occurred in the top block.
Even after cluster mempool, building a block template is not free since we have to pause the node (holding cs_main).
With this notification, a proposed block template manager that observes mempool updates will only rebuild when an inflow into the mempool's top block template warrants it. Redundant builds where no significant fee rate change occurred are eliminated.
This becomes even more useful given proposed improvements like mempool-based fee rate estimation, which also builds a block template; right now it uses a constant 7-second interval and rebuilds regardless of whether an inflow affects the top block template. This will fix that.
A proposed `sendtemplate` in the p2p layer also builds block templates; I propose it also use the proposed block template manager, which should be built on top of this, so that redundant block template builds are eliminated entirely.
There is a different approach to determining updates in the top block template, but that still requires some direct interaction with the mempool. Since the fee estimator needs this notification anyway and we already have it, the block template manager can use it too and not interact with the mempool at all — it becomes a purely passive observer.
Changes
- `TxGraph::Chunk` — a new struct holding a chunk's transaction `TxGraph::Ref`s and its fee rate (`FeeFrac`), so the before/after diagram carries enough information to compute chunk hashes.
- `MemPoolChunk` (`kernel/mempool_entry.h`) — a new struct representing a mempool chunk: a `FeeFrac` fee rate and a `uint256` chunk hash identifying the set of witness transactions in the chunk.
- `MemPoolChunksUpdate` (`txmempool.h`) — a new struct carrying:
  - `old_chunks` — chunks before the update.
  - `new_chunks` — chunks after the update.
  - `reason` — `MemPoolRemovalReason` identifying the update type.
  - `block_height` — set only when `reason == BLOCK`, carries the connecting block's height; defaults to `std::nullopt`.
- `MempoolUpdated` — a new `ValidationInterface` callback receiving a `MemPoolChunksUpdate`.
- `GetHashFromWitnesses` (`src/util/witnesses_hash.{h,cpp}`) — sorts a vector of `Wtxid`s lexicographically and hashes them deterministically to produce a stable chunk hash. `GetPackageHash` in `src/policy/packages.cpp` is refactored to use this.
- Caches the fee rate diagram computed in `GetAndSaveMainStagingDiagram`/`GetFeeRateDiagramChunks` so it is immediately available when the notification is emitted, without recomputing.
- Extracts the dependency-addition subroutine in `UpdateTransactionsFromBlock` into a new private method `addDependenciesFromBlock`.
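To make the shapes above concrete, here is a minimal sketch of the two structs, with toy stand-ins for Bitcoin Core types (`FeeFrac` as a fee/size pair, `uint256` as a 32-byte array, `MemPoolRemovalReason` as a small enum). Field names follow the text; the exact layout in the PR may differ.

```cpp
#include <array>
#include <cstdint>
#include <optional>
#include <vector>

// Toy stand-in for FeeFrac: an exact fee/size fraction.
struct ToyFeeFrac {
    int64_t fee{0};   // satoshis
    int32_t size{0};  // virtual bytes
};

// Toy stand-in for MemPoolChunk: fee rate plus a hash of the chunk's wtxids.
struct ToyMemPoolChunk {
    ToyFeeFrac feerate;
    std::array<uint8_t, 32> chunk_hash{};
};

// Toy stand-in for MemPoolRemovalReason.
enum class ToyRemovalReason { REPLACED, BLOCK, REORG, CONFLICT, EXPIRY, SIZELIMIT };

// Toy stand-in for MemPoolChunksUpdate: everything a passive observer needs.
struct ToyMemPoolChunksUpdate {
    std::vector<ToyMemPoolChunk> old_chunks; // chunks before the update
    std::vector<ToyMemPoolChunk> new_chunks; // chunks after the update
    ToyRemovalReason reason;                 // what kind of update occurred
    std::optional<int> block_height;         // set only when reason == BLOCK
};
```

An observer comparing `old_chunks` against `new_chunks` can compute the fee rate diagram change without ever touching the mempool.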
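The core idea behind `GetHashFromWitnesses` — sort the wtxids lexicographically so the result is independent of transaction order, then hash the sorted sequence — can be sketched as follows. A toy 32-byte wtxid and FNV-1a stand in for Bitcoin Core's `Wtxid` type and its real hash function, purely for illustration.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

using ToyWtxid = std::array<uint8_t, 32>;

// Deterministic, order-independent hash over a set of wtxids.
uint64_t ToyChunkHash(std::vector<ToyWtxid> wtxids)
{
    std::sort(wtxids.begin(), wtxids.end()); // lexicographic order
    uint64_t h = 1469598103934665603ULL;     // FNV-1a offset basis
    for (const auto& id : wtxids) {
        for (uint8_t b : id) {
            h ^= b;
            h *= 1099511628211ULL;           // FNV-1a prime
        }
    }
    return h; // identical wtxid sets hash identically, regardless of input order
}
```

The sort is what makes the hash a stable identifier for the *set* of witness transactions in a chunk, not for any particular ordering of them.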
MempoolUpdated is fired after every mempool mutation. Specifically:
- Transaction addition and RBF replacement: fired inside `CTxMemPool::ChangeSet::Apply` with reason `REPLACED`.
- Block connection: fired inside `removeForBlock` with reason `BLOCK`. Confirmed transactions are fully removed before conflicts are evicted. The connecting block's height is carried in `block_height`.
- Reorg: fired inside `removeForReorg` with reason `REORG`.
- Recursive removal: fired inside `removeRecursive` with a caller-supplied reason, e.g. `CONFLICT`, `EXPIRY`, `REORG`.
- Expiry: fired inside `Expire` with reason `EXPIRY`.
- Size limit eviction: fired inside `TrimToSize` with reason `SIZELIMIT`.
- Post-reorg dependency update: fired inside `UpdateTransactionsFromBlock` with reason `SIZELIMIT`.
In all cases the before/after chunks are obtained from `changeSet->GetFeeRateDiagramChunks()` after committing the staging graph.
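The shared emission pattern can be sketched with toy types and illustrative names: capture the chunk diagram before the mutation, apply the mutation, capture it after, then hand both snapshots plus the reason to the notification. In the PR the snapshots come from `changeSet->GetFeeRateDiagramChunks()`; here `diagram` stands in for that.

```cpp
#include <functional>
#include <vector>

struct ToyChunk { double feerate{0.0}; };

struct ToyChunksUpdate {
    std::vector<ToyChunk> old_chunks;
    std::vector<ToyChunk> new_chunks;
    int reason{0};
};

// Apply one mempool mutation and build a single before/after notification.
ToyChunksUpdate ApplyAndBuildUpdate(const std::function<void()>& apply,
                                    const std::function<std::vector<ToyChunk>()>& diagram,
                                    int reason)
{
    ToyChunksUpdate update;
    update.old_chunks = diagram(); // chunks before the mutation
    apply();                       // mutate the mempool / commit the staging
    update.new_chunks = diagram(); // chunks after the mutation
    update.reason = reason;
    return update; // would be handed to the MempoolUpdated callback
}
```

One notification per mutation, carrying both snapshots, is what lets observers stay fully passive.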
- Note on `UpdateTransactionsFromBlock`: a naive single-pass approach calling `Trim()` directly on the main graph fails because oversized clusters cannot be linearized, so `GetMainStagingDiagram` cannot produce a valid before/after diff. The fix is a two-stage process:
  - Phase 1 (optimistic): create a txgraph staging, call `addDependenciesFromBlock` to register all new parent-child relationships, then call `Trim()`.
  - If `Trim()` returns nothing — no cluster exceeded the limit — commit the staging, compute the diagram, emit `MempoolUpdated(SIZELIMIT)`, done.
  - If `Trim()` returns evicted transactions, the optimistic path failed. Abort the staging (rolling back to main), open a fresh staging, call `RemoveTransaction` for each evicted tx to remove it from staging, call `addDependenciesFromBlock` again (evicted txs are silently skipped since they are no longer in the graph), commit, compute the before/after diagram, emit `MempoolUpdated(SIZELIMIT)`, then call `RemoveStaged` to fully remove the evicted transactions from the mempool.
  - Reorgs are rare, and reorgs that burst the cluster size limit are rarer still, so in the common optimistic case we apply dependencies only once.
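The two-stage flow can be summarized in pseudocode (names follow the text above; the staging calls are illustrative, not the PR's exact signatures):

```
start staging on the txgraph
addDependenciesFromBlock(staging)             # optimistic: add all new deps once
evicted = Trim(staging)
if evicted is empty:
    commit staging
    compute before/after diagram
    emit MempoolUpdated(SIZELIMIT)
else:
    abort staging                             # roll back to main
    start a fresh staging
    for tx in evicted: RemoveTransaction(tx)  # drop oversized-cluster txs first
    addDependenciesFromBlock(staging)         # evicted txs are silently skipped
    commit staging
    compute before/after diagram
    emit MempoolUpdated(SIZELIMIT)
    RemoveStaged(evicted)                     # finish removing from the mempool
```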
- Added a new `mempool_update_tests` unit test suite covering all `MempoolUpdated` emission paths: