Test Package Accept #32160

issue glozow opened this issue on March 28, 2025
  1. glozow commented at 2:48 pm on March 28, 2025: member

    Please describe the feature you’d like to see added.

    Description of how the feature should work: The testmempoolaccept RPC should continue to take a list of hex transactions with minimal restrictions (no duplicates, no conflicting transactions, not too many). It should allow singleton transactions, transactions already in mempool, and any topology (multiple disconnected components should be ok). It should attempt to validate as much as possible and return each transaction’s validation result. Of course, it should not modify mempool contents while doing so, and most importantly it should simulate the fee-bumping policies like package CPFP and package RBF.

    Problems today: We added multi-transaction support in testmempoolaccept in #20833, before package policies were decided. It enabled some of the validation functionality that package validation uses, but ultimately has a different codepath and thus different interface. It has roughly the input requirements that we want (no duplicates, no conflicts, max 25, topological). Its results array format is also fine in my opinion. The biggest problem is that it doesn’t apply package policies. So if you have a 1p1c package with the parent bumped by the child, testmempoolaccept says that the parent has too low feerate, but submitpackage says it’s accepted. This is confusing.
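
    For illustration, here is a functional-test-style sketch of that 1p1c discrepancy (assuming the MiniWallet helpers already used in test/functional/rpc_packages.py; the exact reject reasons are not reproduced here): a zero-fee parent bumped by its child fails testmempoolaccept but is accepted by submitpackage.

     node = self.nodes[0]
     parent = self.wallet.create_self_transfer(fee_rate=0, fee=0)                 # pays no fee on its own
     child = self.wallet.create_self_transfer(utxo_to_spend=parent["new_utxo"])   # bumps the parent

     # testmempoolaccept evaluates the parent without package CPFP, so it reports
     # the parent as not allowed (feerate too low).
     res = node.testmempoolaccept([parent["hex"], child["hex"]])
     assert not res[0]["allowed"]

     # submitpackage applies package CPFP and accepts both transactions.
     node.submitpackage([parent["hex"], child["hex"]])
     assert parent["txid"] in node.getrawmempool()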

    Why fixing this is hard: (1) Our method of “splitting” a package is not compatible with test acceptance because it is trial-and-error based. Instead, it should decide what subpackages will be up front. (2) We can’t stage and unstage the changes made by each subpackage on top of the changes made by previous subpackages without applying the changes to mempool.

    Longer form explanation:

    • Supporting test package acceptance for arbitrary lists of transactions, or indeed any list of transactions that isn’t a two-generation tree, requires that we first analyze the package and decide which subpackages or chunks to submit together, and in what order. These decisions can very much affect whether/which transactions get in. The decision should involve the fees and vsizes of these transactions, which requires fetching UTXOs and linearizing them.
    • Today, we achieve “splitting” through a trial-and-error process. We continuously attempt to submit package subsets in increasing size order (i.e. starting individually), excluding things once they have been successfully submitted. This was the convenient way to implement it within AcceptPackage, and it’s not a terrible way to split two-generation packages (a toy sketch of this loop follows the list below).
    • Given our current setup, package transactions are either in the mempool or we haven’t yet decided if they’re valid; we don’t have an intermediate stage of transactions that have been fully “approved” but not yet submitted. Once something fails, we just quit out and don’t try the rest of the transactions in the package. Often, they get a “missing inputs” error, which is equivalent to “depends on an invalid transaction”.
    • In git terms: We want the ability to git commit each subpackage, and then merge the branch with multiple commits into master, or just look at the branch log and use it to produce the RPC results. Today, we can’t git branch, we can only stage changes and either discard all of them or commit them directly on master.
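
    To make the current behaviour concrete, here is a deliberately simplified toy model in Python (not the actual AcceptPackage code; validate and submit are hypothetical stand-ins for the real checks and mempool insertion) of the trial-and-error splitting described above: try each transaction individually, then try everything left over as one subpackage, and give up at the first failure.

     def accept_package_today(package, validate, submit):
         """Trial-and-error splitting: individual submissions first, then the rest together."""
         remaining = []
         for tx in package:
             if validate([tx]):            # try each transaction on its own
                 submit([tx])
             else:
                 remaining.append(tx)      # defer it; it may need a fee bump from a descendant
         if remaining:
             if not validate(remaining):   # everything left is tried as a single subpackage
                 # Quit out: nothing else is retried, and descendants of the failed
                 # transactions typically report "missing inputs".
                 return False
             submit(remaining)
         return True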

    “But if it’s so hard to keep going after a failure, why do it? How much do we care about continuing to validate the package after a subpackage has failed?” For a package we receive over p2p, we should keep going instead of wasting bandwidth downloading the same transactions until we get the exact right combination.

    “Does it make things easier or more sensible to not support disconnected transactions / distinct clusters?” After deduplication, it is possible that our package contains disconnected transactions, even if we define packages as ancestors or prefixes of some target transaction (i.e. the protocol requires the package to be connected when we receive it over p2p). As an example, imagine it’s us + our parent + a sibling in the same chunk, and the parent is deduplicated. Another reason is that we’d like package test acceptance to be as helpful as possible, and potentially accept lists of raw transactions that aren’t all connected. DepGraph handles disconnected transactions just the same, so the linearization part is not more difficult.
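
    A tiny illustration of that example with toy data (not real transactions): once the in-mempool parent is deduplicated out of the package, the child and sibling no longer share any edge within the package, leaving two disconnected components.

     package = {
         "parent":  [],           # spends nothing within the package
         "child":   ["parent"],   # spends the parent
         "sibling": ["parent"],   # also spends the parent (same chunk as the child)
     }
     in_mempool = {"parent"}

     # Deduplication removes transactions that are already in the mempool.
     deduped = {tx: deps for tx, deps in package.items() if tx not in in_mempool}
     edges = [(tx, dep) for tx, deps in deduped.items() for dep in deps if dep in deduped]
     print(sorted(deduped))  # ['child', 'sibling']
     print(edges)            # []  -> no edges left: two disconnected components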

    Describe the solution you’d like

    Here is a proposed solution, which builds off of the package RBF outline in this delving post, simplifying out some of the RBF details and focusing on the changeset staging.

    1. Deduplication: remove transactions that are already in mempool.
    2. Topological linearization: sort it topologically.
    3. UTXO fetching, in which we learn the fee and virtual sizes of these transactions. This allows us to build a standalone instance of DepGraph or a TxGraph of the package, i.e. not connected to mempool transactions. This is probably done through a PreChecks call, so we can save and exclude any standardness failures.
    4. Pre-linearization: linearize the package transactions. This is without mempool transactions. Decide on what the chunks i.e. subpackages will be.
    5. Use TxGraph::StartStaging to create a level 1.
    6. Splitting for each chunk CNK:
      1. Use TxGraph::StartStaging to create a level 2.
      2. Validate: Limiting, Conflict-finding, Replacement checks, Verification, up to and including PolicyScriptChecks. When doing replacement and diagram checks, always compare the top level with the one just below it, not with the Main level.
      3. Commit these changes using TxGraph::CommitStaging to level 1, not level 0 which represents what is in the mempool.
      4. The chunk can also be discarded if it is invalid or doesn’t pass its RBF requirements, i.e. TxGraph::AbortStaging. The other chunks’ changes in the level 1 are retained.
    7. Full Addition and Eviction should happen at the end, i.e. ConsensusScriptChecks and ChangeSet::Apply and TxGraph::CommitStaging applying the changes from level 1 to level 0.
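
    Putting the steps above into a rough shape, here is a toy Python model (plain dictionaries rather than the TxGraph API, and validate_chunk is a hypothetical stand-in for the limiting, conflict-finding, replacement, and script checks) of how two nested staging levels would let each chunk be committed or discarded independently:

     def validate_package(chunks, mempool, validate_chunk, test_accept=True):
         level1 = dict(mempool)                 # StartStaging on top of the mempool (level 0)
         results = {}
         for chunk in chunks:                   # chunks come from the pre-linearization step
             level2 = dict(level1)              # StartStaging on top of level 1
             ok, chunk_results = validate_chunk(chunk, level2)
             results.update(chunk_results)
             if ok:
                 level1 = level2                # CommitStaging: keep this chunk's changes in level 1
             # otherwise: AbortStaging, the earlier chunks' changes in level 1 are retained
         if not test_accept:
             mempool.clear()
             mempool.update(level1)             # CommitStaging from level 1 to level 0 (real submission)
         return results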

    Please leave any additional context

    Takeaways and open questions:

    • I think we could do an extremely limited version now by rerouting test-accepts through AcceptPackage and making a few logical tweaks (AcceptSingleTransaction(test_accept=true) followed by AcceptMultipleTransactions(test_accept=true) works for 1p1c). But I don’t think this is worth it.
    • Is there a simple way to implement the full feature without doing the package validation restructuring described above? I don’t really think so.
    • Like a lot of things, implementing this will be way easier with the cluster mempool changes merged. However, we need to modify TxGraph to support up to 3 levels.
      • An alternative is to try to build an “UndoSubpackageChanges” external to TxGraph. I think that will be pretty complex and more levels seems more natural.
    • I think we can delete MiniMiner::Linearize since it was written for this purpose, and we can now use cluster_linearize instead.
    • Do we need to make a new RPC given we’re changing the interface? I’d lean towards no since this is a loosening and basically playing catchup for supporting packages.
  2. glozow added the label Feature on Mar 28, 2025
  3. glozow added the label Mempool on Mar 28, 2025
  4. ronnakamoto commented at 6:46 pm on April 16, 2025: none

    The proposed solution looks good. But, as you mentioned, we could get it done with a few logical tweaks to AcceptPackage, which would allow us to skip waiting for the cluster mempool merge.

    I think we can modify the code to have testmempoolaccept and submitpackage share the same validation logic path, but just differ in whether they apply changes to the mempool.

    The basic idea would be:

    • Have both RPCs use AcceptPackage instead of the current split where test uses AcceptMultipleTransactions and submit uses AcceptPackage
    • Add a parameter to control whether package policies (CPFP, RBF) are applied during validation, separate from the existing test_accept flag
    • Use the existing ChangeSet mechanism but skip the Apply() call when in test mode

    But the downside, of course, would be that it won’t allow complex topologies directly; maybe that could be addressed if it’s done in a more generic way.
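
    A rough sketch of that idea in toy Python (not the real validation code; process_package, validate, and apply_changes are hypothetical): both RPCs would run the same validation with package policies enabled, and only the final apply step would depend on test_accept.

     def process_package(package, validate, apply_changes, *, test_accept, package_policies=True):
         changeset, results = validate(package, package_policies=package_policies)
         if not test_accept and changeset is not None:
             apply_changes(changeset)   # submitpackage: actually modify the mempool
         return results                 # testmempoolaccept: report results, discard the changeset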

  5. glozow commented at 8:54 pm on April 16, 2025: member
    @ronnakamoto IIRC we already have the param you’re talking about (m_package_feerates). I haven’t tried implementing it yet; let me know if you have a patch that works! An easy way to test would be to add a testmempoolaccept call in front of all the submitpackage tests and check that they give equivalent results.
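
    One possible shape for that check, as a sketch for the functional test framework (untested; submitpackage_with_check is a hypothetical helper): run testmempoolaccept on the same hex list right before submitpackage and compare the per-transaction outcomes.

     def submitpackage_with_check(node, hex_txs, **kwargs):
         test_res = node.testmempoolaccept(hex_txs)
         submit_res = node.submitpackage(hex_txs, **kwargs)
         mempool = set(node.getrawmempool())
         for tx_res in test_res:
             # If the two code paths agree, anything testmempoolaccept allowed should now be
             # in the mempool, and anything it rejected (or left unvalidated) should not be.
             assert tx_res.get("allowed", False) == (tx_res["txid"] in mempool)
         return submit_res
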
  6. ismaelsadeeq commented at 12:23 pm on April 17, 2025: member

    Given that we want testmempoolaccept to relax the requirement that the transaction should be topologically connected due to the reasons discussed above, should we consider extending submitpackage to do the same? As you mentioned, in practice some transactions already in the mempool can make the submitted package connected: the package may not appear connected at the outset because it was deduplicated, but it is connected through some in-mempool transaction.

    This allows us to build a standalone instance of DepGraph or a TxGraph of the package, i.e. not connected to mempool transactions.

    Use TxGraph::StartStaging to create a level 1.

    Can you clarify which TxGraph you’re creating the staging from? IIUC, you want to linearize a newly created transaction graph containing just the new transactions to produce a series of chunks, then attempt to add each chunk to a staging level of the current mempool TxGraph, applying a series of checks. Whether a chunk fails to be added or succeeds, move on to the next. Finally, discard the staging level entirely and report reasons for rejecting each chunk in testmempoolaccept, whereas in submitpackage you apply the staging level to the main level.

  7. glozow commented at 1:00 pm on April 17, 2025: member

    Given that we want testmempoolaccept to relax the requirement that the transaction should be topologically connected

    There is already no requirement of connectedness now

    Given that we want testmempoolaccept to relax the requirement that the transaction should be topologically connected due to the reasons discussed above, should we consider extending submitpackage to do the same? As you mentioned, in practice some transactions already in the mempool can make the submitted package connected

    While it is true that AcceptPackage can process a connected package that turns out to not be connected after deduplication, it’s not true that it can handle disconnected packages in general. We can add that fairly easily though I think.

    Use TxGraph::StartStaging to create a level 1.

    Can you clarify which TxGraph you’re creating the staging from?

    All the staging levels are made on the mempool’s TxGraph. The package doesn’t need proposed changes - we’d only use a TxGraph for linearization.

    Whether a chunk fails to be added or succeeds, move on to the next.

    Yes, we could use TxGraph::BlockBuilder for this, since it can handle skips for us. That’s quite neat…
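
    As a toy illustration of that “builder with skips” idea (plain Python, not the TxGraph::BlockBuilder interface): chunks are visited in order, and skipping a failed chunk also skips every later chunk that depends on it.

     def iterate_chunks(chunks, depends_on, try_chunk):
         """chunks: ordered chunk ids; depends_on[c]: the chunk ids that c spends from."""
         skipped = set()
         for chunk in chunks:
             if depends_on.get(chunk, set()) & skipped:
                 skipped.add(chunk)       # an ancestor chunk was skipped, so skip this one too
                 continue
             if not try_chunk(chunk):     # validation of this chunk failed
                 skipped.add(chunk)
         return skipped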

    Finally, discard the staging level entirely and report reasons for rejecting each chunk in testmempoolaccept, whereas in submitpackage you apply the staging level to the main level.

    Yep!

  8. ismaelsadeeq commented at 2:52 pm on April 17, 2025: member

    There is already no requirement of connectedness now

    But it is currently enforced on master; this test fails:

     diff --git a/test/functional/rpc_packages.py b/test/functional/rpc_packages.py
     index a2f9210f94d..807ede343a5 100755
     --- a/test/functional/rpc_packages.py
     +++ b/test/functional/rpc_packages.py
     @@ -88,6 +88,7 @@ class RPCPackagesTest(BitcoinTestFramework):
              self.test_multiple_parents()
              self.test_conflicting()
              self.test_rbf()
     +        self.test_submitpackage_connectedness()
              self.test_submitpackage()
              self.test_maxfeerate_submitpackage()
              self.test_maxburn_submitpackage()
     @@ -371,6 +372,16 @@ class RPCPackagesTest(BitcoinTestFramework):
              peer.wait_for_broadcast([tx["tx"].getwtxid() for tx in package_txns])
              self.generate(node, 1)
      
     +    def test_submitpackage_connectedness(self):
     +        txs_chain = self.wallet.create_self_transfer_chain(chain_length=2)
     +        txs_chain2 = self.wallet.create_self_transfer_chain(chain_length=1)
     +        txs_list = [txs_chain[0]['hex'], txs_chain[1]['hex'], txs_chain2[0]['hex']]
     +        self.log.info("Test that testmempoolaccept does not enforce connectedness")
     +        test_accept_res = self.nodes[0].testmempoolaccept(txs_list)
     +        [assert_equal(res['allowed'], True) for res in test_accept_res]
     +        self.log.info("Test that Submitpackage does not enforce connectedness")
     +        self.nodes[0].submitpackage(txs_list)
     +
          def test_submitpackage(self):
              node = self.nodes[0]
    
  9. glozow commented at 5:06 pm on April 17, 2025: member

    I was referring to testmempoolaccept:

    Given that we want testmempoolaccept to relax the requirement that the transaction should be topologically connected

    There is already no requirement of connectedness now

    See test in rpc_packages.py:

    https://github.com/bitcoin/bitcoin/blob/bfeacc18b36132ba8ac70142133cd6c0e63b6763/test/functional/rpc_packages.py#L95-L99

