mempool/validation: mempool ancestor/descendant limits for packages #21800

glozow wants to merge 9 commits into bitcoin:master from glozow:package-mempool-ancestors (9 files changed, +709 −82)
  1. glozow commented at 7:28 pm on April 28, 2021: member

    This PR implements a function to calculate mempool ancestors for a package and enforces ancestor/descendant limits on them as a whole. It reuses a portion of CalculateMemPoolAncestors(); there’s also a small refactor to move the reused code into a generic helper function. Instead of calculating ancestors and descendants for every single transaction in the package and their ancestors, we use a “worst case” heuristic, treating every transaction in the package as each other’s ancestor and descendant. This may overestimate everyone’s counts, but it is still quite accurate in our main package use cases, in which at least one of the transactions in the package is directly related to all the others (e.g. 1 parent + 1 child, multiple parents with 1 child, or chains).
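    The heuristic can be illustrated with a small sketch (a simplified Python model with hypothetical names; it tracks counts only, while the actual implementation also enforces virtual-size limits):

```python
DEFAULT_ANCESTOR_LIMIT = 25
DEFAULT_DESCENDANT_LIMIT = 25

def package_within_limits(package_size, mempool_ancestors):
    """Worst-case heuristic: treat every package transaction as every other
    package transaction's ancestor and descendant.

    mempool_ancestors: the union of the package's in-mempool ancestors, each
    with its existing descendant count (inclusive of itself)."""
    # Ancestor check: each package tx is assumed to have the whole union of
    # in-mempool ancestors plus every other package tx above it.
    if len(mempool_ancestors) + package_size > DEFAULT_ANCESTOR_LIMIT:
        return False
    # Descendant check: each in-mempool ancestor is assumed to gain every
    # package transaction as a new descendant.
    for entry in mempool_ancestors:
        if entry["count_with_descendants"] + package_size > DEFAULT_DESCENDANT_LIMIT:
            return False
    return True
```

    For a 2-transaction package on top of a 23-deep in-mempool chain this returns True (23 + 2 = 25 is within the limit), while a 3-transaction package on the same chain is rejected.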

    Note on Terminology: While “package” is often used to describe groups of related transactions within the mempool, here, I only use package to mean the group of not-in-mempool transactions we are currently validating.

    Motivation

    It would be a potential DoS vector to allow submission of packages to mempool without a proper guard for mempool ancestors/descendants. In general, the purpose of mempool ancestor/descendant limits is to limit the computational complexity of dealing with families during removals and additions. We want to be able to validate multiple transactions on top of the mempool, but also avoid these scenarios:

    • We underestimate the ancestors/descendants during package validation and end up with extremely complex families in our mempool (potentially a DoS vector).
    • We expend an unreasonable amount of resources calculating everyone’s ancestors and descendants during package validation.
  2. DrahtBot added the label Build system on Apr 28, 2021
  3. DrahtBot added the label Mempool on Apr 28, 2021
  4. DrahtBot added the label RPC/REST/ZMQ on Apr 28, 2021
  5. DrahtBot added the label TX fees and policy on Apr 28, 2021
  6. DrahtBot added the label UTXO Db and Indexes on Apr 28, 2021
  7. DrahtBot added the label Validation on Apr 28, 2021
  8. DrahtBot commented at 5:26 am on April 29, 2021: member

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #22582 (test: a test to check descendant limits by ritickgoenka)
    • #22543 (test: Use MiniWallet in mempool_limit.py by ShubhamPalriwala)
    • #22290 (Package Mempool Submission with Package Fee-Bumping by glozow)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  9. glozow force-pushed on May 27, 2021
  10. glozow force-pushed on May 28, 2021
  11. glozow renamed this:
    [WIP] mempool/validation: mempool ancestor/descendant limits for packages
    mempool/validation: mempool ancestor/descendant limits for packages
    on Jun 1, 2021
  12. glozow marked this as ready for review on Jun 1, 2021
  13. fanquake removed the label Build system on Jun 1, 2021
  14. fanquake removed the label RPC/REST/ZMQ on Jun 1, 2021
  15. fanquake removed the label TX fees and policy on Jun 1, 2021
  16. fanquake removed the label UTXO Db and Indexes on Jun 1, 2021
  17. fanquake added the label Tests on Jun 1, 2021
  18. glozow added the label Review club on Jun 5, 2021
  19. jnewbery commented at 5:03 pm on June 7, 2021: member
    Concept ACK. Treating every transaction in the package as each other’s ancestor and descendant is a good, conservative heuristic to use, since it can never underestimate the true number of ancestors/descendants. If it’s too limiting, we could potentially implement looser limits in future.
  20. in src/txmempool.cpp:178 in 7d0e9c1970 outdated
    208+        if (stageit->GetSizeWithDescendants() + total_virtual_size > limitDescendantSize) {
    209             errString = strprintf("exceeds descendant size limit for tx %s [limit: %u]", stageit->GetTx().GetHash().ToString(), limitDescendantSize);
    210             return false;
    211-        } else if (stageit->GetCountWithDescendants() + 1 > limitDescendantCount) {
    212+        } else if (stageit->GetCountWithDescendants() + total_count > limitDescendantCount) {
    213             errString = strprintf("too many descendants for tx %s [limit: %u]", stageit->GetTx().GetHash().ToString(), limitDescendantCount);
    


    harding commented at 2:39 am on June 8, 2021:

    In “[mempool] extend CalculateMemPoolAncestors for packages”

    Feels a bit wrong to me to return the same “too many (ancestors|descendants) for tx %s [limit: %u]” error message when we’re basing that conclusion on a heuristic rather than a full evaluation. Maybe return a slightly different string like “too many possible (ancestors|descendants)…” if total_count > 1.


    glozow commented at 10:53 am on June 10, 2021:
    Added a “possibly” to the beginning of the error strings, gated on total_count > 1, so that the existing tests that assert expected error messages don’t need to be changed.
  21. harding commented at 8:40 am on June 8, 2021: contributor

    Concept ACK. The simplification of assuming each transaction has as many more ancestors and descendants as there are transactions in the package is a clever way to avoid writing/repurposing a bunch of code that’s not necessary for the ultimate purpose of enabling package relay.

    However, the more the results returned by TMPA diverge from the results we’d get from submitting each transaction to our mempool individually, the more I think package validation should be using a different interface than individual transaction validation (e.g., a different parameter to TMPA or a different RPC altogether).

    Reviewed all the code, ran the functional tests, and tweaked the tests to see if I could get something unexpected to happen (nothing did). I have one minor inline comment below about using a different error string when we’re using different evaluation criteria.

    I tried to think of additional tests, but the only thing I’ve got is that it might be nice to either replicate or convert the CPFP carve out tests from test/functional/mempool_package_onemore.py to use TMPA, e.g. as a variation of the current “A-shaped test” (I actually tried to do this real quick but couldn’t get it to work; it was clearly my fault as I got the same error whether I used submitrawtransaction or TMPA).

    Overall, this PR LGTM. Thanks!

  22. glozow commented at 9:07 am on June 8, 2021: member

    @harding thank you for the review!!!

    However, the more the results returned by TMPA diverge from the results we’d get from submitting each transaction to our mempool individually, the more I think package validation should be using a different interface than individual transaction validation (e.g., a different parameter to TMPA or a different RPC altogether).

    I agree, and they will likely continue to diverge if we add bypass_timelocks and such… Perhaps we can have a regtest-only rawpackage RPC with a test_accept parameter (in my opinion users should never have to interact with packages)? And testmempoolaccept can be for users / L2 testing?

  23. harding commented at 6:13 pm on June 8, 2021: contributor

    @glozow

    Perhaps we can have a regtest-only rawpackage RPC with a test_accept parameter (in my opinion users should never have to interact with packages)?

    There’s certainly no need for users to interact with packages before there’s a reliable package relay mechanism, so starting with regtest-only seems like a good idea to me. If someone later comes along with a reason it should be user-facing, they can put in the (probably trivial) amount of work to make the RPC available on mainnet and the other networks.

    And testmempoolaccept can be for users / L2 testing?

    For users, yes. I don’t think anyone is using it today for L2 testing and I’m not sure it’s really well suited to that—testmempoolaccept tells you whether your transaction would be accepted into the current mempool, but L2 testers really want to know whether the transaction will be accepted into a future mempool; a failure now is a reliable harbinger of failure later, but a success now doesn’t guarantee success in the future (even ignoring that relay policy can be made more restrictive).

  24. in src/validation.cpp:450 in 5496b25b6a outdated
    471@@ -472,8 +472,10 @@ class MemPoolAccept
    472          */
    473         std::vector<COutPoint>& m_coins_to_uncache;
    474         const bool m_test_accept;
    475-        /** Disable BIP125 RBFing; disallow all conflicts with mempool transactions. */
    476-        const bool disallow_mempool_conflicts;
    477+        /** Whether we allow transactions to replace mempool transactions by BIP125 rules. If false,
    478+         * any transaction spending the same inputs as a transaction in the mempool is considered
    479+         * a conflict. */
    480+        const bool m_allow_bip125_replacement{true};
    


    Xekyo commented at 4:13 pm on June 9, 2021:
    Thanks, double negations are real brain teasers.
  25. in src/txmempool.cpp:180 in 7d0e9c1970 outdated
    188+            for (const auto& input : entry.get().GetTx().vin) {
    189+                std::optional<txiter> piter = GetIter(input.prevout.hash);
    190+                if (piter) {
    191+                    staged_ancestors.insert(**piter);
    192+                    if (staged_ancestors.size() + total_count > limitAncestorCount) {
    193+                        errString = strprintf("too many unconfirmed parents [limit: %u]", limitAncestorCount);
    


    Xekyo commented at 4:18 pm on June 9, 2021:

    Optionally, in the same vein as @harding mentioned in his review:

    0                        errString = strprintf("potentially too many unconfirmed parents [limit: %u]", limitAncestorCount);
    

    glozow commented at 10:54 am on June 10, 2021:
    Done, see #21800 (review)
  26. michaelfolkson commented at 4:21 pm on June 9, 2021: contributor

    For users, yes. I don’t think anyone is using it today for L2 testing and I’m not sure it’s really well suited to that—testmempoolaccept tells you whether your transaction would be accepted into the current mempool, but L2 testers really want to know whether the transaction will be accepted into a future mempool; a failure now is a reliable harbinger of failure later, but a success now doesn’t guarantee success in the future (even ignoring that relay policy can be made more restrictive).

    I’m not convinced this is true. It is impossible for L2 testers to know whether the package would be accepted into a future mempool, as fee levels could theoretically be anything up to infinity. So there is nothing to test. The L2 testing would be for checking that the package would be accepted at current fee levels (e.g. possibly just before broadcasting the package, or for general testing at current fee levels).

  27. in src/policy/packages.h:18 in 0feb2a65e4 outdated
    14@@ -14,6 +15,7 @@
    15 static constexpr uint32_t MAX_PACKAGE_COUNT{25};
    16 /** Default maximum total virtual size of transactions in a package in KvB. */
    17 static constexpr uint32_t MAX_PACKAGE_SIZE{101};
    18+static_assert(MAX_PACKAGE_SIZE * 4 * 1000 >= MAX_STANDARD_TX_WEIGHT);
    


    Xekyo commented at 4:36 pm on June 9, 2021:
    Is it fair to assume that all instances of SIZE now refer to virtualsize? Otherwise, this should perhaps be explicitly “vsize”.

    glozow commented at 9:52 am on June 11, 2021:
    You may find #22097 of interest :)
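    The static_assert in the hunk above guarantees that a package can always hold at least one maximally-sized standard transaction. With MAX_STANDARD_TX_WEIGHT = 400,000 weight units (from src/policy/policy.h) and 4 weight units per virtual byte, the arithmetic works out:

```python
MAX_PACKAGE_SIZE = 101            # KvB, i.e. thousands of virtual bytes
MAX_STANDARD_TX_WEIGHT = 400_000  # weight units, from src/policy/policy.h

# 1 vbyte corresponds to 4 weight units, so convert the package cap
# to weight units before comparing.
package_cap_weight = MAX_PACKAGE_SIZE * 4 * 1000
assert package_cap_weight >= MAX_STANDARD_TX_WEIGHT  # 404,000 >= 400,000
```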
  28. Xekyo commented at 11:26 pm on June 9, 2021: member
    Concept ACK
  29. glozow force-pushed on Jun 10, 2021
  30. glozow commented at 10:55 am on June 10, 2021: member
    Incorporated doc suggestions from #22084 (to make it mergeable) and some review comments here; working on adding more edge-case tests that were discussed in PR Review Club last night.
  31. glozow force-pushed on Jun 10, 2021
  32. rajarshimaitra commented at 1:19 pm on June 10, 2021: contributor

    tACK https://github.com/bitcoin/bitcoin/pull/21800/commits/1061cf457b021ccf24f394dbb5cc00af575598ba

    Overall, the idea of overestimating package ancestors and descendants seems reasonable to me.

    Stepped through the functional test, covering A and V shaped mempool ancestry. All tests passing.

  33. in src/txmempool.h:703 in 1061cf457b outdated
    693+     * @param[in]       limitDescendantSize     Max virtual size including descendants.
    694+     * @param[out]      errString               Populated with error reason if a limit is hit.
    695+     * @param[in]       fSearchForParents       Whether to search for entries' in-mempool parents.
    696+     *                                          Must be true if any entries are not already in mempool.
    697+     */
    698+    bool CalculateMemPoolAncestors(const std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef>& entries,
    


    ariard commented at 9:21 pm on June 13, 2021:
    What do you think about CalculateMemPoolAncestorsSet to disambiguate clearly from CalculateMemPoolAncestors? Also, add a param comment for entries (and the fact that they might not be inter-dependent)?

    glozow commented at 3:53 pm on June 14, 2021:
    Added doxygen comment for entries param, but my intention was to overload the CalculateMemPoolAncestors function (the fact that the first argument is a vector instead of a single entry should suffice?)
  34. in src/txmempool.h:685 in 1061cf457b outdated
    680@@ -681,6 +681,29 @@ class CTxMemPool
    681      */
    682     bool CalculateMemPoolAncestors(const CTxMemPoolEntry& entry, setEntries& setAncestors, uint64_t limitAncestorCount, uint64_t limitAncestorSize, uint64_t limitDescendantCount, uint64_t limitDescendantSize, std::string& errString, bool fSearchForParents = true) const EXCLUSIVE_LOCKS_REQUIRED(cs);
    683 
    684+    /** Try to calculate all in-mempool ancestors of a set of entries, and check for ancestor and
    685+     * descendant limits (all are inclusive of the transactions in entries). The same limits are
    


    ariard commented at 9:26 pm on June 13, 2021:
    by “inclusive” do you mean “for package limits evaluation, assume that the union of ancestors/descendants of each transaction is an ancestor/descendant of every transaction” or something else?

    glozow commented at 2:49 pm on June 14, 2021:
    It’s documenting the way we apply the limits, e.g. if ancestorcount is 25, it means the total number of ancestors, including itself, must be within 25.
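    Concretely (illustrative arithmetic, not the PR’s code): with the default limit of 25, a transaction may have at most 24 in-mempool ancestors, because the count includes the transaction itself:

```python
DEFAULT_ANCESTOR_LIMIT = 25  # inclusive of the transaction itself

def within_ancestor_limit(num_in_mempool_ancestors):
    # The entry itself counts toward the limit, hence the +1.
    return num_in_mempool_ancestors + 1 <= DEFAULT_ANCESTOR_LIMIT

assert within_ancestor_limit(24)      # 24 ancestors + itself == 25: OK
assert not within_ancestor_limit(25)  # 26 total: over the limit
```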
  35. in src/validation.cpp:1118 in 1061cf457b outdated
    1113@@ -1114,6 +1114,31 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
    1114         m_viewmempool.PackageAddTransaction(ws.m_ptx);
    1115     }
    1116 
    1117+    // Limit the scope of _entries and _ancestors. We should calculate ancestors for each
    1118+    // transaction individually when we call Finalize().
    


    ariard commented at 9:30 pm on June 13, 2021:
    Do you mean PreChecks() and under?

    glozow commented at 2:23 pm on June 14, 2021:
    I should clarify this comment - I mean when we call addUnchecked in real package mempool accept, we’ll need to recalculate the ancestors for each one as input

    glozow commented at 3:54 pm on June 14, 2021:
    fixed
  36. in src/validation.cpp:1124 in 1061cf457b outdated
    1119+    {
    1120+        std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef> _entries;
    1121+        std::transform(workspaces.cbegin(), workspaces.cend(), std::back_inserter(_entries),
    1122+                       [](const auto& ws) { return std::cref(*ws.m_entry); });
    1123+        // We won't use the set of ancestors returned for calling Finalize().
    1124+        CTxMemPool::setEntries _ancestors;
    


    ariard commented at 9:30 pm on June 13, 2021:
    Maybe package_ancestors to dissociate?

    glozow commented at 3:52 pm on June 14, 2021:
    done
  37. in src/validation.cpp:1137 in 1061cf457b outdated
    1132+            // evaluated all of them together. Return the same failure for all transactions.
    1133+            for (auto& ws : workspaces) {
    1134+                ws.m_state.Invalid(TxValidationResult::TX_MEMPOOL_POLICY, "too-long-mempool-chain", err_string);
    1135+                results.emplace(ws.m_ptx->GetWitnessHash(), MempoolAcceptResult::Failure(ws.m_state));
    1136+            }
    1137+            package_state.Invalid(PackageValidationResult::PCKG_POLICY, "package-too-long-mempool-chain");
    


    ariard commented at 9:32 pm on June 13, 2021:
    Reject reason could be package-ancestors/descendants limits? I think “too-long” masks that it can be also a size issue.

    glozow commented at 3:52 pm on June 14, 2021:
    changed the tx error to exceeds-ancestor-descendant-limits and the package error package-mempool-limits
  38. ariard commented at 11:10 pm on June 13, 2021: member

    I think this PR around mempool ancestors/descendants limits illustrates well the current limitations of our approach to implementing package-relay, namely the uncertainty about which types of packages we want to support.

    Union-based evaluation of ancestors/descendants limits doesn’t work well as an API from an L2 protocol dev’s viewpoint. Let’s say you implement domino bumping to close multiple lightning channels in an emergency, where the bumping utxo is reused across a CPFP chain:

    0        A         E         I
    1          \         \         \
    2           B -- D -- F -- H -- J -- K
    3               /        /          /
    4             C        G          L 
    

    Where root vertices are commitment transactions.

    If you already have {A,B,C,D} in the mempools, and try to broadcast {E,F,G,H,I,J,K,L} in a single package, it’s going to be rejected (4 * 8 > DEFAULT_ANCESTOR_LIMITS), though if you broadcast components by pair, it will be accepted.

    Also, to the best of my knowledge, all deployed or in-deployment L2s (LN, Revault, Lightning Pool) share the need for 2-transaction packages, but nothing beyond. Honestly, deploying 2-txn-sized package relay to solve the safety-critical issue of pre-signed transactions whose feerate falls under the mempool min feerate would already be a huge win. Once done, we can consider loosening the package API as needs for other types of packages emerge in the ecosystem.

    Further, from a base-layer viewpoint, once we have p2p packages, it doesn’t seem reasonable to allow mempool traversals for chains of transactions with 25 elements. It’s far easier to reason about DoS concerns with 2-txn-sized packages.

    So what do you think about:

    • lowering MAX_PACKAGE_COUNT to 2 transactions for the initial deployment of package-relay?
    • implementing “intersection-dedup” evaluation, where duplicated ancestors/descendants aren’t counted toward package limits?

    I think the code changes should be limited, as you can account for the second package element as one more descendant and subtract it, including its size, from the CalculateMemPoolAncestors limitDescendant* arguments of the first package element?

  39. ariard commented at 11:15 pm on June 13, 2021: member

    I agree, and they will likely continue to diverge if we add bypass_timelocks and such… Perhaps we can have a regtest-only rawpackage RPC with a test_accept parameter (in my opinion users should never have to interact with packages)? And testmempoolaccept can be for users / L2 testing?

    Sounds like a good idea to introduce a rawpackage test-only RPC. I believe switching it to mainnet is going to be a function of which p2p package version we decide on (either sender-initiated or relay-initiated). W.r.t. bypass_timelocks/bypass_feerate, I think that’s ultimately different from the package interface, as you can be interested in the mempool acceptance of individual transactions, so we can leave them on testmempoolaccept for now? @michaelfolkson

    I’m not convinced this is true. It is impossible for L2 testers to know whether the package would be accepted into a future mempool, as fee levels could theoretically be anything up to infinity. So there is nothing to test. The L2 testing would be for checking that the package would be accepted at current fee levels (e.g. possibly just before broadcasting the package, or for general testing at current fee levels).

    Your point is valid, but note in #20833 we had the discussion to introduce bypass_feerate to remedy this pitfall (somewhere in the GH comments, yeah I know it’s messy…)

  40. glozow force-pushed on Jun 14, 2021
  41. glozow commented at 5:08 pm on June 14, 2021: member

    @ariard thanks for the review, I’m very much looking forward to the IRC discussions to better understand what packages we want to support. In your domino bumping example, I’m not entirely sure why we couldn’t use several packages of size 2 to bump them?

    MAX_PACKAGE_COUNT to 2 transactions for the initial deployment of package-relay

    We’re still a few PRs from package accept with submission to mempool - will put some things up soon - and then a few PRs from package relay but sounds good to me. I personally believe that the P2P requirements can be more stringent than individual transaction mempool policy - I also think it’s fine for peers to negotiate their maximum package size they’re willing to receive in a SENDPACKAGES message. Starting with a default of 2 sounds good to me. However, I don’t think package ancestor/descendant limits should be explicitly less than individual ones within a node, because I think it could give an adversary the chance to prevent a package from being accepted by submitting a very large descendant to the top transaction.

    implementing “intersection-dedup” evaluation, where duplicated ancestors/descendants aren’t counted toward package limits?

    I’m not sure I understand, could you elaborate? The setAncestors used in CalculateMemPoolAncestors() is not duplicated between transactions. I am working on a branch to “trim” packages when they overlap with the mempool (i.e. one or more transactions are already in the mempool), and will add test cases for ancestor/descendant limits with it.

    (Recent push addresses inline review comments)

  42. Xekyo commented at 6:34 pm on June 16, 2021: member

    If you already have {A,B,C,D} in the mempools, and try to broadcast {E,F,G,H,I,J,K,L} in a single package it’s going to be rejected (4 * 8 > DEFAULT_ANCESTOR_LIMITS), though if you broadcast components by pair, it will be accepted.

    @ariard: I don’t understand the reasoning that leads to “4 * 8” here.

  43. glozow commented at 4:49 pm on July 6, 2021: member

    Bump, this is the next step for package validation logic.

    The heuristic used in this implementation still results in an exact calculation of in-mempool and in-package ancestors for packages of parent + child (the MVP/basic package use case as discussed in the IRC meetings). It can be slightly more restrictive in scenarios such as batched fee bumping, but is extensible if we want to implement something more granular in the future.
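    For instance, in the 1-parent-1-child case the heuristic’s count coincides with the true count (hypothetical helper, not the PR’s code):

```python
def worst_case_ancestor_count(package_size, mempool_ancestor_union_size):
    # Heuristic: every package tx is treated as an ancestor of every other
    # package tx, on top of the union of all in-mempool ancestors.
    return mempool_ancestor_union_size + package_size

# Parent + child, where the parent has 3 unconfirmed in-mempool ancestors:
# the child's true inclusive ancestor count is 3 (mempool) + parent + itself
# = 5, exactly what the heuristic reports for the package.
assert worst_case_ancestor_count(2, 3) == 5
```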

  44. in src/txmempool.h:705 in ef6004ca88 outdated
    680@@ -681,6 +681,34 @@ class CTxMemPool
    681      */
    682     bool CalculateMemPoolAncestors(const CTxMemPoolEntry& entry, setEntries& setAncestors, uint64_t limitAncestorCount, uint64_t limitAncestorSize, uint64_t limitDescendantCount, uint64_t limitDescendantSize, std::string& errString, bool fSearchForParents = true) const EXCLUSIVE_LOCKS_REQUIRED(cs);
    683 
    684+    /** Try to calculate all in-mempool ancestors of a set of entries, and check for ancestor and
    685+     * descendant limits (all are inclusive of the transactions in entries). The same limits are
    686+     * used no matter how many transactions are passed in. For example, if entries.size() = 3 and
    687+     * the limit is 25, the union of all 3 sets of ancestors must be <= 22.
    688+     * @param[in]       entries                 Entries corresponding to transaction(s) being
    689+     *                                          evaluated for acceptance to mempool. If there are
    


    jnewbery commented at 10:17 am on July 7, 2021:
    This generic helper function is used for more than mempool acceptance (eg the getmempoolancestors RPC calls CalculateMempoolAncestors(const CTxMemPoolEntry& entry, ...), which calls CalculateMemPoolAncestors(const std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef>& entries, ...). I suggest you remove the reference to “acceptance to mempool” here.

    glozow commented at 2:21 pm on July 8, 2021:
    Took it out. Also kind of different now that the two CMPAs are slightly different
  45. in src/txmempool.h:706 in ef6004ca88 outdated
    685+     * descendant limits (all are inclusive of the transactions in entries). The same limits are
    686+     * used no matter how many transactions are passed in. For example, if entries.size() = 3 and
    687+     * the limit is 25, the union of all 3 sets of ancestors must be <= 22.
    688+     * @param[in]       entries                 Entries corresponding to transaction(s) being
    689+     *                                          evaluated for acceptance to mempool. If there are
    690+     *                                          multiple, they must not already be in the mempool.
    


    jnewbery commented at 10:18 am on July 7, 2021:
    I don’t understand the “if there are multiple” part of this sentence. This function can be called with a single tx, and in that case the tx must not already be in the mempool.

    glozow commented at 7:06 am on July 15, 2021:
    Right, removed
  46. in src/txmempool.h:694 in ef6004ca88 outdated
    689+     *                                          evaluated for acceptance to mempool. If there are
    690+     *                                          multiple, they must not already be in the mempool.
    691+     *                                          They need not be direct ancestors/descendants of
    692+     *                                          each other, though they will be treated as such.
    693+     * @param[in,out]   setAncestors            Set of in-mempool ancestors. Updated to include
    694+     *                                          any new ancestors found.
    


    jnewbery commented at 10:19 am on July 7, 2021:
    I think this is actually just an out param. The caller shouldn’t be calling this function with setAncestors already populated. Perhaps an assert should be added to the top of the function that setAncestors is empty?

    glozow commented at 2:20 pm on July 8, 2021:
    You’re right it’s just an out param, fixed.
  47. in test/functional/rpc_packages.py:263 in ef6004ca88 outdated
    258+        inputs = [{"txid": first_coin["txid"], "vout": 0}]
    259+        outputs = [{self.address : parent_value}, {ADDRESS_BCRT1_P2WSH_OP_TRUE : parent_value}]
    260+        rawtx = node.createrawtransaction(inputs, outputs)
    261+
    262+        parent_signed = node.signrawtransactionwithkey(hexstring=rawtx, privkeys=self.privkeys)
    263+        parent_tx = CTransaction()
    


    jnewbery commented at 10:47 am on July 7, 2021:

    There’s a silent merge conflict with master: CTransaction is no longer imported into rpc_packages.py (since 2ce7b47958c4a10ba20dc86c011d71cda4b070a5).

    This should fix it:

     0diff --git a/test/functional/rpc_packages.py b/test/functional/rpc_packages.py
     1index 447c3cb08f..a084d0d9cb 100755
     2--- a/test/functional/rpc_packages.py
     3+++ b/test/functional/rpc_packages.py
     4@@ -12,6 +12,7 @@ from test_framework.test_framework import BitcoinTestFramework
     5 from test_framework.messages import (
     6     BIP125_SEQUENCE_NUMBER,
     7     COIN,
     8+    CTransaction,
     9     CTxInWitness,
    10     tx_from_hex,
    11 )
    12@@ -256,7 +257,7 @@ class RPCPackagesTest(BitcoinTestFramework):
    13         parent_signed = node.signrawtransactionwithkey(hexstring=rawtx, privkeys=self.privkeys)
    14         parent_tx = CTransaction()
    15         assert parent_signed["complete"]
    16-        parent_tx.deserialize(BytesIO(hex_str_to_bytes(parent_signed["hex"])))
    17+        parent_tx = tx_from_hex(parent_signed["hex"])
    18         parent_txid = parent_tx.rehash()
    19         node.sendrawtransaction(parent_signed["hex"])
    20 
    21@@ -278,7 +279,7 @@ class RPCPackagesTest(BitcoinTestFramework):
    22         value = parent_value - Decimal("0.0001")
    23         rawtx_b = node.createrawtransaction([{"txid": parent_txid, "vout": 1}], {self.address : value})
    24         tx_child_b = CTransaction() # M2b
    25-        tx_child_b.deserialize(BytesIO(hex_str_to_bytes(rawtx_b)))
    26+        tx_child_b = tx_from_hex(rawtx_b)
    27         tx_child_b.wit.vtxinwit = [CTxInWitness()]
    28         tx_child_b.wit.vtxinwit[0].scriptWitness.stack = [CScript([OP_TRUE])]
    29         tx_child_b_hex = tx_child_b.serialize().hex()
    

    glozow commented at 2:20 pm on July 8, 2021:
    Fixed
  48. in src/txmempool.cpp:180 in ef6004ca88 outdated
    207+        // When multiple transactions are passed in, the ancestors and descendants of all transactions
    208+        // considered together must be within limits even if they are not interdependent. This may be
    209+        // stricter than the limits for each individual transaction.
    210+        if (stageit->GetSizeWithDescendants() + total_virtual_size > limitDescendantSize) {
    211+            errString = strprintf("%sexceeds descendant size limit for tx %s [limit: %u]",
    212+                                  total_count > 1 ? "possibly" : "",
    


    jnewbery commented at 10:57 am on July 7, 2021:

    I think this (and other instances) would need to be:

    0                                  total_count > 1 ? "possibly " : "",
    

    otherwise the log would be possiblyexceeds descendant ...


    glozow commented at 2:20 pm on July 8, 2021:
    Fixed

    JeremyRubin commented at 4:12 pm on July 9, 2021:
    did you tho? looks same
  49. in src/txmempool.cpp:192 in ef6004ca88 outdated
    204+        // If we're not searching for parents, we require all entries to be in the mempool already.
    205+        // We should never use CalculateMemPoolAncestors on a set of transactions that are already
    206+        // in the mempool.
    207+        assert(entries.size() == 1);
    208+        txiter it = mapTx.iterator_to(entries[0].get());
    209+        staged_ancestors = it->GetMemPoolParents();
    


    jnewbery commented at 11:05 am on July 7, 2021:
    What happened to the Const here? This used to be GetMemPoolParentsConst().

    glozow commented at 2:20 pm on July 8, 2021:
    Fixed
  50. in src/txmempool.cpp:155 in ef6004ca88 outdated
    151@@ -151,33 +152,47 @@ void CTxMemPool::UpdateTransactionsFromBlock(const std::vector<uint256> &vHashes
    152     }
    153 }
    154 
    155-bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntries &setAncestors, uint64_t limitAncestorCount, uint64_t limitAncestorSize, uint64_t limitDescendantCount, uint64_t limitDescendantSize, std::string &errString, bool fSearchForParents /* = true */) const
    156+bool CTxMemPool::CalculateMemPoolAncestors(const std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef>& entries,
    


    jnewbery commented at 11:11 am on July 7, 2021:

    I think that CalculateMemPoolAncestors() was already doing too much, and this adds just a little bit more complication. Instead of overloading CMPA and having CMPA(tx) call CMPA(package), what do you think about the splitting it into two parts:

    • calculate parents (two different functions - one for a single transaction which may have fSearchForParents=false, and one for a package)
    • check package limits (everything from while (!staged_ancestors.empty()) downwards, which is the same function for an individual transaction and a package if parametrized correctly)

    and then roughly:

    CMPA(tx) {
        set_ancestors = CalculateParents(tx);
        CheckPackageLimits(set_ancestors);
    }

    CMPA(package) {
        set_ancestors = CalculateParents(package);
        CheckPackageLimits(set_ancestors);
    }
    

    JeremyRubin commented at 8:05 pm on July 7, 2021:
    nit: use function signatures with spans when passing a const vec, removes the need to allocate for passing a single tx (can span without allocating from pointer).

    JeremyRubin commented at 8:16 pm on July 7, 2021:
    re: @jnewbery’s feedback to split, I think that might be right if fsearchforparents should never be used as false with a package? the asserts confuse me…

    glozow commented at 2:19 pm on July 8, 2021:

    Done, with Span :D

    I’ve made it so that the CMPA with a vector of transactions doesn’t have a fSearchForParents param, we just always search for parents. The only call site (other than tests) should be in MemPoolAccept::AcceptMultipleTransactions() so the transactions shouldn’t already be in the mempool. It wouldn’t make sense to use this heuristic on transactions that are already in the mempool, since it’s potentially stricter.

  51. in src/txmempool.cpp:160 in ef6004ca88 outdated
    156+bool CTxMemPool::CalculateMemPoolAncestors(const std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef>& entries,
    157+                                           setEntries& setAncestors,
    158+                                           const uint64_t limitAncestorCount,
    159+                                           const uint64_t limitAncestorSize,
    160+                                           const uint64_t limitDescendantCount,
    161+                                           const uint64_t limitDescendantSize,
    


    jnewbery commented at 11:15 am on July 7, 2021:
    What do you think about not making these const, and then decrementing from them and checking that we don’t go below zero? That would avoid the need for the local variables and make the logic for individual txs and packages more similar.

    JeremyRubin commented at 8:11 pm on July 7, 2021:
    I personally think it is clearer to make these const; decrement-until-0 is always confusing to me (does the error come in at 0 or below zero? These are currently uint64_t, so we’d also need to either convert to int64_t or detect wraparound…).

    glozow commented at 7:10 am on July 15, 2021:
    Another problem with editing these variables is we’d need to change the error logging (current log would use the decremented count instead of actual limit), so I’ve left it as is.
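    The wraparound hazard mentioned above is easy to demonstrate with a standalone sketch (not project code):

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <limits>

    int main()
    {
        // Decrementing an unsigned "remaining budget" past zero wraps around
        // to UINT64_MAX instead of going negative, so a `remaining < 0` check
        // can never fire; wraparound would have to be detected explicitly.
        uint64_t remaining = 0;
        remaining -= 1;
        assert(remaining == std::numeric_limits<uint64_t>::max());
        return 0;
    }
    ```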
  52. in src/txmempool.cpp:188 in ef6004ca88 outdated
    200-        // If we're not searching for parents, we require this to be an
    201-        // entry in the mempool already.
    202-        txiter it = mapTx.iterator_to(entry);
    203-        staged_ancestors = it->GetMemPoolParentsConst();
    204+        // If we're not searching for parents, we require all entries to be in the mempool already.
    205+        // We should never use CalculateMemPoolAncestors on a set of transactions that are already
    


    JeremyRubin commented at 8:14 pm on July 7, 2021:
    Never? Why is this new constraint present? I thought it’s safe to use as long as we aren’t searching for parents?

    glozow commented at 2:17 pm on July 8, 2021:
    (Same idea) if we’re calling this with multiple transactions, they shouldn’t already be in the mempool.

    glozow commented at 6:59 am on July 15, 2021:
    Removed this constraint
  53. in src/txmempool.cpp:190 in ef6004ca88 outdated
    202-        txiter it = mapTx.iterator_to(entry);
    203-        staged_ancestors = it->GetMemPoolParentsConst();
    204+        // If we're not searching for parents, we require all entries to be in the mempool already.
    205+        // We should never use CalculateMemPoolAncestors on a set of transactions that are already
    206+        // in the mempool.
    207+        assert(entries.size() == 1);
    


    JeremyRubin commented at 8:15 pm on July 7, 2021:
    this assert is pretty scary? why do we not permit >1?

    JeremyRubin commented at 8:16 pm on July 7, 2021:
    What if it’s a package of size 1? Is this code safe? Or only if it’s called in non package contexts?

    glozow commented at 2:15 pm on July 8, 2021:
    Sorry, I’ve clarified this now. CMPA for a vector of transactions always searches for parents. If we call CMPA with multiple transactions (also handling the case if called with a vector of size 1), those transactions shouldn’t already be in the mempool.

    glozow commented at 6:58 am on July 15, 2021:
    Removed this condition
  54. in src/txmempool.cpp:248 in ef6004ca88 outdated
    239@@ -216,6 +240,17 @@ bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntr
    240     return true;
    241 }
    242 
    243+bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntries &setAncestors,
    244+                                           uint64_t limitAncestorCount, uint64_t limitAncestorSize,
    245+                                           uint64_t limitDescendantCount, uint64_t limitDescendantSize,
    246+                                           std::string &errString, bool fSearchForParents /* = true */) const
    247+{
    248+    std::vector<CTxMemPoolEntry::CTxMemPoolEntryRef> entry_vec{std::cref(entry)};
    


    JeremyRubin commented at 8:17 pm on July 7, 2021:
    if you update to a span, you get rid of an allocation here!

    glozow commented at 2:16 pm on July 8, 2021:
    GONE! 👐
  55. glozow force-pushed on Jul 8, 2021
  56. glozow force-pushed on Jul 9, 2021
  57. glozow force-pushed on Jul 9, 2021
  58. glozow force-pushed on Jul 15, 2021
  59. glozow force-pushed on Jul 15, 2021
  60. in src/txmempool.cpp:154 in 27986f86c6 outdated
    150@@ -151,34 +151,17 @@ void CTxMemPool::UpdateTransactionsFromBlock(const std::vector<uint256> &vHashes
    151     }
    152 }
    153 
    154-bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntries &setAncestors, uint64_t limitAncestorCount, uint64_t limitAncestorSize, uint64_t limitDescendantCount, uint64_t limitDescendantSize, std::string &errString, bool fSearchForParents /* = true */) const
    155+bool CTxMemPool::calculateAncestorsAndCheckLimits(size_t entry_size,
    


    jnewbery commented at 12:55 pm on July 22, 2021:

    Current project style is to name functions and methods with UpperCaseCamel:

    https://github.com/bitcoin/bitcoin/blob/36aee0f3538ec3399a3838041ea5993aba5f518b/doc/developer-notes.md#L92


    glozow commented at 10:47 am on July 26, 2021:
    Done
  61. in src/txmempool.h:601 in 27986f86c6 outdated
    596+     * param@[out]  setAncestors        Will be populated with all mempool ancestors.
    597+     */
    598+    bool calculateAncestorsAndCheckLimits(size_t entry_size,
    599+                                          size_t entry_count,
    600+                                          setEntries& setAncestors,
    601+                                          CTxMemPoolEntry::Parents &staged_ancestors,
    


    jnewbery commented at 12:58 pm on July 22, 2021:
                                              CTxMemPoolEntry::Parents& staged_ancestors,
    
  62. in src/txmempool.h:591 in 27986f86c6 outdated
    585@@ -585,6 +586,25 @@ class CTxMemPool
    586      */
    587     std::set<uint256> m_unbroadcast_txids GUARDED_BY(cs);
    588 
    589+
    590+    /**
    591+     * Helper function to calculate all in-mempool ancestors of staged_ancestors and apply
    


    jnewbery commented at 1:12 pm on July 22, 2021:
         * Helper function to calculate all in-mempool ancestors of staged_ancestors (including staged_ancestors themselves) and apply
    

    or similar to document that setAncestors will include the transactions in staged_ancestors.


    glozow commented at 10:47 am on July 26, 2021:
    Done
  63. in src/txmempool.h:600 in 27986f86c6 outdated
    595+     * param@[in]   staged_ancestors    Should contain entries in the mempool.
    596+     * param@[out]  setAncestors        Will be populated with all mempool ancestors.
    597+     */
    598+    bool calculateAncestorsAndCheckLimits(size_t entry_size,
    599+                                          size_t entry_count,
    600+                                          setEntries& setAncestors,
    


    jnewbery commented at 1:16 pm on July 22, 2021:
    The & jumps from right to left between commits. Perhaps just have this as setEntries& setAncestors in commit MOVEONLY: add helper function for calculating ancestors and checking limits

    glozow commented at 10:47 am on July 26, 2021:
    Done
  64. in src/txmempool.cpp:175 in 27986f86c6 outdated
    169@@ -187,14 +170,20 @@ bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntr
    170         staged_ancestors.erase(stage);
    171         totalSizeWithAncestors += stageit->GetTxSize();
    172 
    173-        if (stageit->GetSizeWithDescendants() + entry.GetTxSize() > limitDescendantSize) {
    174-            errString = strprintf("exceeds descendant size limit for tx %s [limit: %u]", stageit->GetTx().GetHash().ToString(), limitDescendantSize);
    175+        if (stageit->GetSizeWithDescendants() + entry_size > limitDescendantSize) {
    176+            errString = strprintf("%sexceeds descendant size limit for tx %s [limit: %u]",
    177+                                  entry_count > 1 ? "possibly " : "",
    


    jnewbery commented at 1:35 pm on July 22, 2021:

    These conditional log changes make the function less generic (it now knows something semantic about the caller). If you really want this “possibly " prefix, you could prepend it in the calling function:

         // When multiple transactions are passed in, the ancestors and descendants of all transactions
         // considered together must be within limits even if they are not interdependent. This may be
         // stricter than the limits for each individual transaction.
    -    return calculateAncestorsAndCheckLimits(total_size, package.size(),
    -                                            setAncestors, staged_ancestors,
    -                                            limitAncestorCount, limitAncestorSize,
    -                                            limitDescendantCount, limitDescendantSize, errString);
    +    auto ret = calculateAncestorsAndCheckLimits(total_size, package.size(),
    +                                                setAncestors, staged_ancestors,
    +                                                limitAncestorCount, limitAncestorSize,
    +                                                limitDescendantCount, limitDescendantSize, errString);
    +
    +    errString.insert(0, "possibly ");
    +    return ret;
     }
    

    Although I don’t think you really need it. If someone is submitting packages through the RPC, then presumably they’re a sophisticated user and they’ll be able to understand that a heuristic is being used to calculate ancestor/descendant limits.

    You could also update the only place that this string is used:

    --- a/src/validation.cpp
    +++ b/src/validation.cpp
    @@ -1094,7 +1094,7 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
                 // be implementation-dependent, and it's likely to be multiple transactions because we
                 // evaluated all of them together. Return the same failure for all transactions.
                 for (auto& ws : workspaces) {
    -                ws.m_state.Invalid(TxValidationResult::TX_MEMPOOL_POLICY, "exceeds-ancestor-descendant-limits", err_string);
    +                ws.m_state.Invalid(TxValidationResult::TX_MEMPOOL_POLICY, "exceeds-ancestor-descendant-limits-heuristic", err_string);
                     results.emplace(ws.m_ptx->GetWitnessHash(), MempoolAcceptResult::Failure(ws.m_state));
                 }
                 package_state.Invalid(PackageValidationResult::PCKG_POLICY, "package-mempool-limits");
    

    glozow commented at 10:47 am on July 26, 2021:
    Good point, added the “possibly " prefix outside
  65. in src/txmempool.cpp:268 in 27986f86c6 outdated
    263+        // entry in the mempool and use the entry's cached parents.
    264+        txiter it = mapTx.iterator_to(entry);
    265+        staged_ancestors = it->GetMemPoolParentsConst();
    266+    }
    267+
    268+    return calculateAncestorsAndCheckLimits(entry.GetTxSize(), 1, setAncestors, staged_ancestors,
    


    jnewbery commented at 1:38 pm on July 22, 2021:

    Document basic type arguments:

        return calculateAncestorsAndCheckLimits(entry.GetTxSize(), /* entry_count */ 1, setAncestors, staged_ancestors,
    

    glozow commented at 10:46 am on July 26, 2021:
    Added
  66. in src/txmempool.cpp:222 in 27986f86c6 outdated
    225+                staged_ancestors.insert(**piter);
    226+                if (staged_ancestors.size() + package.size() > limitAncestorCount) {
    227+                    errString = strprintf("too many unconfirmed parents [limit: %u]", limitAncestorCount);
    228+                    return false;
    229+                }
    230+            }
    


    jnewbery commented at 1:41 pm on July 22, 2021:
    Would it make sense to have an early exit here if total_size exceeds limitAncestorSize (similar to if the ancestor count exceeds limitAncestorCount)?

    glozow commented at 2:25 pm on July 22, 2021:
    Not sure. In practice, it wouldn’t happen since we would have checked the package limits already
  67. in src/txmempool.cpp:210 in 27986f86c6 outdated
    206@@ -216,6 +207,69 @@ bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntr
    207     return true;
    208 }
    209 
    210+bool CTxMemPool::CalculateMemPoolAncestors(const Package& package,
    


    jnewbery commented at 1:46 pm on July 22, 2021:
    This function doesn’t need to have a setAncestors out param, since the only place that it’s called doesn’t use it. In fact, you could rename this function to something like CheckPackageLimits() to better document what it’s used for.

    glozow commented at 10:46 am on July 26, 2021:
    Good point - done :)
  68. jnewbery commented at 1:57 pm on July 22, 2021: member

    This looks great. The tests are excellent.

    I have a few suggestions inline.

  69. laanwj added this to the "Blockers" column in a project

  70. ritickgoenka commented at 6:18 pm on July 25, 2021: contributor

    tACK

    Ran the tests locally and also tried to tweak the tests to make sure everything was working fine.

    While checking the ancestor limit for a transaction, we check that the total number of ancestors including itself is within 25, but when checking the descendant limit for a transaction, we check that the total number of descendants excluding itself is within 25. Is this the expected behavior? Or do we want to include the transaction itself when checking the descendant limits?

    To test that the transaction itself is excluded when checking the descendant limit, I changed the ‘A’-shaped test a little: I reduced the number of in-mempool transactions in the left limb from 12 to 11 (total 26 transactions, 24 in mempool and 2 in package), kept everything else the same, and the test failed.

    Similarly, to test that the transaction itself is included when checking the ancestor limit, I changed the ‘V’-shaped test: I reduced the number of in-mempool transactions in the left limb from 12 to 11 (total 26 transactions, 23 in mempool and 3 in package), kept everything else the same, and the test passed.

    Tested 27986f8 on Ubuntu 18.04

  71. glozow force-pushed on Jul 26, 2021
  72. glozow commented at 10:45 am on July 26, 2021: member

    Thanks for the review @ritickgoenka!

    While checking the ancestor limit for a transaction, we check that the total number of ancestors including itself is within 25, but when checking the descendant limit for a transaction, we check that the total number of descendants excluding itself is within 25.

    I don’t think this is true. Descendant limits are also inclusive of the transaction.

    To test that the transaction itself is excluded when checking the descendant limit, I changed the ‘A’-shaped test a little: I reduced the number of in-mempool transactions in the left limb from 12 to 11 (total 26 transactions, 24 in mempool and 2 in package), kept everything else the same, and the test failed.

    I looked into this, and realized that I mislabeled the transactions in the diagram - my bad. The current test has 24 transactions in the mempool and 2 in the package, so if you reduced the mempool transactions, it’s 23+2 which is within limits. That’s why the test would fail. I’ve updated the comment and added assertions for the number of transactions in mempool and package.
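    The arithmetic behind the inclusive limit above, as a standalone sketch (the default descendant limit of 25 is assumed):

    ```cpp
    #include <cassert>
    #include <cstdint>

    int main()
    {
        // The descendant limit counts the transaction itself, so the
        // 'A'-shaped test only fails when mempool + package txns exceed 25.
        const uint64_t limitDescendantCount = 25;
        assert(24 + 2 > limitDescendantCount);    // 26 txns: over the limit, rejected
        assert(!(23 + 2 > limitDescendantCount)); // 25 txns: exactly at the limit, accepted
        return 0;
    }
    ```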

  73. glozow commented at 10:46 am on July 26, 2021: member
    Addressed @jnewbery’s comments and added another test
  74. in test/functional/rpc_packages.py:305 in 6d8f687bfc outdated
    300+            assert_equal(txres["package-error"], "package-mempool-limits")
    301+
    302+        # Clear mempool and check that the package passes now
    303+        node.generate(1)
    304+        assert all([res["allowed"] for res in node.testmempoolaccept(rawtxs=package_hex)])
    305+        node.generate(1)
    


    jnewbery commented at 11:41 am on July 26, 2021:
    No need for these new calls to generate(1).

    glozow commented at 5:22 pm on August 5, 2021:
    Good point :P removed. Added asserts for empty mempool in each of the subtests.
  75. in src/txmempool.cpp:234 in 6d8f687bfc outdated
    228+    setEntries setAncestors;
    229+    const auto ret = CalculateAncestorsAndCheckLimits(total_size, package.size(),
    230+                                                      setAncestors, staged_ancestors,
    231+                                                      limitAncestorCount, limitAncestorSize,
    232+                                                      limitDescendantCount, limitDescendantSize, errString);
    233+    if (!ret) errString.insert(0, "possibly ");
    


    jnewbery commented at 11:46 am on July 26, 2021:
    Might be worth commenting why you’re prefixing the error string with “possibly” here.

    fanquake commented at 7:03 am on August 3, 2021:
    Agree

    glozow commented at 5:22 pm on August 5, 2021:
    Done

    laanwj commented at 6:39 pm on August 5, 2021:
    It’s somewhat surprising to have an existing error message prefixed with ‘possibly’. I checked that it works for the current messages in CalculateAncestorsAndCheckLimits, but the validity of this text manipulation seems hard to guarantee over future changes to the code. But I don’t know a better solution to this, and the grammatical correctness of the error message is not super critical.
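    For reference, the manipulation under discussion is a plain std::string::insert at position 0; a standalone sketch (message text abbreviated, not the exact project string):

    ```cpp
    #include <cassert>
    #include <string>

    int main()
    {
        // The generic helper produces the base message; on failure, the
        // package caller prepends "possibly " (trailing space included).
        std::string errString = "exceeds descendant size limit for tx deadbeef [limit: 101000]";
        errString.insert(0, "possibly ");
        assert(errString.rfind("possibly exceeds", 0) == 0); // prefix applied once, with a space
        return 0;
    }
    ```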
  76. jnewbery commented at 11:48 am on July 26, 2021: member

    utACK 6d8f687bfc

    New test looks good

  77. ritickgoenka commented at 1:20 pm on July 27, 2021: contributor

    reACK 6d8f687

    Tested the new test which was added. Also, wrote a new test to check the overestimation of descendant limits and it was working fine.

    Tested on Ubuntu 18.04

  78. tryphe commented at 1:51 am on July 30, 2021: contributor

    Concept ACK

    Really nice work!

  79. fanquake commented at 7:42 am on August 3, 2021: member
    Concept ACK - the changes here look fairly straightforward. Need to properly review the test.
  80. in src/txmempool.cpp:218 in 6d8f687bfc outdated
    213+        total_size += GetVirtualTransactionSize(*tx);
    214+        for (const auto& input : tx->vin) {
    215+            std::optional<txiter> piter = GetIter(input.prevout.hash);
    216+            if (piter) {
    217+                staged_ancestors.insert(**piter);
    218+                if (staged_ancestors.size() + package.size() > limitAncestorCount) {
    


    ariard commented at 3:05 pm on August 4, 2021:

    In the future, if packages are allowed to replace in-mempool transactions, do we have concerns about the same transaction being counted twice and falsifying this check?

    Let’s say you have in-mempool { Tx_A } and in-package { Tx_A, Tx_B } where Tx_A is parent of Tx_B. Within this configuration, Tx_A is going to be counted twice, firstly as a staged_ancestors member and secondly as a package member.

    If this reasoning holds, one future-proof solution could be to perform the evaluation against limitAncestorCount after the for loop, once dedup between staged_ancestors and package has been done.


    glozow commented at 8:48 am on August 5, 2021:
    Yep, this is absolutely a consideration for replacements and duplicates in packages in the future. I agree, we’ll want to de-duplicate first (CMPA and CheckPackageLimits wouldn’t be called for transactions already in the mempool, since PreChecks looks for txn-already-in-mempool before this). Additionally, we’ll want to subtract the size/count of transactions being replaced from descendant limits before calling this function.

    glozow commented at 8:56 am on August 5, 2021:
    (I will note this down for the future - I think we’re on the same page about this, and it’s not yet applicable for this PR)

    ariard commented at 9:16 am on August 8, 2021:

    Additionally, we’ll want to subtract the size/count of transactions being replaced from descendant limits before calling this function.

    Exactly, this is another limit case to be aware of. And there are even trickier ones, like an ancestor of a second-stage package member being replaced by a first-stage package member, thus failing the whole package acceptance.

    Yes, not yet applicable for this PR, good to not forget about it!


    JeremyRubin commented at 5:33 pm on August 8, 2021:
    i think a simpler API would be to have staged always contain all the entries themselves? is there a reason not to (I think the epoch algorithm is relatively lightweight for this).
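    A standalone sketch of the de-duplication idea from this thread, with txids as plain strings standing in for mempool entries; `WorstCaseAncestorCount` is a hypothetical helper, not project code:

    ```cpp
    #include <cassert>
    #include <set>
    #include <string>
    #include <vector>

    // De-duplicating package members out of staged_ancestors avoids counting
    // a transaction twice once packages may overlap the mempool.
    size_t WorstCaseAncestorCount(const std::set<std::string>& staged_ancestors,
                                  const std::vector<std::string>& package)
    {
        std::set<std::string> combined{staged_ancestors};
        combined.insert(package.begin(), package.end()); // set insert dedups overlap
        return combined.size();
    }

    int main()
    {
        // Tx_A is both in the mempool (a staged ancestor) and in the package.
        std::set<std::string> staged{"Tx_A"};
        std::vector<std::string> package{"Tx_A", "Tx_B"};
        // A naive sum double-counts Tx_A; the deduplicated count does not.
        assert(staged.size() + package.size() == 3);
        assert(WorstCaseAncestorCount(staged, package) == 2);
        return 0;
    }
    ```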
  81. in src/validation.cpp:1097 in 6d8f687bfc outdated
    1092+        // inside of PreChecks(). Figuring out which transaction to attribute this failure to may be
    1093+        // implementation-dependent, and it's likely to be multiple transactions because we
    1094+        // evaluated all of them together. Return the same failure for all transactions.
    1095+        for (auto& ws : workspaces) {
    1096+            ws.m_state.Invalid(TxValidationResult::TX_MEMPOOL_POLICY, "exceeds-ancestor-descendant-limits", err_string);
    1097+            results.emplace(ws.m_ptx->GetWitnessHash(), MempoolAcceptResult::Failure(ws.m_state));
    


    ariard commented at 3:16 pm on August 4, 2021:

    Note, I think for the other package policy checks in CheckPackage we only return a PackageValidationResult without per-transaction results. If we want the package acceptance interface to be consistent, I believe we can leave results empty. As such, package policy failure overrides transaction policy one ?

    At least m_tx_results doc in src/validation.h L216 could be clearer on this point.


    glozow commented at 8:50 am on August 5, 2021:
    That’s a good point - perhaps it would be better to copy the error string from the TxValidationState into the PackageValidationState and always return results empty when we have a package-wide error.

    glozow commented at 5:23 pm on August 5, 2021:
    Thanks for the suggestion - I got rid of the tx results when there’s a package-wide error, and added a comment explaining what to expect for m_tx_results in validation.h
  82. in src/txmempool.cpp:176 in 6d8f687bfc outdated
    170@@ -187,10 +171,10 @@ bool CTxMemPool::CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntr
    171         staged_ancestors.erase(stage);
    172         totalSizeWithAncestors += stageit->GetTxSize();
    173 
    174-        if (stageit->GetSizeWithDescendants() + entry.GetTxSize() > limitDescendantSize) {
    175+        if (stageit->GetSizeWithDescendants() + entry_size > limitDescendantSize) {
    176             errString = strprintf("exceeds descendant size limit for tx %s [limit: %u]", stageit->GetTx().GetHash().ToString(), limitDescendantSize);
    177             return false;
    


    ariard commented at 3:38 pm on August 4, 2021:

    I think this check doesn’t have new coverage in `rpc_packages.py`? I got a success for the following diff:

    diff --git a/src/txmempool.cpp b/src/txmempool.cpp
    index 85911e15d..c7fa9fc62 100644
    --- a/src/txmempool.cpp
    +++ b/src/txmempool.cpp
    @@ -173,7 +173,7 @@ bool CTxMemPool::CalculateAncestorsAndCheckLimits(size_t entry_size,

             if (stageit->GetSizeWithDescendants() + entry_size > limitDescendantSize) {
                 errString = strprintf("exceeds descendant size limit for tx %s [limit: %u]", stageit->GetTx().GetHash().ToString(), limitDescendantSize);
    -            return false;
    +            return true;
             } else if (stageit->GetCountWithDescendants() + entry_count > limitDescendantCount) {
                 errString = strprintf("too many descendants for tx %s [limit: %u]", stageit->GetTx().GetHash().ToString(), limitDescendantCount);
                 return false;
    

    glozow commented at 8:51 am on August 5, 2021:

    This error should be caught in the mempool_tests unit test?

    It’s true, though, that I didn’t write tests for the size limits, just did count limits. I can add size limit tests.


    glozow commented at 5:24 pm on August 5, 2021:
    Added a functional test for size limits - test_desc_size_limits() will fail if you apply this diff

    ariard commented at 9:56 am on August 8, 2021:

    This error should be caught in the mempool_tests unit test?

    It should, but I don’t get an error for any mempool_* unit tests. I wouldn’t be surprised if there was a coverage hole prior to this PR.

    Added a functional test for size limits - test_desc_size_limits() will fail if you apply this diff

    Thanks for adding one; there is at least one behavior change introduced with this PR to cover: entry_size can be > 1.

  83. in src/txmempool.cpp:180 in 6d8f687bfc outdated
    177             return false;
    178-        } else if (stageit->GetCountWithDescendants() + 1 > limitDescendantCount) {
    179+        } else if (stageit->GetCountWithDescendants() + entry_count > limitDescendantCount) {
    180             errString = strprintf("too many descendants for tx %s [limit: %u]", stageit->GetTx().GetHash().ToString(), limitDescendantCount);
    181             return false;
    182         } else if (totalSizeWithAncestors > limitAncestorSize) {
    


    ariard commented at 4:29 pm on August 4, 2021:

    I think we have topologies of in-mempool transactions and packages bypassing this limit.

    Let’s say you have in-mempool, independent Tx_A and Tx_B, whose virtual sizes are 30 KvB each. Let’s say you have package transactions Tx_C and Tx_D, where Tx_C is a child of Tx_A and Tx_B, Tx_D is a child of Tx_C, and their virtual sizes are 30 KvB each.

    In CheckPackageLimits, Tx_C isn’t already in the mempool (mapTx is only updated in Finalize) and as such shouldn’t be part of staged_ancestors. This set should be composed of Tx_A and Tx_B only.

    In CalculateAncestorsAndCheckLimits, totalSizeWithAncestors is the union of Tx_A and Tx_B, and its sum of 60 KvB is below 101 KvB. After Tx_C and Tx_D are added to the mempool, the chain of transactions ABCD sums to 120 KvB.

    Further, I don’t think it’s rejected by the check against limitDescendantSize, as Tx_A and Tx_B are evaluated independently, and not as a single cluster as they should be (Tx_A + Tx_C + Tx_D or Tx_B + Tx_C + Tx_D)

    Note, you’d need to replace each Tx with a chain of transactions because of MAX_STANDARD_TX_WEIGHT, but I think this reasoning holds?


    glozow commented at 5:27 pm on August 5, 2021:
    Hm, I don’t think this case bypasses the algorithm. I’ve implemented this test in the latest push (see test_anc_size_limits()) and it passes for me. It’s possible I misunderstood your description, but it seems like we’re good here?

    ariard commented at 10:31 am on August 8, 2021:

    Thanks for writing the test, I noticed where my reasoning was flawed!

    totalSizeWithAncestors is the union of Tx_A and Tx_B, and its sum of 60 KvB is below 101 KvB.

    We now init totalSizeWithAncestors with the whole package size, instead of the entry’s size only (L164 in src/txmempool.cpp). So we have A+B+C+D, and the check against limitAncestorSize rejects the package. Maybe the variable could be renamed packageSizeWithAncestors, but that’s a nit.

  84. misc package validation doc improvements f95bbf58aa
  85. MOVEONLY: add helper function for calculating ancestors and checking limits 97dd1c729d
  86. [refactor] pass size/count instead of entry to CalculateAncestorsAndCheckLimits
    This does not change existing behavior.
    The ancestor/descendant limits are inclusive of the entries themselves,
    but CalculateAncestorsAndCheckLimits() does not need access to them.
    f551841d3e
  87. [mempool] check ancestor/descendant limits for packages
    When calculating ancestor/descendant counts for transactions in the
    package, as a heuristic, count every transaction in the package as an
    ancestor and descendant of every other transaction in the package.
    
    This may overestimate, but will not underestimate, the
    ancestor/descendant counts. This shortcut still produces an accurate
    count for packages of 1 parent + 1 child.
    c6e016aa13
  88. glozow force-pushed on Aug 5, 2021
  89. laanwj commented at 6:57 pm on August 5, 2021: member
    Code review ACK 5d513c0698dfe088e6dcd77ade59a0b34d92efbd
  90. fanquake commented at 0:02 am on August 6, 2021: member

    https://cirrus-ci.com/task/6628050143019008:

     0Run tx_pool_standard with args ['/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz', '-runs=1', '/tmp/cirrus-ci-build/ci/scratch/qa-assets/fuzz_seed_corpus/tx_pool_standard']INFO: Seed: 517370066
     1INFO: Loaded 1 modules   (547316 inline 8-bit counters): 547316 [0x556b20760ed8, 0x556b207e68cc), 
     2INFO: Loaded 1 PC tables (547316 PCs): 547316 [0x556b207e68d0,0x556b21040810), 
     3INFO:     3949 files found in /tmp/cirrus-ci-build/ci/scratch/qa-assets/fuzz_seed_corpus/tx_pool_standard
     4INFO: -max_len is not provided; libFuzzer will not generate inputs larger than 1048576 bytes
     5INFO: seed corpus: files: 3949 min: 1b max: 1048576b total: 95048838b rss: 213Mb
     6[#128](/bitcoin-bitcoin/128/)	pulse  cov: 17131 ft: 25503 corp: 122/895b exec/s: 64 rss: 225Mb
     7[#256](/bitcoin-bitcoin/256/)	pulse  cov: 17352 ft: 36520 corp: 244/2752b exec/s: 64 rss: 247Mb
     8[#512](/bitcoin-bitcoin/512/)	pulse  cov: 23370 ft: 57316 corp: 399/6397b exec/s: 56 rss: 292Mb
     9[#1024](/bitcoin-bitcoin/1024/)	pulse  cov: 25687 ft: 82402 corp: 673/19Kb exec/s: 44 rss: 400Mb
    10fuzz: test/fuzz/tx_pool.cpp:232: auto (anonymous namespace)::tx_pool_standard_fuzz_target(FuzzBufferType)::(anonymous class)::operator()() const: Assertion `"it != result_package.m_tx_results.end()" && check' failed.
    11==24597== ERROR: libFuzzer: deadly signal
    12    #0 0x556b1cad4ab1  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x278fab1)
    13    #1 0x556b1ca1fc08  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26dac08)
    14    #2 0x556b1ca04d53  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26bfd53)
    15    #3 0x7f50944353bf  (/lib/x86_64-linux-gnu/libpthread.so.0+0x153bf)
    16    #4 0x7f509407918a  (/lib/x86_64-linux-gnu/libc.so.6+0x4618a)
    17    #5 0x7f5094058858  (/lib/x86_64-linux-gnu/libc.so.6+0x25858)
    18    #6 0x7f5094058728  (/lib/x86_64-linux-gnu/libc.so.6+0x25728)
    19    #7 0x7f5094069f35  (/lib/x86_64-linux-gnu/libc.so.6+0x36f35)
    20    #8 0x556b1cf8b787  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x2c46787)
    21    #9 0x556b1cb00a77  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x27bba77)
    22    #10 0x556b1e4256f7  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x40e06f7)
    23    #11 0x556b1e4253a5  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x40e03a5)
    24    #12 0x556b1ca06411  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26c1411)
    25    #13 0x556b1ca05b55  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26c0b55)
    26    #14 0x556b1ca08477  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26c3477)
    27    #15 0x556b1ca087d9  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26c37d9)
    28    #16 0x556b1c9f74ae  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26b24ae)
    29    #17 0x556b1ca202f2  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x26db2f2)
    30    #18 0x7f509405a0b2  (/lib/x86_64-linux-gnu/libc.so.6+0x270b2)
    31    #19 0x556b1c9cc24d  (/tmp/cirrus-ci-build/ci/scratch/build/bitcoin-x86_64-pc-linux-gnu/src/test/fuzz/fuzz+0x268724d)
    32NOTE: libFuzzer has rudimentary signal handlers.
    33      Combine libFuzzer with AddressSanitizer or similar for better crash reports.
    34SUMMARY: libFuzzer: deadly signal
    35MS: 0 ; base unit: 0000000000000000000000000000000000000000
    360xb3,0xb3,0xb3,0xb3,0x83,0x83,0x2f,0x2f,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xcb,0xb,0x0,0x0,0x0,0x0,0x0,0x0,0x0,0x65,0x65,0x65,0x65,0x65,0x65,0x65,0x65,0x65,0x65,0x65,0xc9,0xc9,0xcb,0x1,0x0,0xcb,
    37\xb3\xb3\xb3\xb3\x83\x83//\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\xcb\x0b\x00\x00\x00\x00\x00\x00\x00eeeeeeeeeee\xc9\xc9\xcb\x01\x00\xcb
    38artifact_prefix='./'; Test unit written to ./crash-8bc5eec8ddffef064fc1834346b11548218b6153
     39Base64: s7Ozs4ODLy/Ly8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLy8vLCwAAAAAAAABlZWVlZWVlZWVlZcnJywEAyw==
    
  91. [policy] ancestor/descendant limits for packages 3cd663a5d3
  92. extract/rename helper functions from rpc_packages.py
    MOVEONLY; no change in behavior. Renamed because there is another helper
    function, chain_transaction, in test_framework/util.py
    f8253d69d6
  93. [test] helper function to increase transaction weight 313c09f7b7
  94. [test] parameterizable fee for make_chain and create_child_with_parents 2b6b26e57c
  95. [test] mempool package ancestor/descendant limits accf3d5868
  96. glozow force-pushed on Aug 6, 2021
  97. glozow commented at 9:23 am on August 6, 2021: member
    Thanks @fanquake. I’ve pushed a fix for what I expect is the culprit: single-transaction ancestor/descendant limits can sometimes be slightly looser due to the CPFP carve out; the decision not to return any tx results here would have caused that crash. I’ve gated CheckPackageLimits() to run only when there’s more than one transaction.
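    The gating described here can be illustrated with a small hypothetical sketch (Python with assumed names; the real logic lives in the C++ validation code):

```python
# Hypothetical sketch of the gating: skip the package-wide limit check for a
# single transaction, whose own per-transaction ancestor check may
# legitimately apply looser limits via the CPFP carve out.
def maybe_check_package_limits(txns, check_package_limits):
    if len(txns) > 1:
        # Only a real multi-transaction package needs the aggregate check.
        return check_package_limits(txns)
    # Single tx: rely on the per-transaction ancestor/descendant check.
    return True
```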
  98. glozow commented at 12:11 pm on August 6, 2021: member
    Addressed review comments and added tests for size limits (thanks @ariard for motivating). Ready for review again!
  99. in src/validation.cpp:1083 in accf3d5868
    1078@@ -1079,6 +1079,19 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
    1079         m_viewmempool.PackageAddTransaction(ws.m_ptx);
    1080     }
    1081 
    1082+    // Apply package mempool ancestor/descendant limits. Skip if there is only one transaction,
    1083+    // because it's unnecessary. Also, CPFP carve out can increase the limit for individual
    


    ariard commented at 10:45 am on August 8, 2021:

    Note, I still wonder if we need to make CPFP carve-out composable with package acceptance in the future. Otherwise a counterparty can build a branch of descendants from an output A on a multi-party unconfirmed ancestor to block package acceptance on an output B.

    I don’t think this is required for current LN safety, at least as long as implementation backends don’t try to chain their commitment+CPFP transactions. But it might be useful for other second-layer applications (e.g. Lightning Pool’s batch execution+commitment txn).


    JeremyRubin commented at 3:13 pm on August 8, 2021:
    can we use packages to just remove the carve out?

    ariard commented at 11:00 am on August 9, 2021:

    Ideally yes; the carve-out doesn’t scale for multi-party (n > 2) fee-bumping.

    Though extending the carve-out to safely chain CPFPs was argued for by Matt on the LDK side (see https://lightningdevkit.slack.com/archives/CTBLT3CAU/p1625786787422100). Personally, I think this approach is a bit doomed, as we can assume the first stage of the CPFP chain can always be replaced by a valid claim of a counterparty, thus downgrading the feerate of your other broadcast commitments; I prefer the concurrent domino fee-bumping approach even if it means more complexity swallowed by the LN backend.

    Further, I think there is another multi-party contract issue we have to solve. Namely, the first-stage state transaction can be symmetric but the second-stage states asymmetric-though-composable. E.g. a CTV tree with a root transaction whose branch A/branch B spend 2 isolated outputs. Alice broadcasts root+branch A and Bob broadcasts root+branch B, thus accidentally or maliciously evicting Alice’s branch. I think we would like the mempool logic to aggregate those in-flight packages by conserving the highest-feerate subgraphs, and that would require some form of carve-out to avoid package limits interfering?

  100. ariard commented at 10:53 am on August 8, 2021: member

    ACK accf3d5.

    The conservative limits check for package acceptance as implemented by this PR sounds correct to me for complying with the package limits as currently evaluated per-transaction, and I also verified test coverage.

  101. JeremyRubin commented at 5:33 pm on August 8, 2021: contributor
    utACK accf3d5
  102. fanquake merged this on Aug 9, 2021
  103. fanquake closed this on Aug 9, 2021

  104. darosior commented at 7:50 am on August 9, 2021: member
    CheckPackageLimits (and CheckPackage did too) computes the virtual size without taking the number of signature operations into account. It’s not an issue for now, as PreChecks (which does) is always called before any code path that results in CheckPackageLimits or CheckPackage, but we need to be careful not to introduce a DoS vector in the future.
  105. glozow commented at 8:32 am on August 9, 2021: member

    @darosior good point. I agree sigop limits should be properly enforced for packages. At the moment, since the sigops limit is checked in PreChecks as you said, the effective limit is MAX_STANDARD_TX_SIGOPS_COST * MAX_PACKAGE_COUNT, or 25x the single-transaction limit. Should that be changed?

    For CheckPackageLimits, I’m not sure that sigops are relevant for mempool DoS protections. We mostly care about the serialized transaction size because we don’t want really large transactions to exhaust space in our mempool and we care about feerates. But we don’t verify signatures after a transaction has been added to the mempool or removed from it, and don’t need to know how many sigops are in our ancestors/descendants, so sigop limits probably don’t make a difference?

  106. glozow deleted the branch on Aug 9, 2021
  107. darosior commented at 8:47 am on August 9, 2021: member

    For CheckPackageLimits, I’m not sure that sigops are relevant for mempool DoS protections

    The signature ops check is not about avoiding checking too many signatures in the mempool, but about keeping the mempool consistent with what could be included in the next blocks by rational miners. Miners have a size budget (and therefore we care about feerate) but also a signature ops budget, and we must take care of it too, as otherwise our mempool could “diverge” (the top of the mempool wouldn’t represent what would likely be included in the next block). Mixing the sigop cost into the feerate was a neat way to avoid the multidimensional optimization problem, since only ill-crafted transactions could get near that budget.

  108. in src/txmempool.cpp:233 in accf3d5868
    228+    setEntries setAncestors;
    229+    const auto ret = CalculateAncestorsAndCheckLimits(total_size, package.size(),
    230+                                                      setAncestors, staged_ancestors,
    231+                                                      limitAncestorCount, limitAncestorSize,
    232+                                                      limitDescendantCount, limitDescendantSize, errString);
    233+    // It's possible to overestimate the ancestor/descendant totals.
    


    jnewbery commented at 9:25 am on August 9, 2021:
    Now that this errString is only used for the PackageMempoolAcceptResult, I think you can just drop the “possibly” prefix. It may have been useful on individual MempoolAcceptResult to disambiguate between failure because a single transaction definitely exceeded the limits and failure because a transaction was in a package that possibly exceeded limits. Now that we’re not using it there, I think it should just be removed.

    jnewbery commented at 12:23 pm on August 9, 2021:
    In fact, I don’t think this string ever gets used (in logging or returned to users). Should testmempoolaccept be updated to return the reject reason and debug message for the transaction result and package result? (See ValidationState::ToString(), which indirectly gets called if sendrawtransaction fails.)
  109. in test/functional/rpc_packages.py:136 in accf3d5868
    145-            (tx, txhex, value, parent_locking_script) = self.chain_transaction(txid, value, 0, parent_locking_script)
    146-            txid = tx.rehash()
    147-            chain_hex.append(txhex)
    148-            chain_txns.append(tx)
    149-
    150+        (chain_hex, chain_txns) = create_raw_chain(node, first_coin, self.address, self.privkeys)
    


    jnewbery commented at 9:50 am on August 9, 2021:

    No need for these parens. You can just assign to the individual variables:

    0        chain_hex, chain_txns = create_raw_chain(node, first_coin, self.address, self.privkeys)
    

    Same for several other assignments below.

  110. in test/functional/test_framework/wallet.py:203 in accf3d5868
    198+        "amount": parent_value,
    199+    }] if parent_locking_script else None
    200+    signedtx = node.signrawtransactionwithkey(hexstring=rawtx, privkeys=privkeys, prevtxs=prevtxs)
    201+    assert signedtx["complete"]
    202+    tx = tx_from_hex(signedtx["hex"])
    203+    return (tx, signedtx["hex"], my_value, tx.vout[0].scriptPubKey.hex())
    


    jnewbery commented at 9:58 am on August 9, 2021:

    No need to construct a tuple here. In python you can just use an expression list in a return statement:

    0    return tx, signedtx["hex"], my_value, tx.vout[0].scriptPubKey.hex()
    

    However, both of these are dangerous interfaces with an untyped language like Python, since users of this function could easily assign the return expressions to the wrong variables. A safer interface would be to return a named tuple so that the caller needs to explicitly unpack the wanted values from the return object.


    ysangkok commented at 8:59 pm on August 18, 2021:

    Users of named tuples are not forced to access them by name, because they are also tuples as the name suggests:

    0>>> A = namedtuple('A', ['x','y'])
    1>>> A(1,2)
    2A(x=1, y=2)
    3>>> y, x = A(1,2)
    4>>> x, y
    5(2, 1)
    

    Dataclasses seem closer to what you want:

     0>>> from dataclasses import dataclass
     1>>> @dataclass
     2... class A:
     3...     x: int
     4...     y: int
     5... 
     6>>> A(1,2)
     7A(x=1, y=2)
     8>>> x,y = A(1,2)
     9Traceback (most recent call last):
    10  File "<stdin>", line 1, in <module>
    11TypeError: cannot unpack non-iterable A object
    
  111. glozow commented at 10:34 am on August 9, 2021: member

    The signature ops check is not about avoiding checking too many signatures in the mempool, but to keep the mempool consistent with what could be included in the next blocks by rational miners. Miners have a size budget (and therefore we care about feerate) but also a signature ops budget and we must take care about them too as otherwise our mempool could “diverge” (the top of the mempool wouldn’t represent what would likely be included in the next block). Mixing the sig op with the feerate was a neat way to avoid the multidimensional optimization problem, since only ill-crafted transactions could get near that budget.

    My takeaway from this is that thinking of feerate as fee / max(sigops size, serialized size), in the same way virtual size is max(sigops size, serialized size), is a heuristic to address the fact that blocks are constrained by both size and sigops, which would otherwise be a 2D knapsack problem. (I still don’t think it’s a DoS attack, but that might just be me being nitpicky.) So what I should do is go through the validation code and make sure that the correct virtual size (including sigops) is used for checks and recorded in the mempool entries.
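    The max() rule being discussed corresponds to Bitcoin Core’s virtual size computation; a Python rendering for illustration (assuming the default bytes-per-sigop value of 20):

```python
import math

WITNESS_SCALE_FACTOR = 4
DEFAULT_BYTES_PER_SIGOP = 20  # Bitcoin Core's -bytespersigop default

def virtual_size(weight, sigop_cost, bytes_per_sigop=DEFAULT_BYTES_PER_SIGOP):
    """Virtual size is the larger of the weight-derived size and a size
    implied by the transaction's signature-operation cost, rounded up.
    A sigop-dense transaction thus pays feerate on its sigop budget."""
    return math.ceil(max(weight, sigop_cost * bytes_per_sigop *
                         WITNESS_SCALE_FACTOR) / WITNESS_SCALE_FACTOR)
```

    E.g. a transaction of weight 800 normally has vsize 200, but with a sigop cost of 11 its vsize becomes 11 * 20 = 220, lowering its effective feerate to match the sigops budget it consumes in a block.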

  112. in test/functional/test_framework/wallet.py:219 in accf3d5868
    214+        prevtxs.append({"txid": parents_tx[i].rehash(), "vout": 0, "scriptPubKey": locking_scripts[i], "amount": values[i]})
    215+    signedtx_child = node.signrawtransactionwithkey(hexstring=rawtx_child, privkeys=privkeys, prevtxs=prevtxs)
    216+    assert signedtx_child["complete"]
    217+    return signedtx_child["hex"]
    218+
    219+def create_raw_chain(node, first_coin, address, privkeys, chain_length=25):
    


    jnewbery commented at 10:37 am on August 9, 2021:

    the default chain_length parameter seems too specific to the specific packages test. Perhaps remove it and call the function with that specific value:

    0def create_raw_chain(node, first_coin, address, privkeys, chain_length):
    
  113. in test/functional/test_framework/wallet.py:185 in accf3d5868
    180@@ -176,3 +181,75 @@ def create_self_transfer(self, *, fee_rate=Decimal("0.003"), from_node, utxo_to_
    181     def sendrawtransaction(self, *, from_node, tx_hex):
    182         from_node.sendrawtransaction(tx_hex)
    183         self.scan_tx(from_node.decoderawtransaction(tx_hex))
    184+
    185+def make_chain(node, address, privkeys, parent_txid, parent_value, n=0, parent_locking_script=None, fee=DEFAULT_FEE):
    


    jnewbery commented at 10:50 am on August 9, 2021:

    make_chain() doesn’t seem like the right name for this function. I’d expect such a function to return a chain of transactions, rather than an individual transaction.

    It looks like the n parameter isn’t ever used by any of the callers. Perhaps remove it and update the function documentation to say that the function spends the first output. The function can always be updated in future to allow n to be different.

  114. in test/functional/test_framework/wallet.py:246 in accf3d5868
    241+    tx_heavy = deepcopy(tx)
    242+    assert_greater_than_or_equal(target_weight, tx_heavy.get_weight())
    243+    while tx_heavy.get_weight() < target_weight:
    244+        random_spk = "6a4d0200"  # OP_RETURN OP_PUSH2 512 bytes
    245+        for _ in range(512*2):
    246+            random_spk += choice("0123456789ABCDEF")
    


    jnewbery commented at 11:03 am on August 9, 2021:
    Why do these outputs need to have different (and random) scriptPubKeys? It looks like none of the callers use any of these added outputs, so they could all just spend to the same (arbitrary) spk.
  115. in test/functional/test_framework/wallet.py:241 in accf3d5868
    236+
    237+def bulk_transaction(tx, node, target_weight, privkeys, prevtxs=None):
    238+    """Pad a transaction with extra outputs until it reaches a target weight (or higher).
    239+    returns CTransaction object
    240+    """
    241+    tx_heavy = deepcopy(tx)
    


    jnewbery commented at 11:05 am on August 9, 2021:
    Is there a reason that you’re copying this rather than just mutating tx and returning it?
  116. in test/functional/mempool_package_limits.py:447 in accf3d5868
    442+                mempool_tx = bulk_transaction(tx_small, node, target_weight, self.privkeys, prevtxs)
    443+            else: # OP_TRUE
    444+                inputs = [{"txid": txid, "vout": 1}]
    445+                outputs = {self.address: value - high_fee}
    446+                small_tx = tx_from_hex(node.createrawtransaction(inputs, outputs))
    447+                mempool_tx = bulk_transaction(small_tx, node, target_weight, None, prevtxs)
    


    jnewbery commented at 11:08 am on August 9, 2021:

    This doesn’t need to pass prevtxs (since privkeys is None):

    0                mempool_tx = bulk_transaction(small_tx, node, target_weight, None, None)
    

    You could also make privkeys an optional parameter in bulk_transaction() that defaults to None.

    This means that spk and prevtxs could go inside the j == 0 branch, since they’re not used in the j == 1 branch.

  117. in test/functional/test_framework/wallet.py:255 in accf3d5868
    250+        signed = node.signrawtransactionwithkey(tx_heavy.serialize().hex(), privkeys, prevtxs)
    251+        return tx_from_hex(signed["hex"])
    252+    # OP_TRUE
    253+    tx_heavy.wit.vtxinwit = [CTxInWitness()]
    254+    tx_heavy.wit.vtxinwit[0].scriptWitness.stack = [CScript([OP_TRUE])]
    255+    return tx_heavy
    


    jnewbery commented at 11:09 am on August 9, 2021:
    Perhaps make this an else branch to make it very clear that this is the alternative to signing with provided privkeys.
  118. in test/functional/test_framework/wallet.py:192 in accf3d5868
    187+    amount = parent_value with a fee deducted.
    188+    Return tuple (CTransaction object, raw hex, nValue, scriptPubKey of the output created).
    189+    """
    190+    inputs = [{"txid": parent_txid, "vout": n}]
    191+    my_value = parent_value - fee
    192+    outputs = {address : my_value}
    


    jnewbery commented at 11:13 am on August 9, 2021:

    No need for a space after the key:

    0    outputs = {address: my_value}
    

    Same in create_child_with_parents() below

  119. in test/functional/test_framework/wallet.py:214 in accf3d5868
    209+    inputs = [{"txid": tx.rehash(), "vout": 0} for tx in parents_tx]
    210+    outputs = {address : total_value - fee}
    211+    rawtx_child = node.createrawtransaction(inputs, outputs)
    212+    prevtxs = []
    213+    for i in range(num_parents):
    214+        prevtxs.append({"txid": parents_tx[i].rehash(), "vout": 0, "scriptPubKey": locking_scripts[i], "amount": values[i]})
    


    jnewbery commented at 11:15 am on August 9, 2021:
    This implicitly assumes that len(parents_tx) == len(locking_scripts) == len(values). Perhaps assert that explicitly at the top of the function, or change the function signature to take a parents parameter which is a list of (tx, value, locking_script) tuples.
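    The suggested bundling could look like this (hypothetical sketch; the Parent name and helper are illustrative, not existing test-framework code):

```python
from collections import namedtuple

# Bundle each parent's data into one record so the three per-parent lists
# (transactions, values, locking scripts) can never disagree in length.
Parent = namedtuple("Parent", ["txid", "value", "locking_script"])

def build_prevtxs(parents):
    """Build the prevtxs list expected by signrawtransactionwithkey,
    spending vout 0 of each parent."""
    return [{"txid": p.txid, "vout": 0, "scriptPubKey": p.locking_script,
             "amount": p.value} for p in parents]
```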
  120. in test/functional/mempool_package_limits.py:129 in accf3d5868
    124+        self.log.info("Check that in-mempool and in-package descendants are calculated properly in packages")
    125+        # Top parent in mempool, M1
    126+        first_coin = self.coins.pop()
    127+        parent_value = (first_coin["amount"] - Decimal("0.0002")) / 2 # Deduct reasonable fee and make 2 outputs
    128+        inputs = [{"txid": first_coin["txid"], "vout": 0}]
    129+        outputs = [{self.address : parent_value}, {ADDRESS_BCRT1_P2WSH_OP_TRUE : parent_value}]
    


    jnewbery commented at 11:32 am on August 9, 2021:
    I think it’s fine for both outputs to have the same scriptPubKey (and would make the test logic below simpler as well).
  121. in test/functional/mempool_package_limits.py:268 in accf3d5868
    263+            txid = top_coin["txid"]
    264+            value = top_coin["amount"]
    265+            for i in range(12):
    266+                (tx, txhex, value, spk) = make_chain(node, self.address, self.privkeys, txid, value, 0, spk)
    267+                txid = tx.rehash()
    268+                value -= Decimal("0.0001")
    


    jnewbery commented at 11:44 am on August 9, 2021:
    This isn’t needed. make_chain already removes the fee from the output values.
  122. in test/functional/mempool_package_limits.py:279 in accf3d5868
    274+                    scripts.append(spk)
    275+
    276+        # Child Pc
    277+        pc_hex = create_child_with_parents(node, self.address, self.privkeys, parents_tx, values, scripts)
    278+        pc_tx = tx_from_hex(pc_hex)
    279+        pc_value = sum(values) - Decimal("0.0002")
    


    jnewbery commented at 11:57 am on August 9, 2021:

    This doesn’t need to be 0.0002 and works equally well with 0.0001 now that create_child_with_parents() always sets the fee to 0.0001 regardless of number of inputs.

    Rather than using a magic number here, perhaps just grab the output amount from the transaction:

    0        pc_value = Decimal(pc_tx.vout[0].nValue) / COIN
    
  123. in test/functional/mempool_package_limits.py:312 in accf3d5868
    307+        package_hex = []
    308+        parent_txns = []
    309+        parent_values = []
    310+        scripts = []
    311+        for _ in range(5): # Make package transactions P0 ... P4
    312+            gp_tx = []
    


    jnewbery commented at 12:02 pm on August 9, 2021:
    I have a slight preference for the previous grandparent_ naming here (I don’t think gp will be obvious to readers). Also perhaps change this to txs so it’s clearer that it’s multiple transactions.
  124. in test/functional/mempool_package_limits.py:310 in accf3d5868
    305+        node = self.nodes[0]
    306+        assert_equal(0, node.getmempoolinfo()["size"])
    307+        package_hex = []
    308+        parent_txns = []
    309+        parent_values = []
    310+        scripts = []
    


    jnewbery commented at 12:03 pm on August 9, 2021:
    Why not parent_scripts, to go with the other parent_... names (or even parent_spks)?
  125. in test/functional/mempool_package_limits.py:58 in accf3d5868
    53+        self.test_anc_count_limits()
    54+        self.test_anc_count_limits_2()
    55+        self.test_anc_count_limits_bushy()
    56+
    57+        # The node will accept our (nonstandard) extra large OP_RETURN outputs
    58+        self.restart_node(0, extra_args=["-acceptnonstdtxn=1"])
    


    jnewbery commented at 12:15 pm on August 9, 2021:
    Is it possible to make this test work without using -acceptnonstdtxn, perhaps by having bulk_transaction just add standard outputs? Since this test is all about mempool policy, it seems better to have the policy of the node being tested be as close to realistic as possible.

    glozow commented at 10:30 am on August 11, 2021:
    I definitely agree, it’s a shame to turn off standardness in a policy-based test. Will try to do this in a followup.
  126. in test/functional/mempool_package_limits.py:397 in accf3d5868
    392+        pd_hex = pd_tx.serialize().hex()
    393+
    394+        assert_equal(2, node.getmempoolinfo()["size"])
    395+        testres_too_heavy = node.testmempoolaccept(rawtxs=[pc_hex, pd_hex])
    396+        for txres in testres_too_heavy:
    397+            assert_equal(txres["package-error"], "package-mempool-limits")
    


    jnewbery commented at 12:31 pm on August 9, 2021:
    If we improved the error messages being returned to the user, we could more precisely check that the package was rejected for exceeding ancestor size limits.
  127. in test/functional/mempool_package_limits.py:430 in accf3d5868
    425+        node.sendrawtransaction(parent_tx.serialize().hex())
    426+
    427+        package_hex = []
    428+        for j in range(2): # Two legs (left and right)
    429+            # Mempool transaction (Mb and Mc)
    430+            mempool_tx = CTransaction()
    


    jnewbery commented at 12:34 pm on August 9, 2021:
    No need to declare variables in Python.
  128. in test/functional/mempool_package_limits.py:446 in accf3d5868
    441+                (tx_small, _, _, _) = make_chain(node, self.address, self.privkeys, txid, value, j, spk, high_fee)
    442+                mempool_tx = bulk_transaction(tx_small, node, target_weight, self.privkeys, prevtxs)
    443+            else: # OP_TRUE
    444+                inputs = [{"txid": txid, "vout": 1}]
    445+                outputs = {self.address: value - high_fee}
    446+                small_tx = tx_from_hex(node.createrawtransaction(inputs, outputs))
    


    jnewbery commented at 12:43 pm on August 9, 2021:
    Maybe choose between tx_small and small_tx and stick to it :)
  129. jnewbery commented at 12:47 pm on August 9, 2021: member

    reACK accf3d5868

    I’ve left a bunch of small comments. None are critical, so feel free to ignore or take them in a follow-up PR.

  130. sidhujag referenced this in commit eacf3f89e0 on Aug 10, 2021
  131. fanquake commented at 2:47 am on August 11, 2021: member

    I’ve left a bunch of small comments. None are critical, so feel free to ignore or take them in a follow-up PR.

    I think there’s enough here to warrant a followup.

  132. laanwj removed this from the "Blockers" column in a project

  133. MarcoFalke referenced this in commit fac7181091 on Sep 9, 2021
  134. sidhujag referenced this in commit 59d084d3f5 on Sep 11, 2021
  135. DrahtBot locked this on Aug 19, 2022

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-11-21 09:12 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me