Add package acceptance logic to mempool #16401

pull sdaftuar wants to merge 2 commits into bitcoin:master from sdaftuar:2019-07-package-relay changing 6 files +296 −51
  1. sdaftuar commented at 8:16 pm on July 16, 2019: member

    Accepting a single transaction to the mempool only succeeds if (among other things) the feerate of the transaction is greater than both the min relay fee and the mempool min fee. Consequently, a transaction below the minimum fee may not be accepted to the mempool, even if we later learn of a transaction with a high fee that depends on it.

    This PR adds code to validation that will accept a package of transactions to the mempool under the following conditions:

    • All package transactions must be direct parents of the final transaction. This is a simple heuristic for ensuring that a candidate list of transactions is in fact a package (we wouldn’t want arbitrary transactions to be paying for random low feerate transactions)

    • The feerate of the package, as a whole, exceeds the mempool min fee and the min relay fee.

    • No transactions in the mempool conflict with any transactions in the package. This is a simplification that makes the logic easier to write. Without this requirement, we would need to do additional checks to ensure that no parent transaction would evict a transaction from the mempool that some other child depends on.

    • The ancestor/descendant size limits are calculated assuming that any mempool ancestor of any candidate transaction is an ancestor of all the candidate transactions. This allows for doing simpler calculations to ensure that we’re staying within the mempool’s package limits. If we eliminated this, we would need to do much more careful package calculations for each candidate transaction and each in-mempool ancestor.

    This PR also adds code to net_processing that will attempt to process transaction packages in one narrow case: if a transaction fails to get into the mempool due to insufficient fee, but has an orphan in the orphan pool, then we will try to process the pair together to see if they can be accepted as a package.

    This PR is definitely WIP, but I’m opening it as a proof-of-concept motivation for refactoring ATMP (#16400).
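    For illustration, here is a minimal sketch of the package-level feerate condition described above; the helper name and its callers are not part of the PR, and real code would also need the per-transaction fees computed during validation:

        // Sketch only: the package as a whole must meet both minimums, even if
        // some individual parents fall below them.
        static bool PackageMeetsMinFeeRates(const std::vector<CTransactionRef>& package,
                                            const std::vector<CAmount>& fees,
                                            const CFeeRate& min_relay_feerate,
                                            const CFeeRate& mempool_min_feerate)
        {
            CAmount total_fees{0};
            int64_t total_vsize{0};
            for (size_t i = 0; i < package.size(); ++i) {
                total_fees += fees[i];
                total_vsize += GetVirtualTransactionSize(*package[i]);
            }
            return total_fees >= min_relay_feerate.GetFee(total_vsize) &&
                   total_fees >= mempool_min_feerate.GetFee(total_vsize);
        }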

  2. DrahtBot commented at 10:33 pm on July 16, 2019: member

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #17985 (net: Remove forcerelay of rejected txs by MarcoFalke)
    • #17399 (validation: Templatize ValidationState instead of subclassing by jkczyz)
    • #15606 ([experimental] UTXO snapshots by jamesob)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  3. DrahtBot added the label Mempool on Jul 17, 2019
  4. DrahtBot added the label P2P on Jul 17, 2019
  5. DrahtBot added the label Tests on Jul 17, 2019
  6. DrahtBot added the label Validation on Jul 17, 2019
  7. sdaftuar force-pushed on Jul 24, 2019
  8. in src/validation.h:334 in 149930d746 outdated
    329@@ -326,7 +330,8 @@ bool TestLockPointValidity(const LockPoints* lp) EXCLUSIVE_LOCKS_REQUIRED(cs_mai
    330  *
    331  * See consensus/consensus.h for flag definitions.
    332  */
    333-bool CheckSequenceLocks(const CTxMemPool& pool, const CTransaction& tx, int flags, LockPoints* lp = nullptr, bool useExistingLockPoints = false) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
    334+bool CheckSequenceLocks(const CTxMemPool &pool, const CTransaction& tx, int flags, LockPoints* lp = nullptr, bool useExistingLockPoints = false) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
    335+bool CheckSequenceLocks(CCoinsViewCache& view, const CTransaction& tx, int flags, LockPoints* lp = nullptr, bool useExistingLockPoints = false) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
    


    ajtowns commented at 1:24 am on July 25, 2019:
    Moving the & in CTxMemPool &pool seems like a typo. Also you don’t actually need the view method in the header, could just make it a static.

    sdaftuar commented at 7:23 pm on July 29, 2019:
    Fixed.
  9. in src/net_processing.cpp:2567 in 149930d746 outdated
    2571+            // in the orphan pool -- then maybe that tx is only missing this
    2572+            // one parent.
    2573+            // Try to process the pair as a package.
    2574+            bool added_as_package = false;
    2575+            if (state.GetRejectCode() == REJECT_INSUFFICIENTFEE) {
    2576+                LOCK(g_cs_orphans);
    


    ajtowns commented at 1:27 am on July 25, 2019:
    g_cs_orphans is already locked earlier in the path (LOCK2(cs_main, g_cs_orphans))

    sdaftuar commented at 7:23 pm on July 29, 2019:
    Fixed.
  10. in src/validation.cpp:467 in 149930d746 outdated
    476+
    477+    // Single transaction acceptance
    478+    bool AcceptSingleTransaction(const CTransactionRef& ptx, ATMPArgs& args) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
    479+
    480+    // Multiple transaction acceptance
    481+    bool AcceptMultipleTransactions(std::list<CTransactionRef> tx_list, ATMPArgs& args) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
    


    ajtowns commented at 1:36 am on July 25, 2019:
    tx_list should be const & afaics.

    sdaftuar commented at 7:23 pm on July 29, 2019:
    Fixed.
  11. in src/validation.cpp:1011 in 149930d746 outdated
    1271+    AssertLockHeld(cs_main);
    1272+    LOCK(m_pool.cs);
    1273+
    1274+    // We will disable the individual transaction fee-rate checks for all
    1275+    // parents of the final transaction.
    1276+    ATMPArgs args_bypass_limits { args.m_chainparams, args.m_state, args.m_missing_inputs, args.m_accept_time, args.m_replaced_transactions, /*m_bypass_limits*/ true, args.m_absurd_fee, args.m_coins_to_uncache, args.m_test_accept };
    


    ajtowns commented at 1:54 am on July 25, 2019:
    Adding a Workspace::m_cpfpable bool set to true for all but the last tx, and setting const bool bypass_limits = args.bypass_limits || ws.cpfpable seems better to me, fwiw.

    sdaftuar commented at 7:24 pm on July 29, 2019:
    Looks good, done.
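    For reference, a rough sketch of the suggested approach; existing Workspace members are elided and the exact placement inside PreChecks is assumed:

        // Mark every parent in the package so that PreChecks can relax the
        // individual feerate checks for it, instead of constructing a second
        // ATMPArgs with m_bypass_limits forced to true.
        struct Workspace {
            explicit Workspace(const CTransactionRef& ptx) : m_ptx(ptx) {}
            const CTransactionRef m_ptx;
            // True for every package transaction except the final child, i.e.
            // the parents whose fees the child is paying for.
            bool m_cpfpable{false};
            // ... other members elided ...
        };

        // Inside PreChecks(args, ws), roughly:
        // const bool bypass_limits = args.m_bypass_limits || ws.m_cpfpable;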
  12. in src/validation.cpp:1015 in 149930d746 outdated
    1275+    // parents of the final transaction.
    1276+    ATMPArgs args_bypass_limits { args.m_chainparams, args.m_state, args.m_missing_inputs, args.m_accept_time, args.m_replaced_transactions, /*m_bypass_limits*/ true, args.m_absurd_fee, args.m_coins_to_uncache, args.m_test_accept };
    1277+    std::list<Workspace> tx_workspaces;
    1278+
    1279+    for (const CTransactionRef& ptx : tx_list) {
    1280+        tx_workspaces.emplace_back(Workspace(ptx));
    


    ajtowns commented at 1:55 am on July 25, 2019:
    adding Workspace& ws = tx_workspaces.back(); maybe makes this loop a bit tidier?

    sdaftuar commented at 7:24 pm on July 29, 2019:
    Agreed.
  13. in src/validation.cpp:1132 in 149930d746 outdated
    1366+        return args.m_state.Invalid(ValidationInvalidReason::TX_MEMPOOL_POLICY, false, REJECT_NONSTANDARD, "too-long-package-mempool-chain", strprintf("exceeds ancestor size limit for tx %s [limit: %u]", tx_list.back()->GetHash().ToString(), m_limit_ancestor_size));
    1367+    }
    1368 
    1369+    // Make sure all transactions are ancestors of the last one.
    1370+    // For now, just check that the last transaction has all prior transactions
    1371+    // as direct inputs. We can relax this in the future for bigger packages.
    


    ajtowns commented at 2:12 am on July 25, 2019:
    This check could be done first, before even setting up the workspaces, as far as I can see.
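    A sketch of that up-front check, written as a standalone helper so it could run before any workspace setup; the helper name is illustrative:

        // Returns true if every transaction before the last one is spent
        // directly by the last one. Assumes tx_list is non-empty.
        static bool AllAreDirectParentsOfLast(const std::list<CTransactionRef>& tx_list)
        {
            const CTransactionRef& child = tx_list.back();
            std::set<uint256> parent_txids;
            for (const CTxIn& txin : child->vin) parent_txids.insert(txin.prevout.hash);
            for (auto it = tx_list.begin(); std::next(it) != tx_list.end(); ++it) {
                if (!parent_txids.count((*it)->GetHash())) return false;
            }
            return true;
        }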
  14. in src/validation.cpp:1168 in 149930d746 outdated
    1404+        // success. We have to wait until all the inputs are in the mempool or
    1405+        // in the utxo set (for now) before we can invoke this. This should
    1406+        // not fail unless there's a logic bug in our script validation, but if
    1407+        // it somehow were to fail on some child tx, we would potentially be
    1408+        // allowing parents into the mempool with this logic.
    1409+        if (!ConsensusScriptChecks(args, *wit, txdata[i])) return false;
    


    ajtowns commented at 2:17 am on July 25, 2019:
    This seems like it would be a good case for the CHECK macro from #16136 (if it logged failures, anyway)…
  15. in src/validation.cpp:1161 in 149930d746 outdated
    1397+    if (args.m_test_accept) return true;
    1398+
    1399+    // Add everything to the mempool, and make sure the last transaction makes
    1400+    // it in.
    1401+    size_t i=0;
    1402+    for (auto wit = tx_workspaces.begin(); wit != tx_workspaces.end(); ++wit, ++i) {
    


    ajtowns commented at 2:18 am on July 25, 2019:

    Would be nice if C++11 had a “zip” iterator… FWIW, you could do this as:

    struct WorkspaceAndTxData { Workspace ws; PrecomputedTransactionData* ptxdata; };
    std::list<WorkspaceAndTxData> tx_workspaces;
    ...
    for (auto& wstxd : tx_workspaces) {
        txdata.emplace_back(*wstxd.ws.m_ptx);
        wstxd.ptxdata = &txdata.back();
    }
    

    and only have to loop over tx_workspaces, but it’s probably not worth the hassle.

  16. in src/validation.cpp:1123 in 149930d746 outdated
    1357     }
    1358 
    1359-    GetMainSignals().TransactionAddedToMempool(ptx);
    1360+    // In case we have no in-mempool ancestors, we must check the transaction
    1361+    // package itself.
    1362+    if (total_size > m_limit_descendant_size) {
    


    ajtowns commented at 2:30 am on July 25, 2019:
    Might as well do this test before looping over all_ancestors
  17. in src/validation.cpp:1004 in 149930d746 outdated
    1264+
    1265+    return true;
    1266+}
    1267+
    1268+
    1269+bool MemPoolAccept::AcceptMultipleTransactions(std::list<CTransactionRef> tx_list, ATMPArgs& args)
    


    ajtowns commented at 3:22 am on July 25, 2019:
    I think once this is more baked, it’d be good to pull sections of this out into their own private MemPoolAccept:: methods so the overall logic in this function is as clear as the logic in AcceptSingleTransaction
  18. ajtowns commented at 3:33 am on July 25, 2019: member
    I think this works as a proof of concept.
  19. sdaftuar force-pushed on Jul 31, 2019
  20. sdaftuar force-pushed on Sep 19, 2019
  21. in src/txmempool.h:756 in cd8f11aaaa outdated
    750@@ -751,8 +751,12 @@ class CTxMemPool
    751  */
    752 class CCoinsViewMemPool : public CCoinsViewBacked
    753 {
    754+public:
    755+    void AddPotentialTransaction(const CTransactionRef& ptx) { package_tx.emplace(ptx->GetHash(), ptx); }
    


    NicolasDorier commented at 8:21 am on September 30, 2019:
    As far as I can see, in the current code paths the max number of elements in package_tx is two. But maybe there should be a hard limit, even if a high one, at this level just in case?
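    One way to add such a cap, for illustration only; the constant and the silent-drop behavior are not part of the PR:

        // Refuse to buffer more than a fixed number of package transactions.
        static constexpr size_t MAX_PACKAGE_TX{25};

        void AddPotentialTransaction(const CTransactionRef& ptx)
        {
            if (package_tx.size() >= MAX_PACKAGE_TX) return; // drop rather than grow unbounded
            package_tx.emplace(ptx->GetHash(), ptx);
        }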
  22. in src/validation.cpp:1178 in 92cc72905a outdated
    1186+        if (wit != tx_workspaces.begin()) {
    1187+            wit->m_ancestors.clear();
    1188+            std::string dummy_string;
    1189+            // Don't worry about the return value here; it must pass based on
    1190+            // the checks above. TODO: assert on passing?
    1191+            m_pool.CalculateMemPoolAncestors(*(wit->m_entry), wit->m_ancestors, m_limit_ancestors, m_limit_ancestor_size, m_limit_descendants, m_limit_descendant_size, dummy_string);
    


    etscrivner commented at 2:57 pm on October 1, 2019:
    This is probably also a good spot for a DCHECK just to ensure that the assumption - “this must pass based on the checks above” - holds here.
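    A sketch of making that assumption explicit instead of discarding the return value; whether to use assert or a logging DCHECK (per #16136) is a judgment call:

        std::string dummy_string;
        const bool ancestors_ok = m_pool.CalculateMemPoolAncestors(
            *(wit->m_entry), wit->m_ancestors, m_limit_ancestors, m_limit_ancestor_size,
            m_limit_descendants, m_limit_descendant_size, dummy_string);
        assert(ancestors_ok); // must pass given the package checks performed earlier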
  23. in test/functional/feature_package_relay.py:72 in cd8f11aaaa outdated
    67+        self.nodes[0].add_p2p_connection(BaseNode())
    68+
    69+        self.nodes[1].generate(101)
    70+        self.sync_all(self.nodes[0:2])
    71+
    72+        # On node1, generate a 0-fee transaction, and then a 2-satoshi-per-byte
    


    pinheadmz commented at 1:37 pm on October 2, 2019:
    I might be misunderstanding, but it looks like the first tx fee rate is set to 2 sat/kB (actual total fee on this tx comes out to a single sat) and the second tx fee rate is set to 50000 (actual total fee ~0.000083)?
  24. etscrivner commented at 2:34 pm on October 2, 2019: contributor
    Agree with others that I think this works as a proof-of-concept. In my opinion, this still needs more work on tests, more extensive testing, and generalization beyond 2 transaction packages (up to some maximum package size).
  25. in src/net_processing.cpp:2587 in cd8f11aaaa outdated
    2591+            // If this tx didn't make it in due to feerate, and there is a tx
    2592+            // in the orphan pool -- then maybe that tx is only missing this
    2593+            // one parent.
    2594+            // Try to process the pair as a package.
    2595+            bool added_as_package = false;
    2596+            if (state.GetRejectCode() == REJECT_INSUFFICIENTFEE) {
    


    fjahr commented at 4:22 pm on October 2, 2019:
    Maybe also add a check for g_orphan_list.size() > 0 here? But I am not experienced with the orphan pool so it may be fair to leave it out if the orphan pool is populated 99% of the time.
  26. fjahr commented at 4:33 pm on October 2, 2019: member
    This works for me as a proof of concept. Obviously the tests still need some work but thanks for the extensive commenting in the code.
  27. jonatack commented at 4:42 pm on October 2, 2019: member

    Concept/approach ACK cd8f11aaaa7dd27f95af3d4cb85ff7c7607e76f9. Tests pass here, as well as rebased on master; a node has been running for a few hours with net logging. The code documentation is helpful. I am still reviewing the code, logic, and test, particularly the checks to satisfy and the complexity introduced in AcceptMultipleTransactions().

    Suggestion to reviewers: It may be helpful to review merged PR #16400 and issue #14895 (https://github.com/bitcoin/bitcoin/issues/14895) before this one.

    Given that this PR is labelled Draft and the ATMP refactoring in #16400 has been merged, what are your plans going forward beyond concept/approach ACKs? For example, continue refactoring the p2p protocol for package relay computational efficiency/simplicity/multiparents, or moving forward with a standalone PoC implementation like this one?

  28. in test/functional/feature_package_relay.py:102 in cd8f11aaaa outdated
     97+
     98+        self.nodes[0].generate(1) # Clear out the reject filter
     99+
    100+        # Delivering tx2 should result it being in the orphan pool, and then
    101+        # delivering tx should cause both to be accepted.
    102+        p2p = self.nodes[0].add_p2p_connection(BaseNode())
    


    pinheadmz commented at 5:26 pm on October 2, 2019:
    Couldn’t this p2p variable be defined up on line 67?
  29. etscrivner commented at 5:55 pm on October 2, 2019: contributor
    Added some additional tests in this gist and also addressed the TODO item: https://gist.github.com/etscrivner/19d5f942a973940aaaeb397bc5e0e0d9
  30. in src/validation.cpp:307 in 92cc72905a outdated
    307@@ -311,6 +308,14 @@ bool CheckSequenceLocks(const CTxMemPool& pool, const CTransaction& tx, int flag
    308     return EvaluateSequenceLocks(index, lockPair);
    309 }
    310 
    311+bool CheckSequenceLocks(const CTxMemPool& pool, const CTransaction& tx, int flags, LockPoints* lp, bool useExistingLockPoints)
    


    ariard commented at 10:53 pm on October 31, 2019:

    I think the new wrapper doesn’t do what the removed comment intended? Can’t we pass the coins_cache defined above instead of defining a new one? That would work in removeForReorg too. Also, isn’t this a behavior change, since the coin is fetched from the UTXO set instead of from the mempool?

    This change may be worth its own commit.

  31. in src/validation.cpp:238 in 92cc72905a outdated
    238@@ -239,10 +239,9 @@ bool TestLockPointValidity(const LockPoints* lp)
    239     return true;
    240 }
    241 
    242-bool CheckSequenceLocks(const CTxMemPool& pool, const CTransaction& tx, int flags, LockPoints* lp, bool useExistingLockPoints)
    243+static bool CheckSequenceLocks(CCoinsViewCache &view, const CTransaction& tx, int flags, LockPoints* lp, bool useExistingLockPoints=false) EXCLUSIVE_LOCKS_REQUIRED(cs_main)
    


    ariard commented at 10:58 pm on October 31, 2019:
    EXCLUSIVE_LOCKS_REQUIRED(pool.cs, cs_main)? (wrapper and AcceptMultipleTransactions too)
  32. in src/validation.cpp:711 in 92cc72905a outdated
    717@@ -709,8 +718,8 @@ bool MemPoolAccept::PreChecks(ATMPArgs& args, Workspace& ws)
    718         return state.Invalid(ValidationInvalidReason::TX_NOT_STANDARD, false, REJECT_NONSTANDARD, "bad-txns-too-many-sigops",
    719                 strprintf("%d", nSigOpsCost));
    720 
    721-    // No transactions are allowed below minRelayTxFee except from disconnected
    722-    // blocks
    723+    // No transactions are allowed below minRelayTxFee/mempool min fee except
    724+    // from disconnected blocks
    


    ariard commented at 11:03 pm on October 31, 2019:
    “or being CPFP’ed by another tx”
  33. in src/txmempool.h:760 in 92cc72905a outdated
    750@@ -751,8 +751,12 @@ class CTxMemPool
    751  */
    752 class CCoinsViewMemPool : public CCoinsViewBacked
    753 {
    754+public:
    755+    void AddPotentialTransaction(const CTransactionRef& ptx) { package_tx.emplace(ptx->GetHash(), ptx); }
    756+
    757 protected:
    758     const CTxMemPool& mempool;
    759+    std::map<uint256, const CTransactionRef> package_tx;
    


    ariard commented at 11:23 pm on October 31, 2019:
    This could have a better name, like package_pool, just to underscore that we hold multiple pending txns. I didn’t get the rationale at first read; I think you could explain that, since the mempool is locked and we can’t write to it until we have the whole set of txs, we need this temporary buffer so that package txns after the first row of parents can find their outpoints. If this relies on topological ordering to work, that should be documented?
  34. in src/validation.cpp:1063 in 92cc72905a outdated
    1071+        Workspace &ws = tx_workspaces.back();
    1072+
    1073+        if (!PreChecks(args, ws)) return false;
    1074+
    1075+        // For now, do not allow replacements in package transactions. If we
    1076+        // relax this, we would need to check that no child transaction depends
    


    ariard commented at 11:48 pm on October 31, 2019:
    +1 for isolating conflicts for now. But IMO we may have to solve child conflicts one way or another if people start using one CPFP to bump multiple multi-party txns: e.g. Alice broadcasts commitment tx 1 plus a CPFP child tx and they get into the mempool; now she has to broadcast commitment tx 2, but she doesn’t have another UTXO available and can’t CPFP its output, as that would go beyond the carve-out relaxation.
  35. in src/validation.cpp:1155 in 92cc72905a outdated
    1163+    }
    1164+
    1165+    // This package should be accepted except possibly for failing in
    1166+    // TrimToSize(), which we can't exercise without actually adding to the
    1167+    // mempool and seeing what would happen. Note that we are not adding
    1168+    // these transactions to the script cache, unlike in the single-tx case.
    


    ariard commented at 0:35 am on November 1, 2019:
    Rationale?
  36. ariard commented at 0:53 am on November 1, 2019: member

    Reviewed only mempool/validation change for now.

    No transactions in the mempool conflict with any transactions in the package. This is a simplification that makes the logic easier to write. Without this requirement, we would need to do additional checks to ensure that no parent transaction would evict a transaction from the mempool that some other child depends on.

    I assume we can still RBF both parents or CPFP the child in case our package txns are still stuck in the mempool? It shouldn’t be an issue, since after acceptance they are normal mempool txns, assuming we don’t have garbage package values for ancestors/descendants.

    The ancestor/descendant size limits are calculated assuming that any mempool ancestor of any candidate transaction is an ancestor of all the candidate transactions.

    It should be fine for LN, as unconfirmed ancestors should be limited to 2 (commitment tx and CPFP), unless people batch their commitment txns with one CPFP.

  37. in src/net_processing.cpp:746 in 1dd7de23f2 outdated
    747@@ -748,6 +748,20 @@ void RequestTx(CNodeState* state, const uint256& txid, std::chrono::microseconds
    748     peer_download_state.m_tx_process_time.emplace(process_time, txid);
    749 }
    750 
    751+// Add to a peer's orphan_work_set after processing a given transaction.
    752+void UpdateOrphanWorkSet(const CTransaction& tx, CNode *peer) EXCLUSIVE_LOCKS_REQUIRED(g_cs_orphans)
    


    ariard commented at 4:50 pm on November 1, 2019:
    Can be reused too in ProcessOrphanTx
  38. in src/net_processing.cpp:2582 in 1dd7de23f2 outdated
    2600+                    if (it != mapOrphanTransactionsByPrev.end()) {
    2601+                        for (auto orphan_iter : it->second) orphans_missing_this_tx.push_back(orphan_iter);
    2602+                    }
    2603+                }
    2604+                if (!orphans_missing_this_tx.empty()) {
    2605+                    const COrphanTx &orphan_tx = orphans_missing_this_tx.front()->second;
    


    ariard commented at 5:33 pm on November 1, 2019:
    If I broadcast a second CPFP because the first one was still too low, but the first one is still in the orphan pool, don’t you need to iterate and try the whole set rather than only picking up the first transaction? That would be a DoS vector bounded by the size of the orphan pool, and maybe it could be mitigated by caching the packages already tried.
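    A sketch of that mitigation: try every orphan spending this tx rather than only the first, and cache the (parent, child) pairs already attempted so repeated announcements don’t trigger repeated package validation. The cache and its eviction policy are hypothetical:

        static std::set<std::pair<uint256, uint256>> g_tried_packages GUARDED_BY(g_cs_orphans);

        for (const auto& orphan_iter : orphans_missing_this_tx) {
            const COrphanTx& orphan_tx = orphan_iter->second;
            const auto key = std::make_pair(ptx->GetHash(), orphan_tx.tx->GetHash());
            if (!g_tried_packages.insert(key).second) continue; // already attempted this pair
            // ... attempt package acceptance with {ptx, orphan_tx.tx} as in the current code ...
        }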
  39. in src/net_processing.cpp:2608 in 1dd7de23f2 outdated
    2626+                                    mempool.size(), mempool.DynamicMemoryUsage() / 1000);
    2627+                        }
    2628+
    2629+                        // Recursively process any orphan transactions that depended on these
    2630+                        ProcessOrphanTx(connman, pfrom->orphan_work_set, lRemovedTxn);
    2631+                    }
    


    ariard commented at 6:09 pm on November 1, 2019:
    If the package wasn’t accepted (e.g. a high-fee but invalid orphan tx) and no inputs are missing, then if orphan_tx is invalid shouldn’t we punish the peer, or at least erase the orphan for being non-standard or having an insufficient fee?
  40. ariard commented at 6:23 pm on November 1, 2019: member

    It should be made really clear to off-chain protocol devs that they should always rebroadcast the whole package instead of only the parent or the child. I’m worried about scenarios where a commitment tx is broadcast and gets into neither the mempool nor the orphan pool; later the high-fee child is broadcast by the application logic and lands in the orphan pool, an attacker overflows the orphan pool to evict the child, and the application logic rebroadcasts the parent tx but not the CPFP, assuming it’s already there…

    That said, I think the P2P side works in principle but could be made more robust, and tests should include multiple bumped children and orphan pool overflow.

  41. sdaftuar marked this as ready for review on Jan 8, 2020
  42. Add package-acceptance logic to mempool
    Accepting a single transaction to the mempool only succeeds if (among other
    things) the feerate of the transaction is greater than both the min relay fee
    and the mempool min fee. Consequently, a transaction below the minimum fee
    may not be accepted to the mempool, even if we later learn of a transaction
    with a high relay fee that depends on it.
    
    This commit adds a function that will accept a package of transactions to the
    mempool, with the following restrictions:
    
    - All package transactions must be direct parents of the final transaction.
      This is a simple heuristic for ensuring that a candidate list of transactions
      is in fact a package (we wouldn't want arbitrary transactions to be paying
      for random low feerate transactions)
    
    - The feerate of the package, as a whole, exceeds the mempool min fee and the
      min relay fee.
    
    - No transactions in the mempool conflict with any transactions in the package.
      This is a simplification that makes the logic easier to write. Without this
      requirement, we would need to do additional checks to ensure that no parent
      transaction would evict a transaction from the mempool that some other child
      depends on.
    
    - The ancestor/descendant size limits are calculated assuming that any mempool
      ancestor of any candidate transaction is an ancestor of all the candidate
      transactions.
      This allows for doing simpler calculations to ensure that we're staying
      within the mempool's package limits. If we eliminated this, we would need to
      do much more careful package calculations for each candidate transaction and each
      in-mempool ancestor.
    
    This commit does not include any accessor function for utilizing this logic (eg
    by exposing this function at the p2p or rpc layer).
    c067d85500
  43. DrahtBot added the label Needs rebase on Jan 8, 2020
  44. ariard commented at 1:35 am on January 9, 2020: member
    In fact, the worry described here holds: I modified the new test a bit to overflow the orphan pool with dumb-but-valid orphans, and it easily evicts the high-fee child tx, even with a low number of malicious orphans (200): https://github.com/ariard/bitcoin/commit/e20ad2a44a830df7f04cf9bccf9e6791b73d2527
  45. Implement package relay for 2-tx packages from the orphan pool
    Exposes a simple use case for package relay -- if we receive a child
    transaction that is missing a parent, then we request the parent from our
    peers.
    
    If a peer responds with a transaction that is rejected from the mempool due to
    feerate, we have an opportunity to accept that parent along with the child, if
    the child's feerate is sufficiently high and the child is missing no other
    parents.
    6a3bdba074
  46. sdaftuar force-pushed on Jan 9, 2020
  47. sdaftuar commented at 4:01 am on January 9, 2020: member

    It should be made really clear to off-chain protocol devs that they should always rebroadcast the whole package instead of only the parent or the child. I’m worried about scenarios where a commitment tx is broadcast and gets into neither the mempool nor the orphan pool; later the high-fee child is broadcast by the application logic and lands in the orphan pool, an attacker overflows the orphan pool to evict the child, and the application logic rebroadcasts the parent tx but not the CPFP, assuming it’s already there…

    @ariard It wasn’t my goal in this PR to deploy a new p2p package relay scheme that is resilient to DoS attacks; instead I wanted to take a smaller use case (resolving orphan transactions missing a single low-fee parent) to motivate adding the package acceptance logic.

    I think once we have the mempool package acceptance logic in, we could then improve the p2p layer further to allow fancier relay schemes. (See also my comment in #14895 (comment).)

    I’ll update the title of the PR to make this more clear.

  48. sdaftuar renamed this:
    Package relay
    Add package acceptance logic to mempool
    on Jan 9, 2020
  49. DrahtBot removed the label Needs rebase on Jan 9, 2020
  50. ariard commented at 5:10 pm on January 9, 2020: member

    Thanks for clarifying the scope of this PR. I too think it’s good to split this project into multiple parts to focus on the new mempool acceptance logic, especially on the DoS bound for the worst-case width/depth of a chain of messy transactions that we would select as a package submission.

    That said, we may reduce the risk of mempool CPU DoS by implementing the right measures at the p2p level, like rate-limiting per-peer package reception. Also, if we have different p2p package relay mechanisms (receiver-initiated, sender p2p messages, …), we need to think about how we coordinate them across releases, to avoid applications relying on the wrong one for their use case and falsely assuming it is secure.

  51. JeremyRubin added this to the "Package Relay" column in a project

  52. sdaftuar commented at 2:55 pm on January 29, 2020: member
    I’m starting to think that perhaps the use case here isn’t really worth it – I tried to split off what I thought would be a simple use case (of processing some orphan transactions as packages with their rejected parents), but it turns out to be a bit trickier than I expected, and given that we’d just throw this away if we implemented an actual package relay protocol anyway this seems like wasted effort.
  53. sdaftuar closed this on Jan 29, 2020

  54. DrahtBot locked this on Feb 15, 2022
