v3 transaction policy for anti-pinning #28948

glozow wants to merge 8 commits into bitcoin:master from glozow:v3-policy, changing 23 files (+1136 −42)
  1. glozow commented at 3:17 pm on November 27, 2023: member

    See #27463 for overall package relay tracking.

    Delving Bitcoin discussion thread: https://delvingbitcoin.org/t/v3-transaction-policy-for-anti-pinning/340 Delving Bitcoin discussion for LN usage: https://delvingbitcoin.org/t/lightning-transactions-with-v3-and-ephemeral-anchors/418

    Rationale:

    • There are various pinning problems with RBF and our general ancestor/descendant limits. These policies help mitigate many pinning attacks and make package RBF feasible (see #28984 which implements package RBF on top of this). I would focus the most here on Rule 3 pinning. [1][2]
    • Switching to a cluster-based mempool (see #27677 and #28676) requires the removal of CPFP carve out, which applications depend on. V3 + package RBF + ephemeral anchors + 1-parent-1-child package relay provides an intermediate solution.

    V3 policy is for “Priority Transactions.” [3][4] It allows users to opt in to more restrictive topological limits for shared transactions, in exchange for the more robust fee-bumping abilities that this offers. Even though we don’t have cluster limits, we are able to treat these transactions as having a maximum cluster size of 2.
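To make the topology restriction concrete, the rules can be modeled roughly like this (a simplified Python sketch, not Bitcoin Core's actual ApplyV3Rules implementation; the constant names and the `(version, num_existing_children)` parent encoding are illustrative):

```python
V3_ANCESTOR_LIMIT = 2      # a v3 tx plus at most one unconfirmed ancestor
V3_DESCENDANT_LIMIT = 2    # a v3 tx plus at most one unconfirmed descendant
V3_CHILD_MAX_VSIZE = 1000  # vB; a v3 child must stay small

def check_v3_rules(version, vsize, unconfirmed_parents):
    """unconfirmed_parents: list of (version, num_existing_children) pairs.
    Returns an error string, or None if the tx passes the sketched rules."""
    if version != 3:
        # A non-v3 tx may not spend an unconfirmed v3 tx.
        if any(v == 3 for v, _ in unconfirmed_parents):
            return "non-v3 tx spends unconfirmed v3 tx"
        return None
    # Every unconfirmed ancestor of a v3 tx must itself be v3.
    if any(v != 3 for v, _ in unconfirmed_parents):
        return "v3 tx spends unconfirmed non-v3 tx"
    if len(unconfirmed_parents) + 1 > V3_ANCESTOR_LIMIT:
        return "too many unconfirmed ancestors"
    if unconfirmed_parents:
        if vsize > V3_CHILD_MAX_VSIZE:
            return "v3 child too large"
        # The parent may have at most V3_DESCENDANT_LIMIT - 1 children total.
        if any(n + 1 > V3_DESCENDANT_LIMIT - 1 for _, n in unconfirmed_parents):
            return "parent already has a v3 child"
    return None
```

Under this model, a lone v3 transaction has no extra constraints, while anything spending it is forced into a 1-parent-1-child, small-child shape.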

    Immediate benefits:

    • You can presign a transaction with 0 fees (not just 1sat/vB!) and add a fee-bump later.
    • Rule 3 pinning is reduced by a significant amount, since the attacker can only attach a maximum of 1000vB to your shared transaction.

    This also enables some other cool things (again see #27463 for overall roadmap):

    • Ephemeral Anchors
    • Package RBF for these 1-parent-1-child packages. That means e.g. a commitment tx + child can replace another commitment tx using the child’s fees.
    • We can transition to a “single anchor” universe without worrying about package limit pinning. So current users of CPFP carve out would have something else to use.
    • We can switch to a cluster-based mempool [5] (#27677 #28676), which removes CPFP carve out [6].

    [1]: Original mailing list post and discussion about RBF pinning problems: https://gist.github.com/glozow/25d9662c52453bd08b4b4b1d3783b9ff, https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-January/019817.html
    [2]: A FAQ is “we need this for cluster mempool, but is this still necessary afterwards?” There are some pinning issues that are fixed here and not fully fixed in cluster mempool, so we will still want this or something similar afterward.
    [3]: Mailing list post for v3: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-September/020937.html
    [4]: Original PR #25038 also contains a lot of the discussion.
    [5]: https://delvingbitcoin.org/t/an-overview-of-the-cluster-mempool-proposal/393/7
    [6]: https://delvingbitcoin.org/t/an-overview-of-the-cluster-mempool-proposal/393#the-cpfp-carveout-rule-can-no-longer-be-supported-12

  2. glozow added the label TX fees and policy on Nov 27, 2023
  3. glozow added the label Mempool on Nov 27, 2023
  4. DrahtBot commented at 3:17 pm on November 27, 2023: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage

    For detailed information about the code coverage, see the test coverage report.

    Reviews

    See the guideline for information on the review process.

    Type Reviewers
    ACK sdaftuar, instagibbs, achow101
    Concept NACK petertodd
    Concept ACK ismaelsadeeq

    If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #29325 (consensus: Store transaction nVersion as uint32_t by achow101)
    • #29306 (policy: enable sibling eviction for v3 transactions by glozow)
    • #29242 (Mempool util: Add RBF diagram checks for single chunks against clusters of size 2 by instagibbs)
    • #28984 (Cluster size 2 package rbf by instagibbs)
    • #28960 (kernel: Remove dependency on CScheduler by TheCharlatan)
    • #28886 (refactor: Replace sets of txiter with CTxMemPoolEntryRefs by TheCharlatan)
    • #28710 (Remove the legacy wallet and BDB dependency by achow101)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  5. in src/policy/v3_policy.h:39 in 18406ecbd5 outdated
    34+ * @returns a tuple (parent wtxid, child wtxid, bool) where one is V3 but the other is not, if at
    35+ * least one such pair exists. The bool represents whether the child is v3 or not. There may be
    36+ * other such pairs that are not returned.
    37+ * Otherwise std::nullopt.
    38+ */
    39+std::optional<std::tuple<uint256, uint256, bool>> CheckV3Inheritance(const Package& package);
    


    dergoegge commented at 3:41 pm on November 27, 2023:
    Use Wtxid?

    glozow commented at 4:38 pm on November 27, 2023:
    Updated this file to use Txid/Wtxid instead of uint256 everywhere.
  6. in src/policy/v3_policy.h:57 in 18406ecbd5 outdated
    52+ *
    53+ * @returns an error string if any V3 rule was violated, otherwise std::nullopt.
    54+ */
    55+std::optional<std::string> ApplyV3Rules(const CTransactionRef& ptx,
    56+                                        const CTxMemPool::setEntries& ancestors,
    57+                                        const std::set<uint256>& direct_conflicts);
    


    dergoegge commented at 3:42 pm on November 27, 2023:
    Are these txids/wtxids?

    glozow commented at 4:38 pm on November 27, 2023:
    Changed to specify Txid. I assume we want a “update as we touch these lines” kind of thing so I added a preceding commit to make m_conflicts a std::set<Txid> instead of uint256, and updated other places as necessary to compile.
  7. glozow force-pushed on Nov 27, 2023
  8. DrahtBot added the label CI failed on Nov 27, 2023
  9. dergoegge commented at 3:52 pm on November 27, 2023: member
    Would be good to have some fuzzing for the new policy rules. Maybe amending the existing fuzz targets (tx_pool, tx_package_eval) to include v3 txs is enough? A standalone harness for the new rules would probably also be good regardless.
  10. DrahtBot removed the label CI failed on Nov 27, 2023
  11. glozow force-pushed on Nov 27, 2023
  12. glozow commented at 4:38 pm on November 27, 2023: member
    Changed tx_pool and tx_package_eval fuzzer to sometimes create v3 transactions, and check v3 invariants at the end of each iteration.
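The invariants being checked amount to something like the following (an illustrative Python model of the per-iteration check; the real harness is C++, and the `Entry` fields here are assumed names):

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    version: int
    vsize: int
    parents: set = field(default_factory=set)  # txids of in-mempool parents

def check_v3_invariants(mempool):
    """mempool: dict mapping txid -> Entry. Raises AssertionError on violation."""
    for txid, e in mempool.items():
        children = {c for c, o in mempool.items() if txid in o.parents}
        if e.version == 3:
            # v3 inheritance: all unconfirmed parents must themselves be v3
            assert all(mempool[p].version == 3 for p in e.parents), "v3 spends non-v3"
            # cluster of at most 2: at most one parent, at most one child, never both
            assert len(e.parents) <= 1, "v3 tx has multiple unconfirmed parents"
            assert len(children) <= 1, "v3 tx has multiple unconfirmed children"
            assert not (e.parents and children), "v3 cluster larger than 2"
            # a v3 child must stay small
            if e.parents:
                assert e.vsize <= 1000, "v3 child above 1000 vB"
        else:
            # a non-v3 tx may not spend an unconfirmed v3 tx
            assert all(mempool[p].version != 3 for p in e.parents), "non-v3 spends v3"
```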
  13. glozow force-pushed on Nov 27, 2023
  14. DrahtBot added the label CI failed on Nov 27, 2023
  15. glozow force-pushed on Nov 27, 2023
  16. dergoegge commented at 6:02 pm on November 27, 2023: member
    $ echo "ACVL/gA+Pj4+Pj4+3D4AAAAAAAAAAAAAAAAAAAAAAAAAAAAAPT09//8AAP/8AAAAigAAAAAAAAAAAAAAAAAAAAAAAAD6AAAAAAAIAQk9AAAAAAAAAAAAAAAAAAAAABMA9wAAAAYiACkAAAAAAAAAAAD///8AAPkAAAAkAAAAAACiAGxpbWl0YW739/f3Y2VzdG9yc2lSemUAAAAmAKMAAP9kZQBi4nUxZ2xvZ2ZpbGUAJgD/AA==" | base64 --decode > tx_pool-bbe7bc3c2a6d0cb1a21c48506cd16fdb807ae3cf.crash
    $ FUZZ=tx_pool ./src/test/fuzz/fuzz tx_pool-bbe7bc3c2a6d0cb1a21c48506cd16fdb807ae3cf.crash
    INFO: Running with entropic power schedule (0xFF, 100).
    INFO: Seed: 1136977254
    INFO: Loaded 1 modules   (572973 inline 8-bit counters): 572973 [0x563877bfae70, 0x563877c86c9d), 
    INFO: Loaded 1 PC tables (572973 PCs): 572973 [0x563877c86ca0,0x563878544f70), 
    /workdir/fuzz_bins/fuzz_libfuzzer: Running 1 inputs 1 time(s) each.
    Running: /workdir/crashes/crash-bbe7bc3c2a6d0cb1a21c48506cd16fdb807ae3cf
    test/fuzz/tx_pool.cpp:114 Finish: Assertion `entry.GetModFeesWithDescendants() > 0' failed.
    ==1877== ERROR: libFuzzer: deadly signal
        #0 0x5638764b4c88 in __sanitizer_print_stack_trace (/workdir/fuzz_bins/fuzz_libfuzzer+0x14b4c88) (BuildId: 7fbfcc32a58adde3cb3dcfe8229731b5bb30d71d)
        #1 0x56387648c04c in fuzzer::PrintStackTrace() crtstuff.c
        #2 0x563876471e67 in fuzzer::Fuzzer::CrashCallback() crtstuff.c
        #3 0x7f168cc0850f  (/lib/x86_64-linux-gnu/libc.so.6+0x3c50f) (BuildId: 8a1bf172e710f8ca0c1576912c057b45f90d90d8)
        #4 0x7f168cc560fb  (/lib/x86_64-linux-gnu/libc.so.6+0x8a0fb) (BuildId: 8a1bf172e710f8ca0c1576912c057b45f90d90d8)
        #5 0x7f168cc08471 in raise (/lib/x86_64-linux-gnu/libc.so.6+0x3c471) (BuildId: 8a1bf172e710f8ca0c1576912c057b45f90d90d8)
        #6 0x7f168cbf24b1 in abort (/lib/x86_64-linux-gnu/libc.so.6+0x264b1) (BuildId: 8a1bf172e710f8ca0c1576912c057b45f90d90d8)
        #7 0x5638778c6167 in assertion_fail(std::basic_string_view<char, std::char_traits<char>>, int, std::basic_string_view<char, std::char_traits<char>>, std::basic_string_view<char, std::char_traits<char>>) check.cpp
        #8 0x56387682def1 in (anonymous namespace)::Finish(FuzzedDataProvider&, (anonymous namespace)::MockedTxPool&, Chainstate&) tx_pool.cpp
        #9 0x563876832e79 in (anonymous namespace)::tx_pool_fuzz_target(Span<unsigned char const>) tx_pool.cpp
        #10 0x5638768972c7 in LLVMFuzzerTestOneInput fuzz.cpp
        #11 0x563876473334 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) crtstuff.c
        #12 0x56387645c263 in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) crtstuff.c
        #13 0x563876461e86 in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) crtstuff.c
        #14 0x56387648c9d6 in main crtstuff.c
        #15 0x7f168cbf36c9  (/lib/x86_64-linux-gnu/libc.so.6+0x276c9) (BuildId: 8a1bf172e710f8ca0c1576912c057b45f90d90d8)
        #16 0x7f168cbf3784 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x27784) (BuildId: 8a1bf172e710f8ca0c1576912c057b45f90d90d8)
        #17 0x563876456cd0 in _start (/workdir/fuzz_bins/fuzz_libfuzzer+0x1456cd0) (BuildId: 7fbfcc32a58adde3cb3dcfe8229731b5bb30d71d)

    NOTE: libFuzzer has rudimentary signal handlers.
          Combine libFuzzer with AddressSanitizer or similar for better crash reports.
    SUMMARY: libFuzzer: deadly signal
    
  17. ariard commented at 0:07 am on November 28, 2023: member

    Even though we don’t have cluster limits, we are able to treat these transactions as having a maximum cluster size of 2.

    I don’t think adopting cluster limits instead of max ancestor/descendant limits changes anything for V3 packages. What matters is the overall weight limit of the package (4000 WU for a child), as this limit draws an anti-pinning security bound on the lowest off-chain payment that one can afford to burn as an absolute fee. That limit should be evaluated as a function of network mempool congestion. Removing the absolute-fee replacement rules would, I think, simplify the current pinning analysis - though that sounds beyond the scope of this proposal.
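For a rough sense of the bound being discussed, here is an illustrative back-of-the-envelope model of the BIP125 rule 3/4 replacement cost (the helper and all numbers are hypothetical, chosen only to show the effect of capping the attacker's attachment at 1000 vB versus the default ~100 kvB descendant size limit):

```python
def rule3_min_replacement_fee(attacker_vsize, attacker_feerate, original_fee,
                              replacement_vsize=200, incremental_feerate=1):
    """All sizes in vB, feerates in sat/vB, fees in sat.
    Rule 3: a replacement must pay >= the total fees of everything it evicts.
    Rule 4: plus the incremental relay feerate on its own size."""
    replaced_fees = original_fee + attacker_vsize * attacker_feerate
    return replaced_fees + replacement_vsize * incremental_feerate

# Attacker attaches junk at 20 sat/vB to a 0-fee shared transaction:
pin_default = rule3_min_replacement_fee(100_000, 20, 0)  # ~100 kvB of descendants
pin_v3      = rule3_min_replacement_fee(1_000, 20, 0)    # capped at 1000 vB
```

Under these illustrative numbers, the v3 cap shrinks the absolute fee the victim must burn to replace the pin by roughly two orders of magnitude.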

  18. glozow force-pushed on Nov 28, 2023
  19. in src/test/fuzz/package_eval.cpp:319 in d79f36b1ec outdated
    313+                Assert(entry.GetTxWeight() <= V3_CHILD_MAX_WEIGHT);
    314+            }
    315+        }
    316+        // Transactions with fees of 0 or lower should be proactively trimmed.
    317+        if (tx_pool.m_min_relay_feerate.GetFeePerK() > 0) {
    318+            Assert(entry.GetModFeesWithDescendants() > 0);
    


    instagibbs commented at 6:02 pm on November 28, 2023:
    If bypass_limits is set for a single submission, this can fail since no trimming is done after that.

    glozow commented at 7:31 pm on November 28, 2023:
    Ah true. I forgot that we do an atmp here too. I don’t think we should do bypass_limits though - couldn’t that cause the result to deviate from the package eval result?

    instagibbs commented at 7:32 pm on November 28, 2023:
    yeah I don’t think we need to ever

    glozow commented at 11:12 am on November 29, 2023:
    Made bypass_limits always false
  20. in src/policy/v3_policy.cpp:85 in 8ae62c58cf outdated
    79+    if (ancestors.empty()) {
    80+        return std::nullopt;
    81+    } else {
    82+        const auto tx_weight{GetTransactionWeight(*ptx)};
    83+        // If this transaction spends V3 parents, it cannot be too large.
    84+        if (tx_weight > V3_CHILD_MAX_WEIGHT) {
    


    instagibbs commented at 7:19 pm on November 28, 2023:

    We need to limit the sigops-adjusted size of the child, since that affects the effectiveness of a potential pin. This difference can be up to 20x for really strange scripts.

    Simplest, but perhaps controversial, would be to disallow v3 transactions from having a sigops-adjusted size different from their BIP141 vsize. The other solution would be to directly feed in the adjusted vsize and check against that instead of weight.


    glozow commented at 7:29 pm on November 28, 2023:

    Yes definitely

    disallow v3 transactions to have sigops-adjusted size different than bip141 vsize

    I actually think this is more complicated than adding a sigop limit, since we’d need to define what sigop-adjusted size is. Or just word it as “number of sigops times this constant cannot be larger than the weight” I guess.


    glozow commented at 11:13 am on November 29, 2023:
    Added a 50-sigop limit, with a p2sh test.

    sdaftuar commented at 7:07 pm on November 29, 2023:
    What’s the issue with using the transaction’s vsize, which is already the value used everywhere else in the mempool? It seems like that is simpler than introducing a second limit, and exactly captures how we apply the RBF rules anyway.

    instagibbs commented at 7:10 pm on November 29, 2023:
    Yes, I think the thing we’re actually trying to protect against, sigops-adjusted vsize, is what should be used.

    glozow commented at 1:02 pm on November 30, 2023:

    What’s the issue with using the transaction’s vsize

    My reasons were:

    • I wanted to specify a value of sigops that wouldn’t depend on the node’s -bytespersigop setting, and similarly didn’t want it to be bypassable locally this way
    • People expressed confusion at what sigops-adjusted vsize was, so I wanted to make it clearer

    Happy to change though, if it’s preferred?


    sdaftuar commented at 3:09 pm on November 30, 2023:
    I just think that the point of the policy rule is not to enforce a particular sigops limit, but instead to ensure that the total fee required to replace a child transaction is only high if the child itself has a high feerate, which is defined as fee / sigops_adjusted_vsize. So there doesn’t seem to me to be a benefit to thinking about two different limits.

    glozow commented at 2:15 pm on December 1, 2023:
    Will change 👍
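For reference, the quantity being discussed can be sketched like this (modeled on the shape of Bitcoin Core's GetVirtualTransactionSize(); treat the names and the default of 20 bytes per sigop as assumptions):

```python
WITNESS_SCALE_FACTOR = 4
DEFAULT_BYTES_PER_SIGOP = 20  # default -bytespersigop

def sigops_adjusted_vsize(weight, sigop_cost, bytes_per_sigop=DEFAULT_BYTES_PER_SIGOP):
    """Mempool vsize: whichever is larger, the BIP141 weight or the
    sigop-cost-scaled size, rounded up after dividing by the scale factor."""
    return (max(weight, sigop_cost * bytes_per_sigop)
            + WITNESS_SCALE_FACTOR - 1) // WITNESS_SCALE_FACTOR
```

Note that at the 4000 WU child weight cap, a sigop cost of 200 (50 legacy sigops scaled by the witness factor) is exactly the break-even point: any more sigops and the adjusted vsize, and hence the cost of replacing the child, grows beyond the 1000 vB the weight alone suggests.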
  21. glozow force-pushed on Nov 29, 2023
  22. glozow force-pushed on Nov 29, 2023
  23. glozow force-pushed on Nov 29, 2023
  24. DrahtBot removed the label CI failed on Nov 29, 2023
  25. glozow commented at 2:51 pm on November 29, 2023: member
    CI is green. I’ve added a rule for maximum sigops and updated the doc. (cc @sdaftuar)
  26. in src/policy/v3_policy.cpp:82 in fc62757320 outdated
    77+    if (ancestors.size() + 1 > V3_ANCESTOR_LIMIT) {
    78+        return strprintf("tx %s would have too many ancestors", ptx->GetWitnessHash().ToString());
    79+    }
    80+    if (ancestors.empty()) {
    81+        return std::nullopt;
    82+    } else {
    


    instagibbs commented at 7:17 pm on November 29, 2023:
    you can remove this level of nesting since the 3 checks above all end early

    glozow commented at 3:23 pm on December 12, 2023:
    Taken
  27. in src/policy/v3_policy.cpp:118 in fc62757320 outdated
    87+        }
    88+        if (sigops > V3_CHILD_MAX_SIGOPS) {
    89+            return strprintf("v3 child tx has too many sigops: %u > %u sigops", sigops, V3_CHILD_MAX_SIGOPS);
    90+        }
    91+        // Any ancestor of a V3 transaction must also be V3.
    92+        const auto& parent_entry = *ancestors.begin();
    


    sdaftuar commented at 7:21 pm on November 29, 2023:
    This logic only works if V3_ANCESTOR_LIMIT <= 2, I think? Having a constant would imply that you could consider changing the value, but if the logic breaks with higher values then I think we should static_assert that the constant is in the range we expect.
  28. in src/policy/v3_policy.cpp:111 in fc62757320 outdated
    76+
    77+    if (ancestors.size() + 1 > V3_ANCESTOR_LIMIT) {
    78+        return strprintf("tx %s would have too many ancestors", ptx->GetWitnessHash().ToString());
    79+    }
    80+    if (ancestors.empty()) {
    81+        return std::nullopt;
    


    instagibbs commented at 7:25 pm on November 29, 2023:
        // This is a v3 parent; no additional constraints
        return std::nullopt;
    

    ismaelsadeeq commented at 1:52 pm on December 11, 2023:
    if (ancestors.empty()) {
        return std::nullopt;
    }
    

    Should this be checked before we check the size?


    glozow commented at 3:52 pm on December 12, 2023:
    I’ve nested this part of the function inside !ancestors.empty() to make this clearer (and safer imo).
  29. in src/policy/v3_policy.h:50 in fc62757320 outdated
    45+/** Every transaction that spends an unconfirmed V3 transaction must also be V3. */
    46+std::optional<std::string> CheckV3Inheritance(const CTransactionRef& ptx,
    47+                                              const CTxMemPool::setEntries& ancestors);
    48+
    49+/** The following rules apply to V3 transactions:
    50+ * 1. Tx with all of its ancestors (including non-nVersion=3) must be within V3_ANCESTOR_SIZE_LIMIT_KVB.
    


    sdaftuar commented at 7:31 pm on November 29, 2023:
    I think a v3 transaction is not permitted to have non-v3 unconfirmed ancestors – so this comment seems confusing?
  30. in src/validation.cpp:378 in 22ce995d0b outdated
    374@@ -372,11 +375,16 @@ void Chainstate::MaybeUpdateMempoolForReorg(
    375             }
    376         }
    377         // Transaction is still valid and cached LockPoints are updated.
    378+        auto ancestors{m_mempool->AssumeCalculateMemPoolAncestors(func, *it, CTxMemPool::Limits::NoLimits(),
    


    sdaftuar commented at 8:09 pm on November 29, 2023:
    1. This strikes me as a very expensive operation to perform in order to update the mempool for a reorg. I think we could probably do better by only calculating ancestors or descendants for transactions that are themselves v3, if we decide we need to enforce the v3 rules after a reorg.

    2. With the logic as written, if a v3 parent-child pair (A, B) were to be reorged out and added back to the mempool, and A has some other spend C already in the mempool, then when ApplyV3Rules() is invoked on B, it would be selected for removal. This in turn would prevent A from being mined, if we assume that B’s feerate was the reason it was mined in the first place? This strikes me as unfortunate.

    Instead, I’d suggest that we not bother enforcing the v3 rules on a reorg. That seems safer than potentially evicting the fee-paying transaction, less CPU work (so reorgs won’t be further slowed down by additional processing), and shouldn’t materially affect pinning (if miners are reorging out your transactions, trying to minimize RBF pinning with v3 policy isn’t really going to help you!).


    glozow commented at 1:11 pm on November 30, 2023:

    Instead, I’d suggest that we not bother enforcing the v3 rules on a reorg.

    I’m a fan of this approach. I want to spend a little bit of time thinking about what the possibilities are - I guess we can end up with some incorrect assumptions about transactions (resulting in e.g. incorrect MinerScores), but it wouldn’t be problematic if a reorg is the only way to get there, as you say.

  31. sdaftuar commented at 8:41 pm on November 29, 2023: member

    Did a first pass review – concept ACK.

    Also I think we should resurrect #27018 so that we will mine everything in the mempool, but that doesn’t need to be in this PR (and probably isn’t strictly necessary either).

  32. in src/policy/v3_policy.cpp:141 in fc62757320 outdated
    64+        }
    65+    }
    66+    return std::nullopt;
    67+}
    68+
    69+std::optional<std::string> ApplyV3Rules(const CTransactionRef& ptx,
    


    instagibbs commented at 8:42 pm on November 29, 2023:

    This only seems to run useful checks on v3 children when the ancestor is already in the mempool, not when the ancestor is something new in a package. I suspect we want a parallel ApplyV3Rules that looks at the package only, just like CheckV3Inheritance.

    As-is, you can currently get large children and multiple in-package ancestors in.

    (also, mempool_accept_v3.py needs tests covering this case for each situation)

    diff --git a/test/functional/mempool_accept_v3.py b/test/functional/mempool_accept_v3.py
    index 5b72cca6b9..3a4b822a90 100755
    --- a/test/functional/mempool_accept_v3.py
    +++ b/test/functional/mempool_accept_v3.py
    @@ -22,57 +22,73 @@ def cleanup(func):
             try:
                 func(self)
             finally:
                 # Clear mempool
                 self.generate(self.nodes[0], 1)
                 # Reset config options
                 self.restart_node(0)
         return wrapper
     
     class MempoolAcceptV3(BitcoinTestFramework):
         def set_test_params(self):
             self.num_nodes = 1
             self.setup_clean_chain = True
     
         def check_mempool(self, txids):
             """Assert exact contents of the node's mempool (by txid)."""
             mempool_contents = self.nodes[0].getrawmempool()
             assert_equal(len(txids), len(mempool_contents))
             assert all([txid in txids for txid in mempool_contents])
     
         @cleanup
         def test_v3_acceptance(self):
             node = self.nodes[0]
             self.log.info("Test a child of a V3 transaction cannot be more than 1000vB")
             self.restart_node(0, extra_args=["-datacarriersize=1000"])
    -        tx_v3_parent_normal = self.wallet.send_self_transfer(from_node=node, version=3)
    -        self.check_mempool([tx_v3_parent_normal["txid"]])
    -        tx_v3_child_heavy = self.wallet.create_self_transfer(
    -            utxo_to_spend=tx_v3_parent_normal["new_utxo"],
    +        tx_v3_parent_normal = self.wallet.create_self_transfer(
    +            fee_rate=0,
    +            target_weight=4004,
    +            version=3
    +        )
    +        tx_v3_parent_2_normal = self.wallet.create_self_transfer(
    +            fee_rate=0,
                 target_weight=4004,
                 version=3
             )
    +        tx_v3_child_heavy = self.wallet.create_self_transfer_multi(
    +            utxos_to_spend=[tx_v3_parent_normal["new_utxo"], tx_v3_parent_2_normal["new_utxo"]],
    +            target_weight=4004,
    +            fee_per_output=10000,
    +            version=3
    +        )
    +
    +        assert_equal(node.getrawmempool(), [])
    +        node.submitpackage([tx_v3_parent_normal["hex"], tx_v3_parent_2_normal["hex"], tx_v3_child_heavy["hex"]])
    +        # !!! Oversize child and multiple (in-package) ancestors
    +        assert_equal(len(node.getrawmempool()), 3)
    +        assert False
    +
    

    instagibbs commented at 2:53 pm on November 30, 2023:
    good news: this issue was discovered in less than a minute on a single core with the package fuzzer

    glozow commented at 1:24 pm on December 12, 2023:
    added this test
  33. dergoegge commented at 10:30 am on December 1, 2023: member
    $ echo "jtHRO///YED/AAD/AAABCQAAgQAN/wAAAQAALAEB/QAAAAAJAAAAAAAAAAAAAAAA+GwAAPAV/v8BkAEicG9vcyYANwAAAAEBADsBAQHo/v7+bv4RAQEBkQABAQE3CTsBIQEBAQEBAAEBATcAACYKAAQBAf8BAQEBAQEBCAABAQEBfgEAAQEBCgAAADsBAQEBAQEB/v8BJgABQQE3AAAAAAFBATcAAAAA8AAAAQEBAQEBAQEBCAABLgEBAX4BAAE=" | base64 --decode > tx_package_eval-81b8ec0e06b86811790d61f69fe27701ca0c8305.crash
    $ FUZZ=tx_package_eval ./src/test/fuzz/fuzz tx_package_eval-81b8ec0e06b86811790d61f69fe27701ca0c8305.crash
    test/fuzz/package_eval.cpp:320 tx_package_eval_fuzz_target: Assertion `entry.GetModFeesWithDescendants() > 0' 
    
  34. ariard commented at 3:26 am on December 4, 2023: member

    Also I think we should resurrect #27018 so that we will mine everything in the mempool, but that doesn’t need to be in this PR (and probably isn’t strictly necessary either).

    I don’t know about automatically flushing out <= 0 fees even when the local mempool is empty. One could impose significant computational load on one's mining competitors by broadcasting bulky packages with a 0-fee parent and then replacing the high-fee child (in a cycle). A more robust approach could be to let those 0-fee parents sleep in the local non-full mempool, and only evict them when non-0-fee transactions show up. The asymmetry between the additional computational cost of flushing automatically and the RBF penalty paid sounds like it is to the advantage of a potential denial-of-service attacker (edited).

  35. instagibbs commented at 3:14 pm on December 4, 2023: member

    I don’t know about automatically flushing out <= 0 fees even when the local mempool is empty. One could add significant computing payload on its mining competitors by throwing buffy packages with 0 fees parent and then replace the high-fee child (in a cycle).

    The replacements are paying for those CPU cycles and bandwidth via “incremental feerate”, it’s equivalent to an entire package simply being RBF’d like normal. Keeping 0-fee parents hanging around in mempool would also allow new entries into the mempool that will never be mined. See #26933 for historical discussion. Post-cluster mempool maybe that isn’t the case anymore.

    It also neatly allows ephemeral anchors to be ejected from the mempool if it has no children.
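The economics instagibbs describes can be sketched numerically (an illustrative model, assuming the 1 sat/vB default incremental relay feerate): each replacement in a cycle must pay the evicted package's fees plus the incremental feerate on its own size, so repeated replacements cost the attacker linearly.

```python
INCREMENTAL_RELAY_FEERATE = 1  # sat/vB, default -incrementalrelayfee

def min_fee_to_replace(replaced_total_fee, replacement_vsize):
    # Rules 3+4: cover everything evicted, plus relay cost of the new tx.
    return replaced_total_fee + replacement_vsize * INCREMENTAL_RELAY_FEERATE

# Cycling a 1000 vB package through ten replacements:
fee = 0
for _ in range(10):
    fee = min_fee_to_replace(fee, 1000)
# each cycle adds at least 1000 sat, paying for the relay/validation work
```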

  36. in src/txmempool.cpp:1154 in f6a382af7b outdated
    1163 
    1164+        // Keep trimming as long as memory is above the maximum.
    1165+        // Also, unless min relay feerate allows it, skim away everything paying <=0 fees.
    1166+        const bool keep_trimming{DynamicMemoryUsage() > sizelimit ||
    1167+            (m_min_relay_feerate.GetFeePerK() > 0 && it->GetModFeesWithDescendants() <= 0)};
    1168+        if (!keep_trimming) break;
    


    mzumsande commented at 11:06 pm on December 4, 2023:
    This could lead to confusing log messages when removing 0-fee (or deprioritized) transactions: Such a tx could be removed due to “size limit” even if the mempool was almost empty.

    glozow commented at 1:20 pm on December 12, 2023:
    I’ve changed the RemovalReasonToString to be “sizelimit or <=0 fee”. Alternatively, we can add another reason?
  37. mzumsande commented at 11:09 pm on December 4, 2023: contributor

    [Edit]: Got confused about something yesterday, removed this point.

    As a side effect, I think this will allow users to remove select entries from their non-full mempool by prioritizing to large negative values (so basically a poor man’s removemempoolentry).

  38. glozow force-pushed on Dec 5, 2023
  39. ismaelsadeeq commented at 12:44 pm on December 6, 2023: member
    Concept ACK
  40. instagibbs commented at 4:49 pm on December 7, 2023: member

    re: #28948 (comment)

    LimitMempoolSize is only called upon successful transaction inclusions, so a v3 tx that is negatively prioritized later may not be evicted yet.

  41. in src/validation.cpp:332 in 4557e54c90 outdated
    328@@ -328,11 +329,12 @@ void Chainstate::MaybeUpdateMempoolForReorg(
    329     m_mempool->UpdateTransactionsFromBlock(vHashUpdate);
    330 
    331     // Predicate to use for filtering transactions in removeForReorg.
    332+    // Checks whether a non-v3 transaction spends v3 and/or vice versa.
    


    sdaftuar commented at 0:28 am on December 10, 2023:
    Leftover comment?

    glozow commented at 1:20 pm on December 12, 2023:
    removed
  42. in src/validation.cpp:960 in 4557e54c90 outdated
    947@@ -946,6 +948,16 @@ bool MemPoolAccept::PreChecks(ATMPArgs& args, Workspace& ws)
    948     }
    949 
    950     ws.m_ancestors = *ancestors;
    951+    if (const auto err_string{CheckV3Inheritance(ws.m_ptx, ws.m_ancestors)}) {
    


    sdaftuar commented at 0:34 am on December 10, 2023:
    This is an aside – in the cluster mempool branch, I think I’d like to change this to just look at direct parents rather than ancestors (I believe that in the normal course of transaction validation, we should no longer need to calculate ancestor sets when we’re not worried about ancestor limits or updating cached ancestor state anymore). Can you think of any reason it would be problematic to replace ancestors with just an entry’s direct parents here?

    glozow commented at 12:02 pm on December 11, 2023:
    I think just direct parents should be fine - presumably any further generations should have the same version anyway. I’ll continue using m_ancestors here since it’s what we have access to, but will leave a comment about this.
  43. in src/validation.cpp:1303 in 4557e54c90 outdated
    1299@@ -1288,6 +1300,31 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
    1300                    [](const auto& tx) { return Workspace(tx); });
    1301     std::map<uint256, MempoolAcceptResult> results;
    1302 
    1303+    if (const auto v3_violation{CheckV3Inheritance(txns)}) {
    


    sdaftuar commented at 0:44 am on December 10, 2023:
    Not sure if I’m missing something, but this only checks that the v3 inheritance rules are enforced among transactions in the package. What if someone sent a package of 2 v3 transactions, but the child transaction also spent a transaction that was already in the mempool? I think the CheckV3Inheritance() needs to be able to pull parents from both the mempool and from the package, perhaps?

    instagibbs commented at 1:42 pm on December 11, 2023:
    the other variant you’re referring to here #28948 (review) should cover that case.

    sdaftuar commented at 8:16 pm on December 11, 2023:

    Looks like it’s broken, see this test (here the parent transaction is spending an in-mempool output, rather than the child transaction):

    diff --git a/test/functional/mempool_accept_v3.py b/test/functional/mempool_accept_v3.py
    index 769b177cf523..713a8cb7406b 100755
    --- a/test/functional/mempool_accept_v3.py
    +++ b/test/functional/mempool_accept_v3.py
    @@ -229,11 +229,42 @@ class MempoolAcceptV3(BitcoinTestFramework):
             tx_replacer = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=utxo_confirmed, fee_rate=DEFAULT_FEE * 10)
             self.check_mempool([tx_replacer["txid"]])
     
    +    @cleanup
    +    def test_package_v3_inheritance(self):
    +        """
    +        A v3 transaction in a package cannot have 2 v3 parents.
    +        Test that if we have a transaction graph A -> B -> C, where A, B, C are
    +        all v3 transactions, that we cannot use submitpackage to get the
    +        transactions all into the mempool.
    +
    +        Verify, in particular, that if A is already in the mempool, then
    +        submitpackage(B, C) will fail.
    +        """
    +        node = self.nodes[0]
    +        self.log.info("Test that submitpackage won't allow v3 topology rules to be violated")
    +        # This is our transaction "A":
    +        tx_in_mempool = self.wallet.send_self_transfer(from_node=node, version=3)
    +
    +        # Verify that A is in the mempool
    +        self.check_mempool([tx_in_mempool["txid"]])
    +
    +        # tx_0fee_parent is our transaction "B"; just create it.
    +        tx_0fee_parent = self.wallet.create_self_transfer(utxo_to_spend=tx_in_mempool["new_utxo"], fee=0, fee_rate=0, version=3)
    +
    +        # tx_child_violator is our transaction "C"; create it:
    +        tx_child_violator = self.wallet.create_self_transfer_multi(utxos_to_spend=[tx_0fee_parent["new_utxo"]], version=3)
    +
    +        # submitpackage(B, C) should fail
    +        result = node.submitpackage([tx_0fee_parent["hex"], tx_child_violator["hex"]])
    +        assert(result['package_msg'] != 'success')
    +        #self.check_mempool([tx_in_mempool["txid"], tx_0fee_parent["txid"], tx_child_violator["txid"]])
    +
         def run_test(self):
             self.log.info("Generate blocks to create UTXOs")
             node = self.nodes[0]
             self.wallet = MiniWallet(node)
             self.generate(self.wallet, 110)
    +        self.test_package_v3_inheritance()
             self.test_v3_acceptance()
             self.test_v3_replacement()
             self.test_v3_bip125()
    

    instagibbs commented at 8:20 pm on December 11, 2023:
    that’s the known topology issue IIUC: #28948 (review)

    sdaftuar commented at 8:54 pm on December 11, 2023:
    Ah, I see. I think this is a different set of checks that fail, but I’m not sure exactly what the fix is so maybe both of these can be fixed at the same time…

    glozow commented at 1:20 pm on December 12, 2023:
    Added a check for in-package + mempool ancestors, and both of your tests.
  44. in src/txmempool.cpp:1164 in bb979f4c76 outdated
    1162-        removed += m_incremental_relay_feerate;
    1163-        trackPackageRemoved(removed);
    1164-        maxFeeRateRemoved = std::max(maxFeeRateRemoved, removed);
    1165+        if (removed >= m_min_relay_feerate) {
    1166+            removed += m_incremental_relay_feerate;
    1167+            trackPackageRemoved(removed);
    


    ismaelsadeeq commented at 9:26 am on December 11, 2023:
    why move this to the if statement?

    glozow commented at 3:30 pm on December 11, 2023:
    Removal of things below minimum relay feerate shouldn’t impact the dynamic mempool minimum
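The distinction can be sketched with a toy model (assumed default constants in sat/kvB; not the actual CTxMemPool code): only evictions at or above the minimum relay feerate should bump the dynamic mempool minimum.

```python
MIN_RELAY_FEERATE = 1000         # sat/kvB, assumed default
INCREMENTAL_RELAY_FEERATE = 1000  # sat/kvB, assumed default

def updated_rolling_minimum(rolling_min, removed_feerates):
    """Toy model: trimming 0-fee entries (e.g. a deprioritised parent) should
    not raise the dynamic mempool minimum and price out normal traffic; only
    removals at or above the minimum relay feerate do."""
    for removed in removed_feerates:
        if removed >= MIN_RELAY_FEERATE:
            rolling_min = max(rolling_min, removed + INCREMENTAL_RELAY_FEERATE)
    return rolling_min
```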
  45. in src/policy/v3_policy.cpp:12 in b39cdfbd4c outdated
     6+
     7+#include <coins.h>
     8+#include <consensus/amount.h>
     9+#include <logging.h>
    10+#include <tinyformat.h>
    11+
    


    ismaelsadeeq commented at 11:57 am on December 11, 2023:
    0#include <algorithm>
    

    glozow commented at 1:19 pm on December 12, 2023:
    added
  46. in src/policy/v3_policy.h:33 in b39cdfbd4c outdated
    28+// additionally check ancestor/descendant limits for V3 transactions.
    29+static_assert(V3_CHILD_MAX_VSIZE + MAX_STANDARD_TX_WEIGHT / WITNESS_SCALE_FACTOR <= DEFAULT_ANCESTOR_SIZE_LIMIT_KVB * 1000);
    30+static_assert(V3_CHILD_MAX_VSIZE + MAX_STANDARD_TX_WEIGHT / WITNESS_SCALE_FACTOR <= DEFAULT_DESCENDANT_SIZE_LIMIT_KVB * 1000);
    31+
    32+/** Any two unconfirmed transactions with a dependency relationship must either both be V3 or both
    33+ * non-V3. Check this rule for any list of unconfirmed transactions.
    


    ismaelsadeeq commented at 12:54 pm on December 11, 2023:
    Is it worth adding a comment saying that Package does not refer to a child and all of its unconfirmed parents? The Package in-param can contain unrelated transactions. Until I saw the test, I had assumed otherwise.

    glozow commented at 1:19 pm on December 12, 2023:
    Added mention of no topological restrictions to the comment
  47. in test/functional/feature_bip68_sequence.py:279 in bb979f4c76 outdated
    275             self.nodes[0].setmocktime(cur_time + 600)
    276             self.generate(self.wallet, 1, sync_fun=self.no_op)
    277             cur_time += 600
    278 
    279-        assert tx2.hash in self.nodes[0].getrawmempool()
    280+        assert tx2.hash not in self.nodes[0].getrawmempool()
    


    ismaelsadeeq commented at 1:50 pm on December 11, 2023:

    I agree with @mzumsande https://github.com/bitcoin/bitcoin/pull/28948/commits/bb979f4c76f4f4f4357a08674df833a5cc6dcf82#r1414612161. I think it would generally be better to remove the transaction at the moment it is deprioritized and its fee reduced to <= 0, to clear up the ambiguity.

    tx2 is evicted for having 0 fees, but only when the mempool is being trimmed. It should leave the mempool the moment it's deprioritized, not at a later time.


    glozow commented at 3:32 pm on December 11, 2023:
    Sorry I’m having trouble parsing this suggestion, are you saying we should call TrimToSize whenever a transaction is prioritised?

    ismaelsadeeq commented at 3:40 pm on December 11, 2023:
    No what I am suggesting is to remove the transaction from mempool when it was deprioritized and its fee reduced to <= 0, not call TrimToSize

    glozow commented at 3:27 pm on December 12, 2023:
    I don’t think we should do that; we’d want to look at the impact on all related transactions’ descendant scores. This transaction’s modified fee might be negative, but it could have high-feerate descendants.
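A toy example of why the modified fee alone is misleading (made-up numbers):

```python
def descendant_feerate(modified_fees, vsizes):
    """Aggregate feerate (sat/vB) of a transaction together with its
    descendants. Toy helper for illustration only."""
    return sum(modified_fees) / sum(vsizes)

# A 100 vB parent deprioritised to -5000 sats with a 50000-sat, 200 vB child
# is still worth mining as a package: 45000 sats over 300 vB = 150 sat/vB.
package_feerate = descendant_feerate([-5000, 50000], [100, 200])
```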

    ismaelsadeeq commented at 10:31 am on December 14, 2023:

    I understand your point. TrimToSize docstring says

    0/** Remove transactions from the mempool until its dynamic size is <= sizelimit.
    

    Now TrimToSize removes transactions even when the mempool is not full, if there are 0-fee transactions. The docstring should be updated to reflect the operation, and maybe the name too?


    glozow commented at 6:08 pm on December 15, 2023:
    Updated the docstring
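The updated behavior could be summarized with a toy model (hypothetical shape; the real function is CTxMemPool::TrimToSize):

```python
def trim_to_size(entries, size_limit):
    """Toy model: 'entries' is a list of (modified_fee, vsize), worst-feerate
    first. Evict while the pool is over its size limit OR while the worst
    entry pays no fee (proactive removal of 0-fee, e.g. deprioritised, txs)."""
    total = sum(size for _, size in entries)
    kept = list(entries)
    while kept and (total > size_limit or kept[0][0] <= 0):
        _, size = kept.pop(0)
        total -= size
    return kept
```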
  48. ismaelsadeeq commented at 2:17 pm on December 11, 2023: member

    There are some pinning issues that are fixed here and not fully fixed in cluster mempool, so we will continue to use this or something similar afterward.

    IIUC, the pinning issues that are fixed by these new v3 rules are

    1. Rule 3 Absolute fee pinning.
    2. Rule 5 pinning vector.

    ~I looked at the cluster mempool proposal and the v3 proposals, but failed to understand how the addition of a cluster mempool might revert the pinning vectors solved by these new policy rules, specifically the pinning issues above.~

    Edit: thanks to @instagibbs for insights

    I believe what you meant is that certain pinning attacks, such as rule 3 pinning, aren’t resolved by introducing a cluster mempool.

    To address rule 3 pinning post cluster mempool, we’ll need policy rules similar to the ones in the current v3 transactions policy. For instance, enforcing a cluster size limit of two transactions, and the child size restriction.

    Because cluster mempool will require these policy rules in the future, cluster mempool and v3 transactions do not conflict with one another; after the v3 transaction policy is added, cluster mempool will likely just fit in.

    Post v3, rule 3 and rule 5 pinning are eliminated. If a cluster mempool is added after v3 transactions, the cluster limit remains at 2. This means rule 3 is still eliminated, and with a cluster limit of 2 we won’t encounter rule 5 package-limit pinning during package RBF of lightning commitment transactions, since we are RBFing a single cluster of size two.
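A toy illustration of the rule 3 arithmetic (simplified from BIP125, with assumed numbers):

```python
def rule3_min_fee(replaced_fees, replacement_vsize, incremental_feerate=1):
    """Simplified BIP125 rules 3+4: a replacement must pay at least the total
    fees of everything it evicts, plus incremental relay fee for its own size.
    Units: sats and vB; incremental_feerate in sat/vB."""
    return sum(replaced_fees) + replacement_vsize * incremental_feerate

# Without v3: an attacker can attach ~100,000 vB of descendants at 2 sat/vB,
# pinning the cost of a 200 vB replacement above 200,000 sats.
# With v3: the attacker's child is capped at 1000 vB, bounding the pin.
```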

    Reviewed to b39cdfbd4c88adc2060841f49a6caaf7e5202c82 so far.

  49. in test/functional/test_framework/wallet.py:294 in 104b3d39f4 outdated
    289@@ -290,7 +290,8 @@ def create_self_transfer_multi(
    290         sequence=0,
    291         fee_per_output=1000,
    292         target_weight=0,
    293-        confirmed_only=False
    294+        confirmed_only=False,
    295+        version=2
    


    maflcko commented at 3:47 pm on December 11, 2023:

    nit in 104b3d39f416d20ab496d753afbe7e4d31902065:

    This only allows the version option for create* calls. I think it would be better to allow the option for any MiniWallet method. In python this can be achieved by passing keyword-args down a call chain. See #28972. Feel free to cherry-pick that here (replacing this commit), or review it, or ignore it.


    glozow commented at 7:36 pm on December 11, 2023:
    thanks :+1:

    glozow commented at 1:18 pm on December 12, 2023:
    taken
  50. glozow force-pushed on Dec 12, 2023
  51. glozow commented at 1:23 pm on December 12, 2023: member

    As a side effect, I think this will allow users to remove select entries from their non-full mempool by prioritizing to large negative values (so basically a poor man’s #15873).

    This was discussed as part of #27018, also see the discussion on irc that day: https://bitcoin-irc.chaincode.com/bitcoin-core-dev/2023-02-01. I think the general idea is “this is good, and will help avoid unspent ephemeral anchors.”

  52. glozow commented at 1:25 pm on December 12, 2023: member
    Last push fixed issues and addressed most comments, I’m also working on adding more tests.
  53. in src/test/fuzz/package_eval.cpp:307 in e649551b12 outdated
    302@@ -300,5 +303,21 @@ FUZZ_TARGET(tx_package_eval, .init = initialize_tx_pool)
    303     UnregisterSharedValidationInterface(outpoints_updater);
    304 
    305     WITH_LOCK(::cs_main, tx_pool.check(chainstate.CoinsTip(), chainstate.m_chain.Height() + 1));
    306+    LOCK(tx_pool.cs);
    307+    for (const auto& tx_info : tx_pool.infoAll()) {
    


    instagibbs commented at 2:53 pm on December 12, 2023:
    did you want to use CheckMempoolInvariants here as well?

    glozow commented at 3:53 pm on December 12, 2023:
    Pulled into a helper function and reused :+1:
  54. in src/validation.cpp:1038 in 91e83bc0fb outdated
    1034@@ -1014,6 +1035,7 @@ bool MemPoolAccept::ReplacementChecks(Workspace& ws)
    1035 }
    1036 
    1037 bool MemPoolAccept::PackageMempoolChecks(const std::vector<CTransactionRef>& txns,
    1038+                                         const std::vector<Workspace>& workspaces,
    


    instagibbs commented at 3:01 pm on December 12, 2023:
    unused addition?

    glozow commented at 10:51 am on December 13, 2023:
    deleted
  55. in src/validation.cpp:1054 in 91e83bc0fb outdated
    1050@@ -1029,6 +1051,7 @@ bool MemPoolAccept::PackageMempoolChecks(const std::vector<CTransactionRef>& txn
    1051         // This is a package-wide error, separate from an individual transaction error.
    1052         return package_state.Invalid(PackageValidationResult::PCKG_POLICY, "package-mempool-limits", err_string);
    1053     }
    1054+
    


    instagibbs commented at 3:02 pm on December 12, 2023:
    extra line

    glozow commented at 10:51 am on December 13, 2023:
    deleted
  56. in src/policy/v3_policy.cpp:118 in 91e83bc0fb outdated
    113+    // If this transaction spends V3 parents, it cannot be too large.
    114+    if (vsize > V3_CHILD_MAX_VSIZE) {
    115+        return strprintf("v3 child tx is too big: %u > %u virtual bytes", vsize, V3_CHILD_MAX_VSIZE);
    116+    }
    117+    // Any ancestor of a V3 transaction must also be V3.
    118+    const auto& parent_entry = *ancestors.begin();
    


    instagibbs commented at 3:03 pm on December 12, 2023:
    isn’t this already covered by CheckV3Inheritance?
  57. in src/validation.cpp:1366 in 91e83bc0fb outdated
    1361@@ -1332,11 +1362,24 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
    1362             MempoolAcceptResult::FeeFailure(placeholder_state, CFeeRate(m_total_modified_fees, m_total_vsize), all_package_wtxids)}});
    1363     }
    1364 
    1365+    // Check the last transaction's v3 ancestor limits again, assuming that all transactions
    1366+    // are in the ancestor set of the last tx (which is the case when a package IsChildWithParents)
    


    instagibbs commented at 3:07 pm on December 12, 2023:
    Assume(IsChildWithParents()) here makes sense

    glozow commented at 9:31 am on December 13, 2023:
    Ok the lazy way clearly doesn’t work; I think what we have to do is calculate in-package ancestors and feed that to ApplyV3Rules when we call it. This will also have the effect of consolidating our v3 checks so that it’s cleaner. Pushing soon…
  58. in test/functional/mempool_accept_v3.py:255 in e649551b12 outdated
    250+            fee_per_output=10000,
    251+            version=3
    252+        )
    253+        assert_equal(node.getrawmempool(), [])
    254+        result = node.submitpackage([tx_v3_parent_normal["hex"], tx_v3_parent_2_normal["hex"], tx_v3_child_heavy["hex"]])
    255+        assert result['package_msg'] != 'success'
    


    instagibbs commented at 3:35 pm on December 12, 2023:
    let’s check the error messages here and elsewhere to prevent test regressions: 'error': 'v3-nonstandard, tx 1399d1fad241d2da8057b0eb29524e75b8415dedc26a536d8a0cfa72f49f9747 would have too many ancestors'}

    glozow commented at 10:51 am on December 13, 2023:
    added error string checks everywhere
  59. in test/functional/mempool_accept_v3.py:285 in e649551b12 outdated
    280+        # tx_child_violator is our transaction "C"; create it:
    281+        tx_child_violator = self.wallet.create_self_transfer_multi(utxos_to_spend=[tx_0fee_parent["new_utxo"]], version=3)
    282+
    283+        # submitpackage(B, C) should fail
    284+        result = node.submitpackage([tx_0fee_parent["hex"], tx_child_violator["hex"]])
    285+        assert result['package_msg'] != 'success'
    


    instagibbs commented at 3:36 pm on December 12, 2023:
    let’s check the error messages here and elsewhere to prevent test regressions

    glozow commented at 10:52 am on December 13, 2023:
    done
  60. glozow force-pushed on Dec 12, 2023
  61. glozow commented at 3:53 pm on December 12, 2023: member
    Fixed the fuzzer crashes in #28948 (comment) and #28948 (comment) (thanks)
  62. in src/policy/v3_policy.h:52 in f7a3e96eee outdated
    37+ * If a V3 tx has V3 ancestors,
    38+ * 1. The tx vsize must be within V3_CHILD_MAX_VSIZE.
    39+ *
    40+ * @returns an error string if any V3 rule was violated, otherwise std::nullopt.
    41+ */
    42+std::optional<std::string> ApplyV3Rules(const CTransactionRef& ptx,
    


    instagibbs commented at 4:08 pm on December 12, 2023:
    Now that V3 checks are split into 4 different areas, I think we need to brainstorm a better way of naming this function, and describing exactly what it’s not covering. Encapsulating the final check with something very similar will probably help with my pattern matching, a la CheckV3Inheritance?

    glozow commented at 10:51 am on December 13, 2023:
    Ok there are now only 2 places where v3 is checked: package sanity checks, and then ApplyV3Rules for each tx. I think the sanity check is very useful, as there is some heavy-ish computation done to calculate each transaction’s ancestor set (including in-package and their mempool ancestors) to pass on to ApplyV3Rules. I’ve moved inheritance checks into ApplyV3Rules and made all errors “v3-rule-violation” with more details in the debug string.
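A rough Python model of the consolidated per-transaction check (constants from the PR description; hypothetical signature — the real ApplyV3Rules is C++):

```python
V3_ANCESTOR_LIMIT = 2       # tx plus at most one unconfirmed ancestor
V3_CHILD_MAX_VSIZE = 1000   # vB cap on a v3 child, per the PR description

def apply_v3_rules(version, vsize, ancestor_versions):
    """Toy model: 'ancestor_versions' lists the versions of all unconfirmed
    ancestors, in-package and in-mempool combined. Returns an error string
    on a v3 rule violation, else None."""
    if version == 3:
        if len(ancestor_versions) + 1 > V3_ANCESTOR_LIMIT:
            return "v3-rule-violation: too many ancestors"
        if ancestor_versions and vsize > V3_CHILD_MAX_VSIZE:
            return "v3-rule-violation: v3 child too big"
        if any(v != 3 for v in ancestor_versions):
            return "v3-rule-violation: v3 tx has non-v3 ancestor"
    elif any(v == 3 for v in ancestor_versions):
        return "v3-rule-violation: non-v3 tx spends v3 ancestor"
    return None
```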

    instagibbs commented at 4:34 pm on January 29, 2024:
    0std::optional<std::string> SingleV3Rules(const CTransactionRef& ptx,
    

    can leave a comment saying that it can also be called on transactions in packages, as an optimization.


    glozow commented at 11:51 am on January 30, 2024:
    I updated the comment to say “not strictly necessary” for packages.
  63. in src/policy/v3_policy.cpp:87 in f7a3e96eee outdated
    82+        }
    83+    }
    84+
    85+    // Sanity check that package itself obeys ancestor/descendant limits. Assumes that this is
    86+    // ancestor package-shaped.
    87+    if (!package.empty() && package.size() > V3_CHILD_MAX_VSIZE && package.back()->nVersion == 3) {
    


    instagibbs commented at 4:52 pm on December 12, 2023:
    V3_CHILD_MAX_VSIZE is the wrong thing, obviously, but it does need to be checked, otherwise a package of size two doesn’t fail when the child is too big, a la #28948 (comment)

    glozow commented at 10:50 am on December 13, 2023:
    yikes, fixed
  64. dergoegge commented at 4:58 pm on December 12, 2023: member
    0$ echo "/v//tk/aAYxsz8/Pz8/PAAAAAAB2AAYAAADPz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz8/Pz88AAABAAgAAAL35AAAABwIBAIwAAABA0gH5ACMAAAAAAAAAAAAAAAAAAAAADgAAAAAAAAAAAAFBWwAACD/gPp66urq6uro=" | base64 --decode > tx_package_eval-e9f61e34e32c669558b51daaec0c5c3780377b37.crash
    1$ FUZZ=tx_package_eval ./src/test/fuzz/fuzz tx_package_eval-e9f61e34e32c669558b51daaec0c5c3780377b37.crash
    2test/util/txmempool.cpp:132 CheckMempoolV3Invariants: Assertion `entry.GetTxSize() <= V3_CHILD_MAX_VSIZE' failed.
    
  65. instagibbs changes_requested
  66. instagibbs commented at 5:01 pm on December 12, 2023: member

    test_v3_ancestors_package checked two failing conditions at once, which allowed one condition (too-heavy child) to slip through

     0diff --git a/test/functional/mempool_accept_v3.py b/test/functional/mempool_accept_v3.py
     1index 837846970f..ee3cdd4cd4 100755
     2--- a/test/functional/mempool_accept_v3.py
     3+++ b/test/functional/mempool_accept_v3.py
     4@@ -222,58 +222,69 @@ class MempoolAcceptV3(BitcoinTestFramework):
     5         node = self.nodes[0]
     6         self.log.info("Test that below-min-relay-feerate transactions are removed in RBF")
     7         tx_0fee_parent = self.wallet.create_self_transfer(fee=0, fee_rate=0, version=3)
     8         utxo_confirmed = self.wallet.get_utxo()
     9         tx_child_replacee = self.wallet.create_self_transfer_multi(utxos_to_spend=[tx_0fee_parent["new_utxo"], utxo_confirmed], version=3)
    10         node.submitpackage([tx_0fee_parent["hex"], tx_child_replacee["hex"]])
    11         self.check_mempool([tx_0fee_parent["txid"], tx_child_replacee["txid"]])
    12         tx_replacer = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=utxo_confirmed, fee_rate=DEFAULT_FEE * 10)
     13         self.check_mempool([tx_replacer["txid"]])

         @cleanup
    14     def test_v3_ancestors_package(self):
    15         self.log.info("Test that v3 ancestor limits are checked within the package")
    16         node = self.nodes[0]
    17         self.restart_node(0, extra_args=["-datacarriersize=1000"])
    18         tx_v3_parent_normal = self.wallet.create_self_transfer(
    19             fee_rate=0,
    20             target_weight=4004,
    21             version=3
    22         )
    23         tx_v3_parent_2_normal = self.wallet.create_self_transfer(
    24             fee_rate=0,
    25             target_weight=4004,
    26             version=3
    27         )
    28-        tx_v3_child_heavy = self.wallet.create_self_transfer_multi(
    29+        tx_v3_child_multiparent = self.wallet.create_self_transfer_multi(
    30             utxos_to_spend=[tx_v3_parent_normal["new_utxo"], tx_v3_parent_2_normal["new_utxo"]],
    31+            target_weight=4000,
    32+            fee_per_output=10000,
    33+            version=3
    34+        )
    35+        tx_v3_child_heavy = self.wallet.create_self_transfer_multi(
    36+            utxos_to_spend=[tx_v3_parent_normal["new_utxo"]],
    37             target_weight=4004,
    38             fee_per_output=10000,
    39             version=3
    40         )
    41+
    42         assert_equal(node.getrawmempool(), [])
    43-        result = node.submitpackage([tx_v3_parent_normal["hex"], tx_v3_parent_2_normal["hex"], tx_v3_child_heavy["hex"]])
    44+        result = node.submitpackage([tx_v3_parent_normal["hex"], tx_v3_parent_2_normal["hex"], tx_v3_child_multiparent["hex"]])
    45+        assert result['package_msg'] != 'success'
    46+        self.check_mempool([])
    47+
    48+        result = node.submitpackage([tx_v3_parent_normal["hex"], tx_v3_child_heavy["hex"]])
    49         assert result['package_msg'] != 'success'
    50         self.check_mempool([])

        @cleanup
    51     def test_v3_ancestors_package_and_mempool(self):
    52         """
    53         A v3 transaction in a package cannot have 2 v3 parents.
    54         Test that if we have a transaction graph A -> B -> C, where A, B, C are
    55         all v3 transactions, that we cannot use submitpackage to get the
    56         transactions all into the mempool.
    57 
    58         Verify, in particular, that if A is already in the mempool, then
    59         submitpackage(B, C) will fail.
    60         """
    61         node = self.nodes[0]
    62         self.log.info("Test that v3 ancestor limits include transactions within the package and all in-mempool ancestors")
    63         # This is our transaction "A":
    64         tx_in_mempool = self.wallet.send_self_transfer(from_node=node, version=3)
    65 
    66         # Verify that A is in the mempool
    67         self.check_mempool([tx_in_mempool["txid"]])
    68 
    69         # tx_0fee_parent is our transaction "B"; just create it.
    70         tx_0fee_parent = self.wallet.create_self_transfer(utxo_to_spend=tx_in_mempool["new_utxo"], fee=0, fee_rate=0, version=3)
    
  67. glozow force-pushed on Dec 13, 2023
  68. glozow commented at 11:22 am on December 13, 2023: member

    #28948#pullrequestreview-1777711045

    Added this test but changed the tx_v3_child_heavy test assuming that it wasn’t supposed to be submitted with tx_v3_parent_2_normal (it fails package topo checks then), lmk if you had something else in mind? EDIT: nvm I misread the diff

  69. ilimtv approved
  70. glozow force-pushed on Dec 13, 2023
  71. glozow force-pushed on Dec 13, 2023
  72. DrahtBot added the label CI failed on Dec 13, 2023
  73. in src/test/util/txmempool.cpp:135 in a864ce33b0 outdated
    130+            // If this transaction has at least 1 ancestor, it's a "child" and has restricted weight.
    131+            if (entry.GetCountWithAncestors() > 1) {
    132+                Assert(entry.GetTxSize() <= V3_CHILD_MAX_VSIZE);
    133+            }
    134+        }
    135+        // Transactions with fees of 0 or lower should be proactively trimmed.
    


    instagibbs commented at 4:31 pm on December 13, 2023:

    this only works if prioritization happens before mempool acceptance

    I’d just set check_fees to false if Prio had just been called before calling this, to stay simple.


    glozow commented at 9:08 am on December 14, 2023:
    Yeah… slightly annoying that we can’t check this until we’re sure a trim happened.

    glozow commented at 2:18 pm on December 14, 2023:

    Just saw your edit

    I’d just set check_fees to false if Prio had just been called before calling this, to stay simple.

    I’m not sure if this helps. When debugging some previous fuzzer crashes, it would often be that a prioritisation (or a bypass_limits) happened in a previous iteration, and then nothing was submitted in this iteration.

    fwiw my local fuzzer seems happy right now, with only calling CheckMempoolV3Invariants when accepted.


    glozow commented at 1:49 pm on December 15, 2023:
    ^We figured this out offline (see passed vs accepted), thanks
  74. in src/policy/v3_policy.h:88 in a864ce33b0 outdated
    58+std::optional<std::tuple<Wtxid, Wtxid, bool>> CheckV3Inheritance(const Package& package);
    59+
    60+/** Check inheritance of v3 and ancestor/descendant limits within a package (this assumes that all
    61+ * transactions are in the ancestor set of the last tx, which is true if IsChildWithParents). This
    62+ * check does not fully check all v3 rules, but allows us to quit early before accessing mempool.  */
    63+std::optional<std::string> PackageV3SanityChecks(const Package& package);
    


    instagibbs commented at 5:48 pm on December 13, 2023:

    Might be worth stressing that this is the only place where in-package inheritance is checked

    “Necessary but insufficient”


    glozow commented at 9:53 am on December 14, 2023:
    Rewrote the docs in this file, I think it’s much more clear now
  75. in src/test/util/txmempool.cpp:136 in a864ce33b0 outdated
    128+            Assert(entry.GetCountWithDescendants() <= V3_DESCENDANT_LIMIT);
    129+            Assert(entry.GetCountWithAncestors() <= V3_ANCESTOR_LIMIT);
    130+            // If this transaction has at least 1 ancestor, it's a "child" and has restricted weight.
    131+            if (entry.GetCountWithAncestors() > 1) {
    132+                Assert(entry.GetTxSize() <= V3_CHILD_MAX_VSIZE);
    133+            }
    


    instagibbs commented at 5:54 pm on December 13, 2023:
    0                const auto& parents = entry.GetMemPoolParentsConst();
    1                Assert(parents.begin()->get().GetTx().nVersion == 3);
    2            }
    3            if (entry.GetCountWithDescendants() > 1) {
    4                const auto& children = entry.GetMemPoolChildrenConst();
    5                Assert(children.begin()->get().GetTx().nVersion == 3);
    6            }
    

    to catch inheritance failures


    glozow commented at 9:53 am on December 14, 2023:
    Added the inheritance check, but the second half is redundant with the check we already do on the child, no?

    instagibbs commented at 2:30 pm on December 14, 2023:
    If parent is v3 but child v2, we wouldn’t check, but you added the extra else if which catches that. lgtm
  76. in test/functional/mempool_accept_v3.py:290 in a864ce33b0 outdated
    293+        # submitpackage(B, C) should fail
    294+        result = node.submitpackage([tx_0fee_parent["hex"], tx_child_violator["hex"]])
    295+        assert_equal(result['package_msg'], "transaction failed")
    296+        assert_equal(result['tx-results'][tx_child_violator['wtxid']]['error'], f"v3-rule-violation, tx {tx_child_violator['wtxid']} would have too many ancestors")
    297+        self.check_mempool([tx_in_mempool["txid"]])
    298+
    


    instagibbs commented at 5:57 pm on December 13, 2023:

    no straightforward coverage of this I think, so:

     0    def test_v3_package_inheritence(self):
     1        self.log.info("Test that v3 inheritence is checked within package")
     2        node = self.nodes[0]
     3        self.restart_node(0, extra_args=["-datacarriersize=1000"])
     4        tx_v3_parent = self.wallet.create_self_transfer(
     5            fee_rate=0,
     6            version=3
     7        )
     8        tx_v2_child = self.wallet.create_self_transfer(
     9            utxo_to_spend=tx_v3_parent["new_utxo"],
    10            fee=10000,
    11            version=2
    12        )
    13
    14        assert_equal(node.getrawmempool(), [])
    15        result = node.submitpackage([tx_v3_parent["hex"], tx_v2_child["hex"]])
    16        assert_equal(result['package_msg'], "v3-violation")
    17        self.check_mempool([])
    

    glozow commented at 9:53 am on December 14, 2023:
    Added, thanks
  77. in test/functional/mempool_accept_v3.py:411 in a864ce33b0 outdated
    307+        self.test_v3_bip125()
    308+        self.test_v3_reorg()
    309+        self.test_nondefault_package_limits()
    310+        self.test_fee_dependency_replacements()
    311+        self.test_v3_ancestors_package()
    312+        self.test_v3_ancestors_package_and_mempool()
    


    instagibbs commented at 5:58 pm on December 13, 2023:
    0        self.test_v3_ancestors_package_and_mempool()
    1        self.test_v3_package_inheritence()
    
  78. instagibbs commented at 6:06 pm on December 13, 2023: member
    more plausibly correct now, fuzzer is passing if the fee check is disabled
  79. in src/policy/v3_policy.cpp:61 in 6724b08988 outdated
    56+}
    57+
    58+std::optional<std::string> PackageV3SanityChecks(const Package& package)
    59+{
    60+    // Check inheritance rules within package.
    61+    if (const auto inheritance_error{CheckV3Inheritance(package)}) {
    


    instagibbs commented at 8:11 pm on December 13, 2023:

    can’t we toss CheckV3Inheritance entirely and just use std::all_of check, since we’ve already checked that there exists one v3 already?

    e.g.,

    0    const bool all_v3{std::all_of(package.cbegin(), package.cend(), [](const auto& tx){ return tx->nVersion == 3; })};
    1    // Check inheritance rules within package.
    2    if (!all_v3) {
    3        // We already checked there was one at least
    4        Assume(std::any_of(package.cbegin(), package.cend(), [](const auto& tx){ return tx->nVersion == 3; }));
    5        return strprintf("txs in package are not all v3");
    6    }
    

    glozow commented at 4:17 pm on December 14, 2023:

    Er, the ugly thing is testmempoolaccept… since AcceptMultipleTransactions doesn’t actually enforce the child-with-parents thing, we can have this right now:

     0    def test_v3_in_testmempoolaccept(self):
     1        node = self.nodes[0]
     2
     3        self.log.info("Test that v3 inheritance is accurately assessed in testmempoolaccept")
     4        tx_v2 = self.wallet.create_self_transfer(version=2)
     5        tx_v2_from_v2 = self.wallet.create_self_transfer(utxo_to_spend=tx_v2["new_utxo"], version=2)
     6        tx_v3_from_v2 = self.wallet.create_self_transfer(utxo_to_spend=tx_v2["new_utxo"], version=3)
     7        tx_v3 = self.wallet.create_self_transfer(version=3)
     8        tx_v2_from_v3 = self.wallet.create_self_transfer(utxo_to_spend=tx_v3["new_utxo"], version=2)
     9        tx_v3_from_v3 = self.wallet.create_self_transfer(utxo_to_spend=tx_v3["new_utxo"], version=3)
    10
    11        # testmempoolaccept paths don't require child-with-parents topology. Ensure that topology
    12        # assumptions aren't made in inheritance checks.
    13        test_accept_v2_and_v3 = node.testmempoolaccept([tx_v2["hex"], tx_v3["hex"]])
    14        assert all([result["allowed"] for result in test_accept_v2_and_v3])
    15
    16        test_accept_v3_from_v2 = node.testmempoolaccept([tx_v2["hex"], tx_v3_from_v2["hex"]])
    17        assert all([result["package-error"] == "v3-violation" for result in test_accept_v3_from_v2])
    18
    19        test_accept_v2_from_v3 = node.testmempoolaccept([tx_v3["hex"], tx_v2_from_v3["hex"]])
    20        assert all([result["package-error"] == "v3-violation" for result in test_accept_v2_from_v3])
    21
    22        # Submit the 2 parents, and the children should pass.
    23        node.sendrawtransaction(tx_v2["hex"])
    24        node.sendrawtransaction(tx_v3["hex"])
    25        test_accept_v2_and_v3_children = node.testmempoolaccept([tx_v2_from_v2["hex"], tx_v3_from_v3["hex"]])
    26        assert all([result["allowed"] for result in test_accept_v2_and_v3_children])
    

    But it does blindly check the number of transactions in the package, so maybe you’re right and we should just do the simple inheritance check?


    instagibbs commented at 10:07 pm on December 14, 2023:
    this looks like a good test to add to tease out the differences in code paths

    glozow commented at 12:28 pm on December 15, 2023:
    Changed PackageV3Checks to properly look at ancestor and descendant sets within package, so it handles packages within testmempoolaccepts now
  80. in src/policy/v3_policy.cpp:72 in 6724b08988 outdated
    67+        }
    68+    }
    69+
    70+    // Sanity check that package itself obeys ancestor/descendant limits. Assumes that this is
    71+    // ancestor package-shaped. This check is not complete as we have not seen in-mempool ancestors yet.
    72+    if (!package.empty() && package.size() > V3_ANCESTOR_LIMIT && package.back()->nVersion == 3) {
    


    instagibbs commented at 8:12 pm on December 13, 2023:
    0    if (package.size() > V3_ANCESTOR_LIMIT) {
    

    since we already know the whole package is v3


    glozow commented at 12:28 pm on December 15, 2023:
    This line ended up going away since I wrote slightly smarter ancestor limit checking in PackageV3Checks, marking as resolved
  81. glozow force-pushed on Dec 14, 2023
  82. glozow force-pushed on Dec 14, 2023
  83. DrahtBot removed the label CI failed on Dec 14, 2023
  84. in src/validation.cpp:1337 in 53c6871dfd outdated
    1329+
    1330     LOCK(m_pool.cs);
    1331 
    1332     // Do all PreChecks first and fail fast to avoid running expensive script checks when unnecessary.
    1333     for (Workspace& ws : workspaces) {
    1334+        // This process is O(n^2) in the number of transactions, so only do it when v3 rules are
    


    instagibbs commented at 4:04 pm on December 14, 2023:
    this block is worth encapsulating and adding unit tests to ensure the populated sets ws.m_num_in_package_ancestors and ws.m_ancestors_of_in_package_ancestors match expected

    instagibbs commented at 3:25 pm on December 15, 2023:

    comment is misleading: computational cost is reduced when the package is a connected component, i.e., up to 2 v3 transactions total

    it’s less restrained for some testmempoolaccept use-cases


    glozow commented at 5:04 pm on December 15, 2023:
    Not 100% sure how to encapsulate this exactly since Workspace is private to MemPoolAccept, which is private to validation. We have some tests that the ancestor sets are correctly calculated. Maybe we add a helper function? Or I can add some more assumes.

    glozow commented at 6:06 pm on December 15, 2023:
    Edited comment
  85. in src/policy/packages.h:93 in da18612d68 outdated
    87@@ -88,4 +88,8 @@ bool IsChildWithParents(const Package& package);
    88  * other (the package is a "tree").
    89  */
    90 bool IsChildWithParentsTree(const Package& package);
    91+
    92+/** Construct a map from the txid of a transaction to the txids of its in-package ancestor set,
    93+ * including itself. Input must be IsConsistentPackage, otherwise this returns an empty map. */
    


    ismaelsadeeq commented at 11:26 pm on December 14, 2023:

    IsConsistentPackage in param is Package?

    0/** Construct a map from the txid of a transaction to the txids of its in-package ancestor set,
    1 * including itself. Package must be IsConsistentPackage, otherwise this
    2 * returns an empty map. */
    

    glozow commented at 12:29 pm on December 15, 2023:
    This is gone now so marking as resolved
  86. in src/policy/packages.cpp:164 in da18612d68 outdated
    159+           std::map<Txid, std::set<Txid>>& ancestor_set_map)
    160+{
    161+    const Txid& curr_txid = curr_tx->GetHash();
    162+    // Already visited? Return now.
    163+    auto curr_result_iter = ancestor_set_map.find(curr_txid);
    164+    if (curr_result_iter == ancestor_set_map.end()) return;
    


    ismaelsadeeq commented at 11:38 pm on December 14, 2023:

    if (curr_result_iter == ancestor_set_map.end()) return; is redundant: the map has already been populated with all package transactions, so the statement will always be false. Assert for that instead, and return early when the transaction's in-package ancestor set is not empty, which means it has already been visited:

    0    auto curr_result_iter = ancestor_set_map.find(curr_txid);
    1    // All transactions were populated to ancestor_set_map this should never happen
    2    assert(curr_result_iter != ancestor_set_map.end());
    3    // Already visited? Return now.
    4    if (!curr_result_iter->second.empty()) return;
    

    glozow commented at 12:27 pm on December 15, 2023:
    Yep good catch! I’ve now built this into PackageV3Checks instead of a separate function, so marking this as resolved.
  87. in src/policy/packages.cpp:216 in da18612d68 outdated
    211+        std::set<Txid> empty;
    212+        result.emplace(tx->GetHash(), empty);
    213+    }
    214+
    215+    // For each tx from beginning to end, populate the ancestor set map. This does a recursive DFS
    216+    // by tracing input prevouts; best-case runtime is when the list is already sorted.
    


    ismaelsadeeq commented at 11:41 pm on December 14, 2023:
    Packages are always topologically sorted, so I'm assuming this will always be the best case, no?

    glozow commented at 12:29 pm on December 15, 2023:
    Yes. In the new function I’ve added an Assume(IsTopoSortedPackage(package))
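    The ancestor-set construction discussed in this thread can be sketched as follows. This is illustrative Python, not the PR's C++; the `(txid, prevout_txids)` tuples are a hypothetical stand-in for `CTransactionRef`:

```python
# Sketch: build the map from txid to in-package ancestor set over a
# topologically sorted package. Because parents precede children, each
# parent's set is complete before any child reads it, so one forward
# pass suffices (the "best case" of the recursive DFS).
def in_package_ancestor_sets(package):
    """package: list of (txid, [txids of inputs' prevouts]), topo-sorted."""
    ancestor_sets = {}
    for txid, prevouts in package:
        ancestors = {txid}  # every ancestor set includes the tx itself
        for parent in prevouts:
            # Only in-package parents are relevant here; confirmed or
            # in-mempool ancestors are handled elsewhere.
            if parent in ancestor_sets:
                ancestors |= ancestor_sets[parent]
        ancestor_sets[txid] = ancestors
    return ancestor_sets

# A three-transaction chain A <- B <- C:
sets = in_package_ancestor_sets([("A", []), ("B", ["A"]), ("C", ["B"])])
assert sets["C"] == {"A", "B", "C"}
```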
  88. bitcoin deleted a comment on Dec 15, 2023
  89. glozow force-pushed on Dec 15, 2023
  90. DrahtBot added the label CI failed on Dec 15, 2023
  91. glozow force-pushed on Dec 15, 2023
  92. glozow force-pushed on Dec 15, 2023
  93. in src/validation.cpp:1339 in 2e2ea9b4ca outdated
    1334 
    1335     // Do all PreChecks first and fail fast to avoid running expensive script checks when unnecessary.
    1336     for (Workspace& ws : workspaces) {
    1337+        // This process is O(n^2) in the number of transactions, so only do it when v3 rules are
    1338+        // applicable. PackageV3Checks helps significantly limit the computation here.
    1339+        if (check_v3_rules && ws.m_ptx->nVersion == 3) {
    


    instagibbs commented at 3:24 pm on December 15, 2023:
    0        if (ws.m_ptx->nVersion == 3) {
    

    glozow commented at 6:06 pm on December 15, 2023:
    taken
  94. in src/policy/v3_policy.cpp:19 in 2e2ea9b4ca outdated
    14+#include <numeric>
    15+#include <vector>
    16+
    17+util::Result<std::map<Txid, std::set<Txid>>> PackageV3Checks(const Package& package)
    18+{
    19+    // Keep track of in-package ancestors and descendants as we find them so we can catch topology violations.
    


    instagibbs commented at 3:28 pm on December 15, 2023:
    would be nice to explain whether the key is a v3 tx and what the ancestor set is, for name-reading purposes
  95. in src/policy/v3_policy.cpp:76 in 2e2ea9b4ca outdated
    71+                        }
    72+
    73+                        // A v3 transaction with unconfirmed ancestors must be within
    74+                        // V3_CHILD_MAX_VSIZE. This check is not complete as we have not calculated
    75+                        // the sigop cost, which can increase the virtual size.
    76+                        const int64_t vsize = GetVirtualTransactionSize(*tx);
    


    instagibbs commented at 3:49 pm on December 15, 2023:
    0                        const int64_t vsize = GetVirtualTransactionSize(*tx, /*nSigOpCost=*/0, /*bytes_per_sigop=*/0);
    

    glozow commented at 9:33 am on December 18, 2023:
    Nice :+1:
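    For context on why zeroing `nSigOpCost` turns this into a weight-only check: to my reading of Core's policy code, `GetVirtualTransactionSize` takes the larger of the weight-based size and a sigop-based size. A rough Python sketch (treat the exact scaling as my interpretation, not an authoritative restatement):

```python
WITNESS_SCALE_FACTOR = 4

def virtual_size(weight, sigop_cost, bytes_per_sigop):
    # vsize is weight/4 rounded up, unless the sigop-adjusted size
    # (sigop_cost * bytes_per_sigop, in weight units) is larger.
    adjusted = max(weight, sigop_cost * bytes_per_sigop)
    return (adjusted + WITNESS_SCALE_FACTOR - 1) // WITNESS_SCALE_FACTOR

# With sigop cost zeroed (as in the suggestion), only weight matters:
assert virtual_size(4003, 0, 0) == 1001
# A sigop-heavy tx's vsize can exceed its weight-based size:
assert virtual_size(400, 80, 20) == 400
```

This is why the earlier check "is not complete": a tx can pass the weight-only bound yet exceed V3_CHILD_MAX_VSIZE once sigops are counted.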
  96. in src/validation.cpp:1319 in 2e2ea9b4ca outdated
    1310@@ -1288,10 +1311,56 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
    1311                    [](const auto& tx) { return Workspace(tx); });
    1312     std::map<uint256, MempoolAcceptResult> results;
    1313 
    1314+    // Not a complete check of v3 rules, helps us detect failures early.
    1315+    // Some data structures are only necessary if we are checking v3 rules; skip this work if those
    1316+    // rules are not applicable.
    1317+    const bool check_v3_rules{std::any_of(txns.cbegin(), txns.cend(), [](const auto& tx){ return tx->nVersion == 3; })};
    1318+
    1319+    // Map from txid of a transaction to its in-package ancestor set. Not MemPoolAccept-wide
    


    instagibbs commented at 4:00 pm on December 15, 2023:
    0    // Map from txid of a V3 transaction to its in-package ancestor set. Not MemPoolAccept-wide
    
  97. DrahtBot removed the label CI failed on Dec 15, 2023
  98. in src/validation.cpp:1354 in 2e2ea9b4ca outdated
    1349+                // that it is the entire list of txns.
    1350+                ws.m_num_in_package_ancestors = txns.size() - 1;
    1351+            }
    1352+
    1353+            // If this tx is one of our ancestors, add all of its in-mempool ancestors to ours.
    1354+            // If we are treating all transactions as our ancestor, always add.
    


    instagibbs commented at 4:16 pm on December 15, 2023:

    Anything that’s not already PreCheck‘ed will necessarily have empty Workspace::m_ancestors sets, so there will be a mismatch in the over-estimates.

    (not sure it matters, just noting)


    glozow commented at 6:07 pm on December 15, 2023:
    I’ve changed this to just be preceding transactions instead of all of them, which also matches with this behavior. Added a comment
  99. in src/validation.cpp:1347 in 2e2ea9b4ca outdated
    1342+            const bool have_ancestor_set{ancestor_set_iter != in_package_ancestors.end() && ancestor_set_iter->second.size() >= 1};
    1343+            if (Assume(have_ancestor_set)) {
    1344+                // If there are in-package ancestors, update m_num_in_package_ancestors.
    1345+                // Sets in in_package_ancestors include the tx itself, so subtract 1.
    1346+                ws.m_num_in_package_ancestors = ancestor_set_iter->second.size() - 1;
    1347+            } else if (!txns.empty()) {
    


    instagibbs commented at 4:25 pm on December 15, 2023:
    txns cannot be empty

    glozow commented at 6:06 pm on December 15, 2023:
    paranoia deleted
  100. in src/policy/v3_policy.h:59 in 2e2ea9b4ca outdated
    54+ * @param[in]   vsize               The sigop-adjusted virtual size of ptx.
    55+ * @returns an error string if any v3 rule was violated, otherwise std::nullopt.
    56+ */
    57+std::optional<std::string> ApplyV3Rules(const CTransactionRef& ptx,
    58+                                        const CTxMemPool::setEntries& ancestors,
    59+                                        unsigned int num_other_ancestors,
    


    instagibbs commented at 4:32 pm on December 15, 2023:
    0                                        unsigned int num_in_pkg_ancestors,
    
  101. in src/policy/v3_policy.h:58 in 2e2ea9b4ca outdated
    53+ *                                  would already be a violation of V3_ANCESTOR_LIMIT.
    54+ * @param[in]   vsize               The sigop-adjusted virtual size of ptx.
    55+ * @returns an error string if any v3 rule was violated, otherwise std::nullopt.
    56+ */
    57+std::optional<std::string> ApplyV3Rules(const CTransactionRef& ptx,
    58+                                        const CTxMemPool::setEntries& ancestors,
    


    instagibbs commented at 4:32 pm on December 15, 2023:
    0                                        const CTxMemPool::setEntries& mempool_ancestors,
    
  102. in src/policy/v3_policy.h:70 in 2e2ea9b4ca outdated
    65+ * Check the following rules for transactions within the package:
    66+ * 1. A v3 tx must only have v3 unconfirmed ancestors.
    67+ * 2. A non-v3 tx must only have non-v3 unconfirmed ancestors.
    68+ * 3. A v3's ancestor set, including itself, must be within V3_ANCESTOR_LIMIT.
    69+ * 4. A v3's descendant set, including itself, must be within V3_DESCENDANT_LIMIT.
    70+ * 5. If a v3 tx has any unconfirmed ancestors, the tx's sigop-adjusted vsize must be within
    


    instagibbs commented at 4:48 pm on December 15, 2023:
    this function explicitly doesn’t check sigops adjusted vsize
  103. in src/policy/v3_policy.h:73 in 2e2ea9b4ca outdated
    68+ * 3. A v3's ancestor set, including itself, must be within V3_ANCESTOR_LIMIT.
    69+ * 4. A v3's descendant set, including itself, must be within V3_DESCENDANT_LIMIT.
    70+ * 5. If a v3 tx has any unconfirmed ancestors, the tx's sigop-adjusted vsize must be within
    71+ * V3_CHILD_MAX_VSIZE.
    72+ *
    73+ * Important: this function is necessary but insufficient to enforce these rules.  ApplyV3Rules must
    


    instagibbs commented at 4:50 pm on December 15, 2023:

    I’d really like it to be crystal clear what part of this function is necessary versus optimization.

    i.e., can we say something like this:

    " If PackageV3Checks() passes we know that each connected component does not violate V3 inheritance or topology constraints within the package itself."

  104. in test/functional/mempool_accept_v3.py:247 in 2e2ea9b4ca outdated
    242+            target_weight=4004,
    243+            version=3
    244+        )
    245+        tx_v3_child_multiparent = self.wallet.create_self_transfer_multi(
    246+            utxos_to_spend=[tx_v3_parent_normal["new_utxo"], tx_v3_parent_2_normal["new_utxo"]],
    247+            target_weight=4000,
    


    instagibbs commented at 4:59 pm on December 15, 2023:

    I think sometimes this can over-shoot, meaning we might hide the actual check we’re shooting for

    we don’t get a great error here either so it’s hard to check precisely; should we pass the whole reason back in results?


    glozow commented at 6:09 pm on December 15, 2023:
    Removed. Now using package_state.ToString() for “package_msg” result
  105. glozow force-pushed on Dec 15, 2023
  106. in src/policy/v3_policy.h:83 in 1a2f7253cf outdated
    77+ * @returns If all checks pass, a map from each v3 transaction, by txid, to the txids of its
    78+ * in-package ancestor set. Every ancestor set includes the tx itself. If this passed, we know that
    79+ * each connected component does not violate V3 inheritance or topology constraints within the
    80+ * package itself. If any checks fail, an error string detailing what failed.
    81+ * */
    82+util::Result<std::map<Txid, std::set<Txid>>> PackageV3Checks(const Package& package);
    


    glozow commented at 6:14 pm on December 15, 2023:
    Tbh since the limit is 1 ancestor, we could just make this a std::map<Txid, Txid> and save some space.
  107. instagibbs commented at 8:44 pm on December 15, 2023: member

    Here’s a small patch to add sigops-adjusted coverage for v3 children, with an intentional bug added to ensure it could be hit.

    https://gist.github.com/instagibbs/c5cb0796ceec81f0374ae614f8cdab7f

  108. glozow force-pushed on Dec 18, 2023
  109. glozow commented at 9:33 am on December 18, 2023: member

    Here’s a small patch to add sigops-adjusted coverage for v3 children

    Nice, taken

  110. in src/policy/v3_policy.h:14 in de230512e6 outdated
     9+#include <policy/packages.h>
    10+#include <policy/policy.h>
    11+#include <primitives/transaction.h>
    12+#include <txmempool.h>
    13+#include <util/result.h>
    14+
    


    ismaelsadeeq commented at 2:30 pm on December 20, 2023:

    iwyu

    0#include <set>
    

    glozow commented at 9:45 am on January 8, 2024:
    taken
  111. in src/policy/v3_policy.h:71 in de230512e6 outdated
    65+ * Check the following rules for transactions within the package:
    66+ * 1. A v3 tx must only have v3 unconfirmed ancestors.
    67+ * 2. A non-v3 tx must only have non-v3 unconfirmed ancestors.
    68+ * 3. A v3's ancestor set, including itself, must be within V3_ANCESTOR_LIMIT.
    69+ * 4. A v3's descendant set, including itself, must be within V3_DESCENDANT_LIMIT.
    70+ * 5. If a v3 tx has any unconfirmed ancestors, its vsize must be within V3_CHILD_MAX_VSIZE.
    


    ismaelsadeeq commented at 2:37 pm on December 20, 2023:
    Rule 5 is not accurately checked here. IMO it's better to remove it completely and depend on ApplyV3Rules for an accurate child size limit check. The method might even be removed completely in the future, cc #28345

    glozow commented at 2:32 pm on December 21, 2023:
    I disagree, I think we should exit as early as possible when something is too big and avoid needing to do expensive things like load a bunch of UTXOs from disk. The next line says “Important: this function is insufficient to enforce” etc.
  112. in src/policy/v3_policy.cpp:25 in de230512e6 outdated
    20+    // Since we enforce v3 inheritance rules as we build this set, sets should consist exclusively
    21+    // of v3 transactions.
    22+    std::map<Txid, std::set<Txid>> v3_ancestor_sets;
    23+    // Map from txid of a v3 transaction to its descendant set, including itself.
    24+    std::map<Txid, std::set<Txid>> v3_descendant_sets;
    25+
    


    ismaelsadeeq commented at 2:40 pm on December 20, 2023:

    We can just return early if all the transactions are not v3

    0    const auto any_v3 = std::any_of(package.begin(), package.end(), [](const auto& tx){ return tx->nVersion == 3;});
    1    if (!any_v3) { return v3_ancestor_sets;}
    

    glozow commented at 2:34 pm on December 21, 2023:
    This function is documented as “Should not be called for non-v3 packages.”

    glozow commented at 9:45 am on January 8, 2024:
    I’ve added an Assume checking that there is a v3 tx in the package and exiting early if not
  113. in src/policy/v3_policy.cpp:99 in de230512e6 outdated
    94+        }
    95+    }
    96+
    97+    // Find violations of descendant limits. When a package is sorted, it's most efficient to build
    98+    // descendant sets by iterating in reverse order.
    99+    for (auto package_iter = package.rbegin(); package_iter != package.rend(); ++package_iter) {
    


    ismaelsadeeq commented at 2:45 pm on December 20, 2023:

    Nit: I find tx_iter more readable than package_iter, since it's a transaction iterator, not a package iterator

    0    for (auto tx_iter = package.rbegin(); tx_iter != package.rend(); ++tx_iter) {
    

    glozow commented at 9:45 am on January 8, 2024:
    taken
  114. in src/policy/v3_policy.cpp:100 in de230512e6 outdated
     95+    }
     96+
     97+    // Find violations of descendant limits. When a package is sorted, it's most efficient to build
     98+    // descendant sets by iterating in reverse order.
     99+    for (auto package_iter = package.rbegin(); package_iter != package.rend(); ++package_iter) {
    100+        if ((*package_iter)->nVersion == 3) {
    


    ismaelsadeeq commented at 2:49 pm on December 20, 2023:

    Nit: IMO it's more readable to define an intermediate txid variable here too, just like it's done in the for loop above

    0     const Txid& my_txid{(*package_iter)->GetHash()};
    1     if ((*package_iter)->nVersion == 3) {
    

    glozow commented at 9:46 am on January 8, 2024:
    taken
  115. in src/policy/v3_policy.cpp:109 in de230512e6 outdated
    104+            Assume(my_descendant_set.size() >= 1);
    105+
    106+            for (const auto& ancestor_txid : my_ancestor_set) {
    107+                // My descendants are also my ancestor's descendants.
    108+                auto& ancestors_descendant_set = v3_descendant_sets.at(ancestor_txid);
    109+                ancestors_descendant_set.insert(my_descendant_set.cbegin(), my_descendant_set.cend());
    


    ismaelsadeeq commented at 2:59 pm on December 20, 2023:

    We can just skip if the ancestor already has the transaction in its descendant set?

    0                if (ancestors_descendant_set.count((*tx_iter)->GetHash()) == 0 ) {
    1                    ancestors_descendant_set.insert(my_descendant_set.cbegin(), my_descendant_set.cend());
    

    glozow commented at 9:45 am on January 8, 2024:
    taken
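    The reverse-order descendant-set build discussed here can be sketched as follows (illustrative Python, not the PR's C++; `ancestor_sets` maps each txid to its in-package ancestor set including itself, as produced earlier in the function):

```python
def in_package_descendant_sets(sorted_txids, ancestor_sets):
    # Seed each descendant set with the tx itself.
    descendant_sets = {txid: {txid} for txid in sorted_txids}
    # Iterating the topologically sorted list in reverse means a tx's
    # own descendant set is complete before it is propagated upward.
    for txid in reversed(sorted_txids):
        my_descendants = descendant_sets[txid]
        for anc in ancestor_sets[txid]:
            if anc != txid:
                # My descendants are also my ancestors' descendants.
                descendant_sets[anc] |= my_descendants
    return descendant_sets

# Chain A <- B <- C, with ancestor sets as built before this loop:
anc = {"A": {"A"}, "B": {"A", "B"}, "C": {"A", "B", "C"}}
desc = in_package_descendant_sets(["A", "B", "C"], anc)
assert desc["A"] == {"A", "B", "C"}
```

The skip suggested in the review (checking whether the ancestor already contains this txid) would short-circuit the inner union when that work has already been done.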
  116. in src/test/util/txmempool.cpp:140 in 1dd62c3df4 outdated
    135+                Assert(parents.begin()->get().GetSharedTx()->nVersion == 3);
    136+            }
    137+        } else if (entry.GetCountWithAncestors() > 1) {
    138+            // All non-v3 transactions must only have non-v3 unconfirmed parents.
    139+            const auto& parents = entry.GetMemPoolParentsConst();
    140+            Assert(parents.begin()->get().GetSharedTx()->nVersion != 3);
    


    ismaelsadeeq commented at 2:08 pm on December 21, 2023:

    CheckMempoolV3Invariants docstring says we would verify

    any non-v3 tx must only have non-v3 ancestors

    We are only getting parents and checking the first parent?


    glozow commented at 2:37 pm on December 21, 2023:
    Ah you’re right, fixing

    glozow commented at 9:46 am on January 8, 2024:
    fixed
  117. in test/functional/mempool_accept_v3.py:28 in 7159f5383c outdated
    23+            func(self)
    24+        finally:
    25+            # Clear mempool
    26+            self.generate(self.nodes[0], 1)
    27+            # Reset config options
    28+            self.restart_node(0)
    


    ismaelsadeeq commented at 2:21 pm on December 21, 2023:

    After restarting here, we restart again at the beginning of the next test to set extra args. The dev notes say:

    Avoid stop-starting the nodes multiple times during the test if possible. A stop-start takes several seconds, so doing it several times blows up the runtime of the test.

    Could the wrapper accept the extra args as an argument and restart only at the beginning?

     0def cleanup(extra_args=None):
     1    def decorator(func):
     2        def wrapper(self):
     3            try:
     4                if extra_args is not None:
     5                    self.restart_node(0, extra_args=extra_args)
     6                func(self)
     7            finally:
     8                # Clear mempool again after test
     9                self.generate(self.nodes[0], 1)
    10        return wrapper
    11    return decorator
    12
    13    @cleanup(extra_args=["-datacarriersize=1000"])
    14    def test_v3_acceptance(self):
    

    glozow commented at 9:46 am on January 8, 2024:
    taken, thanks
  118. in src/validation.cpp:1377 in bb3c276651 outdated
    1373@@ -1304,6 +1374,7 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
    1374         // updated if package replace-by-fee is allowed in the future.
    1375         assert(!args.m_allow_replacement);
    1376         m_viewmempool.PackageAddTransaction(ws.m_ptx);
    1377+        ++num_prechecks_passed;
    


    ismaelsadeeq commented at 2:22 pm on December 21, 2023:
    why increment num_prechecks_passed here?

    glozow commented at 9:46 am on January 8, 2024:
    because otherwise it is always 0
  119. pox commented at 6:05 pm on January 1, 2024: contributor
    Peter Todd wrote at length about this proposal. It would be helpful if his points could be addressed (not necessarily in this github issue, of course). Specifically, his point about the effect on mining centralization ought to be addressed IMHO.
  120. darosior commented at 9:38 pm on January 1, 2024: member

    @pox in the post you link Peter Todd argues against v3 transactions in 3 points:

    1. v3 does not fix all pinning vectors;
    2. Using CPFP is less optimal than using RBF;
    3. “Anchor Outputs Are a Danger to Mining Decentralization” :tm:

    If we tune down the sensational part of the third point to “a non-incentive compatible relay policy is bad for mining decentralization” then all these 3 points are trivially true statements. Only, the conclusion of the blog post that “V3 transactions should not be shipped at the moment” does not follow from these.

    v3 does not fix all pinning vectors

    v3 aims to fix what’s commonly referred to as “Rule 3 pinning”. That is, currently an attacker can make their victim pay up to 500x the required fees to replace an attacker’s low feerate transaction (more here). By limiting the size of child of unconfirmed transactions from the current 100,000 to 1000 vbytes the “v3” proposal effectively decreases the harm of performing this attack by a 100x factor. It’s well known this still leaves room for an attacker to increase the fees paid by their victim by at most a 5x factor. For instance, the motivations section of the ephemeral anchors BIP touches on this:

    v3 transactions, which this proposal is built on, greatly mitigates [RBF] pinning vectors, where an adversary can vastly increase the required fees to replace a transaction that will not be mined quickly, or disallow replacement altogether. Depending on certain factors, this economic value can reach over 500 times increase in required fees to replace the attacker’s transaction. V3 transactions end up reducing this attack surface by 100 times, resulting in 5 times increase in fees using the same running example.

    In his post Peter explains how an attacker can make their channel partner pay 1.5x the fees necessary for inclusion of their commitment transaction in a block. Sure, it’s known already: v3 mitigates this, it doesn’t solve it. It’s also disingenuous to point out the remaining 1.5x factor left under v3 without putting it in perspective to the 100x improvement v3 brings.

    The v3 relay regime isn’t perfect, it’s just 100x better for this application than the current one. This is not a reason not to do v3.
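    The 100x arithmetic above can be made concrete: BIP125 rule 3 requires a replacement to pay at least the absolute fees of everything it evicts, so the maximum vsize an attacker can attach sets the fee floor the victim must clear. A back-of-the-envelope sketch (the attacker feerate is an illustrative assumption):

```python
ATTACKER_FEERATE = 3                # sat/vB, hypothetical low-feerate junk
CURRENT_DESCENDANT_LIMIT = 100_000  # vB of descendants attachable today
V3_CHILD_MAX_VSIZE = 1_000          # vB, the v3 child limit

# Rule 3: replacement fees must cover the absolute fees being evicted
# (plus an incremental amount, ignored here for simplicity).
floor_today = CURRENT_DESCENDANT_LIMIT * ATTACKER_FEERATE
floor_v3 = V3_CHILD_MAX_VSIZE * ATTACKER_FEERATE
assert floor_today // floor_v3 == 100
```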

    Using CPFP is less optimal than using RBF

    Sure. But making RBF actually incentive compatible is a much larger undertaking (see #27677) than proposing a new opt-in relay regime with more constrained rules. v3 already helps close a number of pinning vectors without having to first fix RBF.

    The whole point of fee-bumping is to not have to rely on previously-agreed-upon feerates, because you can’t predict what the next block feerate will be by the time you need to broadcast your transaction. At the moment, it’s not possible in an adversarial scenario to solely rely on RBF for this. One has to use CPFP (along with the carve-out).

    To avoid the use of CPFP Peter suggests in his blog post to get rid of this requirement and instead pre-sign a bunch of commitment + HTLC transactions in advance. This is less crazy than it sounds at first, but it does mean you are back to have to guess a feerate in advance above which your channel won’t be enforceable.

    It’s not clear that it’s a valid reason not to ship v3, which does not make this compromise.

    “Danger to decentralization”

    The post then goes on to explain how using the more expensive CPFP instead of the more efficient (but at the moment not incentive compatible) RBF is a “threat to mining decentralization” because it incentivizes Lightning users and big miners to cut an out-of-band deal. (Miner only includes the parent, gets paid out of band: cheaper for the Lightning user, more room to include other transactions in place of the child for the miner.)

    This is extremely misleading. It’s true, but the blame is incorrectly assigned to the v3 proposal. Truth is: it’s the status quo which provides an enormous incentive to cut out-of-band deals with miners. Not only because such tiny optimisations are already possible, but especially because of pinning vectors! If anything, the status quo is more of a handwavy “threat to mining decentralization” than a post-v3 world.

    By closing some pinning vectors, v3 is actually reducing the incentive for Lightning users to reach out to big miners directly. Therefore, this is not a reason not to ship v3. Rather the opposite.

  121. petertodd commented at 10:09 pm on January 1, 2024: contributor

    @darosior You keep on saying that RBF is not incentive compatible.

    Question: in my article, in the context of replace-by-fee for lightning commitment transactions, are you claiming that those replacements are not incentive compatible?

  122. petertodd commented at 10:20 pm on January 1, 2024: contributor

    @darosior

    The v3 relay regime isn’t perfect, it’s just 100x better for this application than the current one. This is not a reason not to do v3.

    Are you claiming that the anchor outputs in existing Lightning anchor channels are subject to transaction pinning?

  123. darosior commented at 10:37 pm on January 1, 2024: member

    @darosior You keep on saying that RBF is not incentive compatible. Question: in my article, in the context of replace-by-fee for lightning commitment transactions, are you claiming that those replacements are not incentive compatible?

    I think there’s been much confused talk about “incentive compatibility” in the past. It depends “what for” and “compared to what”.

    I’m not claiming that your proposal to pre-sign a bunch of transactions at different feerates is not incentive-compatible, just that it conveniently ignores one of the requirements which v3 fulfills: that you don’t have to choose a feerate in advance above which your channel isn’t enforceable. There are other cases where the current RBF rules would not be incentive compatible. For instance when another party can attach inputs to your transaction (ANYONECANPAY, ANYPREVOUT).

    Are you claiming that the anchor outputs in existing Lightning anchor channels are subject to transaction pinning?

    No, but that’s only because of the Lightning-specific CPFP carve-out. See also the second point of OP:

    Switching to a cluster-based mempool (see #27677 and #28676) requires the removal of CPFP carve out, which applications depend on. V3 + package RBF + ephemeral anchors + 1-parent-1-child package relay provides an intermediate solution.


    I feel like it’s not the right place to continue discussing the v3 proposal at a high-level. How about moving this discussion to https://delvingbitcoin.org/?

  124. petertodd commented at 11:08 pm on January 1, 2024: contributor

    @darosior You keep on saying that RBF is not incentive compatible. Question: in my article, in the context of replace-by-fee for lightning commitment transactions, are you claiming that those replacements are not incentive compatible?

    I think there’s been much confused talk about “incentive compatibility” in the past. It depends “what for” and “compared to what”.

    I’m not claiming that your proposal to pre-sign a bunch of transactions at different feerates is not incentive-compatible, just that it conveniently ignores one of the requirements which v3 fulfills: that you don’t have to choose a feerate in advance above which your channel isn’t enforceable. There are other cases where the current RBF rules would not be incentive compatible. For instance when another party can attach inputs to your transaction (ANYONECANPAY, ANYPREVOUT).

    Let me ask you a simple question: are we proposing to remove RBF? Because what you said above was: “But making RBF actually incentive compatible is a much larger undertaking (see #27677) than proposing a new opt-in relay regime with more constrained rules.” I suspect you’re not proposing to remove RBF, which makes the incentive compatibility of it in niche situations entirely irrelevant to my point that RBF is the better solution for this very simple problem of replacing a single transaction with another one of higher fees. Remember that we can easily avoid all pinning problems by using the anchor channels trick of making all outputs unspendable.

    Now, as for the choosing a feerate in advance problem, I explained fully in my article, showing how it’s quite easy to pre-sign every conceivable feerate because there just aren’t that many of them. In fact, you could easily pre-sign feerates all the way to making the channel uneconomical to close, because you’ve spent every cent towards fees. This is not a problem.

    The largest lightning channels out there are about 5BTC. Even if you were willing to bump fees, all the way to spending the entire 5BTC towards fees, you’d need just 68 different fee variants to go all the way from 1sat/vbyte to spending the full 5BTC on fees, with a 25% increase for each fee variant.

    Finally, as I explained, you can always design the protocol to fallback to existing anchor channels at the very highest fee rate. You only need to get to the point where you’ve met the minimum relay fee after all.
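    The step count in this argument grows only logarithmically in the fee range being spanned, which is why the ladder stays small. A quick sketch of the arithmetic (the 1,000,000x range below is an illustrative assumption, not a figure from the post):

```python
import math

def fee_variants(step_multiplier, fee_range_ratio):
    # Variants needed so consecutive pre-signed fees differ by
    # step_multiplier while covering fee_range_ratio end to end.
    return math.ceil(math.log(fee_range_ratio) / math.log(step_multiplier))

# Covering a 1,000,000x fee range in 25% increments:
print(fee_variants(1.25, 1_000_000))  # 62
```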

    Are you claiming that the anchor outputs in existing Lightning anchor channels are subject to transaction pinning?

    No, but that’s only because of the Lightning-specific CPFP carve-out. See also the second point of OP:

    So why did you mention it? What you said was “The v3 relay regime isn’t perfect, it’s just 100x better [regarding transaction pinning] for this application than the current one.”

    But as you admit, that’s not true: the v3 relay regime is worse than the current one for Lightning channels with regard to transaction pinning, as unlike the existing regime, it’s vulnerable to transaction pinning. Your claim that v3 is 100x better is simply not true for the topic at hand.

    Now, if V3 had some specific examples of proposed contracting protocols other than Lightning that would use it and that are valuable for Bitcoin, maybe it’d be worth considering. But at the moment, it does not.

  125. petertodd commented at 11:09 pm on January 1, 2024: contributor
    While I’m at it, Concept NACK, due to the problems outlined in https://petertodd.org/2023/v3-transactions-review and https://petertodd.org/2023/v3-txs-pinning-vulnerability
  126. darosior commented at 9:34 am on January 2, 2024: member

    The largest lightning channels out there are about 5BTC. Even if you were willing to bump fees all the way to spending the entire 5BTC towards fees, you’d need just 68 different fee variants to go all the way from 1sat/vbyte to spending the full 5BTC on fees, with a 25% increase for each fee variant.

    So you’d potentially hand to your supposedly untrusted channel partner a signature for a transaction burning your whole 4.95BTC balance to fees? This trivially opens a blackmail vector: “sign a transaction which pays us both 2.5BTC or i burn your whole balance”.

    Pre-signing versions of a transaction at various feerates necessarily presents a tension between:

    1. A feerate above which the contract won’t be enforceable.
    2. A maximum amount of fees a malicious counterparty could force you to burn.

    In fact I think if you want to reasonably set (1), you’d end up with (2) such that a malicious counterparty can almost always make you burn more fees than the 1.5x factor you raised with v3?


    I’ve opened https://delvingbitcoin.org/t/v3-transaction-policy-for-anti-pinning to continue discussing v3 at a higher level and keep this PR focused on code review.

  127. t-bast commented at 9:37 am on January 2, 2024: contributor

    Now, as for the choosing a feerate in advance problem, I explained fully in my article, showing how it’s quite easy to pre-sign every conceivable feerate because there just aren’t that many of them. In fact, you could easily pre-sign feerates all the way to making the channel uneconomical to close, because you’ve spent every cent towards fees. This is not a problem.

    That proposal ignores important drawbacks on the lightning side. I’m actually quite surprised that you don’t even mention those drawbacks (which have been known since I proposed something similar to what you’re advocating for a while ago, and we already discussed them in the past).

    The main drawback in my opinion is: if you pre-sign commitment transactions at various feerates, the actual balance that you can use off-chain is the balance of the commitment with the highest feerate. That is quite an inefficient use of your liquidity, as you’ll always be locking a large amount of your lightning balance to pay the worst-case feerate! This shouldn’t be an issue with eltoo, where you bring new inputs to pay the fees instead of taking them from your off-chain balance, but since we don’t know whether we’ll get an eltoo-enabling soft fork, we have to work without it.

    Finally, as I explained, you can always design the protocol to fallback to existing anchor channels at the very highest fee rate. You only need to get to the point where you’ve met the minimum relay fee after all.

    If you do that, you’re just adding complexity without fundamentally fixing anything?

    I feel like it’s not the right place to continue discussing the v3 proposal at a high-level. How about moving this discussion to https://delvingbitcoin.org/?

    Agreed!

  128. glozow commented at 11:34 am on January 2, 2024: member

    Responding to https://petertodd.org/2023/v3-transactions-review

    In general, I agree that RBF is a great fee-bumping mechanism. The entire point of v3 and EA is to make RBF more useful and less prone to pinning, and to enable us to use it in conjunction with CPFP while eliminating some of the inefficiency.

    We seem to have entirely different perspectives about what RBF’s limitations are today. Pointing out limitations in RBF is not intended as a personal attack. There is no intention to remove RBF. We’re only trying to remove its limitations, many of which are fundamental to how the mempool is currently structured.

    As you can see by the timeline of posts, we started this journey by gathering information on all the limitations and grievances with RBF and going through existing proposed solutions. Please give it a read as it may help your understanding here. It lists some ideas that have been brought up and discussed multiple times (including some that you suggested) that are insufficient or not feasible in a pre-cluster-mempool world.

    I’d suggest particularly focusing on these limitations and their implications:

    • Absolute fees / Rule 3 pinning: RBF rules require the replacement transaction pay a higher absolute fee than the aggregate fees paid by all original transactions.
    • Replacements don’t need to be more incentive compatible: it’s possible for a replacement transaction to confirm slower than the replaced one, leading to ACP pinning problems.
    • Transactions must meet RBF rules using only their own fees: i.e. there is no package RBF; CPFPing a transaction doesn’t help it replace conflicts in mempools.
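
    The Rule 3 pinning limitation in the first bullet can be illustrated with a simplified model of BIP 125 rules 3 and 4 (a sketch only, not Bitcoin Core’s actual implementation; the 1 sat/vB incremental relay feerate matches Core’s default):

```python
# Hedged illustration of BIP 125 rules 3-4 and "Rule 3 pinning".
# Simplified model for illustration, not Bitcoin Core's implementation.
INCREMENTAL_RELAY_FEERATE = 1  # sat/vB, Core's default incremental relay fee

def replacement_allowed(replacement_fee, replacement_vsize, original_fees):
    # Rule 3: must beat the absolute fees of ALL conflicting transactions,
    # Rule 4: plus the incremental relay fee for the replacement's own size.
    needed = sum(original_fees) + INCREMENTAL_RELAY_FEERATE * replacement_vsize
    return replacement_fee >= needed

# Replacing a lone 200 sat transaction succeeds cheaply:
print(replacement_allowed(1000, 200, [200]))          # True
# An attacker pins by attaching a 100,000 vB descendant paying 100,000 sat
# at a mere 1 sat/vB; the replacement must now outbid it in absolute fees:
print(replacement_allowed(1000, 200, [200, 100_000])) # False
```

    Under the proposed v3 limits an unconfirmed child is capped at 1000 vB, which bounds how much absolute fee an attacker can attach this way.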

    If we agree that these issues need to be fixed, then the plan here (and laid out in #27463) might make more sense. It includes:

    • 1-parent-1-child package relay, enabling (non-batched) CPFPs below min feerate to propagate
    • v3: a way to opt in to stricter package limits to avoid Rule 3 + 4 pinning. Enabling EA + the next few steps
    • package RBF: a way for these 1-parent-1-child packages to replace each other. This allows us to remove CPFP carveout while still providing protocols with presigned transactions with a robust fee-bumping option. One of the primary goals of v3 was to make package RBF possible before cluster mempool.
    • cluster mempool: address the problem of assessing incentive compatibility, fixing the majority of RBF problems, as well as other fundamental limitations caused by the current mempool data structures
    • more complete package relay, including features and fixes that are simplified or enabled by cluster mempool

    This is an improvement by itself and is a stepping stone in a long-term solution to fix a number of problems. You suggested package relay as a solution; please propose an alternative implementation/timeline if you don’t think this is a correct approach.

    I addressed the “V3 Transactions are still vulnerable” article in my response on the mailing list. I just mailed an update to the math this morning using the number you provided. I think the math there, as well as some basic intuition on the descendant limit restriction, shows the 100x reduction that you are denying. Not sure how to make that point more clear :shrug: happy to speak in a more synchronous fashion if you’d like.

  129. petertodd commented at 5:38 pm on January 2, 2024: contributor

    @darosior

    The largest lightning channels out there are about 5BTC. Even if you were willing to bump fees all the way to spending the entire 5BTC towards fees, you’d need just 68 different fee variants to go all the way from 1sat/vbyte to spending the full 5BTC on fees, with a 25% increase for each fee variant.

    So you’d potentially hand to your supposedly untrusted channel partner a signature for a transaction burning your whole 4.95BTC balance to fees? This trivially opens a blackmail vector: “sign a transaction which pays us both 2.5BTC or i burn your whole balance”.

    You’ve misunderstood how Lightning works. While simplified explanations often talk about the commitment transaction, Lightning channels actually have two slightly different commitment transactions: a local and a remote transaction. Each party has a signature from the other side on their version of the commitment transaction, so broadcasting their commitment involves counter-signing it first.

    In the context of RBF using channels — while there may be special circumstances requiring a different allocation — the most obvious way to allocate fees is to take the fees from the balance on each respective side. So if Alice wants to broadcast a commitment transaction, her version of the fee variants all take the funds for the fees from her balance, and Bob has given Alice a set of signatures for each fee variant. Conversely, if Bob wants to broadcast, the funds for the fees are taken from his balance.

    Thus this blackmail attack simply is not possible as neither side has any way to spend the funds of their counterparty to fees.

    I’ve added another section to my post to make this point clear: https://petertodd.org/2023/v3-transactions-review#fee-allocation

  130. petertodd commented at 6:37 pm on January 2, 2024: contributor

    Now, as for the choosing a feerate in advance problem, I explained fully in my article, showing how it’s quite easy to pre-sign every conceivable feerate because there just aren’t that many of them. In fact, you could easily pre-sign feerates all the way to making the channel uneconomical to close, because you’ve spent every cent towards fees. This is not a problem.

    That proposal ignores important drawbacks on the lightning side. I’m actually quite surprised that you don’t even mention those drawbacks (which are known since I proposed a while ago something similar to what you’re advocating for and we already discussed them in the past).

    I’m aware. Those proposed drawbacks are incorrect.

    The main drawback in my opinion is: if you pre-sign commitment transactions at various feerates, the actual balance that you can use off-chain is the balance of the commitment with the highest feerate. That is quite an inefficient use of your liquidity, as you’ll always be locking a large amount of your lightning balance to pay the worst-case feerate! This shouldn’t be an issue with eltoo, where you bring new inputs to pay the fees instead of taking them from your off-chain balance, but since we don’t know whether we’ll get an eltoo-enabling soft fork, we have to work without it.

    That “drawback” is a misunderstanding of economics and accounting.

    The true value of your balance in a Lightning channel is always potentially reduced by the fees it would take to (force-)close the channel. CPFP or RBF does not change this: either way, if you need to (force-)close the channel, you will recover less funds than the apparent balance.

    Secondly, even in a naive analysis, the liquidity is not substantially reduced by RBF fees in circumstances where the Lightning channel is profitable: if potential fees are a substantial percentage of the channel balance, you’re not going to make money on routing. Meanwhile, the money you make on routing is proportional to balance. So let’s suppose that you pre-allocate 10% of your channel balance to paying closing fees, leaving the other 90% to route. If you earn 10% per year routing, you’ve only made 9% return on capital per year. Meanwhile, if you have to close that channel after a year, you’ve lost money. On the other hand, if you could allocate 100% to routing, your return on capital would be just 10%, a 1% difference… and if you had to close, you’d still have lost money.
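    The return-on-capital comparison in the paragraph above can be worked through with the same illustrative numbers (the 10% routing yield is the thread’s hypothetical, not real data):

```python
# Hedged arithmetic for the return-on-capital comparison above.
# All numbers are the illustrative ones from the discussion.
balance = 1.0            # normalized channel balance
routing_yield = 0.10     # assumed 10%/yr earned on deployed liquidity

# RBF approach: reserve 10% of the balance for worst-case closing fees.
roc_rbf = (balance * 0.90) * routing_yield   # routing on 90% of balance
# CPFP approach: deploy 100% of the balance for routing.
roc_cpfp = balance * routing_yield           # routing on the full balance

print(round(roc_rbf, 3), round(roc_cpfp, 3))  # 0.09 vs 0.1 -> a 1% difference
```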

    But even that is actually incorrect: the highest feerate you could ever possibly need is the one where all of your balance in the channel goes to fees. Feerates beyond that are simply pointless: you might as well just wait for fees to decrease, or abandon the funds, as you’re not recovering any value. CPFP/anchor channels do allow you to pay even higher feerates, but doing so doesn’t make any sense. Thus there is no liquidity wasted with RBF: you can spend 100% of the available channel balance on your side at any time. Of course, at some point the feerate required implies that the channel reserve is probably too low. But that’s a problem we already have in existing Lightning channels; CPFP/anchors doesn’t change that.

    Finally, as I explained, you can always design the protocol to fallback to existing anchor channels at the very highest fee rate. You only need to get to the point where you’ve met the minimum relay fee after all.

    If you do that, you’re just adding complexity without fundamentally fixing anything?

    Nope. In the usual case where RBF works you’ll spend ~50% on fees, while avoiding the creation of large incentives for pools to centralize to offer out-of-band fees.

  131. petertodd commented at 7:04 pm on January 2, 2024: contributor

    Responding to https://petertodd.org/2023/v3-transactions-review

    In general, I agree that RBF is a great fee-bumping mechanism. The entire point of v3 and EA is to make RBF more useful and less prone to pinning, and to enable us to use it in conjunction with CPFP while eliminating some of the inefficiency.

    We seem to have entirely different perspectives about what RBF’s limitations are today. Pointing out limitations in RBF is not intended as a personal attack. There is no intention to remove RBF. We’re only trying to remove its limitations, many of which are fundamental to how mempool is currently structured.

    None of this discussion has anything to do with “personal attacks”. The question is if V3 transactions fixes a problem that exists in actual protocols, not hypothetical situations. Bringing up issues with RBF that are not relevant to the actual protocols being discussed is misleading.

    Furthermore, what I’m showing is that V3 transactions, as proposed, fails to improve the situation for Lightning channels. The claimed tx pinning vector is not relevant, and V3 doubles down on a serious design flaw with regard to out-of-band fee payments. We should fix that design flaw, and RBF is almost certainly the best solution possible.

    What you need to do now, if you wish to progress further on V3 transactions, is show how it will fix actual problems in existing or proposed contracting systems such as Lightning. Once we have identified those problems, we can discuss solutions, and see if V3 can fix those actual problems.

    The root problem here is we have done this backwards: we have a rough idea of what some problems could be, V3 was proposed, and people have gotten excited that it seems to fix these hypothetical problems.

    I’d suggest particularly focusing on these limitations and their implications:

    * Absolute fees / Rule 3 pinning: RBF rules require the replacement transaction pay a higher absolute fee than the aggregate fees paid by all original transactions.

    * Replacements don't need to be more incentive compatible: it's possible for a replacement transaction to confirm slower than the replaced one, leading to ACP pinning problems.

    * Transactions must meet RBF rules using only their own fees: i.e. there is no package RBF; CPFPing a transaction doesn't help it replace conflicts in mempools.

    This kind of thinking is precisely the problem: you’ve listed a bunch of potential problems, without any discussion of how actual protocols are impacted by those problems. This is bad design: you should focus on the actual problems real protocols run into, and then figure out how to solve those problems.

    I addressed the “V3 Transactions are still vulnerable” article in my response on the mailing list. I just mailed an update to the math this morning using the number you provided. I think the math there, as well as some basic intuition on the descendant limit restriction, shows the 100x reduction that you are denying. Not sure how to make that point more clear 🤷 happy to speak in a more synchronous fashion if you’d like.

    Again, I am not denying that there is a 100x reduction in the hypothetical situation of transaction pinning. What I am pointing out is that Lightning already fixed that transaction pinning problem with the design of anchor channels, making your 100x reduction irrelevant.

    This entire discussion is similar to installing a lift kit on a Honda Civic mainly used to pick up groceries, and bragging about how you’ve increased ground clearance to 1m. It’s a cool engineering challenge. But if our actual problem is high gas prices, not low ground clearance, why are we adding the extra weight and complexity of the lift kit?

  132. instagibbs commented at 7:21 pm on January 2, 2024: member
    moving to delving thread as we’re essentially debating an LN spec at this point: https://delvingbitcoin.org/t/v3-transaction-policy-for-anti-pinning/340/4
  133. petertodd commented at 7:26 pm on January 2, 2024: contributor

    moving to delving thread as we’re essentially debating an LN spec at this point: https://delvingbitcoin.org/t/v3-transaction-policy-for-anti-pinning/340/3

    We are not debating the LN spec.

    The LN spec doesn’t use V3 transactions, and as of anchor channels, is not vulnerable to the problems V3 transactions aims to solve. As I mentioned in my post, package relay would be helpful to anchor channels. But of course, package relay does not depend on V3.

    This discussion is not about general Bitcoin protocol design. The question is if V3 transactions actually solves a real problem in a real protocol, thus making it a worthwhile addition to Bitcoin Core.

  134. instagibbs commented at 7:47 pm on January 2, 2024: member
    As noted in delving, pinning is not solved for anchor channels (even if we assume package relay/RBF is implemented): if an adversary splits the view of which commitment transaction is broadcast, the remote copy can become the pin, since the defender is unable to propagate their spend of the remote commit tx anchor.
  135. instagibbs commented at 8:01 pm on January 2, 2024: member

    This discussion is not about general Bitcoin protocol design. The question is if V3 transactions actually solves a real problem in a real protocol, thus making it a worthwhile addition to Bitcoin Core.

    V3 is useful for any transaction where you don’t want to be RBF-pinned when sending to arbitrary scripts. For example, in splicing, rather than requiring every destination address to include a `1 OP_CSV` clause in each spending condition, outputs can now be arbitrary scriptPubKeys. More generally, it allows this relaxation for any payment you may want to RBF.

  136. darosior commented at 8:24 pm on January 2, 2024: member

    You’ve misunderstood how Lightning works. While simplified explanations often talk about […]

    I see how a superficial understanding of Lightning and pinning issues could lead you to think that. However, your hot fix to your proposal does not patch all its flaws.

    In the context of RBF using channels — while there may be special circumstances requiring a different allocation — the most obvious way to allocate fees is to take the fees from the balance on each respective side. So if Alice wants to broadcast a commitment transaction, her version of the fee variants all take the funds for the fees from her balance, and Bob has given Alice a set of signatures for each fee variant. Conversely, if Bob wants to broadcast, the funds for the fees are taken from his balance.

    You don’t only need fees for your established balance, but also for your in-flight balance. Alice may only have the reserve as her balance in the channel (or no balance at all, for instance right after Bob opened a channel to her) and have to resolve an incoming HTLC as soon as possible. Here Bob could trivially pin her attempt to resolve the HTLC by broadcasting the first version of his commit tx which Alice won’t be able to replace. She would have to broadcast a pair <Alice’s commit tx, htlc success with fees> to replace Bob’s commit tx, but for this you need package RBF which needs v3.

    Last thing i’ll say here, v3 (+EA) is not only useful for current Lightning channels. You can think about it as a better carve-out. The CPFP carve-out is a very Lightning-specific hack to the mempool validation rules. v3 is a more general way of achieving the same, and more, for any protocol.


    That’s my last message on this topic here. I don’t want to further derail this pull request.

    I’m happy to continue discussing v3 at Delving as long as we keep the discussion intellectually honest. I’m not interested in this little game of arguing to appear smart (or try to, at least) instead of arguing to make progress toward a better and technically sound proposal.

  137. petertodd commented at 0:32 am on January 3, 2024: contributor

    As noted in delving, pinning is not solved for anchor channels(even if we assume package relay/rbf is implemented): If an adversary splits the view of which commitment transaction is broadcasted, the remote copy can become the pin since the defender is unable to propagate their spend of the remote commit tx anchor.

    You’re really describing a form of MITM attack. If the target learns about the existence of the other commitment transaction, be it a revoked transaction or the remote’s version of the current one, they can easily get that transaction mined by spending their anchor output (or other outputs if it’s a revoked state).

    Secondly, V3 transactions doesn’t fully fix this either. For channels that can have large numbers of HTLCs in flight, an adversary can pin with a revoked commitment from a point in the channel’s lifetime where the # of HTLCs in flight towards them was large.

    Notably, this kind of circumstance shows why concerns about RBF with all the funds on one side of the channel are a little silly… Lightning is broken in a lot of ways when you get into that circumstance.

    V3 is useful for any transactions where you don’t want to be RBF-pinned for sending to arbitrary scripts. For example in splicing, rather than requiring each address being sent to including a 1 OP_CSV clause over each spending condition, it can now be arbitrary scriptPubKeys. More generally, it allows this relaxation for any payment where you want to RBF the payment.

    Your splicing example is one where V3 is not clearly incentive compatible, and you’re encouraging wallets to degrade privacy by making transactions from different wallets clearly distinct. It’s a good thing that, e.g., Phoenix has shipped with splicing and without that restriction.

    Now, if the OP wanted to argue for those kinds of detailed tradeoffs, the correct way to do so would be with detailed design documents. But that hasn’t happened. There’s not even a BIP for this.

  138. petertodd commented at 1:25 am on January 3, 2024: contributor

    You’ve misunderstood how Lightning works. While simplified explanations often talk about […]

    I see how a superficial understanding of Lightning and pinning issues could lead you to think that. However your hot fix to your proposal does not patch all its flaws.

    It’s not a “hot fix”, it’s the only way it could have ever worked. The remote and local transactions are fundamental to Lightning; I didn’t mention that explicitly because I thought it would be obvious to anyone familiar with the protocol in depth.

    In the context of RBF using channels — while there may be special circumstances requiring a different allocation — the most obvious way to allocate fees is to take the fees from the balance on each respective side. So if Alice wants to broadcast a commitment transaction, her version of the fee variants all take the funds for the fees from her balance, and Bob has given Alice a set of signatures for each fee variant. Conversely, if Bob wants to broadcast, the funds for the fees are taken from his balance.

    You don’t only need fees for your established balance, but also for your in-flight balance. Alice may only have the reserve as her balance in the channel (or no balance at all, for instance right after Bob opened a channel to her) and have to resolve an incoming HTLC as soon as possible. Here Bob could trivially pin her attempt to resolve the HTLC by broadcasting the first version of his commit tx which Alice won’t be able to replace. She would have to broadcast a pair <Alice’s commit tx, htlc success with fees> to replace Bob’s commit tx, but for this you need package RBF which needs v3.

    If Bob broadcasts his first version of his commit tx, he’s broadcasting a revoked state and Alice can simply take his funds.

    Now, if you mean that Bob is broadcasting his current version of his remote tx, that situation was already discussed on Twitter. That situation is kinda busted in general, as Lightning’s incentives are a bit dubious unless both parties have funds in the channel. Using a signed anchor output that only Alice can spend is a perfectly reasonable approach, as it’s rare enough that it won’t be a substantial threat to mining decentralization via out-of-band fees. It’s probably also reasonable to consider limiting HTLC size in general during initial channel startup until a reasonable channel reserve is reached.

    Last thing i’ll say here, v3 (+EA) is not only useful for current Lightning channels. You can think about it as a better carve-out. The CPFP carve-out is a very Lightning-specific hack to the mempool validation rules. v3 is a more general way of achieving the same, and more, for any protocol.

    As I keep saying, that’s nice to speculate about. But these potential protocols need to be discussed in detail so costs and benefits can be worked out. It is disappointing that I seem to have been the first person to point out how dangerous anchor outputs are to mining decentralization; CPFP usage in general has this potential problem, and we should be careful not to encourage more of it.

    I’m happy to continue discussing v3 at Delving as long as we keep the discussion intellectually honest. I’m not interested in this little game of arguing to appear smart (or try to, at least) instead of arguing to make progress toward a better and technically sound proposal.

    I’m not on Delving and have no intention of joining at the moment. Putting critical protocol discussion on infrastructure that is not cryptographically signed, and under the centralized control of parties with a direct interest in the discussion, is not a good idea. The email list remains a much better alternative, as email is signed via both DKIM and, in some cases, PGP. GitHub is second best, as at least the infrastructure is run by people entirely independent of Bitcoin Core.

    Incidentally, I’ve already had a few people tell me privately that your personal attacks here, like attempting to claim I have a “superficial understanding of Lightning”, and writing this off as a “little game of arguing to appear smart”, makes you look “butthurt” and “petty”.

  139. t-bast commented at 10:10 am on January 3, 2024: contributor

    Thanks for your answers @petertodd.

    I think we may be talking a bit past each other in some of those comments, because most of this is too vague and ignores important low-level details. I’ll try to highlight the most important high-level points below.

    So if Alice wants to broadcast a commitment transaction, her version of the fee variants all take the funds for the fees from her balance, and Bob has given Alice a set of signatures for each fee variant. Conversely, if Bob wants to broadcast, the funds for the fees are taken from his balance.

    I’m a bit lost, that’s not at all how lightning works today? The fees are always taken from the balance of the channel initiator, regardless of whether it’s Alice’s or Bob’s commitment transaction.

    That’s a design decision that could potentially be revisited, and it has been evaluated a few times already, but doing so creates other drawbacks.

    The true value of your balance in a Lightning channel is always potentially reduced by the fees it would take to (force-)close the channel. CPFP or RBF does not change this: either way, if you need to (force-)close the channel, you will recover less funds than the apparent balance.

    I think you didn’t understand my point (or maybe you are thinking about something that works differently than lightning today, but I’m not sure how exactly it would work). What I was saying is that pre-signing multiple versions of the commitment at various feerates forces your lightning balance (and other lightning constraints) to use the most restrictive variant of the commitment.

    That means using RBF is much more restrictive than CPFP: with CPFP, you are only constrained by whatever fee you want to pay at broadcast time, while with RBF you are constrained by the highest feerate you may be willing to pay. The difference between those values is going to be large most of the time, which is quite inefficient.

    The balance you can actually use in your lightning channel (if you’re the initiator) is `to_local - channel_reserve - commit_tx_fees`. If you’re pre-signing N commitment transactions at feerates `[f1; ...; fN]`, the balance you can actually use is `to_local - channel_reserve - max(commit_tx_fees1, ..., commit_tx_feesN)`. Otherwise you wouldn’t be able to really offer a commitment transaction at all the chosen feerates.
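    The usable-balance formula above can be illustrated with made-up numbers (the commitment vsize, balances, and feerate ladder are assumptions, not values from the thread):

```python
# Hedged sketch of the liquidity point above; all numbers are illustrative.
# With N pre-signed feerate variants, off-chain spendable balance is
# bounded by the commitment fee of the *highest* variant.
COMMIT_TX_VSIZE_VB = 700             # assumed commitment tx vsize
to_local = 1_000_000                 # sat, initiator's balance
channel_reserve = 10_000             # sat
feerates = [10, 50, 250, 1_000]      # sat/vB, pre-signed variants

commit_fees = [f * COMMIT_TX_VSIZE_VB for f in feerates]
usable = to_local - channel_reserve - max(commit_fees)
print(usable)  # constrained by the 1,000 sat/vB worst-case variant
```

    With CPFP the `max(...)` term would instead be the fee of a single (possibly zero-fee) commitment, leaving the rest of the balance deployable off-chain.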

    Pre-signing multiple feerates creates other complex subtleties on the lightning side, as was detailed in comments on my spec PR. Making the commitment transaction pay no on-chain fees greatly simplifies the lightning protocol, and provides a better decoupling from on-chain fees, ensuring that you only care about them at broadcast time and they don’t interfere while you’re transacting on lightning. Since force-closing is exceptional, I believe this is a design choice that makes a lot of sense, by optimizing for the most common case.

    That “drawback” is a misunderstanding of economics and accounting.

    While your point may be an arguable trade-off for routing nodes, it ignores end users, who really want to maximize their liquidity usage. And the vast majority of channels will be with end users, so it’s an important case to optimize for IMHO.

  140. glozow commented at 10:42 am on January 3, 2024: member

    This kind of thinking is precisely the problem: you’ve listed a bunch of potential problems, without any discussion of how actual protocols are impacted by those problems.

    Peter, please read the RBF improvements mailing list thread to get up to date on users’ problems with RBF.

    Yes, v3 makes no sense if you think RBF is perfect today. I agree that we cannot have a productive discussion about how to solve problems if we don’t agree on what they are. Given that thread is dedicated to discussing RBF issues, feel free to respond there. Let’s work on coming to an agreement on RBF first, otherwise we will continue to talk in circles here.

    Now, if the OP wanted to argue for those kinds of detailed tradeoffs, the correct way to do so would be with detailed design documents. But that hasn’t happened.

    Documentation is included with this pull request, along with links to the discussions about the problems it solves and alternatives considered. Feel free to browse discussions that happened over the last few years, including

    I completely disagree with the statements about there being no real identified problems or exploration of the rest of the solution space. There is of course no concrete threshold of “enough discussion,” and I am happy to discuss further if it means coming up with a better solution. But please keep the discussion on this PR in scope. Again, there is a RBF limitations / improvements thread. Pages of comments about LN commitment tx design makes it difficult for collaborators to follow the discussion and see review comments about the code itself, while accomplishing very little to improve the code here.

    There’s not even a BIP for this.

    This is a mempool policy proposal; I see no precedent that such a change requires a BIP.

  141. petertodd commented at 5:04 am on January 4, 2024: contributor

    On Wed, Jan 03, 2024 at 02:42:45AM -0800, Gloria Zhao wrote:

    This kind of thinking is precisely the problem: you’ve listed a bunch of potential problems, without any discussion of how actual protocols are impacted by those problems.

    Peter, please read the RBF improvements mailing list thread to get up to date on users’ problems with RBF.

    Yes, v3 makes no sense if you think RBF is perfect today. I agree that we cannot have a productive discussion about how to solve problems if we don’t agree on what they are. Given that thread is dedicated to discussing RBF issues, feel free to respond there. Let’s work on coming to an agreement on RBF first, otherwise we will continue to talk in circles here.

    As you know, I don’t believe RBF is perfect. I even relatively recently, after that RBF improvements thread, proposed my own improvement to RBF rules, the always-replaceable invariant:

    https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-November/021175.html

    You and I have discussed RBF’s flaws in person multiple times, as you have admitted below.

    The question at hand here is not whether or not RBF is perfect in all hypothetical scenarios. It’s whether or not V3 is actually succeeding at solving real problems, without introducing new ones, without doubling down on existing mistakes, and without adding too much technical debt.

    Notably, the fact that anchor outputs pose a significant risk of making large miners significantly more profitable is a big problem. We should not be encouraging more usage of them.

    Now, if the OP wanted to argue for those kinds of detailed tradeoffs, the correct way to do so would be with detailed design documents. But that hasn’t happened.

    Documentation is included with this pull request, along with links to the discussions about the problems it solves and alternatives considered. Feel free to browse discussions that happened over the last few years, including

    Both of these documents are conceptual overview documents, not in-depth engineering documents. Not once do you actually work out concrete scenarios with real numbers from real protocols.

    By contrast, my very short https://petertodd.org/2023/v3-txs-pinning-vulnerability actually does work out a real scenario in a real protocol, with real transaction sizes, while figuring out exactly what an attacker can do and calculating the advantage the attacker has.

    It’s notable that in spite of all your above conceptual overview documents, you never actually did that, and as you have admitted, the numbers you guessed in your claims about how well V3 works did not reflect real world usage of Lightning.

    • the 250 comments on #25038
    • recall that you were there for some of the in-person discussions about RBF and package relay.

    I completely disagree with the statements about there being no real identified problems or exploration of the rest of the solution space. There is of course no concrete threshold of “enough discussion,” and I am happy to discuss further if it means coming up with a better solution. But please keep the discussion on this PR in scope. Again, there is an RBF limitations / improvements thread. Pages of comments about LN commitment tx design make it difficult for collaborators to follow the discussion and see review comments about the code itself, while accomplishing very little to improve the code here.

    There’s not even a BIP for this.

    This is a mempool policy proposal; I see no precedent that such a change requires a BIP.

    BIP-125 Replace-By-Fee comes to mind… which was written just over 8 years ago, when Lightning didn’t even exist, and RBF’s intended purpose was just to let people fee bump their own on-chain transactions. It was a much simpler time, and even then it got a BIP.

    Obviously, now that we’re having to deal with much more complex contracting protocols with multiple parties, and much more complex tradeoffs, we need to step up our game and properly document our proposed changes and do proper analysis of pros and cons with respect to real protocols and future upgrade scenarios.

  142. petertodd commented at 5:44 am on January 4, 2024: contributor

    On Wed, Jan 03, 2024 at 02:10:22AM -0800, Bastien Teinturier wrote:

    Thanks for your answers @petertodd.

    I think we may be talking a bit past each other in some of those comments, because most of this is too vague and ignores important low-level details. I’ll try to highlight the most important high-level points below.

    So if Alice wants to broadcast a commitment transaction, her version of the fee variants all take the funds for the fees from her balance, and Bob has given Alice a set of signatures for each fee variant. Conversely, if Bob wants to broadcast, the funds for the fees are taken from his balance.

    I’m a bit lost, that’s not at all how lightning works today? The fees are always taken from the balance of the channel initiator, regardless of whether it’s Alice’s or Bob’s commitment transaction.

    That’s a design decision that could potentially be revisited, and it has been evaluated a few times already, but doing so creates other drawbacks.

    No, that is not how Lightning works. With anchor outputs, beyond the basic fee level, fees are paid by whichever side wants to pay them, potentially both parties at once. Furthermore, people are proposing (via ephemeral anchors and package relay) that fees always be paid this way, with the commitment transaction itself having no fee at all.

    Economically speaking, what I’m proposing with RBF channels is very close to the status quo.

    The true value of your balance in a Lightning channel is always potentially reduced by the fees it would take to (force-)close the channel. CPFP or RBF does not change this: either way, if you need to (force-)close the channel, you will recover less funds than the apparent balance.

    I think you didn’t understand my point

    I understand your point. I’m talking about the true economics of a lightning channel, not the raw numbers.

    (or maybe you are thinking about something that works differently than lightning today, but I’m not sure how exactly it would work). What I was saying is that pre-signing multiple versions of the commitment at various feerates forces your lightning balance (and other lightning constraints) to use the most restrictive variant of the commitment.

    That means using RBF is much more restrictive than CPFP: with CPFP, you are only constrained by whatever fee you want to pay at broadcast time, while with RBF you are constrained by the highest feerate you may be willing to pay. The difference between those values is going to be large most of the time, which is quite inefficient.

    Again, what I showed in my post, and explained multiple times above, was that:

    1. Any conceivable feerate, at very fine granularity, is feasible to pre-sign. Computers are fast, and signatures are small, so signing a few dozen variants to cover everything is not hard.
    2. It is pointless to pay more money in fees than you can recover from the channel.

    The only nuance anyone has identified with #2 is when large in-flight HTLC’s exist, and the channel balance is smaller than the fee you might want to pay.
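
    As a rough sanity check of point 1 (an editorial sketch with hypothetical numbers, not figures from this thread): a geometric ladder with ~20% steps between variants covers feerates from 1 sat/vB up to 2000 sat/vB with a few dozen signatures.

```python
# Sketch: how many pre-signed commitment variants are needed to cover a wide
# feerate range at ~20% granularity? (Hypothetical numbers, for illustration.)

def feerate_ladder(min_rate, max_rate, step=0.20):
    """Geometric ladder: each variant's feerate is `step` above the previous one."""
    rates = []
    r = float(min_rate)
    while r < max_rate:
        rates.append(round(r, 2))
        r *= 1 + step
    rates.append(float(max_rate))
    return rates

ladder = feerate_ladder(1, 2000)
print(len(ladder), "variants from", ladder[0], "to", ladder[-1], "sat/vB")
```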

    The balance you can actually use in your lightning channel (if you’re the initiator) is to_local - channel_reserve - commit_tx_fees. If you’re pre-signing N commitment transactions at feerates [f1; ...; fN], the balance you can actually use is to_local - channel_reserve - max(commit_tx_fees1, ..., commit_tx_feesN). Otherwise you wouldn’t be able to really offer a commitment transaction at all the chosen feerates.
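
    The usable-balance formula quoted above can be written out as a quick sketch (channel figures are hypothetical, chosen for illustration only):

```python
# Sketch of t-bast's constraint: with N pre-signed feerate variants, the
# initiator's usable balance must budget for the most expensive one.
def usable_balance(to_local, channel_reserve, commit_tx_fees):
    return to_local - channel_reserve - max(commit_tx_fees)

# Hypothetical channel: 1_000_000 sats to_local, 1% reserve, a ~700 vB
# commitment pre-signed at feerates of 1, 5, 10, 25 and 50 sat/vB.
fees = [700 * feerate for feerate in (1, 5, 10, 25, 50)]
print(usable_balance(1_000_000, 10_000, fees))  # 955_000 sats usable
```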

    I’ve already demonstrated in a message above how in profitable routing node scenarios, for actual economic usage of the channel with reasonable revenue estimates, reserving funds to pay fees is an insignificant percentage of the channel balance. I would suggest you read that and actually work through a real example. Also, I would stress that you think about what the economics of the channel reserve actually mean: if fees are significant in relation to it, it isn’t high enough to represent a real incentive.

    As for end user wallets, keep reading.

    Pre-signing multiple feerates creates other complex subtleties on the lightning side, as was detailed in comments on my spec PR. Making the commitment transaction pay no on-chain fees greatly simplifies the lightning protocol, and provides a better decoupling from on-chain fees, ensuring that you only care about them at broadcast time and they don’t interfere while you’re transacting on lightning. Since force-closing is exceptional, I believe this is a design choice that makes a lot of sense, by optimizing for the common case.

    No, force-closing is the main limiting factor in Lightning safety engineering: the need to tolerate large numbers of force closes at once, without running out of block space, limits how much we can possibly scale Lightning. It is very important to keep force closing efficient to allow as many lightning users as possible.

    That “drawback” is a misunderstanding of economics and accounting.

    While your point may be an arguable trade-off for routing nodes, it ignores end users, who really want to maximize their liquidity usage. And the vast majority of channels will be with end users, so it’s an important case to optimize for IMHO.

    Quite the opposite: to maximize liquidity usage you want to get rid of the very inconvenient fact that anchor outputs force you to have UTXO’s lying around with sufficient funds to pay fees in all circumstances. As I explained in my V3 Transactions Review post, anchor channels are particularly harmful for end user wallets. In fact, what’s probably the most common non-custodial end-user implementation out there, Phoenix, recently added limited RBF channel support. And they don’t keep around extra UTXO’s to spend via anchors, because of course, that impacts liquidity!

  143. t-bast commented at 9:06 am on January 4, 2024: contributor

    No, that is not how Lightning works. With anchor outputs, beyond the basic fee level, fees are paid by whichever side wants to pay them, potentially both parties at once.

    You’re confusing fees paid directly by the commitment transaction (which are always paid by the channel initiator, regardless of whose commitment transaction it is) and fees paid by CPFP-ing the anchor output. Which doesn’t make any sense to me since you’re advocating for removing the CPFP part! I guess we’re just talking past each other here, probably not worth digging more into that.

    I understand your point. I’m talking about the true economics of a lightning channel, not the raw numbers.

    I believe you still haven’t understood my point about the restrictions on the usable lightning balance, but since you’re not actually proposing a detailed protocol change to use pre-signed transactions it’s impossible to really make progress ¯\_(ツ)_/¯

    In fact, what’s probably the most common non-custodial end-user implementation out there, Phoenix, recently added limited RBF channel support. And they don’t keep around extra UTXO’s to spend via anchors, because of course, that impacts liquidity!

    Yup, I know, I wrote that code.

  144. glozow commented at 11:19 am on January 4, 2024: member

    This is a mempool policy proposal; I see no precedent that such a change requires a BIP.

    BIP-125 Replace-By-Fee comes to mind… which was written just over 8 years ago, when Lightning didn’t even exist, and RBF’s intended purpose was just to let people fee bump their own on-chain transactions. It was a much simpler time, and even then it got a BIP.

    A BIP was not required for RBF to be considered or merged. PR #6871 was merged on Nov 27, 2015. After discussion about adding documentation on December 3, 2015, PR https://github.com/bitcoin/bips/pull/261 for BIP125 was opened on Dec 11, 2015 and merged on January 8, 2016.

    I think it’s fine to follow this example, if putting the documentation in the BIP repository is what we want. However, while we are still reviewing the code and considering changes to it, I think it is much more sensible for the code and docs to live next to each other. Afterwards, we can copy it to a BIP.

    Documentation exists and serves its purpose of explaining what the code does. We can copy it to a BIP or some other format if we want to, but a BIP is not a prerequisite or an indication of the quality of this proposal.

  145. petertodd commented at 11:54 am on January 4, 2024: contributor

    On Thu, Jan 04, 2024 at 01:06:19AM -0800, Bastien Teinturier wrote:

    No, that is not how Lightning works. With anchor outputs, beyond the basic fee level, fees are paid by whichever side wants to pay them, potentially both parties at once.

    You’re confusing fees paid directly by the commitment transaction (which are always paid by the channel initiator, regardless of whose commitment transaction it is) and fees paid by CPFP-ing the anchor output. Which doesn’t make any sense to me since you’re advocating for removing the CPFP part! I guess we’re just talking past each other here, probably not worth digging more into that.

    “confusing”? What do you think I meant by “beyond the basic fee level”? I am well aware of how Lightning works.

  146. petertodd commented at 12:05 pm on January 4, 2024: contributor

    On Thu, Jan 04, 2024 at 03:19:50AM -0800, Gloria Zhao wrote:

    This is a mempool policy proposal; I see no precedent that such a change requires a BIP.

    BIP-125 Replace-By-Fee comes to mind… which was written just over 8 years ago, when Lightning didn’t even exist, and RBF’s intended purpose was just to let people fee bump their own on-chain transactions. It was a much simpler time, and even then it got a BIP.

    A BIP was not required for RBF to be considered or merged. PR #6871 was merged on Nov 27, 2015. After discussion about adding documentation on December 3, 2015, PR https://github.com/bitcoin/bips/pull/261 for BIP125 was opened on Dec 11, 2015 and merged on January 8, 2016.

    Yes, as I said, standards change. The BIP was written before the code shipped in a public Bitcoin Core release.

    I think it’s fine to follow this example, if putting the documentation in the BIP repository is what we want. However, while we are still reviewing the code and considering changes to it, I think it is much more sensible for the code and docs to live next to each other. Afterwards, we can copy it to a BIP.

    Look, I don’t care if the BIP-type documentation is actually in the BIP repo. What I care about is that you’ve gone through that analysis process in a carefully considered way. You’re welcome to put that analysis in this pull-req for now. But you have to actually do it.

    Documentation exists and serves its purpose of explaining what the code does. We can copy it to a BIP or some other format if we want to, but a BIP is not a prerequisite or an indication of the quality of this proposal.

    Bitcoin’s market cap has increased by a factor of 100x since RBF was released, and what we do with Bitcoin has become far more complicated. The care and attention needed to make a good proposal has gone up.

    Documentation does not merely explain what the code does. It also allows us to do a real engineering evaluation to figure out if what the code does is worthwhile. That’s why it’s not enough to give vague examples of problems V3 is trying to fix. You have to actually work out examples, in detail, and do proper engineering analysis on them.

    You haven’t done that, which is exactly how we had the V3 ephemeral anchor pinning vector debacle.

  147. glozow force-pushed on Jan 5, 2024
  148. DrahtBot added the label CI failed on Jan 5, 2024
  149. glozow force-pushed on Jan 8, 2024
  150. DrahtBot removed the label CI failed on Jan 8, 2024
  151. in src/txmempool.cpp:1161 in 4e746e223a outdated
    1160         // equal to txn which were removed with no block in between.
    1161         CFeeRate removed(it->GetModFeesWithDescendants(), it->GetSizeWithDescendants());
    1162-        removed += m_incremental_relay_feerate;
    1163-        trackPackageRemoved(removed);
    1164-        maxFeeRateRemoved = std::max(maxFeeRateRemoved, removed);
    1165+        if (removed >= m_min_relay_feerate) {
    


    achow101 commented at 7:40 pm on January 8, 2024:

    In 4e746e223a016c00abcaece6002fbbf37ff19f73 “[mempool] evict everything paying 0 fees in TrimToSize()”

    This seems like it should belong in a separate commit? At least the motivation for this is not entirely clear from the code comments nor the commit message.


    glozow commented at 4:48 pm on January 10, 2024:
    True, it’s helpful but not strictly necessary here. I think I’ll just take this out of this PR and defer it to when I reopen #27018 on top of this. edit: kept in

    glozow commented at 2:28 pm on January 16, 2024:
    I’ve removed this commit. Going to open followup PR for it shortly edit: nvm

    glozow commented at 2:50 pm on January 18, 2024:
    Ah, fuzzer crash was because I had removed the “evict everything paying 0 fees in TrimToSize()” commit without removing that invariant check. I’ve added that back along with documentation for why it’s relevant here. Sorry for the flip flop.
  152. in src/policy/v3_policy.cpp:67 in 299680a5c7 outdated
    62+                        tx->GetWitnessHash().ToString(), it_nonv3_package_parent->second.ToString()))};
    63+                }
    64+
    65+                // Look for a v3 in-package parent. The ancestor set cannot exceed V3_ANCESTOR_LIMIT.
    66+                if (auto it_v3_package_parent = v3_txid_to_wtxid.find(parent_txid); it_v3_package_parent != v3_txid_to_wtxid.end()) {
    67+                    auto& my_ancestor_set = v3_ancestor_sets.at(my_txid);
    


    achow101 commented at 8:08 pm on January 8, 2024:

    In 299680a5c736783fbb03fd448064ed35ef1a42a0 “[policy] add v3 policy rules”

    nit: This could be at the top of the loop as well instead of retrieving it for every input.


    glozow commented at 2:28 pm on January 16, 2024:
    done
  153. in src/test/txvalidation_tests.cpp:286 in 299680a5c7 outdated
    281+        BOOST_CHECK(PackageV3Checks({mempool_tx_v3, tx_many_sigops}));
    282+        // ...but sigop limit is.
    283+        const auto expected_error_str{strprintf("v3 child tx %s is too big: %u > %u virtual bytes",
    284+                                                tx_many_sigops->GetWitnessHash().ToString(), total_sigops * DEFAULT_BYTES_PER_SIGOP, V3_CHILD_MAX_VSIZE)};
    285+        BOOST_CHECK(*ApplyV3Rules(tx_many_sigops, *ancestors, /*num_in_pkg_ancestors=*/0, empty_conflicts_set,
    286+                                  /*vsize=*/std::max(total_sigops * DEFAULT_BYTES_PER_SIGOP, bip141_vsize))
    


    achow101 commented at 8:28 pm on January 8, 2024:

    In 299680a5c736783fbb03fd448064ed35ef1a42a0 “[policy] add v3 policy rules”

    nit: Could use GetVirtualTransactionSize instead of calculating the sigops-adjusted size manually.


    glozow commented at 2:28 pm on January 16, 2024:
    Taken. Also realized I forgot to divide by 4, so fixed the test
  154. in src/validation.cpp:884 in 6de06e116f outdated
    881+    // The only exception is v3 transactions.
    882+    if (!bypass_limits && ws.m_ptx->nVersion != 3 && ws.m_modified_fees < m_pool.m_min_relay_feerate.GetFee(ws.m_vsize)) {
    883         // Even though this is a fee-related failure, this result is TX_MEMPOOL_POLICY, not
    884         // TX_RECONSIDERABLE, because it cannot be bypassed using package validation.
    885-        return state.Invalid(TxValidationResult::TX_MEMPOOL_POLICY, "min relay fee not met",
    886+        return state.Invalid(ws.m_ptx->nVersion == 3 ? TxValidationResult::TX_RECONSIDERABLE : TxValidationResult::TX_MEMPOOL_POLICY,
    


    achow101 commented at 8:40 pm on January 8, 2024:

    In 6de06e116f0dbe632be0e4babd45d3f59c117171 “[policy/validation] allow v3 transactions with certain restrictions”

    This seems unnecessary since if the tx was v3, then we wouldn’t be able to reach this error condition.


    glozow commented at 2:28 pm on January 16, 2024:
    Right, removed
  155. in test/functional/mempool_accept_v3.py:181 in fd55f70257 outdated
    176+        assert_equal(node.getrawmempool(), [])
    177+        tx_v2_from_v3 = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=tx_v3_block["new_utxo"], version=2)
    178+        tx_v3_from_v2 = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=tx_v2_block["new_utxo"], version=3)
    179+        tx_v3_child_large = self.wallet.send_self_transfer(from_node=node, utxo_to_spend=tx_v3_block2["new_utxo"], target_weight=5000, version=3)
    180+        assert_greater_than(node.getmempoolentry(tx_v3_child_large["txid"])["vsize"], 1000)
    181+        assert_equal(set(node.getrawmempool()), set([tx_v2_from_v3["txid"], tx_v3_from_v2["txid"], tx_v3_child_large["txid"]]))
    


    achow101 commented at 9:01 pm on January 8, 2024:

    In fd55f70257456e804cadd7af3e70f783de373008 “[functional test] v3 transaction submission”

    nit: use self.check_mempool, here and elsewhere.


    glozow commented at 2:28 pm on January 16, 2024:
    Right, done
  156. ariard commented at 2:11 am on January 11, 2024: member
    I think it is correct to say that nversion=3 is a weak pinning mitigation. One can always use a revoked state with max outgoing HTLC outputs=483 to pin. So 483 * 33 + 1000, you have ~18k vbytes state, mempool backlog at 20 sat/vb, you can steal ~100 USD payments. Yes max_offered_htlc chan policy param can be introduced at the LN-level to bound more commitment size. However the mempool backlog will always make the level of pin exposure dynamic, a bad thing. (Let’s be nice and not introduce cunning adversaries who have mastered both pinning and replacement cycling.) And yes it sucks if nversion=3 introduces non-null mining income asymmetries. Better if we go back to the whiteboard about pinning mitigations imho.
  157. glozow commented at 1:31 pm on January 11, 2024: member

    One can always use a revoked state with max outgoing HTLC outputs=483 to pin. So 483 * 33 + 1000, you have ~18k vbytes state, mempool backlog at 20 sat/vb, you can steal ~100 USD payments.

    Ok @instagibbs helped me parse what you’re trying to say here (thanks) and IIUC you’re talking about a scenario in which:

    • The counterparties in a channel have negotiated a very high number of maximum in-flight HTLCs, so they have at some point signed a commitment transaction that was 18KvB in size.
    • This is a revoked state now, but if it is published, the honest party may need to pay 18KvB*20sat/vB = 360,000sat in fees to replace it, which might be much higher than what they want to pay for a CPFP of their more recent channel state.
    • The honest party is able to grab those funds with the revocation key if this tx confirms, but it might just sit in the mempool and be hard to replace.
    • You’re saying that since this is still possible in a v3 world, v3 is not effective in pinning prevention.
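
    The fee arithmetic in the bullets above can be checked with a short sketch (using the thread’s rounded figures):

```python
# Replacement cost of the pinned revoked state, per the scenario above:
# an ~18k vB revoked commitment sitting at a 20 sat/vB mempool backlog.
revoked_vsize = 18_000      # vbytes (483 HTLC outputs + base tx, rounded)
backlog_feerate = 20        # sat/vB
replacement_fee = revoked_vsize * backlog_feerate
print(replacement_fee)      # 360_000 sats, matching the figure above
```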

    Is this a pinning issue or a problem with how they negotiated the max htlcs? It’s one thing for an attacker to attach descendants to the shared transaction and take advantage of the permissive limit. It’s another thing to willingly sign a huge transaction with somebody and be surprised later if they broadcast it.

    Yes max_offered_htlc chan policy param can be introduced at the LN-level to bound more commitment size.

    Is that different from max_accepted_htlcs? Or do you mean a hard limit within the protocol? https://github.com/lightning/bolts/blob/8a64c6a1cef979b3f0cecb00ba7a48c2d28b3588/02-peer-protocol.md?plain=1#L159-L162

    That seems to be the solution. There isn’t anything we can do at the mempool policy level to prevent conflicting transactions from being very different sizes, except to decrease the maximum standard transaction size.

  158. t-bast commented at 2:04 pm on January 11, 2024: contributor

    Another thing to note about this attack is that the attacker would be losing funds in the process if the attacked node simply spends the revoked commitment’s outputs (instead of trying to replace it). Whenever one side publishes a revoked commitment, they will lose any funds that they had in the channel (which is at least their channel reserve, thus 1% of the channel). So the attacked node would pay a large on-chain fee, but would also earn the funds that the attacker loses. In the worst case scenario, it means that both sides lose money if the fee is larger than the attacker’s channel balance (which may be an acceptable scorched earth strategy).

    Lightning node operators should use sensible values for their channel parameters (such as max_accepted_htlcs) to ensure that the attacker has more to lose if they publish a revoked commitment (that’s why eclair’s default value for max_accepted_htlcs has been 30 for years, which is much lower than 483).

  159. ariard commented at 3:30 am on January 12, 2024: member

    I realized after posting the quick thoughts yesterday that some replacement tactics significantly alter the v3 anti-pinning mitigation to the advantage of the adversary’s economics, so here is one of the most adversarial pinning scenarios I can come up with.

    Assume the attacker strategy is a rule-3 based pinning targeting the double-spend of forwarded HTLCs over a target LN routing node. Let’s call them Alice and Mallory. Assume the channel is 1_000_000 sats, max_htlc_value_in_flight=50% (i.e. 500_000 sats), cltv_expiry_delta=144, all liquidity balance is on Alice’s side, and Mallory’s channel_reserve is non-existent (inbound liquidity channel bought out-of-band). Mallory’s topology setup cost has been 1000 sats in a period of low fees. Assume nversion=3 with the 1-child / 1k-vbyte limit and its support at the LN level.

    Here the sequence of actions:

    1. Mallory routes 483 HTLCs over Alice, getting a local revoked state of size 21k vbytes at 20 sats/vbyte
    2. Mallory waits for a mempool backlog of depth 144 blocks with feerates above 20 sats/vbyte
    3. The network mempool rolling min fee is at 5 sats/vbyte, assuming default maxmempool size
    4. Mallory routes an HTLC of amount 440_000 satoshis over Alice from a puppet inbound node
    5. Mallory broadcasts in network mempools the revoked state, 21k vbytes at 20 sats/vbyte + a 1k-vbyte child at 20 sats/vbyte
    6. Alice cannot over-pay in absolute fees to replace Mallory’s state due to the “max at stake” of 440k sats
    7. The mempool backlog of 144 blocks is swallowed by block confirmations; the inbound HTLC expires
    8. Mallory first replaces her malicious state with a 1k-vbyte commitment tx at 10 sats/vbyte + a 1k-vbyte child at 400 sats/vbyte (+ RBF penalty)
    9. Mallory replaces this high-fee CPFP on a “parallel” pinning state and repeats the “parallel-CPFP” trick N times
    10. Mallory double-spends the inbound HTLC with a commitment_tx + HTLC-timeout package of size 1k at 20 sats/vbyte

    The per-channel attacking costs for Mallory are the following: a) chan topology setup = 1000 sats, b) inbound chan double-spend package cost = 20k sats, c) outbound chan commitment tx cost = 10k sats.

    Per-channel total cost: 31k sats.

    There is an overall CPFP cost of 440k sats + N * rbf_penalty * 1k. Assuming N = 10 and RBF penalty = 1 sat/vbyte, the overall CPFP cost is 450k sats.

    The per-channel attacker gain for Mallory is 440k sats.

    The total gain for Mallory is given by the following equation: 440_000 * 10 - (31_000 * 10 + 450_000), i.e. 3,640k sats.
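
    The arithmetic above can be re-derived in a short sketch (figures copied from the scenario as stated; this is an accounting check, not an endorsement of the attack model):

```python
# Re-deriving the attack economics above for N = 10 parallel channels
# (all amounts in sats, taken from the figures in the scenario).
N = 10
per_channel_cost = 1_000 + 20_000 + 10_000  # setup + inbound double-spend + outbound commitment = 31k
rbf_penalty = 1 * 1_000                     # 1 sat/vB on a 1k vB child, per replacement
cpfp_cost = 440_000 + N * rbf_penalty       # overall CPFP cost = 450k
gain_per_channel = 440_000                  # stolen HTLC amount per channel

total_gain = gain_per_channel * N - (per_channel_cost * N + cpfp_cost)
print(total_gain)                           # 3_640_000 sats
```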

    Assuming the above demonstration is correct, by running “parallel” pinning against an uncoordinated set of target LN routing nodes, an attacker can realize a substantial gain, incentivizing attempts at pinning attacks at scale, even in a world where the transaction-relay network of nodes supports nversion=3.

    I think an alternative solution like “replace-by-feerate” suffers from the same type of “parallel” package tampering weakness, where an adversary can “adaptively” plug in high-feerate malicious packages, then dismember them to block the confirmation of an LN routing node aiming to HTLC-timeout offered outputs on its commitment transaction.

    I believe the most robust pinning solution is taking the long-term direction of consensus changes, where a) commitment transaction weight is constant - all HTLCs encapsulated in a covenant tree - and b) the feerate is set by an IN_OUT_AMOUNT opcode where the fee value is self-contained in the spending commitment transaction input. Fee bidding among a set of trust-minimized off-chain counterparties would be a monotonically increasing process happening in public network mempools. This would remove package malleability and prevent pinning / replacement-cycling attack games. While this is yet to be demonstrated, I think it generalizes well to all time-sensitive L2s.

    I think the concern about mining income asymmetries favored by generalized CPFP usage is valid. However, whatever we’re finally doing as protocol devs in Bitcoin Core or any base-layer implementation, this does not prevent each mining pool from deploying their own “anti-pinning nversion=3” service, as we’re seeing with “transaction accelerators”. I can see LN node operators adopting such a new service if implemented correctly by mining pools / miners (this is a big assumption); it would provide minimal protection in the face of naive pinning adversaries (i.e. the ones who don’t have the domain expertise to execute the variation of the attack exposed above). As a historical note, we weighed mining-level pinning mitigations years ago and disregarded them due to concerns about mining decentralization and incentive misalignment, if your LN counterparty is a miner too.

    Finally, I’m ~0 on continuing or not to implement and deploy nversion=3 as a default mempool policy rule over the network. I just fear LN folks will go in for years of integrating a frail mechanism in their software, while in the meantime hitting a wormhole of issues related to the heterogeneity of node policies / resources and use-case incentives, like we have seen with full-rbf.

  160. ariard commented at 3:47 am on January 12, 2024: member

    Is this a pinning issue or a problem with how they negotiated the max htlcs? It’s one thing for an attacker to attach descendants to the shared transaction and take advantage of the permissive limit. It’s another thing to willingly sign a huge transaction with somebody and be surprised later if they broadcast it.

    See the attack scenario laid out more clearly.

    Is that different from max_accepted_htlcs? Or do you mean a hard limit within the protocol? https://github.com/lightning/bolts/blob/8a64c6a1cef979b3f0cecb00ba7a48c2d28b3588/02-peer-protocol.md?plain=1#L159-L162

    max_accepted_htlcs is a limit on the number of inbound HTLCs. max_offered_htlc would be some new BOLT channel policy parameter to limit the number of outbound HTLCs; the idea has floated a few times among LN folks to bound the max commitment transaction size.

    That seems to be the solution. There isn’t anything we can do at the mempool policy level to prevent conflicting transactions from being very different sizes, except to decrease the maximum standard transaction size.

    Yes, there is nothing at the mempool policy level we can do to limit conflicting transaction sizes and prevent this level of “package malleability” from being exploited by the adversary. I think this is one of the key issues.

    Another thing to note about this attack is that the attacker would be losing funds in the process if the attacked node simply spends the revoked commitment’s outputs (instead of trying to replace it). Whenever one side publishes a revoked commitment, they will lose any funds that they had in the channel (which is at least their channel reserve, thus 1% of the channel). So the attacked node would pay a large on-chain fee, but would also earn the funds that the attacker loses. In the worst case scenario, it means that both sides lose money if the fee is larger than the attacker’s channel balance (which may be an acceptable scorched earth strategy).

    See the math above; it works with the latest valid commitment routing at max_accepted_htlc / a hypothetical max_offered_htlc, and I think you can arrange to have a 0 channel_reserve thanks to an inbound liquidity buy (though a non-standardized mechanism). Bounding LN package weight further is a payment-throughput trade-off, and in the end an adversary can “batch” the attack to be “break-even”.

    Yes, you can adopt nversion=3 parameters and channel policy parameters so restrictive as to have a single HTLC output and a single CPFP of the minimal size, though a) the adversary can still “jump” the CPFP over many “parallel” pinning attacks, assuming the adversary has a superior mempool view to its targets and can choose when to forward HTLCs and have their cltv_expiry_delta locks over a path, and b) it breaks every time we upgrade channel types for new segwit versions or whatever and the corresponding witness size.

  161. t-bast commented at 8:47 am on January 12, 2024: contributor

    max_accepted_htlcs is a limit on the number of inbound HTLCs. max_offered_htlc would be some new BOLT-channel policy parameter to limit the number of outbound HTLCs, the idea has floated few times among LN folks to bind max commitment transaction size.

    This new max_offered_htlc parameter is unnecessary. An HTLC sender unilaterally decides whether they send HTLCs or not, and can ensure that they never send more outgoing HTLCs than their max_accepted_htlcs. That has always been eclair’s behavior by the way: our default is max_accepted_htlcs = 30, we have many peers that set their max_accepted_htlcs = 483, but we never offer them more than 30 HTLCs.

    I believe the most-robust pinning solution is taking the long-term direction of consensus changes, where a) commitment transaction weight should be constant - all HTLC encapsulated in a covenant-tree and b) feerate set by an IN_OUT_AMOUNT opcode where the fee value is self-contained from spending commitment transaction input.

    This would indeed be a potential longer term solution (I believe that Eltoo does something similar), but as it requires a soft-fork, this is much longer term than mitigating this issue now (and I still believe v3/EA does improve this issue, because the attack you’re describing most likely also works without v3 and costs even more for the attacked node).

    1. Mallory routes a HTLC over Alice from a puppet inbound node of amount 440_000 satoshis

    I don’t understand in which direction this HTLC is flowing (the from and inbound seem contradictory), and I’m not sure whether the HTLCs you described in step 1 were fulfilled or failed. I’m failing to understand your attack scenario: many details are missing in my opinion (which HTLCs are failed/fulfilled, and at what step?). Can you provide a more thorough description of every step so that we can analyze this better?

  162. [refactor] change Workspace::m_conflicts and adjacent funcs/structs to use Txid
    It's preferable to use type-safe transaction identifiers to avoid
    confusing txid and wtxid. The next commit will add a reference to this
    set; we use this opportunity to change it to Txid ahead of time instead
    of adding new uses of uint256.
    158623b8e0
  163. glozow force-pushed on Jan 16, 2024
  164. DrahtBot added the label CI failed on Jan 16, 2024
  165. ariard commented at 2:06 am on January 18, 2024: member

    This new max_offered_htlc parameter is unnecessary. An HTLC sender unilaterally decides whether they send HTLCs or not, and can ensure that they never send more outgoing HTLCs than their max_accepted_htlcs. That has always been eclair’s behavior by the way: our default is max_accepted_htlcs = 30, we have many peers that set their max_accepted_htlcs = 483, but we never offer them more than 30 HTLCs.

    Good if eclair is already limiting the number of outgoing HTLCs. I dont know if all other implementations are doing it.

    This would indeed be a potential longer term solution (I believe that Eltoo does something similar), but as it requires a soft-fork, this is much longer term than mitigating this issue now (and I still believe v3/EA does improve this issue, because the attack you’re describing most likely also works without v3 and costs even more for the attacked node)

    I know. I just think we’re not able to give precise anti-pinning safety bounds to v3 because of a) cltv_expiry_delta selection and b) uncertainty on the mempool backlog. See my next analysis comment.

    I don’t understand in which direction this HTLC is flowing (the from and inbound seem contradictory), and I’m not sure whether the HTLCs you described in step 1 were fulfilled or failed. I’m failing to understand your attack scenario: many details are missing in my opinion (which HTLCs are failed/fulfilled, and at what step?). Can you provide a more thorough description of every step so that we can analyze this better?

    The topology is Mallet - Alice - Mallory, where Mallet is the puppet inbound node. The HTLC on the Alice-Mallory link is fulfilled after the pinning succeeds. The HTLC on the Mallet-Alice link is failed, thus double-spent. There is a first wave of routed HTLCs to get a max-size commitment tx, then only 1 routed HTLC, as a revoked state is used to pin. Note, you can use a valid commitment state: with 482 dust-threshold HTLCs + 1 and one HTLC at 440k sats you’ll double-spend.

  166. ariard commented at 2:39 am on January 18, 2024: member

    Let’s do the anti-pinning analysis with the most restrictive package vbyte size:

    • 1 commitment tx 268 bytes
    • 1 child of 1000 bytes

    Package size: 1268 bytes.

    This is assuming a commitment tx with only 1 HTLC output at max.

    With a mempool backlog superior to 20 sat/byte and depth=cltv_expiry_delta, one can pin a 25k sats payment. Adversary cost: 5360 (commitment tx fee) + 1000 (child RBF penalty) + 43 (channel opening output) = 6403 satoshis. Adversary gain (per-channel): 18597 satoshis.

    If we reduce the size of v3 CPFP to 200 bytes, we have adversary gain (per-channel): 3000 sats.

    The main issue with v3 is that a node operator always stays exposed to a high-fee mempool backlog creating opportunistic pinning exploitations for an adversary. With a mempool backlog superior to X (the sat/vbyte) and Y (the MB depth of transactions superior to X - which must be superior to the target cltv_expiry_delta), the remaining pinning exposure stays at “max package size” * X. E.g. if cltv_expiry_delta=144, a mempool backlog of depth block_size * 144 superior to 20 sat/byte, and a max package size of 4k bytes, your pinning exposure is ~80k sats.

    As liquidity is a scarce resource, LN node ops have an incentive to scale down cltv_expiry_delta from 144, or at least keep it bounded in case the channel goes to chain. This lowers the likelihood of the mempool environment fulfilling the opportunistic backlog conditions for pinning attackers to target LN nodes. Bounding the number of HTLCs per channel reduces the LN node’s payment throughput; while max_accepted_htlc values of dozens of HTLCs are rarely reached in practice, this will be less certain in the future.

    Of course, a LN node operator might run mempool backlog predictions, and from this statically or dynamically adapt its max pinning exposure (by playing on max_accepted_htlc and cltv_expiry_delta). However, if the prediction fails and once you have HTLCs locked on payment paths, you’re irremediably exposed to pinning attacks, potentially overriding your max acceptable “pinning exposure” (like we have for dust with max_dust_htlc_exposure).

    So yes v3 reduces pinning exposure (in the absolute). However no, it doesn’t solve all the pinning scenarios in my opinion, and it’s a weak solution if you have medium to advanced attackers that can predict mempool backlogs and opportunistic conditions of pinning exploitation better than the average LN node. I’m neutral on v3 deployment, I just fear this is not a pinning mitigation which is going to age well…

  167. petertodd commented at 4:48 am on January 18, 2024: contributor

    However no, it doesn’t solve all the pinning scenarios in my opinion

    Notably, from the discussion I see on delvingbitcoin, it appears that V3 transactions are not intended to be used for HTLC-x transactions. Which means they’ll remain pinnable, due to the SIGHASH_ANYONECANPAY signatures, allowing anyone to turn them into large, pinned, high fee/low fee-rate transactions by adding additional inputs (and outputs).

  168. t-bast commented at 2:39 pm on January 18, 2024: contributor

    Which means they’ll remain pinnable, due to the SIGHASH_ANYONECANPAY signatures, allowing anyone to turn them into large, pinned, high fee/low fee-rate transactions by adding additional inputs (and outputs).

    I may be missing something here, can you describe in more details how anyone could turn them into large pinning txs? They do require signatures from both participants, and one of them is going to be SIGHASH_ALL. So the only participant that may play pinning games is the owner of the HTLC-x transaction. The only issue I could see with that is if they’re doing this with HTLC-success txs while eclipsing their peer to prevent them from learning the preimage. As detailed here, that can be mitigated by various mechanisms and the attacker has a risk of losing funds, so I don’t think this should be related in any way to v3?

  169. glozow force-pushed on Jan 18, 2024
  170. t-bast commented at 2:58 pm on January 18, 2024: contributor

    Thanks for the clarification @ariard, I think I understand your scenario better. But there’s one point I don’t understand: in step 6 (Alice cannot over-pay in absolute fees to replace Mallory state due to "max at stake" 440k sats), Alice doesn’t want to replace Mallory’s revoked commitment, she wants it to confirm to spend all the output to herself?

    It’s easy for Alice to make the revoked commitment confirm, she can use any output of that transaction to CPFP and she doesn’t need to add any of her wallet inputs, she can just use the channel funds for that.

    Alice can then use a low feerate to spend all the HTLC outputs of the confirmed revoked commitment, because to get them back, Mallory would have to get her HTLC-x transaction confirmed (which costs fees to Mallory) and wait for the to_self_delay to broadcast her 3rd-stage tx (at which point Alice would simply spend the output of the HTLC-x transaction to herself through the revocation path).

    So the only issue I see here is that it may be uneconomical to sweep funds from a commitment transaction that has many small HTLC outputs, but that’s a more general trade-off of lightning and is why node operators should tweak their max_accepted_htlcs accordingly?

  171. DrahtBot removed the label CI failed on Jan 18, 2024
  172. petertodd commented at 4:54 pm on January 18, 2024: contributor

    I may be missing something here, can you describe in more details how anyone could turn them into large pinning txs? They do require signatures from both participants, and one of them is going to be SIGHASH_ALL.

    Thanks. Yes, that’s a mistake on my part and you are correct. I hid my comment as “outdated” to reflect that.

  173. ariard commented at 5:58 pm on January 18, 2024: member

    It’s easy for Alice to make the revoked commitment confirm, she can use any output of that transaction to CPFP and she doesn’t need to add any of her wallet inputs, she can just use the channel funds for that.

    No, you can’t spend in-mempool HTLC outputs with the 1 CSV post-anchor. Assuming no 1 CSV, yes in my scenario. However, the attacker can construct a revoked or valid pinning state with the 483 outputs as inbound HTLCs to inflate. Alice has no preimage to spend them. Only the anchor output is available, and liquidity has to be brought from fee-bumping reserves to replace the max at stake of 440k sats. Yes, you can honestly blind-CPFP at the max at stake and have no more pinning issues, if a) you maintain that level of fee-bumping reserves and b) you accept a wormhole of fee-griefing issues. Fee griefing is no good if counterparties are miners or concurrent LSPs that wish to push you out of the market.

  174. t-bast commented at 9:17 am on January 19, 2024: contributor

    No you can’t spend in-mempool htlc outputs 1 CSV post-anchor. Assuming no 1 CSV, yes in my scenario.

    Yes, I’m analyzing this with the changes related to v3 in mind, in which we will remove the CSV 1.

    However attacker can construct revoked or valid pinning state to have the 483 outputs as inbound HTLC to inflate. Alice no preimage to spend them.

    But Alice doesn’t need the preimage to spend them, this is a revoked commitment, she spends through the revocation path. Unless you’re thinking of a different scenario this time? If that’s the case, please elaborate the scenario precisely, otherwise we can’t really assess whether it’s a real issue or not…

  175. glozow commented at 5:08 pm on January 19, 2024: member
    CI is green btw :green_circle:
  176. in doc/policy/version3_transactions.md:89 in f3d4916eac outdated
    84+attaching a CPFP at broadcast time ("anchor output"). Without package RBF, multiple anchor outputs
    85+would be required to allow each counterparty to fee-bump any presigned transaction. With package
    86+RBF, since the presigned transactions can replace each other, 1 anchor output is sufficient.
    87+
    88+5. A V3 transaction that has an unconfirmed V3 ancestor cannot have a sigop-adjusted virtual size
    89+   larger than 1000vB.
    


    ajtowns commented at 2:07 am on January 22, 2024:

    It might be interesting to relax these rules to be:

    • A v3 transaction can have a max descendent count of 5 (not 2)
    • A v3 transaction with an ancestor count of two or more can have a maximum (sigop-adjusted) descendant size of 1000vb
    • A v3 transaction can have at most 1 unconfirmed parent (rather than limiting ancestor count to at most 2)

    (where ancestor/descendent size/count includes itself)


    glozow commented at 8:55 am on January 22, 2024:
    Why 5?

    ajtowns commented at 9:53 am on January 22, 2024:

    It sounded good. 7 might be better: a tx that spends and creates an ephemeral anchor, as well as spending and creating a p2tr output (funding for fees plus change) should be about 164 vbytes, and 1000vb/164vb is about 6, for 7 total. Or you could just not limit the descendent count directly, but rely on the 1000vb limit to do it – even with minimal 65vb txs, you’d only get 15 descendents in 1000vb.

    NB: Just suggesting this as something that might be worth exploring, I don’t think it should be included in this PR.

  177. in doc/policy/version3_transactions.md:73 in f3d4916eac outdated
    68+underestimation because D can bump C without B's help. This is resolved if V3 transactions can only
    69+have V3 ancestors, as then C cannot have another child.
    70+
    71+*Note*: This rule is enforced during reorgs.
    72+
    73+4. A V3 transaction cannot have more than 1 unconfirmed descendant.
    


    ajtowns commented at 2:13 am on January 22, 2024:
    Perhaps this should be “An unconfirmed V3 transaction cannot have more than 1 unconfirmed descendant” ?
  178. ajtowns commented at 2:25 am on January 22, 2024: contributor

    I find myself continually forgetting what the v3 rules are (particularly vs the ephemeral anchor rules, package relay rules, cluster mempool rules, or potential v4+ rules). It would be nice to have a really concise reference, that leaves the rationales/comparisons to an appendix or something. I think when stripped of historical baggage, it only needs to be:

    unconfirmed v3 transactions:

    • are always subject to rbf rules
    • can have at most one unconfirmed v3 ancestor or one unconfirmed v3 descendent, but not both (may have many confirmed ancestors, may not have any unconfirmed v2 ancestors or descendents)
    • a v3 descendent is limited to 1000vb
    • if the tx has a descendent, only the combination is subject to mempool minfee rules

    (which is only really 4 rules, rather than 7, since it combines 2/3/4/6 into a single rule)

  179. in doc/policy/version3_transactions.md:86 in f3d4916eac outdated
    81+
    82+*Rationale*: (lower bound) at least 1 descendant is required to allow CPFP of the presigned
    83+transaction. The contract protocol can create presigned transactions paying 0 fees and 1 output for
    84+attaching a CPFP at broadcast time ("anchor output"). Without package RBF, multiple anchor outputs
    85+would be required to allow each counterparty to fee-bump any presigned transaction. With package
    86+RBF, since the presigned transactions can replace each other, 1 anchor output is sufficient.
    


    ajtowns commented at 4:26 am on January 22, 2024:

    This seems underspecified: what’s the behaviour if you have a v3 transaction in the mempool, with two or more outputs, one of which is spent by another v3 in-mempool transaction, and another (extremely high fee) transaction comes in, spending the other output?

    Is it first-come-first-served, making descendent-count-pinning much easier? Or is it “we’ll swap the old for the new if it has a better feerate diagram”? Or something else?


    glozow commented at 9:19 am on January 22, 2024:

    This seems underspecified: what’s the behaviour if…

    The behavior is what the existing RBF rules say (so no sibling eviction). If there’s going to be 2 anchors I feel like v3 is not the right version to use.

    is it “we’ll swap the old for the new if it has a better feerate diagram”?

    I think sibling eviction can be considered, and feerate diagram comparisons as well. But the idea of splitting up #25038 was to decouple v3 topology restrictions from the (package) RBF rules that would be possible as a result of those restrictions. Because

    • These RBF rules can be considered without the v3 topology rules changing. For a user, what’s relevant is likely that they can’t use additional unconfirmed inputs and can do package RBF. But whether the rule is based on ancestor score vs feerate diagram isn’t really something that would change their behavior.
    • Some of the (package) RBF ideas aren’t limited to v3 (rather just limited to cluster size 2 replacements). And after cluster mempool, those rules might just apply to all transactions. Whereas v3 would still have the same topology restrictions even if ancestor/descendant limits were replaced with cluster limits.

    ajtowns commented at 9:57 am on January 22, 2024:
    Adding an additional child isn’t an RBF event though – the children don’t “conflict” per se as they don’t spend the same outputs – so I don’t think this is covered by RBF rules at all? It’s fine if that’s the behaviour, and fine if it’s relaxed later; I just think it should just be documented explicitly here.

    glozow commented at 10:50 am on January 22, 2024:
    Ok, I’ll clarify that a second child is rejected regardless of fee(rate).

    glozow commented at 2:44 pm on January 24, 2024:
    fwiw I’ve opened #29306 which implements the alternative approach. Again I’m trying to keep the scope of this PR to the 1p1c topology restriction, and we can consider additional features like sibling eviction independently.
  180. in src/kernel/mempool_removal_reason.cpp:14 in c6c2063963 outdated
    10@@ -11,7 +11,7 @@ std::string RemovalReasonToString(const MemPoolRemovalReason& r) noexcept
    11 {
    12     switch (r) {
    13         case MemPoolRemovalReason::EXPIRY: return "expiry";
    14-        case MemPoolRemovalReason::SIZELIMIT: return "sizelimit";
    15+        case MemPoolRemovalReason::SIZELIMIT: return "sizelimit or <=0 fee";
    


    ajtowns commented at 9:11 am on January 22, 2024:

    Wouldn’t it be better to distinguish these two reasons, ie introduce a ZERODESCFEE reason or so? Something like:

    bool over_size = DynamicMemoryUsage() > sizelimit;
    bool zerodescfee = (!over_size) && min_relay_fee > 0 && it->ModFeeWithDesc() <= 0;
    if (!over_size && !zerodescfee) break;

    ...

    RemoveStaged(stage, false, over_size ? SIZELIMIT : ZERODESCFEE);
    

    glozow commented at 3:28 pm on January 23, 2024:
    added
  181. in src/validation.cpp:778 in e721f4cad4 outdated
    776-                if (!m_pool.m_full_rbf && !SignalsOptInRBF(*ptxConflicting)) {
    777+                // All V3 transactions are considered replaceable.
    778+                //
    779+                // Replaceability signaling of the original transactions may be
    780+                // ignored due to node setting.
    781+                if (!m_pool.m_full_rbf && !SignalsOptInRBF(*ptxConflicting) && ptxConflicting->nVersion != 3) {
    


    ajtowns commented at 10:01 am on January 22, 2024:

    Maybe time to do:

    const bool allow_rbf = ...;
    if (allow_rbf) { ... }
    

    glozow commented at 3:28 pm on January 23, 2024:
    done
  182. glozow commented at 10:33 am on January 22, 2024: member

    I find myself continually forgetting what the v3 rules are (particularly vs the ephemeral anchor rules, package relay rules, cluster mempool rules, or potential v4+ rules). It would be nice to have a really concise reference, that leaves the rationales/comparisons to an appendix or something. I think when stripped of historical baggage, it only needs to be: …

    I’ve consolidated the rules in the BIP (https://github.com/bitcoin/bips/pull/1541), agree it’s easier to understand this way.

    I’ll delete or clean up the version3_transactions.md doc here now that the information has been moved to the BIP. We can create a more concise reference that includes the various components, though I’m not sure how much of that should be in this PR. Perhaps somewhere more live like the devwiki for now, while things are still wip?

  183. in src/policy/v3_policy.h:21 in a46e8036ce outdated
    16+#include <string>
    17+
    18+// This module enforces rules for transactions with nVersion=3 ("v3 transactions") which help make
    19+// RBF abilities more robust.
    20+
    21+// v3 only allows 1 parent and 1 child.
    


    ismaelsadeeq commented at 11:55 am on January 22, 2024:

    Indicate that the restrictions is only for unconfirmed v3 transaction here and other places.

    // v3 only allows 1 unconfirmed parent and 1 unconfirmed child.
    

    glozow commented at 3:22 pm on January 23, 2024:
    done
  184. ismaelsadeeq commented at 12:56 pm on January 22, 2024: member

    I’ve consolidated the rules in the BIP (bitcoin/bips#1541), agree it’s easier to understand this way.

    I’ve looked at the BIP and this implementation matches the specification in the BIP

    I’ll delete or clean up the version3_transactions.md doc here now that the information has been moved to the BIP.

    +1

    Changes since my last review

  185. glozow force-pushed on Jan 23, 2024
  186. glozow force-pushed on Jan 23, 2024
  187. DrahtBot added the label CI failed on Jan 23, 2024
  188. glozow force-pushed on Jan 25, 2024
  189. glozow commented at 4:48 pm on January 25, 2024: member
    Last push incorporated a rework to PackageV3Checks from @sdaftuar (thanks), addressing a problem discussed in irc yesterday
  190. in src/policy/v3_policy.cpp:66 in e168e9f1be outdated
    61+            // Find the parent and extract the information we need for v3 checks.
    62+            int parent_version = 0;
    63+            // fixme: parent hash is txid, should return err strings with wtxid?
    64+            Txid parent_hash = Txid::FromUint256(uint256(0));
    65+            Wtxid parent_wtxid = Wtxid::FromUint256(uint256(0));
    66+            int other_mempool_descendants = 0;
    


    instagibbs commented at 5:44 pm on January 25, 2024:
    has_mempool_sibling since I’m thinking in reference to ptx and it’s a binary result?

    sdaftuar commented at 1:49 pm on January 26, 2024:

    Agreed, that makes sense. Something like

    bool has_mempool_sibling{false};
    ...
    has_mempool_sibling = (parent->GetCountWithDescendants() > 1);
    

    ?


    glozow commented at 5:11 pm on January 26, 2024:
    Done
  191. in src/validation.cpp:1314 in 79b8d8237a outdated
    1310@@ -1288,16 +1311,21 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
    1311                    [](const auto& tx) { return Workspace(tx); });
    1312     std::map<uint256, MempoolAcceptResult> results;
    1313 
    1314+
    


    instagibbs commented at 6:00 pm on January 25, 2024:
    random newline

    glozow commented at 5:11 pm on January 26, 2024:
    Deleted
  192. in src/validation.cpp:1319 in 79b8d8237a outdated
    1333@@ -1306,6 +1334,15 @@ PackageMempoolAcceptResult MemPoolAccept::AcceptMultipleTransactions(const std::
    1334         m_viewmempool.PackageAddTransaction(ws.m_ptx);
    1335     }
    1336 
    1337+    // At this point we have all in-mempool ancestors, and we know every transaction's vsize.
    


    instagibbs commented at 6:03 pm on January 25, 2024:
    can assert that package_with_ancestors.ancestor_counts.size() == workspaces.size()?

    glozow commented at 5:11 pm on January 26, 2024:
    Added
  193. in src/validation.cpp:1320 in 79b8d8237a outdated
    1315     LOCK(m_pool.cs);
    1316 
    1317     // Do all PreChecks first and fail fast to avoid running expensive script checks when unnecessary.
    1318+    PackageWithAncestorCounts package_with_ancestors{txns};
    1319     for (Workspace& ws : workspaces) {
    1320+        // This process is computationally expensive, so only do it when v3 rules are applicable.
    


    instagibbs commented at 6:03 pm on January 25, 2024:
    old comment?

    glozow commented at 5:11 pm on January 26, 2024:
    Deleted
  194. in src/validation.cpp:968 in 79b8d8237a outdated
    960@@ -946,6 +961,14 @@ bool MemPoolAccept::PreChecks(ATMPArgs& args, Workspace& ws)
    961     }
    962 
    963     ws.m_ancestors = *ancestors;
    964+    // Even though just checking direct mempool parents for inheritance would be sufficient, we
    965+    // check using the full ancestor set here because it's more convenient to use what we have
    966+    // already calculated.
    967+    CTxMemPool::setEntries collective_ancestors = *ancestors;
    968+    collective_ancestors.merge(ws.m_ancestors_of_in_package_ancestors);
    


    instagibbs commented at 6:08 pm on January 25, 2024:

    at the introductory commit, I don’t see how m_ancestors_of_in_package_ancestors is non-empty. unused?

    I suspect replaced by package_with_ancestors usage?


    sdaftuar commented at 2:50 pm on January 26, 2024:
    Yes I think this can be scrapped now – ApplyV3Rules only operates on a single transaction and its in-mempool parents; correct validation for packages should now all happen in PackageV3Checks.

    instagibbs commented at 2:58 pm on January 26, 2024:

    I fuzzed the PR with ApplyV3Rules turned off for AcceptMultipleTransactions cases and didn’t hit any issues, so I believe sufficient validation is contained within PackageV3Checks :partying_face:

    if that is indeed the case, we should be very clear in docs


    glozow commented at 5:10 pm on January 26, 2024:
    Added “Should be called for package transactions to fail more quickly” to the doc (as in: not must call but should call).

    glozow commented at 5:10 pm on January 26, 2024:
    Or I guess we could turn it off for AcceptMultiple?

    instagibbs commented at 5:36 pm on January 26, 2024:
    If we’re focusing on small packages for now, can we just disable it, rename the function something “obviously” single txn related, and then in future updates we can reintroduce the check as an optimization if we like?
  195. DrahtBot removed the label CI failed on Jan 25, 2024
  196. in src/validation.cpp:631 in 79b8d8237a outdated
    624@@ -622,6 +625,14 @@ class MemPoolAccept
    625         /** A temporary cache containing serialized transaction data for signature verification.
    626          * Reused across PolicyScriptChecks and ConsensusScriptChecks. */
    627         PrecomputedTransactionData m_precomputed_txdata;
    628+
    629+        /** Number of in-package ancestors, excluding itself. If applicable, populated at the start
    630+         * of each PreChecks iteration within AcceptMultipleTransactions. */
    631+        unsigned int m_num_in_package_ancestors{0};
    


    instagibbs commented at 6:11 pm on January 25, 2024:
    unused

    glozow commented at 5:08 pm on January 26, 2024:
    Cleaned up, thanks
  197. in src/validation.cpp:635 in 79b8d8237a outdated
    630+         * of each PreChecks iteration within AcceptMultipleTransactions. */
    631+        unsigned int m_num_in_package_ancestors{0};
    632+
    633+        /** The in-mempool ancestors of this transaction's in-package ancestors. If applicable,
    634+         * populated at the start of each PreChecks iteration within AcceptMultipleTransactions. */
    635+        CTxMemPool::setEntries m_ancestors_of_in_package_ancestors;
    


    instagibbs commented at 6:11 pm on January 25, 2024:
    unused
  198. instagibbs commented at 6:14 pm on January 25, 2024: member

    PreChecks loop looks so much cleaner, but there’s some dead code and like before I’m not super clear which checks are required in AcceptMultipleTransactions vs anti-DoS type checks. For example, does ApplyV3Rules matter inside AcceptMultipleTransactions?

    I’d like it to be crystal clear to not introduce regressions, and moving forward to cluster mempool we keep all this in mind for future simplification.

  199. instagibbs commented at 6:24 pm on January 25, 2024: member
    concept ACK, for the record
  200. in src/policy/v3_policy.cpp:99 in 5a5607193d outdated
    94+            if (parent_version != 3) {
    95+                return strprintf("v3 tx %s cannot spend from non-v3 tx %s",
    96+                                 ptx->GetWitnessHash().ToString(), parent_wtxid.ToString());
    97+            }
    98+
    99+            // The mempool or in-package parent cannot have any other in-mempool children.
    


    instagibbs commented at 7:42 pm on January 25, 2024:

    Was thinking about why this is ok. AcceptMultipleTransactions doesn’t allow RBF, and future package RBF should work with this?

                // The mempool or in-package parent cannot have any other in-mempool children.
                // We don't need to handle the "single RBF" case as that is
                // handled in ApplyV3Rules for single transaction packages.
                // Future package RBF would conflict with the parent as well,
                // making this check sufficient.
    

    sdaftuar commented at 2:04 pm on January 26, 2024:
    My view is that we’ll end up reworking things substantially in order to enable package RBF, but I agree with the code comment suggestion.

    instagibbs commented at 2:57 pm on January 26, 2024:
    @sdaftuar do you mean for #28984 or for generalized package RBF? I’m thinking shorter term here.

    sdaftuar commented at 3:13 pm on January 26, 2024:
    Oh right, yes I was thinking longer term.. never mind!

    glozow commented at 3:29 pm on January 26, 2024:

    I don’t think single conflict RBF carve out is ever relevant in v3 package RBF.

    If your mempool parent has another mempool child, it doesn’t matter if you have {RBF, sibling eviction, package RBF} within a package. Enumerating the possibilities in 1p1c:

                // The mempool or in-package parent cannot have any other in-mempool children. Since this is
                // a package, we don't need to check whether this can be a to-be-replaced sibling.
                // In a package, there must be a v3 violation even if a replacement is happening. Either:
                // - Package parent is replacing something and has both mempool parent and package child.
                // - Package child is replacing something and has both mempool parent and package parent.
    

    instagibbs commented at 4:17 pm on January 26, 2024:
    not sure I like this suggested text. anyone else want to give it a shot?

    glozow commented at 4:52 pm on January 26, 2024:
    I’ve made it “We don’t check whether the sibling is to-be-replaced (done in ApplyV3Rules) because that doesn’t apply in a package.”
  201. ariard commented at 10:04 pm on January 25, 2024: member

    Yes, I’m analyzing this with the changes related to v3 in mind, in which we will remove the CSV 1.

    I don’t know if it’s robust to remove the CSV 1 and potential interactions of invalidated chain of transactions with m_recent_rejects.

    But Alice doesn’t need the preimage to spend them: this is a revoked commitment, so she spends through the revocation path. Unless you’re thinking of a different scenario this time?

    Still the same scenario as laid out in my comment here: #28948 (comment). Pinning state might be 483 outbound HTLCs (or your max_accepted_htlc value) of amount 331 satoshis (dust limit + 1). 483 * 331 = ~159k sats, still less than the 440k sats of the targeted HTLC on the valid state. Of course, the 159k of value can now be used as a complement to an honest CPFP to over-fee-bump, so I see that more as a lowering of the fee-bumping reserve requirements (which is good). Not a solution to the pinning itself, where the problem lies in the economics being at the advantage of the attacker due to batch pinning.
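    For concreteness, the batch-pinning arithmetic in the comment above can be checked directly. All numbers come from the comment itself (483 HTLCs, an assumed 330-sat dust limit, a 440k-sat targeted HTLC); nothing here is from the PR’s code:

    ```python
    # Back-of-envelope check of the batch pinning economics described above.
    # 483 outbound HTLCs (the max_accepted_htlcs cap assumed in the comment)
    # at 331 sats each (assumed dust limit + 1), vs. a 440k-sat targeted HTLC.
    max_htlcs = 483
    htlc_value_sat = 331          # assumed dust limit + 1
    pinned_value_sat = max_htlcs * htlc_value_sat
    target_htlc_sat = 440_000

    print(pinned_value_sat)                    # 159873, the "159k" above
    print(pinned_value_sat < target_htlc_sat)  # True: pin costs less than the HTLC
    ```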

  202. glozow commented at 9:58 am on January 26, 2024: member

    I definitely appreciate the discussions about LN usage, but it would be best to take it to the Delving thread. Here’s the link for {switching, imbuing} {commitment, HTLC-X} transactions to use {v3, ephemeral anchors}: https://delvingbitcoin.org/t/lightning-transactions-with-v3-and-ephemeral-anchors/418/24

    Similarly, high level discussion about the proposal is best on the thread here: https://delvingbitcoin.org/t/v3-transaction-policy-for-anti-pinning/340/32.

    It’s getting more difficult to see the code review comments here, so trying to keep things on topic.

  203. in src/policy/v3_policy.cpp:49 in e168e9f1be outdated
    44+    const auto in_package_parents{FindInPackageParents(package_with_ancestors, ptx)};
    45+
    46+    // Now we have all ancestors, so we can start checking v3 rules.
    47+    if (ptx->nVersion == 3) {
    48+        // v3 transactions can have at most 1 unconfirmed parent
    49+        if (mempool_ancestors.size() + in_package_parents.size() > 1) {
    


    sdaftuar commented at 1:47 pm on January 26, 2024:
    For consistency with the rest of the code relating to the v3 rules, perhaps we should use the V3_ANCESTOR_LIMIT variable here, and use the static_assert like in ApplyV3Rules to indicate to code readers that this only works because the limit is 1?

    glozow commented at 5:11 pm on January 26, 2024:
    Done
  204. sdaftuar commented at 2:54 pm on January 26, 2024: member

    I haven’t yet reviewed the tests – still plan to do that.

    FYI, despite my earlier comment suggesting we resurrect #27018, after giving this further thought I realized we don’t need to worry about the issue of 0-fee transactions in the mempool at all until we get to the point that we’re doing package relay/validation from the p2p network, and potentially even then we may not need such behavior (as there’s just a free-relay concern that is easy to mitigate). So in the interests of narrowing the scope of this PR to make review/discussion easier, I think it would make sense to drop the commits relating to eviction of <=0 fee transactions for now.

  205. [rpc] return full string for package_msg and package-error 9a29d470fb
  206. glozow force-pushed on Jan 26, 2024
  207. glozow force-pushed on Jan 26, 2024
  208. in test/functional/mempool_accept_v3.py:248 in 46eacf922b outdated
    243+        assert_equal(result['package_msg'], f"v3-violation, tx {tx_v3_child_multiparent['wtxid']} would have too many ancestors")
    244+        self.check_mempool([])
    245+
    246+        self.check_mempool([])
    247+        result = node.submitpackage([tx_v3_parent_normal["hex"], tx_v3_child_heavy["hex"]])
    248+        assert_equal(result['package_msg'], f"v3-violation, v3 child tx {tx_v3_child_heavy['wtxid']} is too big: 1005 > 1000 virtual bytes")
    


    instagibbs commented at 4:49 pm on January 29, 2024:
    1005 is pretty magical, though I realize we have no way of actually computing vsize of a child not in mempool. sad!

    glozow commented at 11:50 am on January 30, 2024:
    Not sure if this was a joke :sweat_smile: I’ve changed it to use tx_v3_child_heavy['tx'].get_vsize() now and added a comment for why we just use weight. Is that something that sometimes doesn’t work?

    instagibbs commented at 1:22 pm on January 30, 2024:
    ok that’s better even if it’s not sigops adjusted(?) thanks!

    glozow commented at 10:38 am on January 31, 2024:
    yeah not sigops adjusted, but we don’t expect that to be necessary here so I think it’s fine with a comment
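    For readers following the vsize discussion above: plain BIP141 vsize is the ceiling of weight/4, while the mempool’s sigop-adjusted vsize also lets the sigop cost dominate. A rough sketch of the computation (modeled on Core’s `GetVirtualTransactionSize`; the default of 20 bytes per sigop is an assumption here, not something stated in this thread):

    ```python
    def sigop_adjusted_vsize(weight: int, sigop_cost: int,
                             bytes_per_sigop: int = 20) -> int:
        """Sketch of sigop-adjusted virtual size: ceil(weight/4), unless the
        scaled sigop cost is larger than the weight."""
        return (max(weight, sigop_cost * bytes_per_sigop) + 3) // 4

    # With no sigop adjustment, a 4017-weight tx has vsize 1005 -- the value
    # that trips the 1000 vB v3 child limit in the functional test above.
    print(sigop_adjusted_vsize(4017, 0))    # 1005
    # When sigop cost dominates, the adjusted vsize exceeds ceil(weight/4).
    print(sigop_adjusted_vsize(400, 100))   # 500
    ```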
  209. in src/policy/v3_policy.cpp:84 in 46eacf922b outdated
    79+            } else {
    80+                // Ancestor must be in the package. Find it.
    81+                auto &parent_index = in_package_parents[0];
    82+                // If the in-package parent has mempool ancestors, then this is
    83+                // a v3 violation.
    84+                if (package_with_ancestors.ancestor_counts[parent_index] > 0) {
    


    instagibbs commented at 4:56 pm on January 29, 2024:

    this doesn’t have coverage in functional or unit tests, though I’m not sure this can be hit?

    TxA(in mempool) -> TxB -> TxC

    this is the minimal topology, right? However, this condition will trigger failure at https://github.com/bitcoin/bitcoin/pull/28948/files#diff-a19c07329799a164e1e3bd89e7cd07f1a8e5e97b88f2b24c9bef17df04479c74R53


    sdaftuar commented at 6:44 pm on January 29, 2024:

    I believe you’re correct that this code is not reachable in a functional test, but the reason is that when PV3C(TxB) is invoked, we’ll notice that it is a v3 child and that it has an in-package descendant, and fail further down (https://github.com/bitcoin/bitcoin/pull/28948/commits/965a23f33404b5e3cac41b7c4e17298e96ff25bf#diff-a19c07329799a164e1e3bd89e7cd07f1a8e5e97b88f2b24c9bef17df04479c74R118-R120).

    There is a test for this topology in the functional tests, see mempool_accept_v3.py here: https://github.com/bitcoin/bitcoin/pull/28948/commits/9a9ed3d500bcb4cccc6f780ce37ace53c6b85585#diff-15a1888c9151fc1d182c23e34b71d691f70df448bceb9eb78c8296f18854b6a3R282-R284

    (Hmm now I’m trying to remember why I thought we needed this check at all! Would be simpler if we dropped all these ancestor counts?)


    glozow commented at 11:49 am on January 30, 2024:
    Indeed! I’ve removed the duplicate check
  210. in src/policy/v3_policy.cpp:104 in 46eacf922b outdated
     99+
    100+            // The mempool or in-package parent cannot have any other in-mempool children. We don't
    101+            // check whether the sibling is to-be-replaced (done in ApplyV3Rules) because that
    102+            // doesn't apply in a package.
    103+            if (has_mempool_sibling) {
    104+                return strprintf("tx %u would exceed descendant count limit", parent_wtxid.ToString());
    


    instagibbs commented at 5:01 pm on January 29, 2024:

    this doesn’t have functional or unit test coverage, and this also seems difficult to hit in functional tests.

    I think it can be hit in unit tests by creating the necessary in-mempool parent-child chain, then calling PackageV3Checks with just the single tx.


    glozow commented at 11:49 am on January 30, 2024:
    Removed the duplicate check, nice

    instagibbs commented at 8:27 pm on January 30, 2024:
    double-checking: are we sure we can’t hit this? @sdaftuar

    glozow commented at 11:19 am on January 31, 2024:

    I think indeed we cannot hit this, but it’s because we call ApplyV3Rules in a package.

    When the package is a parent+child, has_mempool_sibling doesn’t matter, since having any mempool ancestor causes this to fail immediately. In that way, this function kind of assumes that the package is a connected component.

    If package has unrelated transactions, we’d let a mempool sibling slip through in this function. However, since we are calling ApplyV3Rules within PreChecks, the mempool sibling is caught there. So I guess the “not strictly necessary in package validation” is not really correct for our “testmempoolaccept is allowed to have unrelated transactions” use case.


    sdaftuar commented at 1:27 pm on January 31, 2024:
    @glozow’s analysis looks right to me. I don’t think it matters too much which way we go – either we should update the code comments to explain that correctness relies on invoking ApplyV3Rules and PV3C on package transactions, or we can just leave the duplicate check in here so that PV3C stands on its own for package validation. I’d probably vote for just updating the comment, but I don’t feel strongly either way.

    instagibbs commented at 2:58 pm on January 31, 2024:
    Leaving the check in with an Assume if we don’t expect to hit it probably best, with a comment?

    glozow commented at 3:25 pm on January 31, 2024:
    Put the check in with an Assume and comment. Also renamed ApplyV3Rules to SingleV3Checks. Also added functional tests targeting mempool siblings.
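    The topology rules that SingleV3Checks enforces for a single v3 transaction, as discussed back and forth in this thread, can be summarized in a toy model. This is a simplification for illustration, not the PR’s C++ code (the function name and parameters are invented; the 1000 vB cap and the check order follow the PR):

    ```python
    from typing import Optional

    V3_CHILD_MAX_VSIZE = 1000  # vB, from this PR

    def check_v3_limits(num_unconfirmed_parents: int,
                        parent_other_children: int,
                        own_vsize: int) -> Optional[str]:
        """Toy model of the v3 topology rules: at most one unconfirmed parent;
        a tx that has one is a 'child' that must be small and must be its
        parent's only in-mempool child. Returns an error string or None."""
        if num_unconfirmed_parents > 1:
            return "would have too many ancestors"
        if num_unconfirmed_parents == 1:
            if own_vsize > V3_CHILD_MAX_VSIZE:
                return "v3 child tx is too big"
            if parent_other_children > 0:
                return "would exceed descendant count limit"
        return None

    print(check_v3_limits(2, 0, 100))     # too many ancestors
    print(check_v3_limits(1, 0, 1005))    # child too big (cf. the 1005 vB test)
    print(check_v3_limits(1, 1, 500))     # mempool sibling exists
    print(check_v3_limits(0, 0, 50_000))  # None: no unconfirmed parent, any size
    ```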
  211. in src/test/fuzz/package_eval.cpp:176 in f3a7b6efdd outdated
    172@@ -171,11 +173,11 @@ FUZZ_TARGET(tx_package_eval, .init = initialize_tx_pool)
    173             // Create transaction to add to the mempool
    174             const CTransactionRef tx = [&] {
    175                 CMutableTransaction tx_mut;
    176-                tx_mut.nVersion = CTransaction::CURRENT_VERSION;
    177+                tx_mut.nVersion = fuzzed_data_provider.ConsumeBool() ? CTransaction::CURRENT_VERSION : 3;
    


    instagibbs commented at 5:47 pm on January 29, 2024:
    maybe use TX_MAX_STANDARD_VERSION instead

    glozow commented at 11:48 am on January 30, 2024:
    Done
  212. instagibbs commented at 5:56 pm on January 29, 2024: member

    reviewed through 46eacf922b40b0e5edd03b8dfff8002e65a3ede5

    with some test questions and nits. I checked the major topological checks have test coverage, ran fuzzer for quite a while with ApplyV3Rules disabled in package context to check that PackageV3Checks is sufficient for package context.

  213. in src/policy/v3_policy.h:59 in 965a23f334 outdated
    54+std::optional<std::string> ApplyV3Rules(const CTransactionRef& ptx,
    55+                                        const CTxMemPool::setEntries& mempool_ancestors,
    56+                                        const std::set<Txid>& direct_conflicts,
    57+                                        int64_t vsize);
    58+
    59+struct PackageWithAncestorCounts {
    


    sdaftuar commented at 7:32 pm on January 29, 2024:
    As @instagibbs pointed out elsewhere, we don’t need to check these ancestor_counts in PackageV3Checks after all, so we can get rid of this struct and simply pass in the Package to PV3C.

    glozow commented at 11:49 am on January 30, 2024:
    Removed PackageWithAncestorCounts, now just using Package. Did a bit of refactoring inside PackageV3Checks so that we can use references instead of copies for the Txid and Wtxid.
  214. glozow force-pushed on Jan 30, 2024
  215. in src/test/fuzz/package_eval.cpp:176 in b60a30aa32 outdated
    172@@ -171,11 +173,11 @@ FUZZ_TARGET(tx_package_eval, .init = initialize_tx_pool)
    173             // Create transaction to add to the mempool
    174             const CTransactionRef tx = [&] {
    175                 CMutableTransaction tx_mut;
    176-                tx_mut.nVersion = CTransaction::CURRENT_VERSION;
    177+                tx_mut.nVersion = TX_MAX_STANDARD_VERSION ? CTransaction::CURRENT_VERSION : 3;
    


    instagibbs commented at 1:28 pm on January 30, 2024:
    0                tx_mut.nVersion = fuzzed_data_provider.ConsumeBool() ? CTransaction::CURRENT_VERSION : TX_MAX_STANDARD_VERSION;
    

    more what I meant


    glozow commented at 2:36 pm on January 30, 2024:
    Oof oops :facepalm: fixed
  216. glozow force-pushed on Jan 30, 2024
  217. glozow force-pushed on Jan 31, 2024
  218. ariard commented at 9:52 pm on February 1, 2024: member

    I definitely appreciate the discussions about LN usage, but it would be best to take it to the Delving thread. Here’s the link > for {switching, imbuing} {commitment, HTLC-X} transactions to use {v3, ephemeral anchors}: https://delvingbitcoin.org/t/lightning-transactions-with-v3-and-ephemeral-anchors/418/24

    no time for delving - apologies.

    concern is on the lack of anti-pinning robustness of v3 policy, see my previous comments. if correct and it leaves a substantial pinning loophole, i’m worried that in the future LSPs might directly rely on tx accelerators, or they might refuse low-value payments and as such lock a class of economic users out of lightning. so i don’t think it’s great long-term for a) mining centralization and b) lightning centralization. at best a temporary “pis-aller”, in the same way that opt-in RBF was one. so my question to the ones acking v3 policy: do you understand it’s not really a robust anti-pinning mitigation, and the potential downsides for the lightning and mining ecosystems?

  219. ariard commented at 3:16 am on February 5, 2024: member

    so i took time to test an NTA pinning scenario: ariard@84e12b8, which has been known since https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/018011.html

    I’m interested if there is a full-branch with package-relay + v3 tx policy available somewhere.

    I don’t know if it’s robust against NTA pinning, and whether in consequence we would be better off reconsidering the whole package relay + v3 bitcoin engineering package. take it nicely as a statement to be corroborated for now.

    my apologies if you wasted time reviewing this PR so far and have started making version=3 LN designs on the whiteboard.

  220. petertodd commented at 9:39 am on February 5, 2024: contributor

    I’m interested if there is a full-branch with package-relay + v3 tx policy available somewhere.

    To be clear, package relay with package RBF is meant to be resistant to these kinds of network topology aware (NTA) pinning scenarios. With correctly implemented package relay it doesn’t matter that a CPFP was done on top of a commitment transaction that no one but the originator has: provided the fee (or with RBFR, fee-rate) outbids the competing commitment transaction, it will be replaced as nodes evaluate the new package as a whole.

    I do not believe all these scenarios have been fully tested. But that, at least, is the intent of eventually implementing this protection.

    With (one-shot) replace-by-fee-rate – while my blog didn’t go into explicit detail – when used with CPFP and a package relay implementation, it should also be immune to NTA pinning for the same reason (RBFing the commitment transaction itself with fee variants is of course already immune to NTA-specific pinning techniques). Indeed, you should be able to make an RBFR implementation compatible with V3 transactions by simply declaring nVersion=3 transactions to be standard, without implementing any of the V3 logic.

  221. glozow commented at 11:16 am on February 5, 2024: member

    so i took time to test a NTA pinning scenario: ariard/bitcoin@84e12b8 which is known since https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2020-June/018011.html

    To summarize the “network topology aware” scenario, what happens with Alice and Bob’s channel is:

    1. Bob broadcasts his commitment transaction + cpfp from his anchor
    2. This transaction propagates to everyone except Alice, somehow. [1]
    3. Alice broadcasts her commitment transaction + cpfp from her anchor
    4. Alice’s transactions don’t propagate because her commitment tx conflicts with Bob’s commitment tx.

    This scenario (1) requires the attacker to somehow control transaction propagation so that some specific nodes get their transactions and some don’t, and (2) is accounted for here. I referenced your post in the original posts, as it’s one of the scenarios that requires package RBF.

    I’m interested if there is a full-branch with package-relay + v3 tx policy available somewhere.

    Yes, I wrote this branch a few months ago: https://github.com/glozow/bitcoin/commits/2023-11-1p1c-v3-ea/ I also included a generic (i.e. not LN-specific) version of this scenario as part of the package RBF + package relay test suite. See the “[functional test] package RBF network” commit.

    Additionally, your branch seems to be based on master and to use nVersion=2 transactions? Feel free to cherry-pick your test on top of that branch - also be sure to use submitpackage instead of sendrawtransaction. You can also create a branch with #28984 + #28970 for “cluster size 2” package RBF instead of v3-only, but you’d want something like v3 to mitigate package RBF rule 3 pinning (where Bob attaches a huge high-fee / low-feerate child to bob_commitment_tx).

    I don’t know if it’s robust against NTA pinning

    If the issue is about alice_commitment_tx + alice_cpfp_on_alice_tx replacing bob_commitment_tx, this proposal (v3 + package RBF + 1p1c package relay) does resolve it and is in fact designed primarily for that purpose.

    We walked through the combinations of transactions that can be broadcast ahead of time by the adversary here on Delving. I understand that you have limited time, but I’d appreciate a bit of effort in checking to see if your concerns have already been addressed.

    [1]: In your test, this is achieved by manually disconnecting nodes. Since you connect the nodes only after submitting both sets of transactions, and they are both normal Bitcoin Core nodes, it is expected that they would not announce prior transactions to each other. This is unrealistic in practice - transactions might not propagate or might take a long time, but if attackers can disconnect nodes reliably, we have worse problems. A more realistic way to simulate these scenarios is to submit the below-min-feerate parent to one node only, using prioritisetransaction (so that it is submitted but not propagated), emulating realistic policy differences that might exist between nodes. You could also put some more nodes in between to add latency, though that might sometimes be flaky.

  222. petertodd commented at 12:15 pm on February 5, 2024: contributor

    @glozow

    To summarize the “network topology aware” scenario, what happens with Alice and Bob’s channel is:

    1. Bob broadcasts his commitment transaction + cpfp from his anchor
    
    2. This transaction propagates to everyone except Alice, somehow. [1]
    
    3. Alice broadcasts her commitment transaction + cpfp from her anchor
    
    4. Alice's transactions don't propagate because her commitment tx conflicts with Bob's commitment tx.
    

    This (1) requires the attacker to somehow control transaction propagation so that some specific nodes get their transactions and some don’t (2) is accounted for here. I referenced your post in the original posts, as it’s one of the scenarios that requires package RBF.

    This scenario isn’t a “somehow” one; I’ve personally seen it happen a few times by accident with LND, with commitment transactions that were close to minimum relay fees. Basically, the other side’s commitment transaction doesn’t get to my node in time, and then my node also publishes its commitment transaction + anchor CPFP. The result was that fee bumping didn’t work until I manually fixed it by finding an anchor CPFP spend of the other side and manually publishing it on other nodes.

    Anyway, as we both agree, package RBF fixes this.

  223. glozow commented at 1:00 pm on February 5, 2024: member
    (yes, see footnote [1])
  224. glozow force-pushed on Feb 5, 2024
  225. glozow commented at 4:39 pm on February 5, 2024: member
    Last push made all the debug strings include txid and wtxid of transactions - @sdaftuar pointed out it’s more useful to have both.
  226. in src/policy/v3_policy.h:33 in 8d2c91ea40 outdated
    28+static constexpr int64_t V3_CHILD_MAX_VSIZE{1000};
    29+// These limits are within the default ancestor/descendant limits.
    30+static_assert(V3_CHILD_MAX_VSIZE + MAX_STANDARD_TX_WEIGHT / WITNESS_SCALE_FACTOR <= DEFAULT_ANCESTOR_SIZE_LIMIT_KVB * 1000);
    31+static_assert(V3_CHILD_MAX_VSIZE + MAX_STANDARD_TX_WEIGHT / WITNESS_SCALE_FACTOR <= DEFAULT_DESCENDANT_SIZE_LIMIT_KVB * 1000);
    32+
    33+/** Must be called for every transaction, even if not v3. Not strictly necessary for package transactions.
    


    instagibbs commented at 2:35 pm on February 8, 2024:

    Sub-packages of size 1 still need the check.

    0/** Must be called for every transaction, even if not v3. Not strictly necessary for transactions accepted through AcceptMultipleTransactions().
    

    glozow commented at 9:47 pm on February 8, 2024:
    added
  227. in src/policy/v3_policy.cpp:49 in 8d2c91ea40 outdated
    44+    /** Wtxid used for debug string */
    45+    const Wtxid& m_wtxid;
    46+    /** nVersion used to check inheritance of v3 and non-v3 */
    47+    decltype(CTransaction::nVersion) m_version;
    48+    /** If parent is in mempool, whether it has any descendants in mempool. */
    49+    bool m_has_descendant;
    


    instagibbs commented at 3:41 pm on February 8, 2024:

    nit for reading clarity:

    0    bool m_has_mempool_descendant;
    

    glozow commented at 9:46 pm on February 8, 2024:
    done
  228. in src/policy/v3_policy.cpp:52 in 8d2c91ea40 outdated
    47+    decltype(CTransaction::nVersion) m_version;
    48+    /** If parent is in mempool, whether it has any descendants in mempool. */
    49+    bool m_has_descendant;
    50+
    51+    ParentInfo() = delete;
    52+    ParentInfo(const Txid& txid, const Wtxid& wtxid, decltype(CTransaction::nVersion) version, bool has_descendant) :
    


    instagibbs commented at 3:42 pm on February 8, 2024:

    nit for reading clarity:

    0    ParentInfo(const Txid& txid, const Wtxid& wtxid, decltype(CTransaction::nVersion) version, bool has_mempool_descendant) :
    

    glozow commented at 9:46 pm on February 8, 2024:
    done
  229. in src/policy/v3_policy.cpp:76 in 8d2c91ea40 outdated
    70+            return strprintf("tx %s (wtxid=%s) would have too many ancestors",
    71+                             ptx->GetHash().ToString(), ptx->GetWitnessHash().ToString());
    72+        }
    73+
    74+        const bool has_parent{mempool_ancestors.size() + in_package_parents.size() > 0};
    75+        if (has_parent) {
    


    instagibbs commented at 3:47 pm on February 8, 2024:

    May be easier to read if we immediately return std::nullopt if !has_parent? Could even push the check above https://github.com/bitcoin/bitcoin/pull/28948/files#diff-a19c07329799a164e1e3bd89e7cd07f1a8e5e97b88f2b24c9bef17df04479c74R69

    simplest case made clear


    glozow commented at 9:32 pm on February 8, 2024:
    Hmmm, I find early returns in the middle of a function a bit brittle. It would be easier to introduce a bug if, e.g., we added another v3 rule at the bottom of the function.

    instagibbs commented at 9:35 pm on February 8, 2024:
    fair enough, I just keep forgetting I can assume it has a parent because this block is pretty large. maybe a skill issue on my part

    glozow commented at 9:46 pm on February 8, 2024:
    Gonna mark this as resolved 👍
  230. in src/policy/v3_policy.cpp:201 in 8d2c91ea40 outdated
    196+            return strprintf("v3 child tx %s (wtxid=%s) is too big: %u > %u virtual bytes",
    197+                             ptx->GetHash().ToString(), ptx->GetWitnessHash().ToString(), vsize, V3_CHILD_MAX_VSIZE);
    198+        }
    199+
    200+        // Check the descendant counts of in-mempool ancestors.
    201+        if (!mempool_ancestors.empty()) {
    



    glozow commented at 9:47 pm on February 8, 2024:
    removed
  231. in src/policy/v3_policy.cpp:204 in 8d2c91ea40 outdated
    199+
    200+        // Check the descendant counts of in-mempool ancestors.
    201+        if (!mempool_ancestors.empty()) {
    202+            const auto& parent_entry = *mempool_ancestors.begin();
    203+            // If there are any ancestors, this is the only child allowed. The parent cannot have any
    204+            // other descendants.
    


    instagibbs commented at 4:03 pm on February 8, 2024:
    Might want to be extra clear there could be multiple children already due to a reorg.

    glozow commented at 9:47 pm on February 8, 2024:
    added a comment
  232. in src/policy/v3_policy.cpp:210 in 8d2c91ea40 outdated
    205+            const auto& children = parent_entry->GetMemPoolChildrenConst();
    206+            // Don't double-count a transaction that is going to be replaced. This logic assumes that
    207+            // any descendant of the V3 transaction is a direct child, which makes sense because a V3
    208+            // transaction can only have 1 descendant.
    209+            const bool child_will_be_replaced = !children.empty() &&
    210+                std::any_of(children.cbegin(), children.cend(),
    


    instagibbs commented at 4:17 pm on February 8, 2024:

    I think this is more correct in the case of a reorg? We should accept if it double-spends all siblings.

    0                std::all_of(children.cbegin(), children.cend(),
    

    A case that exercises this stuff in test_v3_reorg would be nice too. Try submitting another child when there’s more than one child already due to reorg.


    sdaftuar commented at 4:53 pm on February 8, 2024:

    I don’t think so – otherwise this makes things needlessly worse.

    Imagine transactions C1 and C2 are both v3, both in the mempool, with confirmed v3 parent P. C2 can be arbitrarily big, since it entered the mempool with no unconfirmed parents.

    P is reorged out, and now P, C1, and C2 are all in the mempool.

    Suppose D is an RBF of C1 that would pass our RBF rules. However, C2 is a big transaction, and being forced to conflict with C2 (eg due to sibling eviction or whatever) might make D fail, even though the mempool would be strictly improved by just letting D replace C1, and leaving C2 in place.

    Anyway I don’t think this edge case behavior is super important, but if we have to pick something, I think what is implemented here is probably right.
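    The C1/C2/D reasoning above can be illustrated with rough fee arithmetic. The fee numbers below are invented for illustration (only the “replacement must pay more than the total fees of what it evicts” rule comes from RBF policy):

    ```python
    # Why forcing D (a replacement of C1) to also conflict with reorged
    # sibling C2 can reject a strict mempool improvement. Fees are assumed.
    incremental_fee = 100   # sats, simplified flat incremental relay fee
    c1_fee = 1_000
    c2_fee = 50_000         # C2 entered the mempool pre-reorg, so it can be big
    d_fee = 5_000

    # Replacement must beat the total fees of everything it evicts.
    replaces_c1_only = d_fee > c1_fee + incremental_fee
    replaces_both = d_fee > c1_fee + c2_fee + incremental_fee

    print(replaces_c1_only)  # True:  letting D replace just C1 improves fees
    print(replaces_both)     # False: D fails if forced to evict C2 as well
    ```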


    glozow commented at 9:50 pm on February 8, 2024:
    I agree that in the case above D shouldn’t need to conflict with all children, so will leave as is.

    instagibbs commented at 2:21 pm on February 9, 2024:
    I still would like a test for this, maybe as a follow-up

    ismaelsadeeq commented at 7:03 am on February 10, 2024:

    @instagibbs To test my understanding of the discussion here I created a test for this case https://github.com/ismaelsadeeq/bitcoin/commit/87fc7f0a8d8ad1fc2af60ee6cc678df5e6fb01dd

    I agree we should not force the tx to conflict with reorged siblings, since we don’t enforce v3 rules after a reorg.

    Maybe this should be documented in the comment.


    glozow commented at 2:35 pm on February 12, 2024:
    @ismaelsadeeq that test looks good to me 👍 can I cherry-pick this into #29424?

    ismaelsadeeq commented at 2:41 pm on February 12, 2024:
    Yes 👍
  233. instagibbs commented at 4:28 pm on February 8, 2024: member
    reviewed through 8d2c91ea40be1c33b6a64168203c51a19afcd7f8
  234. sdaftuar commented at 7:33 pm on February 8, 2024: member

    Updating my concept ACK. Reading through the numerous comments on this PR, the objections that have been raised have nothing to do with v3, and instead have to do with applications that might use it, and what their best design is. I think applications are free to optimize further, but this PR is not the place to do spec design of layer 2 protocols.

    The way I see this issue is that RBF pinning due to the total fee requirement1 is a real issue for users, and has been since we first introduced RBF, long before layer 2 protocols were in use. So the idea that we’d mitigate the issue by instituting some restriction on children is a very old one, and is not unique to a single project (like LN). However, the particular choice of parameters here – exactly 1 small child, that is only permitted to have 1 parent – is indeed an attempt to meet the LN use case.

    So far, we’ve had some support from LN spec developers (@t-bast, @thebluematt) for the idea that the v3 rules meet their use case. The others have been silent so far, so it’s not yet clear that v3 will be adopted; but it’s also not yet clear that it won’t be.

    Moreover, there is another issue besides pinning that is relevant – the CPFP carveout rule. I brought up in #29319 that in order to eventually be able to deploy cluster mempool without breaking current use cases, we’ll need some alternative to the carveout rule. It sounds like v3 + sibling eviction (proposed in #29306) may be sufficient – again, we have concept ACKs from @t-bast and @thebluematt and we’re waiting to hear from others who have been silent so far. To unblock progress on other work, my suggestion is that we do the following:

    • modify this PR to introduce the v3 policy and validation rules, but not make v3 transactions standard for relay yet – this should be a very small code change I think
    • merge this PR (once it’s been sufficiently re-reviewed and ACKed, of course)
    • continue to develop sibling eviction for v3, and imbued v3 semantics on existing LN transactions (separate PRs we can propose for consideration, which seem like part of a potential roadmap)
    • Once v3 gets sufficient ACKs from the LN community, merge a PR that would enable v3 transactions as standard

    This also gives us time to make any potential changes to the v3 child size policy rule prior to deployment, if we get feedback from other developers about what numbers are workable (eg perhaps 500vb is better). And alternatively, if this approach ultimately is nacked by the LN community or proves unworkable altogether, then we can always revert the code introduced here, as the rules would never have been deployed on mainnet. However, given that this PR is a blocker for many other lines of work right now, I’d like to get this merged in some form so that we can move on to the rest (sibling eviction, cluster-size-2 rbf, 1p1c relay, cluster mempool, etc).

    I’ve been re-reviewing this PR myself today, and I see that @instagibbs has left a few more comments, and my overall sense is that the code here is pretty well reviewed and tested, and should be ready for merge soon.


    1. Dropping the total fee requirement may be achievable in the future with further research, but there is not currently a well-thought out proposal that would allow us to do this while ensuring incentive compatibility of replacements and a way to prevent free relay on the network. ↩︎
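    To put a number on how much the total-fee requirement’s pinning surface shrinks under v3: the replacement must outbid the absolute fees of everything it evicts, so the attacker’s leverage is bounded by the vsize they can attach times their feerate. The 1000 vB child cap is from this PR and 101 kvB is the default descendant size limit; the feerate is an assumption for illustration:

    ```python
    # Upper bound on the absolute fee an attacker can force a replacer to
    # outbid by attaching descendants: attachable_vsize * attacker_feerate.
    attacker_feerate_sat_per_vb = 20      # assumed for illustration
    default_attachable_vb = 101_000       # default descendant size limit (101 kvB)
    v3_attachable_vb = 1_000              # V3_CHILD_MAX_VSIZE from this PR

    print(default_attachable_vb * attacker_feerate_sat_per_vb)  # 2,020,000 sats
    print(v3_attachable_vb * attacker_feerate_sat_per_vb)       # 20,000 sats
    ```

    At these illustrative numbers, the worst-case rule 3 pin drops by two orders of magnitude.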

  235. [policy] add v3 policy rules
    Co-authored-by: Suhas Daftuar <sdaftuar@gmail.com>
    eb8d5a2e7d
  236. [policy/validation] allow v3 transactions with certain restrictions
    Co-authored-by: Suhas Daftuar <sdaftuar@gmail.com>
    9a1fea55b2
  237. test framework: Add and use option for tx-version in MiniWallet methods 27c8786ba9
  238. [functional test] v3 transaction submission
    Co-authored-by: Suhas Daftuar <sdaftuar@gmail.com>
    Co-authored-by: Gregory Sanders <gsanders87@gmail.com>
    1fd16b5c62
  239. [fuzz] v3 transactions and sigop-adjusted vsize
    Ensure we are checking sigop-adjusted virtual size by creating setups
    and packages where sigop cost is larger than bip141 vsize.
    
    Co-authored-by: Gregory Sanders <gsanders87@gmail.com>
    e643ea795e
  240. [doc] v3 signaling in mempool-replacements.md 29029df5c7
  241. glozow force-pushed on Feb 8, 2024
  242. glozow commented at 9:57 pm on February 8, 2024: member

    To unblock progress on other work, my suggestion is that we do the following:

    * modify this PR to introduce the v3 policy and validation rules, but not make v3 transactions standard for relay yet -- this should be a very small code change I think
    

    Pushed to address @instagibbs review and make v3 not-yet-standard.

  243. ariard commented at 10:29 pm on February 8, 2024: member

    To be clear, package relay with package RBF is meant to be resistant to these kinds of network topology aware (NTA) pinning scenarios. With correctly implemented package relay it doesn’t matter that a CPFP tx did a CPFP on top of a commitment transaction that no-one but the originator has: provided the fee (or with RBFR, fee-rate) outbids the competing commitment transaction, that competing transaction will be replaced as nodes evaluate the new package as a whole.

    I’m not sure: what if the adversary injects malicious package A into the target’s mempool, and package B into other network mempools? A v3 parent is not necessarily a LN commitment transaction; however, a CPFP can be in conflict between A / B. See the ugly NTA pinning scenario described in my old mail.

    Yes, I wrote this branch a few months ago: https://github.com/glozow/bitcoin/commits/2023-11-1p1c-v3-ea/ I also included a generic (i.e. not LN-specific) version of this scenario as part of the package RBF + package relay test suite. See the “[functional test] package RBF network” commit.

    Thanks, interesting to test both NTA and “loophole” pinnings on this branch, as described earlier to t-bast here.

    If the issue is about alice_commitment_tx + alice_cpfp_on_alice_tx replacing bob_commitment_tx, this proposal (v3 + package RBF + 1p1c package relay) does resolve it and is in fact designed primarily for that purpose.

    Sure, package relay fixes this; I’m less sure it holds even with package RBF added, once you combine it with the leeway given by the 1000vB limit.

    Updating my concept ACK. Reading through the numerous comments on this PR, the objections that have been raised have nothing to do with v3, and instead have to do with applications that might use it, and what their best design is. I think applications are free to optimize further, but this PR is not the place to do spec design of layer 2 protocols.

    No. If we follow this line of reasoning, imo it’s like saying it’s okay to ship IPsec or TLS with weak crypto primitives. The objections which have been raised are very much related to v3, at least the 1000vB limit; see the “loophole” here. secp256k1 devs spend considerable time making its high-level API safe for “dumb” users; we should do the same.

    So far, we’ve had some support from LN spec developers (@t-bast, @TheBlueMatt) for the idea that the v3 rules meet their use case. The others have been silent so far, so it’s not yet clear that v3 will be adopted; but it’s also not yet clear that it won’t be.

    I’ve spent enough time breaking LN spec dev stuff to take their technical opinions at their fair price.

    Moreover, there is another issue besides pinning that is relevant – the CPFP carveout rule. I brought up in #29319 that in order to eventually be able to deploy cluster mempool without breaking current use cases, we’ll need some alternative to the carveout rule. It sounds like v3 + sibling eviction (proposed in #29306) may be sufficient – again, we have concept ACKs from @t-bast and @TheBlueMatt and we’re waiting to hear from others who have been silent so far. To unblock progress on other work, my suggestion is that we do the following:

    The CPFP carveout rule can be removed with or without v3; its security benefits are quasi-null in my opinion. See my comments on other issues.

    I’ve been re-reviewing this PR myself today, and I see that @instagibbs has left a few more comments, and my overall sense is that the code here is pretty well reviewed and tested, and should be ready for merge soon. @sdaftuar do you have a lightning node running somewhere on signet (maybe instagibbs has some v3 CLN branch)? My pleasure to try to break it with “loophole” or NTA scenarios, at least the “loophole” one. If I or anyone fail to exploit it, great, my pinning concerns are unfounded; if I or anyone succeed, we’ll learn and can think more about anti-pinning mitigations.

  244. in doc/policy/mempool-replacements.md:15 in 29029df5c7
    10@@ -11,7 +11,8 @@ their in-mempool descendants (together, "original transactions") if, in addition
    11 other consensus and policy rules, each of the following conditions are met:
    12 
    13 1. The directly conflicting transactions all signal replaceability explicitly. A transaction is
    14-   signaling replaceability if any of its inputs have an nSequence number less than (0xffffffff - 1).
    15+   signaling BIP125 replaceability if any of its inputs have an nSequence number less than (0xffffffff - 1).
    16+   A transaction also signals replaceibility if its nVersion field is set to 3.
    


    sdaftuar commented at 1:20 am on February 9, 2024:
    nit: “replaceability” (not worth fixing unless you need to touch this PR again)

    glozow commented at 2:34 pm on February 12, 2024:
    Fixed in #29424
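    For readers, the combined signaling rule described in the diff above can be sketched in a few lines of Python (a simplified illustrative model, not Bitcoin Core's actual code; the helper name is hypothetical):

    ```python
    SEQUENCE_FINAL = 0xffffffff

    def signals_replaceability(tx_version: int, input_sequences: list[int]) -> bool:
        """A transaction signals replaceability if any of its inputs has an
        nSequence less than 0xfffffffe (BIP125), or if its nVersion is 3."""
        if tx_version == 3:
            return True
        return any(seq < SEQUENCE_FINAL - 1 for seq in input_sequences)
    ```

    Note that a v3 transaction is always considered replaceable, regardless of its input sequence numbers.
    
    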
  245. in src/test/fuzz/tx_pool.cpp:412 in e643ea795e outdated
    408@@ -407,6 +409,9 @@ FUZZ_TARGET(tx_pool, .init = initialize_tx_pool)
    409         const bool accepted = res.m_result_type == MempoolAcceptResult::ResultType::VALID;
    410         if (accepted) {
    411             txids.push_back(tx->GetHash());
    412+            // Only check fees if accepted and not bypass_limits, otherwise it's not guaranteed that
    


    sdaftuar commented at 1:31 am on February 9, 2024:
    I can’t figure out what this comment means? Maybe a holdover from when we were trimming 0-fee stuff?

    glozow commented at 2:34 pm on February 12, 2024:
    Removed in #29424
  246. in test/functional/test_framework/wallet.py:366 in 27c8786ba9 outdated
    362@@ -360,7 +363,12 @@ def create_self_transfer(self, *,
    363         send_value = utxo_to_spend["value"] - (fee or (fee_rate * vsize / 1000))
    364 
    365         # create tx
    366-        tx = self.create_self_transfer_multi(utxos_to_spend=[utxo_to_spend], locktime=locktime, sequence=sequence, amount_per_output=int(COIN * send_value), target_weight=target_weight)
    367+        tx = self.create_self_transfer_multi(
    


    sdaftuar commented at 1:41 pm on February 9, 2024:
    Is dropping the locktime argument here intentional? (I guess all the explicit arguments are caught in kwargs?)

    glozow commented at 2:34 pm on February 12, 2024:
    Yes I think so.
  247. sdaftuar commented at 2:19 pm on February 9, 2024: member

    Code review ACK. Left some comments that are just nits, nothing that I think needs to be addressed unless you have to fix more things anyway.

    Will do another round of testing shortly…

  248. in src/policy/v3_policy.cpp:187 in eb8d5a2e7d outdated
    182+    static_assert(V3_DESCENDANT_LIMIT == 2);
    183+
    184+    // The rest of the rules only apply to transactions with nVersion=3.
    185+    if (ptx->nVersion != 3) return std::nullopt;
    186+
    187+    // Check that V3_ANCESTOR_LIMIT would not be violated, including both in-package and in-mempool.
    


    sdaftuar commented at 2:53 pm on February 9, 2024:
    nit: The reference to “in-package” doesn’t fit here.

    glozow commented at 2:34 pm on February 12, 2024:
    removed in #29424
  249. in src/policy/v3_policy.cpp:148 in eb8d5a2e7d outdated
    143+            if (it->GetTx().nVersion == 3) {
    144+                return strprintf("non-v3 tx %s (wtxid=%s) cannot spend from v3 tx %s (wtxid=%s)",
    145+                                 ptx->GetHash().ToString(), ptx->GetWitnessHash().ToString(),
    146+                                 it->GetSharedTx()->GetHash().ToString(), it->GetSharedTx()->GetWitnessHash().ToString());
    147+            }
    148+        }
    


    sdaftuar commented at 3:23 pm on February 9, 2024:
    I think if I comment out these lines of code, no unit tests or functional tests fail – is it possible this isn’t covered anywhere? (I didn’t run the fuzzer with this change.) EDIT: I guess this would be caught in SingleV3Checks anyway?

    glozow commented at 10:04 pm on February 9, 2024:
    Yes, seems this is because SingleV3Checks already covers it. I can add a comment and put the condition under an Assume?

    glozow commented at 2:34 pm on February 12, 2024:
    Actually, in #29424 I’ve added unit tests instead of adding an Assume here (which would cause those tests to crash). Sorry for the flip flop but should address the fact that these blocks aren’t covered by tests.
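    To summarize the checks being discussed in this thread, here is a condensed single-transaction model (a sketch only: constant names follow src/policy/v3_policy.cpp, but the function collapses SingleV3Checks into pure Python and omits package and replacement handling):

    ```python
    V3_ANCESTOR_LIMIT = 2      # the tx itself plus at most one unconfirmed ancestor
    V3_CHILD_MAX_VSIZE = 1000  # vB cap on a v3 transaction with an unconfirmed ancestor

    def single_v3_checks(tx_version: int, mempool_ancestor_versions: list[int],
                         vsize: int) -> str | None:
        """Return an error string if a v3 topology rule is violated, else None.

        mempool_ancestor_versions holds the nVersion of each unconfirmed ancestor.
        """
        # v3 inheritance: an unconfirmed v3 tx may only be spent by a v3 tx
        if tx_version != 3 and any(v == 3 for v in mempool_ancestor_versions):
            return "non-v3 tx cannot spend from v3 tx"
        if tx_version != 3:
            return None
        # conversely, a v3 tx may only spend unconfirmed v3 ancestors
        if any(v != 3 for v in mempool_ancestor_versions):
            return "v3 tx cannot spend from non-v3 tx"
        # cluster of size at most 2: the tx plus at most one unconfirmed ancestor
        if len(mempool_ancestor_versions) + 1 > V3_ANCESTOR_LIMIT:
            return "tx would have too many ancestors"
        # a v3 child (any unconfirmed ancestor present) is capped at 1000 vB
        if mempool_ancestor_versions and vsize > V3_CHILD_MAX_VSIZE:
            return "v3 child tx is too big"
        return None
    ```

    The inheritance check in the first branch is the block sdaftuar flagged as unreachable in the package path: by the time PackageV3Checks runs, the same condition has already been rejected here.
    
    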
  250. sdaftuar commented at 3:54 pm on February 9, 2024: member

    ACK 29029df5c700e6940c712028303761d91ae15847

    In addition to my code review and running a slightly earlier version of this branch on a year’s worth of historical data, I also did a bunch of mutation testing to confirm code coverage for every component of the new logic (found one block that I think is unreachable, which I commented on above, but doesn’t seem like an issue either way).

  251. DrahtBot requested review from ismaelsadeeq on Feb 9, 2024
  252. DrahtBot requested review from instagibbs on Feb 9, 2024
  253. in src/test/txvalidation_tests.cpp:121 in eb8d5a2e7d outdated
    116+            tx_v2_from_v3->GetHash().ToString(), tx_v2_from_v3->GetWitnessHash().ToString(),
    117+            mempool_tx_v3->GetHash().ToString(), mempool_tx_v3->GetWitnessHash().ToString())};
    118+        BOOST_CHECK(*SingleV3Checks(tx_v2_from_v3, *ancestors_v2_from_v3, empty_conflicts_set, GetVirtualTransactionSize(*tx_v2_from_v3)) == expected_error_str);
    119+
    120+        Package package_v3_v2{mempool_tx_v3, tx_v2_from_v3};
    121+        BOOST_CHECK_EQUAL(*PackageV3Checks(tx_v2_from_v3, GetVirtualTransactionSize(*tx_v2_from_v3), package_v3_v2, empty_ancestors), expected_error_str);
    


    instagibbs commented at 4:13 pm on February 9, 2024:

    should cover the missing case

    0        BOOST_CHECK_EQUAL(*PackageV3Checks(tx_v2_from_v3, GetVirtualTransactionSize(*tx_v2_from_v3), package_v3_v2, empty_ancestors), expected_error_str);
    1        BOOST_CHECK_EQUAL(*PackageV3Checks(tx_v2_from_v3, GetVirtualTransactionSize(*tx_v2_from_v3), {tx_v2_from_v3}, *ancestors_v2_from_v3), expected_error_str);
    

    glozow commented at 2:33 pm on February 12, 2024:
    #29424 added a unit test
  254. instagibbs commented at 4:15 pm on February 9, 2024: member

    looks like if I revert TX_MAX_STANDARD_VERSION to 3, all unit/functional tests pass?

    ACK 29029df5c700e6940c712028303761d91ae15847 modulo that

  255. glozow commented at 9:58 pm on February 9, 2024: member

    looks like if I revert TX_MAX_STANDARD_VERSION to 3, all unit/functional tests pass?

    I suppose my change at https://github.com/bitcoin/bitcoin/pull/28948/files#diff-cc0c6a9039a1c9fe38b8a21fe28391fffbac9b8531dfda0f658919a9f74b46baR793 is partially to blame. I made it TX_MAX_STANDARD_VERSION + 1 instead of 3, thinking it might be easier to flip in the future.

    I’m working on a followup to address the comments, and can incorporate if I need to retouch.

  256. achow101 commented at 4:29 am on February 10, 2024: member
    ACK 29029df5c700e6940c712028303761d91ae15847
  257. achow101 merged this on Feb 10, 2024
  258. achow101 closed this on Feb 10, 2024

  259. in src/policy/v3_policy.cpp:86 in eb8d5a2e7d outdated
    81+                                 vsize, V3_CHILD_MAX_VSIZE);
    82+            }
    83+
    84+            const auto parent_info = [&] {
    85+                if (mempool_ancestors.size() > 0) {
    86+                    // There's a parent in the mempool.
    


    ismaelsadeeq commented at 6:31 am on February 10, 2024:

    nit: This is obvious I think, not really adding much. The comment should instead be below, explaining why we assume the descendant count is 1.


    glozow commented at 2:33 pm on February 12, 2024:
    #29424 replaced these 2 comments with a slightly more helpful one at the top of the block.
  260. in src/policy/v3_policy.cpp:94 in eb8d5a2e7d outdated
    89+                    return ParentInfo{mempool_parent->GetTx().GetHash(),
    90+                                      mempool_parent->GetTx().GetWitnessHash(),
    91+                                      mempool_parent->GetTx().nVersion,
    92+                                      /*has_mempool_descendant=*/mempool_parent->GetCountWithDescendants() > 1};
    93+                } else {
    94+                    // Ancestor must be in the package. Find it.
    


    ismaelsadeeq commented at 6:33 am on February 10, 2024:

    nit: same here I think

  261. in test/functional/mempool_sigoplimit.py:168 in 9a29d470fb outdated
    164@@ -165,7 +165,8 @@ def create_bare_multisig_tx(utxo_to_spend=None):
    165         # But together, it's exceeding limits in the *package* context. If sigops adjusted vsize wasn't being checked
    166         # here, it would get further in validation and give too-long-mempool-chain error instead.
    167         packet_test = self.nodes[0].testmempoolaccept([tx_parent.serialize().hex(), tx_child.serialize().hex()])
    168-        assert_equal([x["package-error"] for x in packet_test], ["package-mempool-limits", "package-mempool-limits"])
    169+        expected_package_error = f"package-mempool-limits, package size {2*20*5000} exceeds ancestor size limit [limit: 101000]"
    


    ismaelsadeeq commented at 6:55 am on February 10, 2024:

    The magic numbers can be declared with descriptive variable names: MAX_PUBKEYS_PER_MULTISIG = 20 and bytespersigop = 5000.

    We can reuse them in the same function

    0expected_package_error = f"package-mempool-limits, package size {2*MAX_PUBKEYS_PER_MULTISIG*bytespersigop} exceeds ancestor size limit [limit: 101000]"
    

    glozow commented at 2:33 pm on February 12, 2024:
    Done in #29424
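    The figure in the suggested error string comes from sigop-adjusted virtual size. A simplified model of how mempool policy derives it (an illustrative Python helper, not Core's actual GetVirtualTransactionSize implementation):

    ```python
    WITNESS_SCALE_FACTOR = 4

    def sigop_adjusted_vsize(weight: int, sigop_cost: int, bytes_per_sigop: int) -> int:
        """Mempool policy vsize is the larger of the BIP141 vsize (weight / 4,
        rounded up) and a size derived from sigop cost. sigop_cost is already
        scaled by WITNESS_SCALE_FACTOR for legacy sigops (20 legacy sigops
        -> cost 80), so the max is taken in weight units before dividing."""
        return (max(weight, sigop_cost * bytes_per_sigop)
                + WITNESS_SCALE_FACTOR - 1) // WITNESS_SCALE_FACTOR
    ```

    In the functional test above, each transaction carries 20 bare-multisig sigops and the node runs with -bytespersigop=5000, so each transaction's adjusted vsize is 20 * 5000 = 100000 vB and the two-transaction package totals 2 * 20 * 5000 = 200000, exceeding the 101000 ancestor size limit.
    
    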
  262. ismaelsadeeq commented at 6:59 am on February 10, 2024: member
    Post merge ACK
  263. ariard commented at 8:54 pm on February 11, 2024: member

    @sdaftuar @achow101 @instagibbs

    Congrats guys on ACKing and merging a +1100 LoC PR on a Friday evening with still-standing objections on the very heart of the PR. What a lack of professionalism… Still my pleasure to break v3 anti-pinning mitigations in the future with a pinning toolkit, if script kiddies don’t make it before me and go savage the LN ecosystem, who would have naively thought they’re safe. @glozow

    I think there is a medium-difficulty follow-up to this PR to make pinning far harder to exploit: namely a) moving the 1000 vB limit to the client side and b) making it cover the whole package, not only the child. This can be embedded in bip331 changes or in any future p2p extensions. That way LN channels could adjust their pinning exposure to the level of trust they have in their counterparties, and L2s won’t have to bargain over a 1000 vB limit that doesn’t fit their use-case.

  264. glozow deleted the branch on Feb 12, 2024
  265. instagibbs commented at 3:21 pm on February 12, 2024: member

    @ariard

    I’ll let people decide for themselves our professionalism or lack thereof.

    I asked hundreds of comments ago to stop using this PR as a venue for debating the finer points of the LN spec, obviously unheeded. There’s a BOLT repo, which is an appropriate place on github to discuss these things.

    You are also free to lobby different projects to not use these primitives moving forward and push whatever solutions you think are necessary, even consensus changes. Meanwhile, people who find this useful and want to use it can, including for use-cases which fall outside of your sphere of interest.

    As was noted already: We’re nearing feature freeze, which means if we miss the window the rebasing would continue on for yet another release cycle. It’s left as non-standard which leaves some room for finding a slightly better local optimum, perhaps informed by @sdaftuar’s investigating work and more feedback.

    To end on a positive note, I’m looking forward to iterative improvements, hoping to make things more generally useful for systems deployed today and improve incentive compatibility, without policing their use-cases:

    1. 1p1c relay: #28970
    2. sibling eviction for v3: #29306
    3. limited package rbf: #28984
    4. cluster mempool: #28676

    and leveraging a totally ordered mempool in interesting ways beyond.

  266. fanquake referenced this in commit 37fdf5a492 on Feb 13, 2024
  267. ariard commented at 5:54 pm on February 15, 2024: member

    @instagibbs

    I maintain that your review and those of @sdaftuar and @achow101 are lacking professionalism.

    For a PR which is called “anti-pinning” (it’s all in the title), there has been no substantial answer to my concerns that it addresses neither RBF rule 3 pinning (the 1000 vbyte limit offers a way around it, see the “loophole” scenario exposed above), nor topology-based scenarios at the tx-relay network level (the NTA one), and that “hard” limits are actually a source of “no-one-size-fits-all” bargaining over use-case operations (as we have seen with mempoolfullrbf).

    On the review process itself, and the call to stop using this PR to debate the robustness of the mitigation: this is insulting and flagrant proof of your lack of systems engineering culture. When software engineers design new cryptographic primitives or kernel mechanisms (e.g. Linux cgroups), they of course consider how those mechanisms are used by downstream use-cases.

    The design and review mentality you’re exposing, and as I understand it advocating, has been proven multiple times to be a source of potential security failures (e.g. CVE-2021-31876, or the 1 CSV added on non-anchor outputs of commitment transactions because the original carve out mechanism’s semantics were unclear). Beyond that, you’re falling into the trap of protocol-design communication silos I’ve been pointing out before, and which I aimed to address in the past with the “L2s Onchain Support IRC Workshop”.

    On the remark that people who find this useful can use it, by which I understand that LN maintainers or anyone else (e.g. exchange batching) are free to level up their bitcoin use-cases with nVersion=3 semantics: sadly one has to be very realistic and observe that LN’s security track record is not great. When you’re doing adversarial reviews of financial software you cannot trust the incentives of the developers “in defense”, as they always have a moral interest to claim attack X or attack Y is “mitigated”, playing some kind of security theater.

    Finally, the call of approaching feature freeze is not a convincing argument in my eyes either. In critical system design, development and maintenance there is a social phenomenon called process fatigue: when engineers face deadlines after lengthy work cycles, they rush to deliver a technical project, provoking major downstream failures (a known historic example is the Space Shuttle Challenger disaster).

    There is no magical reason Bitcoin Core’s review process is immune to such “process fatigue”, and as such, when there is no review consensus on a security-first contribution to a part of the codebase like the mempool, practical wisdom would recommend delaying it to the next release cycle. This is better than yet another “poor hot fix” in production where we have to mobilize half of the engineering, vendor and user ecosystem’s time, and jeopardize their funds.

    To conclude, your answer is a poor one, sustaining my precedent accusation of lacking professionalism. You’re still free to communicate any example or demonstration infrastructure running with v3 semantics that I can target with demo pinning attacks, and as such back up your Bitcoin Core technical review with your credibility and reputation, in a skin-in-the-game approach.

  268. ariard commented at 5:57 pm on February 15, 2024: member
    @glozow are you still interested if I do a proof-of-concept bitcoin core branch or idea draft of a) moving the “package limit” to the use-case side and b) making it dynamic (within the limits of absolute core DoS robustness)? I think it can be backed up by future bip331 changes. v3 is your work and this is your PR making anti-pinning claims after all.
  269. ariard commented at 3:46 am on February 19, 2024: member

    are you still interested if I do a proof-of-concept bitcoin core branch or idea draft of a) moving the “package limit” to the use-case side and b) making it dynamic (within the limits of absolute core DoS robustness)? I think it can be backed up by future bip331 changes. v3 is your work and this is your PR making anti-pinning claims after all.

    did it with #29448

  270. in src/Makefile.am:699 in 29029df5c7
    695@@ -694,6 +696,7 @@ libbitcoin_common_a_SOURCES = \
    696   netbase.cpp \
    697   net_permissions.cpp \
    698   outputtype.cpp \
    699+  policy/v3_policy.cpp \
    


    hebasto commented at 11:45 am on March 1, 2024:
    Considering the current code organisation, I can’t see any reason to include policy/v3_policy.cpp in the libbitcoin_common library. Both of its symbols, i.e. SingleV3Checks and PackageV3Checks, are referenced only in validation.cpp. As such, including policy/v3_policy.cpp in the libbitcoin_node library is enough.

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-12-21 15:12 UTC
