[WIP] Expand CPFP “carve-out” rule from N=1 to N=100 #18725

pull instagibbs wants to merge 2 commits into bitcoin:master from instagibbs:many_more_relay changing 3 files +44 −16
  1. instagibbs commented at 5:47 pm on April 21, 2020: member

    Chasing concept Acks and discussion. Probably broken. Need p2p wisdom which I lack.

    The motivating use-case here is that this would allow batched payouts to be done safely by custodial services without their being pinned. Specifically, it would allow up to N+1 payment outputs, one of which can be a self-send to the service; that output is then spent and RBF’d as needed, similar to the motivating LN use-case given in the code. This could greatly simplify payout logic for services and allow them to save on fees more aggressively.
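    As an editorial sketch of the transaction shape described above — up to N customer payouts plus one self-send the service can later CPFP/RBF — the following Python is illustrative only; the names (`build_batch`, `Output`) and amounts are invented, not from the PR:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Output:
    address: str
    amount_sats: int

def build_batch(payouts: List[Tuple[str, int]], change_address: str,
                change_sats: int, max_payouts: int = 100) -> List[Output]:
    """Output list for one batched payout: up to `max_payouts` customer
    outputs plus one self-send the service can later spend (and RBF)
    to fee-bump the whole batch via CPFP."""
    if len(payouts) > max_payouts:
        raise ValueError("split into multiple batches")
    outputs = [Output(addr, sats) for addr, sats in payouts]
    # The self-send is the output the carve-out spend would come from.
    outputs.append(Output(change_address, change_sats))
    return outputs

outs = build_batch([("addr1", 50_000), ("addr2", 75_000)], "service_change", 10_000)
assert len(outs) == 3 and outs[-1].address == "service_change"
```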

    Built on https://github.com/bitcoin/bitcoin/pull/18723

  2. Expand on contracting carve-out rule test c5db55a6a0
  3. Expand CPFP carve-out rule to 100 small spends rather than 1 84ac4a1122
  4. JeremyRubin commented at 7:08 pm on April 21, 2020: contributor

    Yes, let’s do this as soon as possible.

    But Concept NACK for now. The mempool project (which I’ve added this to) needs to make more progress before something like this is remotely feasible. Added this to the tracker for that project for when the time comes!

  5. instagibbs renamed this:
    Expand CPFP "carve-out" rule from N=1 to N=100
    [WIP] Expand CPFP "carve-out" rule from N=1 to N=100
    on Apr 21, 2020
  6. JeremyRubin added this to the "Goals" column in a project

  7. DrahtBot added the label Tests on Apr 21, 2020
  8. DrahtBot added the label Validation on Apr 21, 2020
  9. in src/validation.cpp:761 in 84ac4a1122
    765         // this, see https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-November/016518.html
    766-        if (nSize >  EXTRA_DESCENDANT_TX_SIZE_LIMIT ||
    767-                !m_pool.CalculateMemPoolAncestors(*entry, setAncestors, 2, m_limit_ancestor_size, m_limit_descendants + 1, m_limit_descendant_size + EXTRA_DESCENDANT_TX_SIZE_LIMIT, dummy_err_string)) {
    768+
    769+        // Immediately bail from carve-out logic if transaction isn't relatively small
    770+        if (nSize >  EXTRA_DESCENDANT_TX_SIZE_LIMIT) {
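    A simplified editorial model of the check shown in the diff above (not Bitcoin Core code): a transaction over the normal descendant limit may still be admitted if it is small and has at most one unconfirmed ancestor, which is what the ancestor limit of 2 passed to `CalculateMemPoolAncestors` (counting the tx itself) enforces. The constant's value here is assumed to match Bitcoin Core's `EXTRA_DESCENDANT_TX_SIZE_LIMIT` of 10,000 vbytes; the function name is invented:

```python
EXTRA_DESCENDANT_TX_SIZE_LIMIT = 10_000  # vbytes, assumed per Bitcoin Core policy

def carve_out_allows(tx_vsize: int, n_mempool_ancestors: int) -> bool:
    """Model of the carve-out admission check, not the real implementation."""
    if tx_vsize > EXTRA_DESCENDANT_TX_SIZE_LIMIT:
        return False  # immediately bail: tx isn't relatively small
    # tx plus at most one unconfirmed parent (ancestor limit of 2)
    return n_mempool_ancestors + 1 <= 2

assert carve_out_allows(1_000, 1)
assert not carve_out_allows(20_000, 1)
assert not carve_out_allows(1_000, 2)
```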
    


    ariard commented at 11:50 pm on April 21, 2020:

    A 1000-vbyte size * 100 seems enough for one party owning multiple carve-out outputs to degrade the effective feerate of the unconfirmed parent tx enough to leave it stuck in the mempool. I think that’s already a risk today, but right now the custodial service, being the broadcaster, can iteratively batch on its own output to reach DEFAULT_DESCENDANT_LIMIT first and avoid any junk-feerate squatting branch.

    That said, the batching case MAY be far less adversarial than the LN one, and the worst-case scenario may be just a few more blocks to wait instead of a loss of funds like in LN.
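    To make the concern above concrete, some editorial back-of-envelope arithmetic (all numbers assumed for illustration): 100 junk children of ~1,000 vbytes each add descendant weight the parent drags along, and under BIP125 rule 3 any replacement of the parent must also outbid their absolute fees:

```python
# Editorial sketch of the pinning cost; numbers are assumed, not from the thread.
parent_fee = 10_000                      # sats paid by the batch transaction
junk_children, child_vsize = 100, 1_000  # the relaxed N=100 carve-out budget
child_fee = child_vsize * 1              # 1 sat/vB junk spends

total_junk_fee = junk_children * child_fee
total_junk_vsize = junk_children * child_vsize  # 100 kvB of descendant weight

# BIP125 rule 3: a replacement must pay at least the absolute fees of
# everything it evicts (plus the incremental relay fee, ignored here).
rbf_floor = parent_fee + total_junk_fee
assert rbf_floor == 110_000
assert total_junk_vsize == 100_000
```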


    instagibbs commented at 12:48 pm on April 22, 2020:

    The batching case, AFAICT, is less complicated, and outright theft is not a result if things somehow go wrong. Instead it may simply result in certain processors not batching, doing smaller batches to avoid pinning hundreds of user payouts, or simply paying a much higher feerate to avoid the situation.

    It’s a practical problem in real-world deployments today to make sure that the more general case of #15046 can hold. To be clear, I would find these counter-arguments persuasive:

    1. We don’t have a principled enough view of these changes and it may become a DoS
    2. Actually this doesn’t work for the stated use-case because X

    The fact that LN is 1000x more complicated and the original carve-out was insufficient is less persuasive imo.


    JeremyRubin commented at 8:18 pm on April 22, 2020:

    That’s inaccurate with batching because people can pay out to contracts that contain HTLCs (not that they should, but the payments are still timing sensitive potentially).

    One solution space would be to have a defined eviction order among the spends; e.g., spends from output n+1 evict before spends from output n, or outputs are ordered by value spent with an order tiebreaker. I think the decision rule should not look at the spending transaction itself (e.g. its feerate) because that introduces a bunch of problems. The issue with a solution like this is that it’s economically irrational: a miner should take the highest-fee-paying transaction. So these policies can be a stopgap convenience so that people have a better UX with well-behaved nodes, but they can’t be expected to be followed longer term.
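    A hypothetical sketch of the defined-eviction-order idea above (the function and tuple encoding are invented for illustration): rank spends only by which output they spend and by value spent, never by the spending transaction's feerate.

```python
from typing import List, Tuple

def eviction_order(spends: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """spends: (output_index, value_spent_sats) pairs. Returns eviction
    order: the spend of the highest output index is evicted first,
    tiebreaking by larger value spent. Deliberately ignores the spending
    transaction itself (e.g. its feerate)."""
    return sorted(spends, key=lambda s: (-s[0], -s[1]))

order = eviction_order([(0, 5_000), (3, 1_000), (3, 9_000), (1, 2_000)])
assert order == [(3, 9_000), (3, 1_000), (1, 2_000), (0, 5_000)]
```

As the comment notes, nothing forces miners to honor such an ordering, so it could only ever be a relay-policy convenience.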

    IMO the only real solution is a larger mempool reworking, which decouples accepting and storing a transaction from eviction and from mining more aggressively. That way we can dumbly accept transactions if they connect, with no computational risk, evict whenever required, and mine on a best-effort understanding of feerates. The first step in accomplishing this is finishing the Mempool Project’s first batch of refactors, which improve all of our traversal algorithms. Once that is done, we’ll have a better understanding of what we can tolerably raise the limits to with the current architecture (including better carve-out limits). Following that, some of these bigger-picture modularization refactors can be done as needed. There’s not a strict ordering between these, but it’s easy to change a “pure function” to another equivalent “pure function”, and harder to re-architect an entire data structure that will have new behaviors.


    instagibbs commented at 8:23 pm on April 22, 2020:

    That’s inaccurate with batching because people can pay out to contracts that contain HTLCs (not that they should, but the payments are still timing sensitive potentially).

    fair enough, people may be building really bad ideas on top


    ariard commented at 11:38 pm on April 22, 2020:

    That’s inaccurate with batching because people can pay out to contracts that contain HTLCs (not that they should, but the payments are still timing sensitive potentially).

    That would be a zero-conf channel, which is already under a double-spend threat, at least as understood by the people who implemented it. But yes, we shouldn’t make it easier to build insecure protocols.


    JeremyRubin commented at 1:36 am on April 23, 2020:
    Well, the issues come up in other contexts too, where it’s expressly not an issue if it’s known the parent cannot be malleated, such as a batch where you are 1 of N parties to have signed the N-of-N, but the outputs are independent. It’s perfectly safe to use those HTLCs between any of the N people, but pinning can still be an issue.
  10. ariard commented at 11:54 pm on April 21, 2020: member
    I lean toward Concept NACK too; before making carve-out more generic to support wider use-cases, I think mempool-pinning cases should be better documented so that protocol designers are informed against doing anything unsafe (even if carve-out isn’t directly an issue, who can spend what, when, and how may be leveraged — see the post on RBF-pinning on the mailing list).
  11. DrahtBot commented at 8:51 pm on April 23, 2020: member

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #17383 (Refactor: Move consts to their correct translation units by jnewbery)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  12. DrahtBot commented at 1:09 pm on April 25, 2020: member

    🐙 This pull request conflicts with the target branch and needs rebase.

  13. DrahtBot added the label Needs rebase on Apr 25, 2020
  14. instagibbs closed this on Apr 26, 2020

  15. DrahtBot locked this on Feb 15, 2022

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-11-23 21:12 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me