Package relay design questions #14895

issue opened by sdaftuar on December 7, 2018
  1. sdaftuar commented at 7:46 pm on December 7, 2018: member

    Hi,

    I’ve been thinking about some improvements to transaction relay and wanted some feedback about the design goals.

    Motivation

    A transaction is only accepted to our mempool and then relayed to our peers if (a) our mempool already has all unconfirmed dependencies of the transaction, and (b) the transaction itself passes a bunch of policy checks (feerate, package size limits, etc).

    It is possible for one transaction to fail our feerate policy check even though, had it been bundled with a child transaction, the two together would have made it into our mempool (e.g. because the child’s feerate is sufficiently high; a worked example follows the list below). Here are some scenarios where this can come up:

    • A node on the network has a smaller mempool than the default/typical setting. Our default mempool size is 300MB (huge); it seems like it should be the case that someone running a node with, say, a 20MB mempool ought to have all the transactions that will be appearing in the next block. However, nodes running with smaller mempools can have a higher effective minimum feerate required for transaction acceptance (this is a consequence of our mempool limiting/eviction policy), and so high-feerate children of a low-feerate parent – which might appear in the next block – may never make it into the mempool of such a node.

    • Matt Corallo recently wrote about an example on the bitcoin-dev mailing list involving lightning transactions, where pre-signed transactions might be broadcast to the blockchain long after they were generated, and thus not have been created with a fee that is sufficient to be confirmed quickly (or even be accepted to node mempools). In such situations, channel participants may need to use chained transactions (CPFP) in order to increase the confirmation speed of such transactions, and that implies we may need to introduce a mechanism for those parent transactions to be relayed along with their higher feerate children, even if the parent transaction would be rejected by itself.
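
    To make the first scenario concrete, here is a small worked example of the arithmetic (made-up numbers, a sketch rather than actual policy code): a parent that fails a node’s effective minimum feerate on its own clears it comfortably once evaluated together with its child.

    ```cpp
    #include <cstdint>
    #include <iostream>

    // Toy illustration with made-up numbers: a low-feerate parent fails a
    // 3 sat/vB mempool floor on its own, but the parent+child package passes.
    int main()
    {
        const int64_t parent_fee = 500,  parent_vsize = 500;  // 1 sat/vB
        const int64_t child_fee  = 5000, child_vsize  = 250;  // 20 sat/vB
        const double min_feerate = 3.0;                       // sat/vB floor

        const double parent_rate  = double(parent_fee) / parent_vsize;
        const double package_rate = double(parent_fee + child_fee)
                                  / (parent_vsize + child_vsize);

        std::cout << "parent alone: " << parent_rate << " sat/vB -> "
                  << (parent_rate >= min_feerate ? "accept" : "reject") << '\n';
        std::cout << "parent+child: " << package_rate << " sat/vB -> "
                  << (package_rate >= min_feerate ? "accept" : "reject") << '\n';
    }
    ```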

    Are there other scenarios that suffer without a package relay solution, which have unique design requirements?

    Design questions

    Who is responsible for initiating package relay, sender or recipient?

    While the sender of a transaction that is chained off a low-fee parent is well-positioned to guess that a transaction might need package relay in order to propagate, expecting relayers to do the same seems onerous and potentially bandwidth wasteful. For example, one way for a sender-initiated package relay system to work might be for a sender to announce via an INV (or similar) all ancestor txids along with a child txid. However, what would the protocol look like beyond that first hop – would each node on the network initiate package relay with their peers as well? It seems like this could be used in a way that would cause the same txids to be announced repeatedly on the same links in the network, which seems wasteful. Is there a sender-initiated strategy that works better than this naive idea that we should consider?

    On the other hand, the recipient of a transaction can tell if a transaction’s parents are missing and can therefore know to initiate package relay from the sending peer (which means that package relay would only occur on links that require it, e.g. to/from low-memory-mempool nodes).

    Should we update the p2p protocol to accommodate package relay, or rely on existing functionality?

    In theory, we may be able to shoehorn a recipient-initiated package relay scheme within the existing p2p protocol. A recipient who needs the parents of a transaction could iteratively request them, topologically sort the result once all ancestors have arrived, and then proceed with validation.

    But in practice, it seems like it might be much simpler to reason about the code and computational complexity if we add some kind of special p2p messages to assist in the process, so that rather than iteratively request parents (for example) we could just ask a peer for all unconfirmed ancestors of a transaction, perhaps even already sorted in topological order. It’s also possible that it’d be helpful to add information or requirements around the package feerate of such packages.
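
    Either way, the receiver ends up needing to order a set of transactions parents-first before validation. Here is a minimal sketch of that ordering step (hypothetical types and names, not Bitcoin Core’s actual data structures), using a standard Kahn-style topological sort:

    ```cpp
    #include <map>
    #include <queue>
    #include <set>
    #include <vector>

    // Hypothetical minimal types for illustration only.
    using TxId = unsigned int;
    struct Tx { TxId id; std::vector<TxId> parents; };

    // Kahn's algorithm: order txs so every parent precedes its children.
    // Parents outside the package (already confirmed/in mempool) are ignored.
    std::vector<Tx> TopoSortPackage(std::vector<Tx> pkg)
    {
        std::set<TxId> in_pkg;
        for (const Tx& tx : pkg) in_pkg.insert(tx.id);

        std::map<TxId, int> missing;                  // unprocessed in-package parents
        std::map<TxId, std::vector<size_t>> children; // parent id -> child indices
        for (size_t i = 0; i < pkg.size(); ++i) {
            missing[pkg[i].id] = 0;
            for (TxId p : pkg[i].parents) {
                if (!in_pkg.count(p)) continue;       // dependency already satisfied
                ++missing[pkg[i].id];
                children[p].push_back(i);
            }
        }

        std::queue<size_t> ready;
        for (size_t i = 0; i < pkg.size(); ++i)
            if (missing[pkg[i].id] == 0) ready.push(i);

        std::vector<Tx> ordered;
        while (!ready.empty()) {
            size_t i = ready.front(); ready.pop();
            ordered.push_back(pkg[i]);
            for (size_t c : children[pkg[i].id])
                if (--missing[pkg[c].id] == 0) ready.push(c);
        }
        return ordered;  // shorter than pkg if the "package" contained a cycle
    }
    ```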

    The obvious benefit to not updating the p2p protocol is that we could implement package relay in a new release and the whole network would support providing packages to new software. The downside is potential code complexity and possibly somewhat less efficient processing of transactions due to repeatedly having to check for missing ancestors and doing the topological sort (but perhaps those could be mitigated with the right implementation).

    What Denial-of-Service concerns do we need to address?

    In addition to the usual things we worry about (bandwidth attacks on the network, CPU or other resource exhaustion attacks), does package relay need to incorporate further anti-DoS measures to be useful?

    For example, suppose there is some low feerate transaction A, and it has multiple children: B, C, D, E … One of our peers relays transaction B to us, so we ask for A, but decide that A+B is not good enough for our mempool. How many times will we be willing to re-try A, if C, D, E, etc. are also relayed to us?

    One approach might be to only attempt the same transaction once over some time period (say by adding it to our reject filter, which gets cleared out every block), so that the same transaction cannot be repeatedly used to CPU-attack us; however, this would seem to conflict with the lightning use-case, where we would not want an adversary to be able to prevent relay of some parent transaction.
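
    A minimal sketch of that idea (names are illustrative only; Bitcoin Core’s actual recent-rejection cache is a rolling bloom filter rather than a set):

    ```cpp
    #include <cstdint>
    #include <set>

    // Illustrative sketch: remember txids whose (package) evaluation failed,
    // and refuse to re-evaluate them until the next block clears the filter.
    class PackageRejectCache {
        std::set<uint64_t> m_rejected;  // txid digests rejected since last block
    public:
        bool ShouldRetry(uint64_t txid) const { return !m_rejected.count(txid); }
        void MarkRejected(uint64_t txid) { m_rejected.insert(txid); }
        void OnNewBlock() { m_rejected.clear(); }  // cleared out every block
    };
    ```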

    Moreover, any situation where we might test a transaction’s signatures prior to a (package-) policy failure that prevents the transaction from being ultimately accepted to the mempool would open the door to CPU-exhaustion attacks. This would imply a design for package processing and acceptance where policy checks on the whole package are performed before signature validation – then, if a signature is found to be invalid, we could ban the peer. Are there any other practical solutions for this? If we take this approach, then it seems we could re-try A over and over with limited downside.

    It’s important to note the existing resource usage that transaction relay can already incur – when we receive a new transaction, we generally have to retrieve the inputs from the utxo set, which typically means that an attacker can make us do disk reads costlessly. Given that this seems difficult to avoid, I don’t think package relay needs to limit an attacker’s ability to have us look up inputs. I think this implies that in the previous example, being willing to retry A+C, A+D, etc and looking up A’s inputs over and over shouldn’t be considered an increase in attack surface.

    Consistency requirements?

    Are there any additional consistency requirements we should (or must) impose on package relay? Currently, the order in which transactions are received by a node can determine which transactions make it into the mempool (some examples that come to mind: transactions that are right at the mempool min feerate; transactions that depend on common ancestors near the descendant chain limit; a transaction that conflicts with another vs the descendants of that original transaction; I’m sure there are more examples).

    Naively, we might also expect that the way a package is relayed can affect whether a child transaction is accepted. For instance, if we have a parent transaction A with two children B and C, then it’s possible that B+A has a high enough feerate to be accepted via package relay, but C+A does not – so relaying B first would get all 3 transactions in, while relaying C first would only get A and B in.

    This effect could be magnified if there were (for example) anti-DoS rules that would prevent A from being attempted too many times.

    Would this kind of behavior significantly harm any use cases that we’re trying to support?

  2. laanwj added the label P2P on Dec 7, 2018
  3. laanwj added the label Brainstorming on Dec 7, 2018
  4. ryanofsky commented at 6:37 pm on April 10, 2019: contributor

    Making CPFP work better, and making small mempools work better while avoiding DoS risks and complications from updating the P2P protocol, is an important thing to work on even if no one immediately benefits from changes right now.

    But the design space is so huge that it would probably help to propose a concrete change or set of changes if you want feedback, so there is something to start thinking concretely about and respond to. Even very rough or strawman proposals could be good starting points.

  5. maflcko commented at 6:43 pm on April 10, 2019: member
    The final goal seems a long way off, and I believe suhas has started working on some intermediate milestones that would already come with advantages (such as getting rid of the shared mapRelay, IIRC).
  6. ajtowns commented at 8:57 pm on May 16, 2019: contributor

    Think receiver-initiator is probably better, but you could possibly do sender-initiated package relay by saying “is this tx’s feerate > parent’s fee rate, and is parent’s fee rate < peer’s announced minfee cutoff”. Don’t see how you could make sender-initiator work without p2p changes though.

    It sounds like we could get the simplest case (one tx doing CPFP for one or more others, but no grandparents, and immediately getting them all into the top 2 or 3 MB of mempool) without p2p changes, just by noticing you don’t have the parents, requesting them, and handling the new tx and any requested parents as a package. Might be a good phase 1.

    (EDIT: had written receiver-initiator where I meant sender-initiator)

  7. moneyball commented at 12:57 pm on June 6, 2019: contributor

    For context, I am responding to this in the spirit of “PR shepherd” as discussed the previous day. Given this “PR” (Issue) is on the high priority list, I have prioritized taking a look.

    My read of the status of this Issue is as follows:

    • the author has stated a motivation, a set of design questions, and a discussion of how to think about the design.
    • commenters have requested a specific design to review and (N)ACK
    • it has now been added to the high priority list chasing a Concept ACK

    If we were to adopt the new language separating Concept ACK and Design ACK (#16149), then it seems this Issue is ready for Concept (N)ACKs, but not Design (N)ACKs. Given the current process, which combines Concept and Design, it seems like this Issue needs the author to propose a concrete design. @sdaftuar can you clarify whether you’re seeking a Concept ACK in order to proceed to defining a specific design? @ryanofsky, @MarcoFalke, @ajtowns assuming we are just seeking Concept (not Design) ACK, do you have the information needed to weigh in?

  8. sdaftuar commented at 3:18 pm on June 6, 2019: member

    @moneyball Thanks for the nudge and taking a look here.

    I do have a strawman proposal for a p2p protocol change in support of package relay that I wrote up a while ago: https://gist.github.com/sdaftuar/8756699bfcad4d3806ba9f3396d4e66a

    My intuition right now is that we can make package acceptance much easier to reason about if the recipient of an orphan transaction can ask a peer for the unconfirmed parents of a given transaction, which should be sent back in topological order.

    To make progress on this project, I’m breaking up this work into several steps:

    1. Refactor the mempool acceptance logic to support a concept of “package acceptance”, which would be motivated by better orphan handling (for instance, maybe we can take some simple case, like a transaction missing exactly 1 parent, and process the two transactions together as a package). I expect refactoring the mempool in this way to be major work and to require careful review. (A hypothetical sketch of what such an interface might look like follows this list.)

    2. Once that mempool refactor has been done, then I think we could propose a p2p protocol change (such as my above proposal, or some other more efficient or more clever proposal if anyone has a better idea).
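
    For concreteness, a hypothetical shape for the step-1 interface (names and semantics are purely illustrative, not a proposal):

    ```cpp
    #include <vector>

    // Hypothetical sketch of a package-acceptance entry point. A "package"
    // here is a topologically sorted list of transactions in which later
    // entries may spend outputs of earlier ones.
    struct CTransactionRef { /* stand-in for a shared tx pointer */ };

    struct PackageAcceptResult {
        bool package_accepted{false};  // all-or-nothing outcome
        int first_failure{-1};         // index of the first failing tx, if any
    };

    PackageAcceptResult AcceptPackage(const std::vector<CTransactionRef>& package)
    {
        PackageAcceptResult res;
        // 1. Check topology and package-wide limits (count, total size).
        // 2. Check the feerate of the package as a whole against the floor.
        // 3. Only then validate signatures (ban-worthy if one is invalid).
        // Real logic elided; this sketch only fixes the interface shape.
        return res;
    }
    ```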

    I’d love feedback on whether this path seems reasonable, or also if someone has a better idea of how to achieve the end result.

  9. ajtowns commented at 11:08 am on June 7, 2019: contributor
    Concept ACK I think – splitting mempool acceptance and being able to submit a package as a single unit seems plausible; and doing that before p2p changes also seems plausible. I think we ought to be able to break out smaller useful pieces as we go along though.
  10. fanquake added the label Needs Conceptual Review on Jun 17, 2019
  11. TheBlueMatt commented at 5:06 pm on June 20, 2019: contributor
    Looking forward to more progress on this one. I’m skeptical that the ATMP refactor is as bad as you think, but it does have cost in terms of additional complexity there. Definitely worth it, though, and I’d appreciate even a first step with only 1 parent as that would have real benefits for some users.
  12. naumenkogs commented at 8:13 pm on June 20, 2019: member

    Concept ACK. I think this is a must-have feature in the p2p layer, and I have a feeling that code won’t become much less intuitive.

    I think it’s good timing for introducing new p2p messages: the problem is not severe at the moment, and it’s not that bad if old nodes don’t get the benefit. However, there might be an issue if we start to rely on this assumption very soon (e.g., imagine a lot of people start closing lightning channels with low fees). I don’t know how to justify this part.

    Btw, I believe there’s a scenario similar to what Matt sent to the mailing list: multi-sig transactions or coinjoins which take a while to prepare. While collecting signatures, the fee market might change a lot, and CPFP might happen quite often (if not overpaying a lot by default).

  13. oleganza commented at 5:19 pm on January 23, 2020: none

    I had a thought about looking at the problem from a slightly different angle. What if instead of Child Pays For Parent we think of it as Parent Discounts Child?

    Then we can keep a limited set of non-relayable “waiting-state” parents and track their feerate (represented by a pair (fee, weight)). Every time another candidate tx arrives, its effective feerate is computed as ∑fee/∑weight over itself and all its “waiting-state” ancestors. In other words, all ancestors discount the feerate of the new candidate. If the effective feerate is over the threshold, then the tx and all its waiting ancestors are marked as “accepted” and can now be relayed.

    If there is an overlapping subgraph (because some waiting ancestor has 2+ outputs) and the common ancestor transitions from waiting to accepted, we need to recursively follow the known descendants and re-calculate their feerates so they are not discounted anymore. Since some of them can become acceptable themselves, the process should probably consist of two phases: update the feerates downstream, then try to accept the updated waiting txs.
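
    A sketch of the core computation described above (hypothetical names; it assumes the set of waiting ancestors has already been collected):

    ```cpp
    #include <cstdint>
    #include <vector>

    // "Parent Discounts Child": a candidate's effective feerate is
    // sum(fee)/sum(weight) over itself plus all still-waiting ancestors.
    struct WaitingTx { int64_t fee; int64_t weight; };

    bool EffectivelyAcceptable(const WaitingTx& candidate,
                               const std::vector<WaitingTx>& waiting_ancestors,
                               double threshold_fee_per_weight)
    {
        int64_t total_fee = candidate.fee;
        int64_t total_weight = candidate.weight;
        for (const WaitingTx& a : waiting_ancestors) {
            total_fee += a.fee;        // ancestors "discount" the candidate...
            total_weight += a.weight;  // ...by dragging the aggregate rate down
        }
        return double(total_fee) / total_weight >= threshold_fee_per_weight;
    }
    ```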

    Does this make sense?

  14. ariard commented at 7:55 pm on July 29, 2020: contributor

    I would like to re-open the discussion on package relay design, especially in light of what I’ve learned over the past few months about LN security requirements.

    To sum up, we have an array of use-cases for which package relay would provide value. (ofc I’m looking at this issue with a strong LN bias, and getting an LN fix in the first version might be too much to ask.)

    Best-feerate discovery

    A node on the network may have a small mempool configuration, such that its minimum feerate required for acceptance is far higher than the rest of the network’s. A low-feerate transaction will be rejected even though a high-feerate child may arrive just after. This scenario degrades feerate discovery for nodes with small mempools and limits their ability to accurately estimate fees.

    Multi-party transactions

    A Coinjoin user among a set of participants may have a different liquidity preference than the others and would like to unilaterally bump the feerate of the pending coinjoin to spend the funds sooner. Requiring interactivity either breaks privacy assumptions, by triggering another round of communication, or violates spending policy, by requiring access to the cold wallet.

    Pre-signed delegated transactions

    An LN user may delegate enforcement of its channel to a watchtower, while only providing pre-signed transactions to avoid fund hijacks by the distrusted watchtower. To do its job correctly, the watchtower needs to be able to bump the feerate of those pre-signed transactions.

    Bitcoin Contracts with Competing Interests

    Contract applications (vaults or LN) rely on the timely confirmation of transactions for fund security. Assuming a pre-negotiated fee model, a counterparty has an interest in preventing the inclusion of a transaction in order to benefit from a timelock expiration. An honest party should be able to unilaterally bump the feerate of its counter-signed transactions, without any assumed knowledge of network mempools. See https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-June/002758.html

    Security Requirements

    Here are a few attacks to consider when designing package relay:

    a) a counterparty shouldn’t be able to block propagation of a higher-feerate chain of transactions with an earlier, lower-feerate broadcast competing for the same utxo
    b) a counterparty shouldn’t be able to pin an invalid or low-feerate parent in reject filters to block later acceptance of a higher-feerate child
    c) a counterparty shouldn’t be able to flood the p2p network with cheap, invalid chains of transactions or orphans to obstruct the propagation of higher-feerate packages

    Attacks b) and c) are especially concerning from an LN perspective, as you can map an LN node to its full node. Assuming we can’t mask tx-relay topology, an attacker can directly connect to victims’ peers to interfere with propagation.

    Trade-offs Between Approaches

    Talking with @sdaftuar on package relay design, he underscored the need for a uniform relay/mempool policy across the deployed network to ensure package relay efficiency. Policy stability has already been discussed in other issues (e.g. #13283 (comment)). My mailing list post covers this in more detail, but if we opt for a uniform policy, it means our package design must be backward compatible to avoid breaking applications relying on previous package versions.

    We have a few choices to make on the initial design:

    I think it’s possible that any one of the approaches would guard against those attacks by:

    a) solving conflict replacement, which is a mempool mechanism
    b) introducing a package_id, or evicting low-feerate parents from our rejection filters
    c) allocating per-peer resources, to avoid DoS from the new data structures/algorithms, as we’re already doing for INVs

    IMO, where the approaches diverge is in their use of bandwidth. A receiver-initiated scheme would have the following round-trips:

    • INV announcement of parent and child
    • GETDATA replies of both parent and child
    • TX sending of both parent and child; if the parent is rejected, the child is kept as an orphan
    • GETPACKAGE reply of child ancestors
    • PACKAGE sending of ancestors ids
    • GETDATA replies of ancestors
    • TX sending of ancestors

    Parent announcement/sending would have to be duplicated.

    Whereas a sender/relay-initiated scheme, based on introducing a package_id (sha256 of all package elements), might look like the following (a sketch of one possible package_id construction follows the list):

    • INV announcement of package_id
    • GETDATA of package_id, if it’s not already known
    • PACKAGE sending of all package members
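
    The exact package_id construction isn’t pinned down above; one plausible sketch (illustrative only, with a placeholder hash standing in for sha256 so the example stays self-contained) hashes the sorted txids, so the same set of transactions yields the same identifier regardless of announcement order:

    ```cpp
    #include <algorithm>
    #include <cstdint>
    #include <functional>
    #include <string>
    #include <vector>

    // Set-valued package identifier: sort first so the id is independent
    // of the order in which the members are listed or announced.
    uint64_t PackageId(std::vector<std::string> txids)
    {
        std::sort(txids.begin(), txids.end());    // canonical order for the set
        std::string concat;
        for (const std::string& id : txids) concat += id;
        return std::hash<std::string>{}(concat);  // stand-in for sha256
    }
    ```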

    A sender/relay-initiated scheme, since the sender already knows the feerate dependency between transactions, can announce just a common identifier for the whole package. Of course, some package element (a parent) may already be known to the receiver, but you should always allow a mempool retry as it’s a “different” relay.

    Now, is this worth the effort to optimize package relay? That’s a question hard to answer without knowing its future usage. Dependent second layers may use it often, based on their fee-model requirements, and may come to make up a significant share of overall tx-relay bandwidth. Starting with a high-bandwidth scheme now (receiver-initiated) but switching to a lower-bandwidth scheme (sender-initiated) later would break a backward-compatible uniform policy.

    Between original-sender-initiated and relay-initiated schemes, I think we want to avoid encumbering every relaying hop with recomputing packages when that can be done in O(1) by the sender. This also helps avoid redoing work that your mempool has already done with regards to package limits.

    It might even be worth exploring fancier p2p extensions, like new INVs where the utxo/feerate of either a transaction or a whole package could be sent or queried by peers.

    Conclusion

    IMHO, going forward,

    • we should have a clear understanding of second-layer application requirements to evaluate proposals, including the edge cases of LN security issues
    • we should decide if it’s worth leaving room for future bandwidth optimizations so that either sender or receiver can initiate, with the caveat that fancier optimizations come with greater DoS vulnerabilities
    • we should deploy a backward-compatible package policy, namely only 2-tx packages for now, solving pinning (at least until mempool refactors are merged) to avoid package-replacement DoS vectors
    • package support should be built on top of the transaction-request overhaul to reduce the DoS and transaction-origin inference attack surfaces
    • we should decide on a consistency guarantee; the package API should be clear enough to avoid footgunish misuse by higher-level applications

    Maybe to start, we can work on an unoptimized, upgradeable LN-fixing package relay and defer the hard questions to later.

    Thoughts?

    cc @TheBlueMatt @t-bast @rustyrussell

  15. sdaftuar commented at 8:12 pm on July 29, 2020: member

    I’m still digesting all this, but I wanted to respond to this very clearly:

    Talking with @sdaftuar on package relay design, he underscored the need for a uniform relay/mempool policy across the deployed network to ensure package relay efficiency.

    I think you misunderstand my points on this issue. My view is exactly the opposite: philosophically, trying to enforce a uniform relay/mempool policy across the network in order to protect the security model of an application is something that I think is a mistake and a huge divergence from how we’ve approached relay policy and p2p design in the past.

    What I was also trying to say is that recent conversations around the insufficiency of our transaction relay system for lightning’s security model make it sound like lightning needs a uniform relay policy – which I think would be troubling if true. So I think it’s worth hashing out here exactly what the security requirements are in some more detail, and whether we can reasonably accommodate that at the base layer. Perhaps the best way to do that is to throw up some proposals and discuss whether they are actually sufficient?

  16. sdaftuar commented at 8:25 pm on July 29, 2020: member

    a) a counterparty shouldn’t be able to block propagation of a higher-feerate chain of transactions with an earlier, lower-feerate broadcast competing for the same utxo b) a counterparty shouldn’t be able to pin an invalid or low-feerate parent in reject filters to block later acceptance of a higher-feerate child c) a counterparty shouldn’t be able to flood the p2p network with cheap, invalid chains of transactions or orphans to obstruct the propagation of higher-feerate packages

    This is a helpful start. Points a) and c) make me wonder what are the RBF requirements of lightning? It sounds like we need to be able to do package-level-RBF in order to achieve some of these goals, possibly with new RBF semantics that might allow total mempool fee to drop. Is that correct? Also, does it need to apply even if not all directly conflicting transactions signal for opt-in, to prevent an attacker from creating a transaction that can’t be package-RBF-evicted because it’s not signaling?

  17. sdaftuar commented at 10:49 pm on July 29, 2020: member

    @thebluematt and I previously discussed some policy changes that might help here. I’m not sure I entirely understand all the issues but we had two ideas that I think may mitigate some of the transaction chain pinning concerns:

    a) Transaction pinning. To solve the problem of low-feerate transactions being stuck in the mempool interfering with protocols like lightning which just need resolution, we can borrow from suggestions to solve RBF-pinning. I believe @gmaxwell once suggested that one way we could mitigate RBF-pinning issues would be by letting transaction creators add a flag (perhaps an unused bit in a sequence field or the like) that would indicate that nodes should not add children to the transaction, unless doing so would likely cause the transaction to be confirmed very soon. The motivation is that the author of the transaction may seek to use RBF in the future to bump the feerate, so relay or mining nodes would be better off not accepting low-feerate children, which would interfere with that.

    This seems like it should be incentive-compatible in principle, although the details around determining exactly what is “near confirmation” could have implementation difficulties in practice, I dunno. I don’t really know how lightning works, but presumably if we had such a policy that was broadly in use, then applications like lightning could attach this flag to all pre-signed transactions that involve more than one party?

    Also, I guess we’d need to have this flag work at the package level (so that we only let a transaction in if the package it’s part of has a high enough feerate to be mined soon).

    b) Package RBF. I guess if some undesirable transaction does end up stuck in the mempool somehow, you’d need package-level RBF in order to evict one package with another. If we think our implementation of (a) is good enough, then I don’t know how much we need to worry about this, but perhaps there are ways that (a) could be gamed which make package RBF of some sort important. I think figuring out exactly what the semantics here would be will take some design effort. This also reintroduces the RBF pinning problem, but this time for packages rather than single transactions (i.e. what do you do when you want to evict a large package with a small one?).

    Are there any other tools we need at our disposal to achieve the security requirements listed?

  18. sipa commented at 11:07 pm on July 29, 2020: member

    Just want to mention this here. @rustyrussell floated the idea once of having sort of a “pre-mempool” of transactions that are queued for inclusion into the mempool, but otherwise unvalidated. These transactions aren’t relayed until they’re actually accepted into the mempool (gated by it shrinking through its normal expiration/confirmation/conflict mechanism), so there are fewer concerns about creating a freely abusable network-wide broadcast channel. In addition, these could be relayed along with the outputs they’re spending (and identified using a hash that commits to these), so that the receiver can correctly reason about their feerate and whatever other policies they want to apply, without needing to care about orphaning or doing costly UTXO lookups and other full validation. This could reduce the actual mempool to a smaller area, of sufficient size to efficiently decide what goes into the next block, and delay relay of what can’t be confirmed soon.
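
    A rough data-structure sketch of that idea (entirely hypothetical; nothing like this exists in Bitcoin Core):

    ```cpp
    #include <cstdint>
    #include <vector>

    // Sketch of a "pre-mempool" entry: the transaction travels together with
    // the outputs it spends, so a receiver can compute its fee and feerate
    // locally, with no UTXO lookups and no orphan handling. The entry id
    // would be a hash committing to both fields, so a peer can't misstate
    // the prevouts without changing the identifier.
    struct SpentOutput {
        int64_t value;                           // satoshis of the spent output
        std::vector<uint8_t> script_pubkey;      // its locking script
    };

    struct PreMempoolEntry {
        std::vector<uint8_t> raw_tx;             // serialized transaction
        std::vector<SpentOutput> spent_outputs;  // one per input, same order
    };
    // fee = sum(spent_outputs[i].value) - sum of raw_tx's output values
    ```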

    I haven’t thought through the implications too much, but perhaps it is useful in this discussion. @rustyrussell let me know if I’m missing something or misrepresented anything.

  19. ariard commented at 1:26 am on July 30, 2020: contributor

    I think you misunderstand my points on this issue. My view is exactly the opposite: philosophically, trying to enforce a uniform relay/mempool policy across the network in order to protect the security model of an application is something that I think is a mistake and a huge divergence from how we’ve approached relay policy and p2p design in the past.

    What I was also trying to say is that recent conversations around the insufficiency of our transaction relay system for lightning’s security model make it sound like lightning needs a uniform relay policy – which I think would be troubling if true.

    I’m sorry for the misunderstanding there, but I think there is still a valuable discussion to have on the stability of some policy subset across versions, on which higher applications (not only LN) can confidently build their fee and propagation models. It could go: a) the base layer agrees on the policy subset, b) applications design their fee models accordingly, c) as feedback, if there is use-case demand, the subset is reasonably extended. That’s actually not the order that was followed, because we have deployed protocols with funds at risk due to these issues not being understood well enough (and I don’t blame anyone; each layer has a complexity of its own, and the interaction of the two is daunting).

    When I’m thinking about a uniform policy, here is the class of scenario I’m concerned about: we tighten some rule in a future release Y (like increasing minRelayTxFee), while a higher application has pre-signed parent transactions that hardcoded the older minRelayTxFee and rely on a high-feerate child for timely confirmation of the whole package (as you can’t predict the future, why bother with an inaccurate parent feerate?). The application’s full node is safe as long as there is a propagation path of older-release nodes to miner mempools. If your full node is a private one with only a few tx-relay peers, then as soon as they all run Y your application becomes insecure. Of course, it’s unlikely we would carelessly touch an obvious rule like minRelayTxFee, but as the whole policy set is loosely documented and hard to test, it’s hard to know which rules higher-layer devs have made security assumptions about for their software stacks.

    Naturally, a way to avoid that would be to give some ecosystem warning when we identify such an upgrade risk and to ensure sufficient time elapses for higher-layer software to comply and re-deploy. And maybe “uniformity” is the word we’re hanging on; I think what’s really aimed for is backward compatibility?

    It’s just as troubling to think that the security of your LN node is a function of whom your linked full node happens to be connected to. Good propagation of your time-sensitive transactions is really a fundamental assumption of all payment channel designs, even before LN. Ecosystem-wise, it’s quite concerning that this assumption, in fact, rests on a false premise.

  20. ariard commented at 2:20 am on July 30, 2020: contributor

    This is a helpful start. Points a) and c) make me wonder what are the RBF requirements of lightning? It sounds like we need to be able to do package-level-RBF in order to achieve some of these goals, possibly with new RBF semantics that might allow total mempool fee to drop. Is that correct? Also, does it need to apply even if not all directly conflicting transactions signal for opt-in, to prevent an attacker from creating a transaction that can’t be package-RBF-evicted because it’s not signaling?

    I think the total mempool fee needs to be allowed to drop, because otherwise a malicious counterparty can broadcast first a low-feerate, high-absolute-fee package for the utxo in competition, thus preventing replacement by an honest party whose fee-bump is capped at the contested value. The honest bump must happen through CPFP, as the honest parent is also pre-signed and won’t replace the malicious pin on its own. As you might not learn of the pinned parent (and a different one may be in each network mempool), you can’t blindly CPFP it and must assume that your honest parent plus high-feerate CPFP replaces every malicious instance as a whole. For the second concern, that’s something we can handle on our own: as transactions are pre-signed (or spends must use a CSV), you can force opt-in, even for malicious transactions.

    Overall, for Lightning’s security model, I summed up the known pinning attack scenarios here: https://lists.linuxfoundation.org/pipermail/lightning-dev/2020-June/002758.html.

    I believe @gmaxwell once suggested that one way we could mitigate RBF-pinning issues would be by letting transaction creators add a flag (perhaps an unused bit in a sequence field or the like) that would indicate that nodes should not add children to the transaction, unless doing so would likely cause the transaction to be confirmed very soon.

    I guess there is the weakness of someone connecting to your node (the original sender) and freely inflating the feerate of your mempool to make you reject your own CPFP. An attacker can do this without effectively paying the price, by partitioning your mempool from the rest of the network with conflicts. There is also the attacker who anticipates mempool congestion (like exchange payouts) and races to get a low-feerate transaction into the mempool before it becomes obsolete, where it then lingers. Also, not broadcasting low-feerate children prevents the network from learning about them; they’re valid blockspace demand at some price point and may express an honest user’s confirmation preference (then again, does low-feerate CPFP on RBF-able transactions make sense?)

    Generally, I’m skeptical of turning mempools into fee estimators (even if the latter are built on top of the former) and instead favor applications expressing their confirmation preferences, with mempools just reflecting them (while policing resource abuse). “Near confirmation” sounds like a moving target due to block variance and mempool congestion. That said, it might be good enough; I need to think more on it.

    This also reintroduces the RBF pinning problem but this time for packages rather than single transactions (ie what do you do when you want to evict a large package with a small one?).

    Is this an issue if the smaller one has a better feerate than the large package, or are you thinking about ancestor/descendant limits here?

    Are there any other tools we need at our disposal to achieve the security requirements listed?

    IIRC, Matt and I discussed fast rotation of some of your tx-relay peers to make topology less static and thus less prone to exploitation through conflicting mempools or interference with propagation. But that was only for the more sophisticated scenarios (“The Ugly” in my June post), not the ones I’m immediately concerned with.

  21. t-bast commented at 8:13 am on July 30, 2020: contributor

    If that helps, I tried summing up recent discussions around these issues and LN assumptions here:

    These documents may be incomplete, and there may be mistakes (my own!), but I think they can be a good starting point to understand where LN may be making wishful assumptions that can fall apart at tx-relay time (and the tx format constraints LN has).

    What I was also trying to say is that recent conversations around the insufficiency of our transaction relay system for lightning’s security model make it sound like lightning needs a uniform relay policy – which I think would be troubling if true.

    I don’t think LN requires a uniform p2p network (in the sense that all nodes apply the exact same relay policies and mempools always converge). And I agree that it would not be desirable; I think there’s simply a misunderstanding on what “uniform” means here. IIUC @ariard means that higher layers need to be able to rely on a few core assumptions that need to remain “loosely true” for a big enough portion of the network (e.g. minRelayTxFee will not be raised arbitrarily).

    I think the core tx-relay assumption for LN is that there must be a (somewhat stable) mechanism to allow an honest node to bump the fees of a tx that was signed in the past with a low feerate (and that can’t be updated because it requires signatures from multiple participants, who may be malicious and won’t cooperate). If we raise the fee to X sats, it’s ok if some nodes relay it and others don’t, as long as raising the fee even further guarantees that most nodes will eventually relay it.

    I don’t have enough knowledge to suggest how this high-level goal should be met (even though sender-initiated package relay feels like a good starting design point), but I can share what I think is the root cause of the issue today (my point of view may be biased and incomplete, please don’t hesitate to hurt my feelings and tell me I’m completely wrong).

    IMHO BIP 125 rule 5 is the culprit because it introduces irreplaceable packages. Those packages are also quite costly for mempools (because walking the transaction graph is expensive). I don’t understand why allowing such long chains of unconfirmed transactions in the mempool is useful. It feels to me that bitcoin should only allow much smaller unconfirmed packages (with a hard-coded, maybe configurable, limit N), allow replacing those packages, and shift the complexity of managing long chains to clients; if someone wants to chain 100 txs, they should buffer them and broadcast them N by N, waiting for the previous batch to confirm before broadcasting more (i.e. they alone bear the complexity cost of their use case, instead of putting the burden on the whole network).

    Again, I want to emphasize that I may be completely wrong on this; it’s very likely that I’m unaware of use-cases that require this feature.

  22. ajtowns commented at 9:50 am on July 30, 2020: contributor

    “pre-mempool”

    I’ve had a similar thought, which I call the “memswamp” – basically, keep an indexed collection of transactions but allow it to include conflicting transactions as well (it could potentially include orphan or non-final transactions too, suitably tagged). You filter conflicting and low-fee txs out of the memswamp to generate the mempool, which is what you use to build blocks.

    The main idea being that if someone relays you a tx that you can successfully validate (you know its parents, its signatures are valid, and it complies with standardness rules), then even if it doesn’t pass RBF rules, you still keep it around; you just don’t immediately forward it. Ideally, you maintain a “minfeebump” value that you use to ratelimit which txs you forward, so if there’s a lull in tx traffic, you’ll forward even a tx that RBFs by a small amount, but if there’s a lot of traffic it may rise above “minfee” even. Expire txs when they’re low fee, when they’ve been replaced by a higher-fee tx that you’ve already relayed, when they’ve been orphaned for too long, etc. I think that might be a good enough way of preventing tx spam on its own, so that you could drop the “must pay a higher fee than the package it’s replacing, not just a higher feerate” rule, which seems like the biggest roadblock to ensuring you can get your contract resolution committed on chain.

    I think the memswamp approach potentially has nicer properties for handling reorgs, since you can “just” free up space for the txs from the reorged blocks and dump them directly into the mempool, without having to resolve conflicts. If it’s a >100 block reorg you’d need to be careful about disappearing coinbases making some txs orphans, and would have to take care of final txs becoming non-final as well.

  23. darosior referenced this in commit 8b6f77f2a4 on Nov 11, 2020
  24. darosior referenced this in commit 6067a8fdb7 on Nov 12, 2020
  25. ariard commented at 11:55 pm on April 24, 2021: contributor

    Should package limits be expressed in weight units or virtual bytes?

    See discussion: https://github.com/bitcoin/bitcoin/pull/20833/#discussion_r618629408

    cc @glozow

  26. glozow commented at 3:16 am on April 25, 2021: member

    Should package limits be expressed in weight units or virtual bytes?

    Isn’t one a multiple of the other? I used vbytes in 20833 since that’s what descendant limits are expressed in. I don’t think I fully understand why there would be a significant difference in using one over the other.
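
    For reference, BIP 141 defines virtual size as weight rounded up to the next multiple of 4 (vsize = ceil(weight/4)); the per-transaction rounding is where the two can diverge once you sum over a package:

    ```cpp
    #include <cstdint>
    #include <iostream>

    // BIP 141: vsize = ceil(weight / 4). Rounding each transaction separately
    // can make the sum of vsizes exceed the vsize of the summed weight, so a
    // package limit in vbytes is not exactly a package limit in weight / 4.
    int64_t Vsize(int64_t weight) { return (weight + 3) / 4; }

    int main()
    {
        const int64_t w1 = 765, w2 = 571;           // two made-up tx weights
        std::cout << Vsize(w1) + Vsize(w2) << '\n'; // 192 + 143 = 335 vB
        std::cout << Vsize(w1 + w2) << '\n';        // ceil(1336/4) = 334 vB
    }
    ```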

  27. ariard commented at 10:07 pm on May 28, 2021: contributor

    Isn’t one a multiple of the other? I used vbytes in 20833 since that’s what descendant limits are expressed in. I don’t think I fully understand why there would be a significant difference in using one over the other.

    See #22097 for rationale and proposed changes.

  28. glozow commented at 4:51 pm on September 21, 2023: member

    Design discussion has made progress in various places. Since we now have code and a BIP and can discuss the design decisions more concretely, I’m going to close this issue.

    See #27463 for project tracking.

  29. glozow closed this on Sep 21, 2023

  30. ariard commented at 9:32 pm on September 22, 2023: contributor
    Good to close. Note to reviewers: the pinning scenarios described here are still somewhat relevant to reviewing bip331 and its implementation correctness iirc.
  31. bitcoin locked this on Sep 21, 2024
