Limited mempool + floating relay fee + rejection caching + mempool expiry #6455

sipa wants to merge 8 commits into bitcoin:master from sipa:limitpoolx changing 22 files +534 −154
  1. sipa commented at 10:07 pm on July 17, 2015: member

    This is a demo combination of #6452, #6421, #6453 and a new gradual relay fee adjustment.

    Continuously (every second) adjust the relay fee by measuring the mempool size and correcting it towards 71% (sqrt(50%)) of the cap size. The maximum correction speed is equal to doubling or halving the fee per two hours. The relay fee will never go below the -minrelayfee setting. It is not intended to deal with sudden spikes - that’s what the hard cap is meant for.
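A minimal sketch of such a controller, under stated assumptions: the function name, parameters, and the exact update rule here are illustrative, not the PR's actual code. It corrects the fee toward the usage/target ratio, clamps each step so the fee can at most double or halve per two hours, and never drops below the configured minimum.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>

// Hypothetical sketch of the gradual relay-fee controller described above.
//  - target size is sqrt(50%) ~= 71% of the mempool cap
//  - the per-step multiplicative change is clamped to 2^(dt/7200),
//    i.e. at most doubling or halving per two hours
//  - the fee never goes below the -minrelayfee setting
double UpdateRelayFee(double currentFee, double minRelayFee,
                      std::size_t mempoolBytes, std::size_t capBytes,
                      double dtSeconds)
{
    const double target = 0.7071 * static_cast<double>(capBytes);
    const double ratio = static_cast<double>(mempoolBytes) / target;
    const double maxStep = std::pow(2.0, dtSeconds / 7200.0);
    // Pull the fee toward the ratio, but no faster than maxStep allows.
    const double step = std::min(std::max(ratio, 1.0 / maxStep), maxStep);
    return std::max(currentFee * step, minRelayFee);
}
```

Called once per second (dtSeconds = 1), the clamp keeps each step within about ±0.01%, so the fee drifts smoothly rather than jumping with every mempool fluctuation.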

    This was inspired by Jeff Garzik’s low/high water mark approach, but has more gradual effects.

    TODO: integration with fee estimation. @morcos, feel like having a look at that?

  2. sipa force-pushed on Jul 17, 2015
  3. sipa force-pushed on Jul 17, 2015
  4. sipa force-pushed on Jul 17, 2015
  5. laanwj added the label TX fees and policy on Jul 18, 2015
  6. laanwj added the label Mempool on Jul 18, 2015
  7. petertodd commented at 6:04 pm on July 18, 2015: contributor

    Kinda inclined to NACK the floating relay fee idea here. (though everything else looks fine)

    With the mempool limited in size, the minimum relay fee loses the purpose of protecting nodes from running out of RAM - that I’m sure we can agree on!

    For miners, it still has the (sort of accidental) purpose of ensuring they don’t fill up their blocks with useless txs that cost them more to mine in orphan risk than fee revenue. (although it’s probably something like 10x or more too low for that)

    For relay nodes, first let’s consider transactions with a high probability of getting mined eventually. By that I mean a tx that, if not for high demand, would eventually get mined, and whose sender isn’t intending to double-spend it anytime soon. Here we’re converging towards a more and more valuable mempool, so regardless of what the minimum is, bandwidth usage will diminish as the mempool value increases, eventually reaching a steady state. If I understand the floating relay fee idea, this is basically the same outcome.

    What an attacker can game is the case where your node is accepting transactions that have a low probability of getting mined. For instance, at the extreme if we have another sigop-like bug, the attacker can fill the mempool by broadcasting high fee txs, using up bandwidth, double-spending them, and repeating. Without a floating relay fee, after each one of those cycles your relay node is immediately back to normal, helping move txs. With a floating relay fee the attacker could succeed in increasing the minimum to the point where other txs aren’t getting propagated - bad!

    I’ve got some more ideas on this, but I’ll save them for the list…

  8. sipa commented at 6:34 pm on July 18, 2015: member

    I’ll try to explain the reasoning behind it.

    1. The mempool limitation uses a rule that the new, replacing transaction should pay for the relay of both the replaced transaction and the new transaction. That means that if we previously accepted low-fee transactions into the mempool (perhaps because a block was just mined, and there is now space), we’ve now made it unnecessarily expensive for new transactions to get in.

    2. The mempool limitation code uses heuristics to avoid bad performance, but this means it also makes suboptimal decisions. After many replacements, with the bottom filled with big “packages”, it may become impossible for a significant portion of the transactions to get in, and thus, to get relayed.

    3. The above would even happen if the mempool code did an exhaustive search for the best possible replacement, as it is still only a per-incoming-transaction decision, and single transactions will rarely beat the fee requirements to replace a set of dependent transactions.

    4. The effective feerate to get relaying on the network is a sawtooth function when purely the limitation code is being used, more or less synchronized across the network. I believe it’s not good for predictability to have such sharp changes - e.g., people may aim to suddenly broadcast right after a block to increase inclusion chances.

    5. The DoS protection with mempool limitation is still based on having a marginal price per byte on the network, and if that value drifts too much, it becomes ineffective. This also includes dust protection, which IMHO needs feedback from the network to avoid having a fixed configured value.
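The replacement rule from point 1 can be sketched as follows; `PaysForReplacement` and its parameters are hypothetical illustrations, not the PR's actual API. The incoming tx must pay the evicted set's fees plus enough extra to cover relaying both the evicted bytes and its own bytes.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch of the "pay for relay of both" replacement rule.
// relayFeePerKB is the marginal relay price per 1000 bytes.
bool PaysForReplacement(double newFee, std::size_t newSize,
                        double evictedFees, std::size_t evictedSize,
                        double relayFeePerKB)
{
    const double required = evictedFees +
        relayFeePerKB * static_cast<double>(newSize + evictedSize) / 1000.0;
    return newFee >= required;
}
```

For example, at 1000 sat/kB, replacing a 500-byte set paying 1000 sat with a 250-byte tx requires at least 1000 + 1000 * 750/1000 = 1750 sat, which is why low-fee transactions admitted earlier make later entry unnecessarily expensive.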

  9. sipa commented at 6:48 pm on July 18, 2015: member

    I do understand the worry about mempools becoming filled with high-fee but unconfirming transactions, leading to (ever) increasing fees until the mempool clears. I’ve thought about using fees actually in the mempool, or time-based expiration. Perhaps a rule that the relay fee, in case of a “high water” mempool (above the aimed size), cannot go above the lowest feerate actually in the mempool.

    In practice, what I’m seeing so far with this code in the current network conditions is that the relay fee actually goes back to the default often, and only rises a bit for a few hours at a time.

  10. jgarzik commented at 9:41 pm on July 18, 2015: contributor

    Mempools are guaranteed not to fill with unconfirming transactions if you use an accurate sampling method: time-based expiration + blocks naturally confirming and clearing out transactions.

    All other methods - this PR, abs size limit, floating relay fee - must be treated as inaccurate fallbacks for situations such as transaction bursts when time-based expiration fails to cap the mempool at an absolute size - because transaction replacement is fundamentally an inaccurate guess at what is best to be removed.

    The solution needs to be considered holistically (and this PR is a good attempt at that):

    • time based expiration + natural block confs, the best way to sample what miners are actually confirming
    • an absolute cap, to deal with short term traffic bursts. good local node defense.
    • a floating relay fee, to deal with floods and filter out will-not-confirm traffic
  11. Add uint256 support to CRollingBloomFilter cfaef8a5cc
  12. Keep track of recently rejected transactions
    Nodes can have divergent policies on which transactions they will accept
    and relay.  This can cause you to repeatedly request and reject the same
    tx after it's inved to you from various peers which have accepted it.
    Here we add a rolling bloom filter to keep track of such rejections,
    clearing the filter every time the chain tip changes.
    
    Credit goes to Alex Morcos, who created the patch that this code is
    based on.
    bec54c1e52
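The rejection cache in the commit above can be sketched as follows. This is a simplified illustration: a real node uses `CRollingBloomFilter` (probabilistic, bounded memory); a plain `std::unordered_set` stands in here, and all names besides `CRollingBloomFilter` are hypothetical.

```cpp
#include <cassert>
#include <string>
#include <unordered_set>

// Simplified sketch of the recently-rejected-tx cache.
struct RecentRejects {
    std::unordered_set<std::string> filter; // keyed by txid
    std::string tipHash;                    // chain tip the cache is valid for

    // Clear the cache whenever the chain tip changes: a new block may
    // have made previously rejected transactions acceptable.
    void CheckTip(const std::string& currentTip) {
        if (currentTip != tipHash) {
            filter.clear();
            tipHash = currentTip;
        }
    }
    void Insert(const std::string& txid) { filter.insert(txid); }
    // On an 'inv' from a peer: skip re-requesting a tx we already rejected.
    bool AlreadyRejected(const std::string& txid) const {
        return filter.count(txid) > 0;
    }
};
```

This avoids the repeated request/reject cycle across peers with divergent policies, while the per-tip reset ensures a tx rejected only because of the old mempool state gets another chance.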
  13. Separate core memory usage computation in core_memusage.h 51e1cd412f
  14. sipa force-pushed on Jul 18, 2015
  15. TxMemPool: Change mapTx to a boost::multi_index_container
    Indexes on:
    - Tx Hash
    - Fee Rate (fee-per-kb)
    605e2ec363
  16. Move orphan tx handling to a separate log class 6b22e28fbb
  17. Implement on-the-fly mempool size limitation. 64871cadd5
  18. Floating relay fee
    Continuously (every second) adjust the relay fee by measuring the
    mempool size and correcting it towards 71% (sqrt(50%)) of the cap
    size. The maximum correction speed is equal to doubling or halving
    the fee per two hours. The relay fee will never go below the
    -minrelayfee setting.
    
    It is not intended to deal with sudden spikes - that's what the hard cap is meant for.
    
    This was inspired by Jeff Garzik's low/high water mark approach, but has more gradual effects.
    6498673f99
  19. sipa force-pushed on Jul 19, 2015
  20. sipa renamed this:
    Limited memory pool + floating relay fee + rejection caching
    Limited mempool + floating relay fee + rejection caching + mempool expiry
    on Jul 19, 2015
  21. sipa commented at 0:03 am on July 19, 2015: member

    Added default-48h expiry to the mempool; thanks to @ashleyholman’s indexed mempool this is trivial.

    I was previously of the opinion that expiration doesn’t really help, as rebroadcast (by anyone) can always override it, but in combination with a floating relay fee I think it makes sense as extra protection against divergent behaviour.

  22. sipa force-pushed on Jul 19, 2015
  23. sipa force-pushed on Jul 19, 2015
  24. sipa force-pushed on Jul 19, 2015
  25. Mempool expiry 2be5db0792
  26. sipa force-pushed on Jul 19, 2015
  27. dgenr8 commented at 9:06 pm on July 19, 2015: contributor

    The floating fee is node-specific, which could lead to lumpy minimum fees across the network. When some nodes raise their minimum, other nodes will see reduced traffic without having had to raise theirs.

    Rather than just integrating with fee estimation, I wonder if fee estimation could be improved to where it serves as the dynamic fee rate mechanism. The advantage is that it’s based on the global blockchain data set.

  28. sipa commented at 10:08 pm on July 19, 2015: member

    It is true that this results in dynamic effective relay fees across the network; however, that is inevitable with a limited memory pool linked to relay.

    The purpose of the floating relay is to dampen the effect of that limit, and make it more constant over time.

  29. dgenr8 commented at 11:08 pm on July 19, 2015: contributor

    For the goal of maximizing fees, filling the mempool by fee rate is a heuristic. Truly maximizing fees would require filling by absolute fee, subject to the size constraint, continually re-solving a knapsack problem.

    In particular, the feerate heuristic is unkind to an efficient large-size transaction if its feerate is slightly lower than another smaller-sized transaction, and there is not enough room for both.

    But for a relay node, the goal itself of maximizing fees does not seem right. It allows a transaction that pays greater fees than necessary for 1-conf to push earlier transactions out, and cause fee estimates to increase. That’s good for miners, but not good for users. A relay node should be agnostic between miners and users.

    The dependent handling logic is complex. It would be simpler if dependents were made less attractive directly because they are dependent, and to make them even less attractive for being multiple levels deep. A nice property of this is that they automatically get more attractive later, once their parents are confirmed.

    All of these reasons suggest relay nodes use a different metric for filling their mempool than miners use for filling blocks. A relay node has no revenue, so profit maximization is equivalent to cost minimization. A relay node’s cost of carrying a transaction in the mempool is proportional to two main factors: the size of the transaction and the amount of time it spends in the mempool. Therefore a possible metric for relay nodes is sizeBytes * expectedBlocksToConfirm(feeRate). The values stored for this on mempool entries could be updated as the fee rate structure changes.

  30. petertodd commented at 11:19 pm on July 19, 2015: contributor

    @sipa

    1. The mempool limitation uses a rule that the new, replacing transaction should pay for the relay of both the replaced transaction and the new transaction. That means that if we previously accepted low-fee transactions into the mempool (perhaps because a block was just mined, and there is now space), we’ve now made it unnecessarily expensive for new transactions to get in.

    2. The mempool limitation code uses heuristics to avoid bad performance, but this means it also makes suboptimal decisions. After many replacements, with the bottom filled with big “packages”, it may become impossible for a significant portion of the transactions to get in, and thus, to get relayed.

    3. The above would even happen if the mempool code did an exhaustive search for the best possible replacement, as it is still only a per-incoming-transaction decision, and single transactions will rarely beat the fee requirements to replace a set of dependent transactions.

    All these issues with large chains of unconfirmed transactions make me think it makes sense to just put a size limit on them, e.g. no single set of unconfirmed txs should be more than 125KB in size, and contain more than 100 txs. When you think about it, large unconfirmed sets are inherently problematic to mine, because it’s hard to “fit” them in with other transactions in optimal ways. So why not just stop relaying them? The use-case is both rare, and often associated with abusive behaviors.

    With that limit in place, the size difference between the largest possible package to replace, and a single transaction, would be about 625:1 (125KB to 0.2KB). If we replaced txs purely on a fee/KB basis, that’d mean the trick of replacing large transactions with small ones could get you 625x cheaper bandwidth DoS attacks - not great. However from a miner’s profitability point of view, given that the mempool is much larger than a single block, it’s probably still an economically rational decision. (for replace-by-fee this logic probably should be modified, but that’s my problem, not yours!)

    Now suppose instead that we set the minimum relay fee based purely on bandwidth used. E.g. grow the min relay fee during times of high bandwidth and let it decay back to the absolute minimum during lulls. In that case attempting to do a bandwidth DoS with repeated replacements would just drive up the relay fee, quickly limiting the effectiveness of the attack to the pre-defined bandwidth limit. Having such a mechanism would also be useful for RBF.
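A minimal sketch of that bandwidth-driven floor, with hypothetical names and constants (the growth and decay factors are illustrative assumptions, not proposed values):

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical sketch: min relay fee grows when relayed bandwidth
// exceeds a budget and decays back toward the floor during lulls.
struct BandwidthFee {
    double fee;         // current min relay fee (e.g. sat/kB)
    double floorFee;    // absolute minimum it decays back to
    double bytesBudget; // bytes/sec considered "normal" load

    // Call once per second with the bytes relayed in that second.
    void Tick(double bytesRelayed) {
        if (bytesRelayed > bytesBudget)
            fee *= 1.0 + 0.01 * (bytesRelayed / bytesBudget - 1.0); // grow
        else
            fee *= 0.999;                                           // decay
        fee = std::max(fee, floorFee);
    }
};
```

A replacement-churn attack then raises its own price: each burst of relayed bytes pushes the fee up, capping the attacker's sustained bandwidth usage, and the fee drifts back down once the flood stops.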

    1. The effective feerate to get relaying on the network is a sawtooth function when purely the limitation code is being used, more or less synchronized across the network. I believe it’s not good for predictability to have such sharp changes - e.g., people may aim to suddenly broadcast right after a block to increase inclusion chances.

    Is that actually such a bad thing? Basically what such a change would say is “previously there was no chance that your tx would get mined, but that suddenly changed, so rebroadcast it now.” I think that’s a better outcome than unnecessarily preventing perfectly valid and profitable-to-mine transactions from getting to miners.

    I do understand the worry about mempools becoming filled with high-fee but unconfirming transactions, leading to (ever) increasing fees until the mempool clears. I’ve thought about using fees actually in the mempool, or time-based expiration. Perhaps a rule that the relay fee, in case of a “high water” mempool (above the aimed size), cannot go above the lowest feerate actually in the mempool.

    So, really you have two conflicting problems with the unminable transaction issue:

    1. Bandwidth usage

    2. Unnecessarily high fees

    Expiration improves the situation for #2, but makes #1 worse as it lets the attacker re-use the txouts faster. OTOH, they’re probably doing that anyway, because they can double-spend the outputs. Limiting total unconfirmed tx chain depth also helps here in a few ways, such as forcing the attacker to tie up more txouts in the attack. The attacker is limited by how much BTC they have access to; the larger the mempool the more BTC they need to pull off the attack.

    Each time a block is found and a tx is not included in that block, it’s essentially a sign that the economic feerate of the tx may be lower than the apparent feerate we calculated. So quickly reducing the apparent feerate on every block for the top 1MB * x transactions found probably makes sense for the mempool. (though the mining code should use the actual feerates!) For instance, halving the feerate each time a block is found for those txs probably makes sense; the remaining part of the mempool would then act to make the bandwidth attack expensive to pull off.

    Finally for worst-case recovery, I agree that a 48-hour expiration can’t hurt.

  31. sipa commented at 11:22 pm on July 19, 2015: member

    As Peter Todd noted, if relay behaviour and miner behaviour are allowed to diverge, there is no real need anymore to keep a mempool at all, provided a solution is found to deal with dependent transactions (for example, relaying dependent transactions as packages).

    The strategy here is trying to maintain a solution that applies to both relaying and mining, trying to optimize for what is most economic to be mined.

  32. petertodd commented at 11:30 pm on July 19, 2015: contributor

    @dgenr8 Relaying transactions that miners aren’t interested in mining is pointless and just wastes bandwidth; go too far with that philosophy and large miners will start creating alternate tx submission mechanisms, which is harmful to decentralization.

    Re: the knapsack problem, so long as the mempool is much larger than the blocksize limit and we have a maximum transaction size well under the blocksize limit - and better yet maximum unconfirmed tx package size - ignoring the knapsack stuff is a reasonable approximation of ideal behavior anyway. There’s very little legit use-cases for creating extremely large transactions, especially not extremely large chains of them. In particular, note how the efficiency gain of large transactions quickly goes down as the number of outputs goes up, while very long chains of txs are much more efficiently done with a few large single transactions.

  33. dgenr8 commented at 0:46 am on July 20, 2015: contributor

    @sipa @petertodd It sounds like you don’t think fee estimation works?

    Experience with min relay fee, non-validating mining, blacklists, RBF, and other examples show that miners will always want to diverge from relay policy.

    There’s no point in worrying about miners creating submission mechanisms. If it’s a major worry though, creating a private miner relay network was an odd thing to do (but consistent with having the p2p relay network act in miners’ interest).

  34. ABISprotocol commented at 3:39 am on July 20, 2015: none

    @petertodd Earlier you suggested a “size limit on them, e.g. no single set of unconfirmed txs should be more than 125KB in size, and contain more than 100 txs.” What is the rationale for that particular proposed limit in kB size? It seemed arbitrary and static rather than dynamic or heuristic.

    I do note that you stated, “With a floating relay fee the attacker could succeed in increasing the minimum to the point where other txs aren’t getting propagated,” but I thought @sipa’s remarks in response significantly addressed your issues.

    Regarding that comment above in which you state in part that “Relaying transactions that miners aren’t interested in mining is pointless,” ~ that’s incorrect, and for research to support my position, please see this remark in another recently closed pull request.

    Noted from prior remarks that a mempool is not necessarily needed (e.g., relay nodes would not need to be using a mempool; the wallet nodes would be using a mempool, as has been indicated).

    Questions: ~ for @sipa / @petertodd

    • How is the mempool to be limited? (I am assuming a general or holistic description, as per @jgarzik: a) time-based expiration + natural block confs, b) an absolute cap to deal with short-term traffic bursts - good local node defense, c) a floating relay fee to deal with floods.)
    • What is the mechanism by which the UTXO set is stored (or proposed to be stored)?
    • How would dynamic fee determinations be calculated?
    • Please describe more how the general purpose messaging network might work? (described elsewhere in bitcoin-dev message thread ‘Do we really need a mempool?’ as “a general-purpose messaging network paid by coin ownership if the UTXO set is split up, and some kind of expiration over time policy is implemented”)

    Thank you

  35. petertodd commented at 1:35 pm on July 20, 2015: contributor
    @dgenr8 If private tx submission mechanisms to miners are a significant method of getting txs to them, that places small miners at a serious disadvantage.

    @ABISprotocol Your discussion is mostly off-topic for this pull-req; please take it to the dev list. “How the UTXO set is stored” has pretty much nothing to do with the mempool, and general-purpose messaging is not what this pull-req is about. Above all, we’re trying to fix a real and immediate issue and get a fix deployed relatively soon - maybe even in v0.11.1. That’s not the time to delve too deep into long-term design issues.
  36. dgenr8 commented at 11:12 pm on July 20, 2015: contributor

    A general way to find transactions to be evicted (from discussion with @sipa @morcos @petertodd):

    • In the mempool entry, include the total size and total fees of all of the tx’s ancestors. The values used in the fee rate index are these sums, with the size and fee values of the tx itself added in.
    • In CompareTxMemPoolEntryByFeeRate, use 2 tiebreakers. After feerate, 1st tiebreaker is size, 2nd tiebreaker is arrival time. The goal is for the first to-be-removed transaction to be the largest transaction with the worst feerate, preferring to remove the later arrival if the first two values are tied.
    • Iterating through the list, skip any transactions with children we haven’t seen yet. Those children have a higher feerate than the parent we are looking at, and will include it. I finally got why @petertodd kept saying this.
    • Accumulate tx size until either enough space for new tx is found (-> evict), or feerate is higher than new tx’s feerate (-> abort).
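The four steps above can be sketched as a single worst-first walk. This is a simplified, hypothetical illustration (all names are made up): feerates are assumed to already be ancestor-inclusive as per the first bullet, and it is a single pass, so a parent whose children are all evicted is not revisited as the real scheme would allow.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <set>
#include <vector>

struct Entry {
    int id;
    double feeRate;  // (own + ancestor fees) / (own + ancestor size)
    std::size_t size;
    long arrival;
    std::vector<int> children; // ids of in-mempool children
};

// Step 2: sort worst-first. After feerate, prefer evicting the larger
// tx, then the later arrival.
bool WorseFirst(const Entry& a, const Entry& b) {
    if (a.feeRate != b.feeRate) return a.feeRate < b.feeRate;
    if (a.size != b.size) return a.size > b.size;
    return a.arrival > b.arrival;
}

// Returns ids to evict, or empty if no room can be made below newFeeRate.
std::vector<int> FindEvictionSet(std::vector<Entry> pool, std::size_t needed,
                                 double newFeeRate)
{
    std::sort(pool.begin(), pool.end(), WorseFirst);
    std::set<int> seen;
    std::vector<int> evict;
    std::size_t freed = 0;
    for (const Entry& e : pool) {
        if (e.feeRate >= newFeeRate) break;        // step 4: abort
        // Step 3: skip a parent whose children we haven't passed yet;
        // those children sort later (higher feerate) and include it.
        bool unseenChild = false;
        for (int c : e.children)
            if (!seen.count(c)) unseenChild = true;
        seen.insert(e.id);
        if (unseenChild) continue;
        evict.push_back(e.id);
        freed += e.size;
        if (freed >= needed) return evict;         // step 4: enough space
    }
    return {};
}
```

The abort condition is what makes this safe: the walk never evicts anything with a feerate at or above the incoming transaction's, so a low-fee tx cannot displace better-paying ones.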
  37. dgenr8 commented at 2:19 pm on July 21, 2015: contributor
    Turning off CPFP in the above scheme is a matter of simply NOT skipping parents in the third step above. Instead, add the descendants into the eviction set. This could make sense for relay nodes, until most miners support CPFP.
  38. in src/main.cpp: in 2be5db0792
    900+        if (!mempool.StageTrimToSize(GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000, entry, stagedelete, nFeesDeleted)) {
    901+            return state.DoS(0, false, REJECT_INSUFFICIENTFEE, "mempool full");
    902+        }
    903+
    904         // Don't accept it if it can't get into a block
    905         CAmount txMinFee = GetMinRelayFee(tx, nSize, true);
    


    jtimon commented at 2:31 pm on July 21, 2015:
    Why not just move these things to StageTrimToSize to save the nFeesDeleted variable? From looking at some of @morcos work, it seems these checks can use more information from there later.
  39. in src/main.cpp: in 2be5db0792
    893@@ -859,22 +894,29 @@ bool AcceptToMemoryPool(CTxMemPool& pool, CValidationState &state, const CTransa
    894         CTxMemPoolEntry entry(tx, nFees, GetTime(), dPriority, chainActive.Height(), mempool.HasNoInputsOf(tx));
    895         unsigned int nSize = entry.GetTxSize();
    896 
    897+        // Try to make space in mempool
    898+        std::set<uint256> stagedelete;
    899+        CAmount nFeesDeleted = 0;
    900+        if (!mempool.StageTrimToSize(GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000, entry, stagedelete, nFeesDeleted)) {
    


    jtimon commented at 2:31 pm on July 21, 2015:
    nit s/mempool/pool
  40. in src/txmempool.cpp: in 2be5db0792
    448+}
    449+
    450+bool CTxMemPool::StageTrimToSize(size_t sizelimit, const CTxMemPoolEntry& toadd, std::set<uint256>& stage, CAmount& nFeesRemoved) {
    451+    size_t nSizeRemoved = 0;
    452+    std::set<uint256> protect;
    453+    BOOST_FOREACH(const CTxIn& in, toadd.GetTx().vin) {
    


    jtimon commented at 2:39 pm on July 21, 2015:
    This loop is similar to the one in https://github.com/bitcoin/bitcoin/blob/master/src/main.cpp#L785 It would be nice to take that one here too, not only to DRY, but also because it’s what alternatives to the first seen policy have needed for long. I bet petertodd’s RBF release could get much simpler if we had done an equivalent movement before 0.11 (or before 0.10 when @luke-jr proposed it).

    sipa commented at 2:50 pm on July 21, 2015:

    I’m aware it is similar, but I chose to reimplement it to be more efficient in this case (eviction needs to be fast, so we can try multiple combinations).

    I’m planning on refactoring some of the code to reuse a common way of iterating all dependencies of a transaction, but I’d prefer to wait with that until the approach and necessary code paths are clear.


    jtimon commented at 1:48 am on July 24, 2015:
    Ok, that sounds good. But wouldn’t it be nice to have both similar loops close to each other by moving the other one here already? Both things are about replacements. This shouldn’t make the later refactor harder. All I’m asking is putting some commits before others.
  41. morcos commented at 2:50 am on July 22, 2015: member

    @sipa @petertodd OK I think we’re making some progress.

    Please take a look at https://github.com/morcos/bitcoin/tree/softcap. It builds off of this pull with two commits. The first commit is this notion of being able to accept a transaction into the reserve space between the soft cap and hard cap if it pays a multiple of the relay rate. I’ve been measuring the reject rate of all transactions with high feerates for 7/6 - 7/14 with a mempool limited at 32MB. (note the soft cap was 70% of 32MB, while maintaining a hard cap at 32MB)

    Here are the reject rates:
    tx > 30k sat/kB feerate:
    • 6455: 5%
    • 6455 with soft cap: 0.5%
    • package system (sorting mempool by entire packages’ feerate and/or fee): 1%
    • package with soft cap: 0.2%

    tx > 60k sat/kB feerate:
    • 6455: 3%
    • 6455 with soft cap: 0.2%
    • package (with or without soft cap): 0.05%

    I think the package system can be a second step if @sdaftuar gets it production ready in time. But if you’re ok with the soft cap idea then I think with some more tweaks we’d be close to something that’s better than the status quo. I still need to work out the proper interaction between the soft cap and your relay multiplier, and the code is just a quick hack. But if you agree with the concept I’ll polish it up.

    The second commit is something I mentioned before: it doesn’t make any sense to try to find a package to delete that uses up all of the incoming transaction’s fees and doesn’t leave it any to pay for its own relay. Although I think it’s a relatively minor effect.

  42. sipa commented at 12:07 pm on July 22, 2015: member
    @Alex what if you cut out the floating fees, and just have a softcap/hardcap, but no target even below it?
  43. sipa commented at 4:28 pm on July 27, 2015: member
    Superseded by #6470.
  44. sipa closed this on Jul 27, 2015

  45. MarcoFalke locked this on Sep 8, 2021
