Limit mempool size #6281

pull pstratem wants to merge 1 commit into bitcoin:master from pstratem:limit_mempool_size changing 5 files +59 −1
  1. pstratem commented at 0:28 am on June 14, 2015: contributor

    Restrict the maximum memory pool size to avoid potential DoS issues.

    The mechanism is simply to recursively remove the transactions which have the lowest priority until the memory pool is below the maximum size.

    compiles and passes tests, but has not been extensively tested

    this is intended to replace #3753
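
As a rough illustration of the eviction mechanism described above, here is a hypothetical standalone sketch: trim the lowest-priority entries until the pool is under the cap. A sorted index is used here for brevity (the PR as posted iterates the pool); ToyPool, add, and TrimToSize are illustrative names, not the PR's actual code.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <utility>

// Hypothetical sketch of the described eviction loop: drop the
// lowest-priority entry until the pool fits the size limit.
struct ToyPool {
    // priority -> (txid, size in bytes); sorted ascending by priority
    std::multimap<double, std::pair<std::string, std::size_t>> byPriority;
    std::size_t totalBytes = 0;

    void add(const std::string& txid, double priority, std::size_t bytes) {
        byPriority.emplace(priority, std::make_pair(txid, bytes));
        totalBytes += bytes;
    }

    // Evict lowest-priority transactions until at or below maxBytes.
    void TrimToSize(std::size_t maxBytes) {
        while (totalBytes > maxBytes && !byPriority.empty()) {
            auto lowest = byPriority.begin();
            totalBytes -= lowest->second.second;
            byPriority.erase(lowest);
        }
    }
};
```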

  2. sipa commented at 1:11 pm on June 14, 2015: member

    Iterating though the entire mempool whenever the limit size is exceeded seems inefficient, especially with larger mempools. I would either keep an index ordered by priority/whatever, or use random deletes instead.

    Also, needs a hook for the wallet to mark its own transactions as “precious”, or the conflict detection mechanism will break.

  3. sipa commented at 1:13 pm on June 14, 2015: member
    Also, priority seems to be the wrong metric, as most of the mempool sorting is done by fee/byte rather than by priority.
  4. ashleyholman commented at 2:06 pm on June 14, 2015: contributor

    You should not remove any transaction before first removing the transactions that depend on it.

    Also how about preserving nBlockPrioritySize worth of the highest priority transactions, and then pruning the rest based on fees.

  5. ashleyholman commented at 2:27 pm on June 14, 2015: contributor
    Actually, ignore what I said about dependents because I see that the remove() call will remove all of the dependent transactions, which seems like the right thing to do.
  6. afk11 commented at 6:45 pm on June 14, 2015: contributor

    I was playing with recursively splitting utxos recently, and managed to fill my two testnet nodes mempools to 300mb (>110k tx’s) over half an hour. One node was used for broadcasting, the other for informational purposes. Both gave similar results throughout, so I suspect many nodes accumulated the same. https://www.blocktrail.com/tBTC/address/mvoh3Jwirz6YLdSdpp1558uHWdJ9Qpijjg

    Since all of these transactions were dependent on a single UTXO, perhaps nodes could limit the number of dependent transactions in the memory pool. As @ashleyholman noted, tracking hierarchy is probably a good idea, since after pruning an unconfirmed transaction with 100k unconfirmed child transactions, further spammy spends would no longer be added to the mempool as the inputs would be missing.

    I’m less clear on how this would affect the relay policy however - not relaying child-of-pruned transactions means the spammer (or legitimate user) would need to find other peers.

    EDIT: Also, if the presence of a transaction in the mempool is a requirement to relay other, dependent unconfirmed transactions, you could exhaust the network’s mempool as a whole once it has a fixed size.

    I managed to produce 1/4 of your proposed limit in an hour or two. I had a 4000 tBTC output to play with, which should have given me millions of transactions, but I cut it off. I can probably determine the minimum-sized output needed to reach your limit… will try this now.

  7. pstratem commented at 7:45 pm on June 14, 2015: contributor

    @sipa The entire priority calculation setup seems like it could use a cleanup.

    It would be a shame to write a bunch of complicated special-purpose code here instead of figuring out some way to reuse the existing priority calculation code.

  8. ashleyholman commented at 2:41 am on June 15, 2015: contributor

    @pstratem I just had some further thoughts about this.. hope you don’t mind my 2c

    In general I think it would make sense to align the policy of choosing transactions to pack into a block (in CreateNewBlock) and the policy for mempool retention. The only difference would be nBlockMaxSize vs nMaxMempoolSize.

    So how about splitting out the transaction selection code from CreateNewBlock into a policy class, and then re-using this for mempool pruning? It would be like constructing an nMaxMempoolSize block, and throwing out any transactions that weren’t included.

  9. pstratem commented at 3:41 am on June 15, 2015: contributor

    @ashleyholman I’m going to take a look at the various places that policy is applied and figure out how to work from there.

    So far I see

    Transaction selection for CreateNewBlock

    Relay policy

    Am I missing anything?

  10. ashleyholman commented at 5:31 am on June 15, 2015: contributor

    Sounds good! Let me know if you want any assistance with writing code or testing, because I’m quite interested in this patch so am happy to get involved :)

    If you’re implementing this as a full sweep of the mempool (like the CreateNewBlock code does), maybe it should run less often than every time a tx comes in. You could have two limits: one for the threshold (the size at which to trigger the cleanup) and another for the size to prune down to.

  11. laanwj added the label Mempool on Jun 15, 2015
  12. morcos commented at 2:09 pm on June 15, 2015: member

    @pstratem Thanks for getting started on this. I’d been putting off working on it because it wasn’t clear to me that there was agreement on how to boot things from the mempool. In my mind though, you would keep all the transactions in a sorted list (or up to 3 lists, one by priority, one by fee, one for wallet transactions) and that same sorting would be used in 3 places. For booting things at the end to limit mempool size, for creating new blocks, and for improvements to fee estimation. The sorted list is a requirement to really improve the fee estimation over where it is now with just some historical analysis.

    As for the priority calculation code, it does appear in a couple of places, but I fixed the one instance where it was calculating it differently. They all result in the same calculation now.

  13. jonasschnelli commented at 2:21 pm on June 15, 2015: contributor
    Started working on a rebases version of #3753 https://github.com/jonasschnelli/bitcoin/commits/2015/06/mempool-janitor with focus on protecting wallet relevant transaction. Removed the thread and try to check during CTxMemPool::addUnchecked() but only if a certain amount of time has passed (24h by default). Will focus now on a hierarchical check for wtx (don’t remove wtx dependent tx).
  14. pstratem commented at 7:34 pm on June 15, 2015: contributor
    @jonasschnelli It seems like it would be more efficient to simply add a whitelisting flag to CTxMemPoolEntry which is passed as part of addUnchecked
  15. pstratem commented at 7:49 pm on June 15, 2015: contributor

    @morcos Unfortunately that was the same conclusion that I came to.

    If I can get everybody to agree on some algorithm for calculating a single priority number which takes into account dPriority, nFee, and IsMine/IsFromMe then the code here (and potentially in CreateNewBlock) becomes much simpler.

  16. morcos commented at 7:56 pm on June 15, 2015: member

    actually it’s probably a lot easier to keep dPriority in its own queue, because that queue is a pain to maintain as all the transactions in it are updating their priority every block.

    also having CPFP implemented will make this work a lot better if we can make it efficient

  17. gavinandresen commented at 8:07 pm on June 15, 2015: contributor

    When I’ve contemplated updating this code in the past I’ve thought that storing mempool entries using a Boost multi-index container (with indexes on fee-per-kb, priority, and time inserted) would be the right thing to do. See http://www.boost.org/doc/libs/1_58_0/libs/multi_index/doc/index.html

    The goal would be to do all the sorting as transactions enter the mempool, so CreateNewBlock doesn’t have to do any sorting and is very fast. As morcos says, though, handling priority is tricky because priority changes over time, although I bet some clever thinking could come up with a way to represent priorities that takes advantage of the fact that priorities only get better over time, they never get worse.

  18. pstratem commented at 8:17 pm on June 15, 2015: contributor
    @gavinandresen The goal is simplicity…
  19. sipa commented at 8:29 pm on June 15, 2015: member
    I must agree actually that boost::multiindex is pretty elegant and efficient, for this purpose.
  20. pstratem commented at 8:30 pm on June 15, 2015: contributor

    @jonasschnelli @morcos

    On further consideration… protecting wallet transactions from eviction is an information leak.

  21. sipa commented at 8:33 pm on June 15, 2015: member
    @pstratem That’s indeed a consideration. The alternative is to do all conflict tracking inside the wallet, but that is less strong (it cannot detect conflicts caused by an indirect out-of-wallet dependency). In the long term, I think there is no way around that though - the wallet’s dependency on the mempool is ugly, inefficient, and incompatible with SPV (and my idea, IIRC…).
  22. pstratem commented at 8:43 pm on June 15, 2015: contributor
    Then the question is whether memory pool acceptance should use the same transaction priority model as block creation.
  23. morcos commented at 8:52 pm on June 15, 2015: member
    Ugh.. turns out I spoke too soon on the priority calculations. GetPriority in txmempool.cpp is wrong after all.
  24. jonasschnelli commented at 11:27 pm on June 15, 2015: contributor

    @pstratem: I’m not sure if adding a whitelist flag to CTxMemPoolEntry is the right way. How would you then protect a 0-conf input (which has low priority for some reason and is therefore marked for removal) used within a just-created wtx (not part of the assumed pool sweep)? Or the other way around (a wtx outpoint used in a tx input and marked for deletion during the pool sweep).

    But the potential privacy leak is indeed a problem. And thanks for bringing this up.

    What if we just make sure the wallet can handle mempool removals of any type (no protection of wtx at all)?

    The worst case to handle is probably a chain (or just multiple “flat” ones) of low-fee self-transactions used as 0-conf inputs for an outgoing standard wtx.

    Sudden removal of an incoming 0-conf wtx should be tolerated, and is already possible in other wallets.

    If someone spends a 0-conf output, they take the risk of producing an invalid wtx.

  25. sipa commented at 9:08 am on June 16, 2015: member

    The problem is that the wallet uses the mempool as a means for estimating whether an unconfirmed transaction is expected to be confirmed.

    With a memory-size-limited mempool, it pretty much loses that functionality: when you set your limit lower than most mempools in the network, or even lower than just a few miners’, that does not decrease your confirmation chances, only your ability to estimate them.

    I think the better solution is to stop depending on the mempool for this, and do conflict tracking inside the wallet. Not sure I want to touch that code, though…

  26. petertodd commented at 1:01 pm on June 16, 2015: contributor
    re: 0-conf chains, this is a good argument to handle them w/ replace-by-fee, by rewriting the tx with more outputs rather than a long chain. Also, this results in lower overall tx fee costs.
  27. pstratem commented at 11:08 pm on June 16, 2015: contributor

    @sipa ah so if a wallet transaction is evicted from the mempool the conflict tracking stuff will all break? that’s nasty

    the rabbit hole of things to fix gets ever deeper

  28. dgenr8 commented at 0:53 am on June 17, 2015: contributor
    The wallet could add parents of a received wtx that are below some confirmation level. Conflict detection is important enough to justify this imho. Next question is how long to keep the parents (especially if the wtx, and possibly its parents, remain unconfirmed).
  29. petertodd commented at 6:26 am on June 18, 2015: contributor
    Have you considered just making the minimum relay/acceptance fee increase exponentially as the mempool size increases? e.g. just doubling the min-relay-fee for every additional x MB would be easy to implement.
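
The scheme suggested above might be sketched like this; the function name, base fee, and base size are illustrative assumptions, not an existing implementation.

```cpp
#include <cassert>
#include <climits>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch: double a base minimum relay fee for every full
// megabyte the mempool has grown past a base size.
int64_t MinRelayFee(int64_t baseFee, std::size_t mempoolBytes, std::size_t baseBytes) {
    std::size_t excessMB =
        mempoolBytes > baseBytes ? (mempoolBytes - baseBytes) / 1000000 : 0;
    int64_t fee = baseFee;
    for (std::size_t i = 0; i < excessMB && fee < INT64_MAX / 2; ++i)
        fee *= 2; // double once per extra MB
    return fee;
}
```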
  30. lapp0 commented at 9:43 am on June 18, 2015: none

    Does this open a new DoS vulnerability?

    If you can fill the mempools of some peers to their maximum size, you can replace any dropped transaction with little additional fee. This set of transactions would propagate through any peers whose mempool limit is smaller than the set of transactions.

    Perhaps an index of outputs that have been redeemed by transactions dropped from the mempool needs to exist to prevent this?
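
The index suggested above might be sketched as follows; all names here are illustrative assumptions, not an existing structure in the codebase.

```cpp
#include <cassert>
#include <cstdint>
#include <set>
#include <string>
#include <utility>

// Hypothetical sketch: remember outpoints spent by transactions that were
// dropped from the mempool, so a near-identical replacement can be
// recognized at acceptance time.
typedef std::pair<std::string, uint32_t> Outpoint; // (txid, output index)

struct DroppedSpendIndex {
    std::set<Outpoint> spentByDropped;

    // Record the outpoints a dropped transaction had redeemed.
    void NoteDropped(const Outpoint& out) { spentByDropped.insert(out); }

    // True if some dropped transaction already redeemed this outpoint;
    // acceptance policy could then demand a sufficiently higher fee.
    bool AlreadyRedeemed(const Outpoint& out) const {
        return spentByDropped.count(out) > 0;
    }
};
```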

  31. pstratem commented at 9:54 am on June 18, 2015: contributor
    @petertodd a hard upper bound is preferred for memory constrained systems, possibly such a scheme could be used in addition to the upper limit
  32. pstratem commented at 10:08 am on June 18, 2015: contributor
    @lapp0 an attacker directly connected to a peer can send the same low priority transaction to the peer continuously; this is however insignificant
  33. lapp0 commented at 10:23 am on June 18, 2015: none

    @pstratem And if a transaction is dropped from the mempool, an attacker can replace that transaction, and the replacement will propagate among all peers that have also dropped the original.

    The resources used to broadcast X bytes of data to the set of nodes that the data can propagate to is the cost of uploading X bytes to my peer(s) that will propagate it.

  34. pstratem commented at 10:35 am on June 18, 2015: contributor
    @lapp0 the replacement won’t pass AcceptToMemoryPool and will thus not be relayed
  35. lapp0 commented at 10:41 am on June 18, 2015: none
    @pstratem why?
  36. pstratem commented at 10:46 am on June 18, 2015: contributor
    @lapp0 because that’s what the code does?
  37. lapp0 commented at 10:51 am on June 18, 2015: none
    @pstratem I think you’re missing the fact that the transaction it’s “replacing” has been dropped. If it was accepted into the mempool the first time it wasn’t already there, it should also be accepted the second time it’s not in the mempool.
  38. pstratem commented at 10:54 am on June 18, 2015: contributor
    @lapp0 a remote node cannot trigger a transaction to be dropped except by sending a new transaction with higher priority/fees; stop and reconsider how that affects your scenario
  39. pstratem commented at 11:01 am on June 18, 2015: contributor
    @lapp0 after consulting @sipa i think I understand what you’re getting at, it’s the same issue that the replace by fee patch has around enforcing a minimum delta in fees for replacement, is that right?
  40. sipa commented at 11:03 am on June 18, 2015: member
    Good catch. I think indeed that if tx1 causes tx2 to be evicted, that tx1.fee - tx2.fee should account for the relay cost of tx1.
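
The rule stated above might be expressed roughly as follows; incrementalFeePerKB and the function name are assumed policy parameters for illustration, not the eventual implementation.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hedged sketch: a new transaction may evict an existing one only if its
// fee exceeds the evicted transaction's fee by enough to pay for relaying
// the new transaction itself.
bool PaysForEviction(int64_t newFee, std::size_t newBytes,
                     int64_t evictedFee, int64_t incrementalFeePerKB) {
    int64_t relayCost = incrementalFeePerKB * static_cast<int64_t>(newBytes) / 1000;
    return newFee >= evictedFee + relayCost;
}
```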
  41. lapp0 commented at 11:05 am on June 18, 2015: none

    @pstratem Yes, I considered the fact that you must have a higher fee in my first comment “you can replace any dropped transaction with little additional fee.”

    If you order by fees it is very cheap to outbid your own transactions and drop them.

    Update:

    It’s the same issue that the replace by fee patch has around enforcing a minimum delta in fees for replacement, is that right?

    Yes, however replace-by-fee has the advantage of having the transaction available, hence the suggestion of a mempool already-redeemed-output index. @sipa That prevents this attack and is much less complex than building a new index.

    However, what happens when I have a 100KB transaction with lowish fees that someone wants to evict? Can no one broadcast a 500-byte transaction to that peer without first paying for the 100KB of fees?

    I think the logic should be to pay the fee for your transaction plus the per-kB fee of the transactions you replaced. The 100KB transaction would be dropped, and the next 100KB worth of transactions added to the mempool would need to account for that 100KB transaction’s fee.

  42. pstratem commented at 11:25 am on June 18, 2015: contributor
    @lapp0 sorry I misinterpreted your first comment
  43. petertodd commented at 7:37 pm on June 18, 2015: contributor
    @lapp0 Given we’re discussing this on #bitcoin-dev right now, just wanted to say kudo’s to your recognizing the RBF-like fee issue re: eviction. Nice catch!
  44. pstratem commented at 7:57 am on June 26, 2015: contributor

    Old transactions are now removed first.

    The memory pool is then trimmed on a strict feerate basis.

  45. lapp0 commented at 8:50 am on June 26, 2015: none
    Perhaps this should be integrated with #6331?
  46. Trim txmempool to MAX_MEMPOOL_SIZE according to feerate f0d0fc4be5
  47. sipa commented at 6:42 pm on July 9, 2015: member
    See #6410 for accurate memory usage accounting.
  48. ashleyholman commented at 10:19 am on July 11, 2015: contributor
    @pstratem could you explain the rationale behind removeOldTransactions()? It looks like you are calculating the expected number of seconds it would take to drain a full mempool with full blocks every 10 minutes, then subtracting that amount of time from the current timestamp and pruning any transactions that are older than that. What is the logic behind that?
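
That reading of the calculation amounts to the following arithmetic; function names and constants are illustrative, not the PR's actual code.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Sketch: estimate the seconds needed to drain a full mempool at one full
// block per 600 seconds, and prune anything received before now minus
// that window.
int64_t AgeCutoff(int64_t now, std::size_t maxMempoolBytes, std::size_t maxBlockBytes) {
    int64_t drainSeconds =
        static_cast<int64_t>(maxMempoolBytes / maxBlockBytes) * 600;
    return now - drainSeconds;
}
```

For example, a 300 MB mempool drained by 1 MB blocks gives a 300 * 600 = 180000 second window.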
  49. sipa commented at 10:43 am on July 11, 2015: member
    I’m working on another mempool limiting implementation, which uses @ashleyholman’s index, and should deal correctly with the DoS issue described above.
  50. pstratem commented at 8:39 pm on July 11, 2015: contributor
    @ashleyholman The model used for evicting transactions from the mempool cannot perfectly model the policy of miners as a whole. As such it’s necessary for the age of a transaction to be part of the eviction criteria.
  51. pstratem commented at 9:29 pm on July 12, 2015: contributor
    Closed in favor of #6421
  52. pstratem closed this on Jul 12, 2015

  53. random-zebra referenced this in commit 49867086da on Jun 3, 2020
  54. wqking referenced this in commit d1577afc9a on Jan 9, 2021
  55. MarcoFalke locked this on Sep 8, 2021


This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-11-17 09:12 UTC
