mempool decreases to zero on nodes with a small maxmempool #21558

issue rebroad opened this issue on March 31, 2021
  1. rebroad commented at 9:21 am on March 31, 2021: contributor
[screenshot] I’ve been testing a node with a maxmempool of 50MB, and as you can see, at around 6:40am the mempool drops to zero because lower-fee transactions keep being filtered out. I think the feefilter reduction therefore needs to take the maxmempool setting into account and decrease faster on nodes with a small maxmempool.
  2. rebroad added the label Bug on Mar 31, 2021
  3. rebroad renamed this:
    minrelayfee needs to decrease faster on nodes with small maxmempool
    mempool decreases to zero on nodes with a small maxmempool
    on Mar 31, 2021
  4. rebroad commented at 9:30 am on March 31, 2021: contributor
[screenshot] This screenshot shows that the minfeefilter change could be effective, as the mempool seems to fill up again quickly once the feefilter drops low enough.
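For context, the behaviour described in these screenshots comes from the mempool's rolling minimum fee rate, which decays exponentially after an eviction and already decays faster when the pool is less than half (or a quarter) full. A minimal Python sketch of that decay, with constants approximated from Bitcoin Core's `CTxMemPool::GetMinFee` (a half-life of 12 hours, halved or quartered by usage) rather than a verbatim port:

```python
import math

# Base half-life Bitcoin Core uses for the rolling minimum fee (12 hours).
ROLLING_FEE_HALFLIFE = 60 * 60 * 12  # seconds

def effective_halflife(usage_bytes: int, max_bytes: int) -> float:
    """Emptier mempools decay faster: the half-life is halved when the pool
    is under 1/2 full and quartered when under 1/4 full."""
    if usage_bytes < max_bytes // 4:
        return ROLLING_FEE_HALFLIFE / 4
    if usage_bytes < max_bytes // 2:
        return ROLLING_FEE_HALFLIFE / 2
    return ROLLING_FEE_HALFLIFE

def decayed_min_fee(fee_sat_per_kvb: float, elapsed_s: float,
                    usage_bytes: int, max_bytes: int) -> float:
    """Exponentially decay the rolling minimum fee rate over elapsed_s seconds."""
    halflife = effective_halflife(usage_bytes, max_bytes)
    return fee_sat_per_kvb * math.pow(2.0, -elapsed_s / halflife)
```

Note the decay rate depends only on how full the pool is relative to `max_bytes`, not on the absolute maxmempool size, which is the behaviour the issue suggests revisiting.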
  5. MarcoFalke removed the label Bug on Apr 1, 2021
  6. MarcoFalke added the label Brainstorming on Apr 1, 2021
  7. MarcoFalke added the label TX fees and policy on Apr 1, 2021
  8. rebroad commented at 12:54 pm on April 15, 2021: contributor

I have an idea - in the same way that nodes indirectly reveal their maxmempool size by sending feefilter messages, why not allow them to do this upon connection instead of waiting for their mempools to fill first? I.e. send a number close to the maxmempool. Nodes can then factor this in to determine the feefilter that would have been sent and restrict TXs accordingly. I suspect this would require a new P2P message.

However, part of this could already be done without a new message: when a node sends a feefilter message that is lower than its previous one, the peer could prioritize sending TXs in that fee band, helping the node acquire those TXs more quickly. This would avoid a protocol change and still help nodes fill their mempools without receiving too many TXs that will just be deleted once the mempool fills to its maximum again.
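The fee-band idea could be sketched as follows; `MempoolTx` and `txs_to_prioritize` are hypothetical names for illustration, not Bitcoin Core identifiers:

```python
from dataclasses import dataclass

@dataclass
class MempoolTx:
    txid: str
    feerate: int  # sat/kvB

def txs_to_prioritize(mempool, old_filter: int, new_filter: int):
    """When a peer lowers its feefilter from old_filter to new_filter, the
    transactions it was previously refusing are exactly those in the
    [new_filter, old_filter) fee band -- announce these to it first."""
    return [tx for tx in mempool if new_filter <= tx.feerate < old_filter]
```

A node tracking each peer's last two feefilter values could call this whenever a lower filter arrives and front-load those announcements.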

  9. rebroad commented at 12:44 pm on April 16, 2021: contributor

Another idea - when the mempool fills and TXs are deleted, why not write them to disk instead (similar to how the mempool is written to disk at shutdown) and re-load them when a block arrives and there’s free space again?

Or write the deleted mempool entries to disk as a leveldb (as suggested 5 years ago by @luke-jr in #8448 (comment)). @MarcoFalke could/should the label “mempool” also be added to this?
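The spill-and-reload cycle proposed above could look roughly like this. Everything here is a hypothetical sketch (file path, JSON format, and the fixed per-TX size estimate are all assumptions); Bitcoin Core only persists the live mempool to `mempool.dat` at shutdown, not evicted entries:

```python
import json
import os
import tempfile

# Hypothetical spill file for evicted transactions (txid -> raw hex).
SPILL_PATH = os.path.join(tempfile.gettempdir(), "evicted_txs.json")

def spill_evicted(evicted: dict) -> None:
    """Instead of discarding evicted TXs, append them to a disk file,
    loosely analogous to the mempool.dat written at shutdown."""
    existing = {}
    if os.path.exists(SPILL_PATH):
        with open(SPILL_PATH) as f:
            existing = json.load(f)
    existing.update(evicted)
    with open(SPILL_PATH, "w") as f:
        json.dump(existing, f)

def reload_if_space(free_bytes: int, tx_size: int = 300) -> dict:
    """After a block frees mempool space, re-load as many spilled TXs as fit
    (assuming a crude fixed size per TX) and keep the rest on disk."""
    if not os.path.exists(SPILL_PATH):
        return {}
    with open(SPILL_PATH) as f:
        spilled = json.load(f)
    reloaded = dict(list(spilled.items())[: free_bytes // tx_size])
    remaining = {k: v for k, v in spilled.items() if k not in reloaded}
    with open(SPILL_PATH, "w") as f:
        json.dump(remaining, f)
    return reloaded
```

A real implementation would of course re-validate reloaded TXs against the current chain tip, since their inputs may have been spent by the intervening blocks.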

  10. MarcoFalke added the label P2P on Aug 20, 2022

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-07-03 16:13 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me