mempool: keep CPFP’d transactions when loading from mempool.dat #27476

glozow wants to merge 4 commits into bitcoin:master from glozow:2023-04-persist-packages changing 6 files +93 −16
  1. glozow commented at 4:28 pm on April 17, 2023: member

    Part of v3 and package relay (see #27463).

    Problem: When loading mempool.dat, we apply -minrelaytxfee and the mempool min feerate to each transaction, meaning we’ll reject transactions that may be CPFP’d by later transactions in mempool.dat. Even without package relay, we can run into this problem if we are shrinking -maxmempool or raising -minrelaytxfee on a restart.

    Solution: When loading mempool.dat, use bypass_limits=true and then call TrimToSize() at the very end.

    Advantages:

    • We definitely keep the “highest descendant score” transactions if mempool min feerate rises.
    • It’s extremely simple implementation-wise.
    • It’s simple to keep track of what made it in and what didn’t.

    Disadvantages:

    • If mempool.dat is very large, we can exceed maxmempool by quite a bit.
    • This won’t be sufficient for ephemeral anchors (unless bypass_limits allows not-yet-spent anchors).
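    The load-everything-then-trim flow can be sketched with a small, hypothetical Python simulation (toy units, single-level CPFP; `load_then_trim`, `load_per_tx`, and the constants are illustrative, not Bitcoin Core code). It shows a below-min-feerate parent surviving a reload because its child lifts the package's descendant score, while per-transaction filtering drops both:

```python
# Hypothetical, simplified simulation -- not Bitcoin Core's data structures.
from collections import namedtuple

Tx = namedtuple("Tx", "txid vsize fee parent")  # fee in sats, vsize in vB

MIN_FEERATE = 2.0  # sat/vB, stand-in for the mempool min feerate
MAX_VSIZE = 300    # toy stand-in for -maxmempool

def descendant_score(tx, pool):
    """Feerate of tx together with its in-pool children (one level deep)."""
    fee, vsize = tx.fee, tx.vsize
    for other in pool:
        if other.parent == tx.txid:
            fee += other.fee
            vsize += other.vsize
    return fee / vsize

def load_then_trim(txs):
    pool = list(txs)  # phase 1: accept everything (bypass_limits=true analogue)
    # Phase 2: TrimToSize() analogue -- evict the lowest descendant score
    # first, removing the evicted tx's children along with it.
    while sum(t.vsize for t in pool) > MAX_VSIZE:
        worst = min(pool, key=lambda t: descendant_score(t, pool))
        pool = [t for t in pool if t.txid != worst.txid and t.parent != worst.txid]
    return {t.txid for t in pool}

def load_per_tx(txs):
    # Status quo analogue: each tx must meet MIN_FEERATE on its own, and a
    # child whose parent was rejected fails as an orphan.
    pool = []
    for t in txs:
        has_parent = t.parent is None or any(p.txid == t.parent for p in pool)
        if t.fee / t.vsize >= MIN_FEERATE and has_parent:
            pool.append(t)
    return {t.txid for t in pool}

txs = [
    Tx("parent", 100, 100, None),     # 1 sat/vB alone, 5 sat/vB with child
    Tx("child", 100, 900, "parent"),  # 9 sat/vB, CPFPs the parent
    Tx("junk", 150, 75, None),        # 0.5 sat/vB, nothing bumps it
]
```

    Under the per-transaction analogue the 1 sat/vB parent is rejected on its own feerate and the child then fails as an orphan; with load-then-trim, only the genuinely low-scoring entry is evicted.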
  2. glozow added the label Mempool on Apr 17, 2023
  3. DrahtBot commented at 4:28 pm on April 17, 2023: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Reviews

    See the guideline for information on the review process.

    Type | Reviewers
    Concept ACK | instagibbs
    Approach ACK | dergoegge

    If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #27018 (mempool / miner: regularly flush <=0-fee entries and mine everything in the mempool by glozow)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  4. ajtowns commented at 6:28 am on April 18, 2023: contributor
    Maybe move most of the text in the PR description into a comment? “Review this first” and “Alternatives considered” stuff is useful here, but doesn’t seem useful for the merge commit message?
  5. glozow commented at 3:08 pm on April 20, 2023: member

    Alternatives Considered

    Feel free to suggest more alternatives, but note potential problems.

    (1) Call TrimToSize() at smart intervals, e.g. when a timestamp changes (since packages are submitted with the same timestamp), every ~25 transactions, or when we hit a certain threshold above maxmempool.
    (2) When a transaction fails for fee-related reasons, try again with bypass_limits=true, and then schedule a flush for later.
    (3) Create a new mempool.dat format which contains a list of packages instead of transactions (a package can have 1 transaction).
    (4) When loading, load into a buffer and use the AncestorPackage “packageifier” (in #26711) to group the transactions into packages and submit them together.

    Their disadvantages:

    • (1, 2) The implementation is a bit hacky. Some strategies may fail.
    • (3, 4) The implementation is quite complex.
    • (1, 2) When shrinking maxmempool, we won’t always pick the top transactions by descendant feerate. Since mempool min feerate depends on the feerate of the last package removed + incremental, we can end up rejecting something with feerate 1sat/vB higher than what we keep. Essentially, we have a bias for earlier transactions. This is weird behavior (and makes it quite annoying to write tests).
    • (1, 2) We need to add various bits of code to track things evicting each other to accurately report which transactions made it in and which didn’t.
    • (1, 2) Doesn’t work for ephemeral anchors (https://github.com/bitcoin/bitcoin/pull/26403). We could work around this by adding another flag to bypass “unspent anchor” checks but… also a bit ugly.

    We can also pick one of the more complex solutions to implement later (I expect people are going to be interested in (3) since it’s the most future-proof). This problem shouldn’t be very common right now. For the most part, it doesn’t really matter if the “bottom” of your mempool doesn’t match others'.
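    The earlier-transaction bias in (1, 2) can be illustrated with a toy Python simulation (all names and numbers are hypothetical; eviction raises the min feerate to the evicted feerate plus an increment, mirroring how the mempool min feerate behaves):

```python
INCREMENTAL = 1.0  # sat/vB, toy stand-in for the incremental relay feerate

def interval_trim_load(feerates, max_txs):
    """Load equal-size txs one at a time, trimming whenever the pool
    overflows. Eviction drops the lowest feerate and raises the mempool
    min feerate to (evicted feerate + INCREMENTAL)."""
    pool, min_feerate = [], 0.0
    for rate in feerates:
        if rate < min_feerate:
            continue  # rejected on entry
        pool.append(rate)
        if len(pool) > max_txs:
            pool.sort()
            evicted = pool.pop(0)
            min_feerate = evicted + INCREMENTAL
    return sorted(pool, reverse=True)

# Room for two txs. The 9.9 sat/vB tx arrives after earlier trimming pushed
# the min feerate to 10, so it is rejected even though 9.5 sat/vB is kept.
kept = interval_trim_load([8.0, 9.0, 12.0, 9.5, 9.9], max_txs=2)
```

    Loading everything first and trimming once at the end would instead keep the top two by feerate (12 and 9.9).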

  6. glozow commented at 3:11 pm on April 20, 2023: member

    Maybe move most of the text in the PR description into a comment? “Review this first” and “Alternatives considered” stuff is useful here, but doesn’t seem useful for the merge commit message?

    Moved “alternatives considered.” I prefer putting “review this first” at the top so reviewers see it as soon as possible. It will go away if/when I rebase, so definitely before any merging happens.

  7. in src/kernel/mempool_persist.cpp:66 in 0a4ecc9d55 outdated
    62@@ -63,6 +63,7 @@ bool LoadMempool(CTxMemPool& pool, const fs::path& load_path, Chainstate& active
    63         }
    64         uint64_t num;
    65         file >> num;
    66+        LOCK2(cs_main, pool.cs);
    


    dergoegge commented at 10:03 am on April 21, 2023:

    Afaict we currently import mempool txs from disk in the background and allow the node to make connections and sync the chain in parallel. Seems like this commit changes that by locking cs_main for the entire import duration. Not sure about the consequences.

    Potentially relevant for #27460 as well.


    instagibbs commented at 7:27 pm on May 1, 2023:

    To recap, this disallows a wallet (or other TrimToSize caller) from trimming a parent before the child is entered into the mempool?

    With debug builds, this draws out the startup time of bitcoind by ~1.5 minutes with 300MB mempool. ~30 seconds on non-debug build.


    ajtowns commented at 7:19 am on May 2, 2023:
    I don’t think locking cs_main or pool.cs for this long is okay.

    glozow commented at 9:14 am on May 2, 2023:
    To be clear I agree that this was not the right thing to do.

    ajtowns commented at 4:25 am on May 3, 2023:
    As a datapoint, importing my mempool now (600MB used, 237k entries) seems to take about 25 minutes. Not a particularly fast machine (though not an rpi either), debug build, etc, so something of a worst-case scenario, but still.
  8. dergoegge commented at 10:06 am on April 21, 2023: member
    Approach ACK - importing all and then trimming seems like the least complex approach to me.
  9. glozow marked this as ready for review on Apr 26, 2023
  10. [mempool] evict everything below min relay fee in TrimToSize()
    At this point it's not expected that there are any such transactions,
    except from reorgs and possibly when loading from mempool.dat.
    d627adb4bd
  11. [test framework] return txhex from create_lots_of_big_transactions 7ae1efd68d
  12. [test] raise wallet_abandonconflict -minrelaytxfee settings
    TLDR This test needs to be tweaked because it currently relies on
    suboptimal behavior in order to pass, and that behavior is changed in
    the next commit.
    
    The intention of the test is to set a high -minrelaytxfee
    such that these transactions are rejected in LoadMempool() and
    in CWallet::ResubmitWalletTransactions().
    
    However, while the parent transactions are below minrelaytxfee, they
    each have descendants with a feerate high enough to bump them past the
    minrelaytxfee (observe that the `assert_greater_than` checks fail
    if given the original amount of 0.0001). These transactions will be kept
    after the mempool persists packages, causing the original test to fail.
    84e8f4a5ad
  13. [mempool] persist packages across restart
    Hold pool.cs the entire time, otherwise wallet resubmissions may call
    TrimToSize() in between loading transactions from disk.
    2c0931b8f4
  14. glozow force-pushed on Apr 26, 2023
  15. in src/txmempool.cpp:1067 in d627adb4bd outdated
    1063         indexed_transaction_set::index<descendant_score>::type::iterator it = mapTx.get<descendant_score>().begin();
    1064 
    1065+        CFeeRate removed(it->GetModFeesWithDescendants(), it->GetSizeWithDescendants());
    1066+        // Skim away everything paying effectively zero fees, regardless of mempool size.
    1067+        // Above that feerate, just trim until memory is within limits.
    1068+        if (removed >= m_min_relay_feerate && DynamicMemoryUsage() <= sizelimit) break;
    


    instagibbs commented at 6:53 pm on May 1, 2023:
    `removed` is a bit confusing since it will likely bail without doing anything. `to_remove_rate`?

    ajtowns commented at 7:26 am on May 2, 2023:
    If bypass_limits is left as false, I think these changes are not necessary?
  16. instagibbs commented at 7:28 pm on May 1, 2023: member
    approach ACK
  17. glozow commented at 8:46 pm on May 1, 2023: member

    Closing this for now to (1) focus on more important functionality and (2) defer the decision-making to when we have more information on usage and/or better eviction.

    • The “trim stuff below minrelayfeerate” thing isn’t very good… as established, we might evict things we shouldn’t, etc.
    • This isn’t strictly necessary yet until package relay is deployed.
    • It might be too early to decide on a solution. Perhaps with EA and actual package relay usage, we’ll end up wanting to reformat mempool.dat anyway. Or maybe, with other improvements to mempool, we will have a simpler solution that achieves everything we want.

    Still tracking this as a “todo” item in #27463, but will defer until later.

  18. glozow closed this on May 1, 2023

  19. in src/kernel/mempool_persist.cpp:81 in 2c0931b8f4
    77@@ -77,8 +78,12 @@ bool LoadMempool(CTxMemPool& pool, const fs::path& load_path, Chainstate& active
    78                 pool.PrioritiseTransaction(tx->GetHash(), amountdelta);
    79             }
    80             if (nTime > TicksSinceEpoch<std::chrono::seconds>(now - pool.m_expiry)) {
    81-                LOCK(cs_main);
    82-                const auto& accepted = AcceptToMemoryPool(active_chainstate, tx, nTime, /*bypass_limits=*/false, /*test_accept=*/false);
    83+                // Use bypass_limits=true to skip feerate checks, and call TrimToSize() at the very
    


    ajtowns commented at 7:23 am on May 2, 2023:
    We currently only do bypass_limits for txs from a reorg; not even packages can bypass limits otherwise. I think leaving bypass_limits=false for loading from mempool.dat should be fine until that changes?
  20. in src/kernel/mempool_persist.cpp:118 in 2c0931b8f4
    113+        // Ensure the maximum memory limits are ultimately enforced and any transactions below
    114+        // minimum feerates are evicted, since bypass_limits was set to true during ATMP calls.
    115+        pool.TrimToSize(pool.m_max_size_bytes);
    116+        const auto num_evicted{size_before_trim - pool.size()};
    117+        count -= num_evicted;
    118+        failed += num_evicted;
    


    ajtowns commented at 7:24 am on May 2, 2023:
    This may be evicting txs that were present in the mempool prior to the import, in which case it will overestimate failed and underestimate count?
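    One way to avoid that accounting skew (illustrative Python only, not the PR's C++) is to record which txids came from mempool.dat and derive the counters by set difference after trimming, so evicted pre-existing entries are not counted as load failures:

```python
def trim_accounting(pool_before, loaded_txids, trim):
    """pool_before: txids in the mempool after loading (pre-existing + loaded).
    loaded_txids: txids that came from mempool.dat.
    trim: eviction function (TrimToSize() analogue)."""
    pool_after = trim(pool_before)
    failed = {txid for txid in loaded_txids if txid not in pool_after}
    count = len(loaded_txids) - len(failed)
    return count, len(failed)

pre_existing = {"a", "b"}
loaded = {"c", "d", "e"}
# Suppose trimming evicts one pre-existing tx ("a") and one loaded tx ("e"):
count, failed = trim_accounting(pre_existing | loaded, loaded,
                                trim=lambda pool: pool - {"a", "e"})
# Size-delta accounting would report 2 failures; only 1 loaded tx failed.
```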
  21. ajtowns commented at 9:10 am on May 2, 2023: contributor

    Approach NACK: I don’t think holding cs_main or the mempool lock for the full import period is okay.

    I’m not sure this needs fixing at all at this point: it doesn’t affect nodes that don’t restart in between the tx being broadcast and mined; it doesn’t affect nodes that don’t have a full mempool (eg, if you raise the limit to 1GB or 2GB instead of 300MB); and it’s not needed if the child tx is either rebroadcast or rbf’ed. In some sense, saving the mempool and reloading it on restart is just a nice-to-have, so having some txs be missing on restart is at worst annoying, not a fatal bug.

    It would however be nice to fix before we start accepting below-minrelaytxfee txs as part of packages (eg 0-fee txs whose fees are paid by spending an ephemeral anchor output) – in that case parents will be lost on restart even if the mempool is essentially empty, which would be annoying.

    Here’s a branch that takes an alternative approach of deferring TrimToSize until the mempool is loaded: https://github.com/ajtowns/bitcoin/commits/202305-defertrimtosize The idea there is to not consider it a DoS to (eg) load a 1GB mempool.dat in its entirety even if you have a 300MB limit, but to still protect against txs from peers pushing memory usage substantially above that limit.

    That feels acceptable to me, though YMMV. Other approaches we could consider while remaining backwards compatible (ie without changing the mempool.dat format):

    • special case below-minrelaytxfee txs: when AcceptToMemoryPool detects insufficient fee, report that directly (not as a generic MEMPOOL_POLICY failure), and have LoadMempool keep track of all those txs so they can be included if a later tx considers them an ancestor. That means you may go over the maxmempool limit, but only by the size of the below-minrelaytxfee txs from your mempool.dat. If those are small (due to v3/ephemeral anchor rules?) anyway, that might be acceptable; you might also only need to keep limitancestorcount of them, if you write the ancestor that cpfps them to disk asap when saving.
    • implement cluster mempool first and export mempool.dat in chunks from highest to lowest fee; when importing, assume that the mempool contains chunks from highest score to lowest, build up a chunk to import; once it’s over the mempool minfee, import the txs immediately; if it’s over 1MvB and still under the mempool minfee, drop it and stop attempting to import txs from this mempool.dat entirely?
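    The first alternative above can be sketched roughly in Python (all names are hypothetical and the feerate logic is heavily simplified): below-minrelaytxfee rejects are parked in a side buffer, and a buffered tx is submitted only when a later tx from the file names it as a parent:

```python
def load_with_lowfee_buffer(txs, min_feerate, accept):
    """txs: iterable of (txid, feerate, parents). Low-fee rejects are held
    in a side buffer; when a later tx names one as a parent, the buffered
    ancestor is submitted first so the pair enters together."""
    buffered = {}  # txid -> (feerate, parents)
    pool = set()
    for txid, feerate, parents in txs:
        needed = [p for p in parents if p in buffered]
        if feerate < min_feerate and not needed:
            buffered[txid] = (feerate, parents)  # park instead of dropping
            continue
        for p in needed:  # submit buffered ancestors first
            accept(pool, p)
            del buffered[p]
        accept(pool, txid)
    return pool, buffered

accept = lambda pool, txid: pool.add(txid)
# A 0-fee parent followed by a child that spends it (the ephemeral-anchor
# shape that motivates the special case):
pool, leftovers = load_with_lowfee_buffer(
    [("parent", 0.0, []), ("child", 20.0, ["parent"])],
    min_feerate=1.0, accept=accept)
```

    As noted above, memory use can then exceed maxmempool only by the size of the buffered below-minrelaytxfee txs, which should be small under v3/ephemeral-anchor rules.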
  22. glozow commented at 9:21 am on May 2, 2023: member

    Thanks for writing up a full review @ajtowns. Will keep this feedback in mind for the future if/when we get to this bridge again.

    implement cluster mempool first and export mempool.dat in chunks from highest to lowest fee; when importing, assume that the mempool contains chunks from highest score to lowest, build up a chunk to import; once it’s over the mempool minfee, import the txs immediately; if it’s over 1MvB and still under the mempool minfee, drop it and stop attempting to import txs from this mempool.dat entirely?

    Yeah I’m pretty optimistic that cluster mempool will provide a natural solution like this.

  23. ajtowns commented at 10:52 am on May 2, 2023: contributor

    Yeah I’m pretty optimistic that cluster mempool will provide a natural solution like this.

    Probably, but I suspect ephemeral anchors or similar will be ready before we’ve got all the i’s dotted and t’s crossed on cluster mempool, so some stop gap patch will be worthwhile. Still, we can cross that bridge once we get there.

  24. instagibbs commented at 3:28 pm on May 2, 2023: member
    Special-casing it is a value-add imo: v3/ephemeral anchor tx should be aggressively fee bumping anyways.
  25. bitcoin locked this on May 2, 2024

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-12-03 18:12 UTC
