Continuously limit the memory pool memory consumption #6421

pull sipa wants to merge 5 commits into bitcoin:master from sipa:limitpool changing 11 files +372 −61
  1. sipa commented at 1:36 pm on July 11, 2015: member

    This pull request contains a squashed version of #6331 and #6410, and replaces #6281.

    Whenever a new transaction arrives, we check which of the lowest-feerate transactions in the mempool (including their dependencies) would have to be removed to make the memory consumption go below a configurable limit. The DoS protection relay rules then operate on the size of both the new transaction and the removed ones (as pointed out by @lapp0 in #6281).

    Note that:

    • Even though it adds a feerate index to the mempool, it does not use this index for block construction.
    • The mempool eviction code does not take priority into account. It strictly aims at maximizing mempool fee rate.
    • Untested.
  2. jonasschnelli commented at 1:51 pm on July 11, 2015: contributor
    Needs rebase because #6410 is merged now.
  3. sipa force-pushed on Jul 11, 2015
  4. in src/main.cpp: in 489726e5d0 outdated
    882         // Don't accept it if it can't get into a block
    883-        CAmount txMinFee = GetMinRelayFee(tx, nSize, true);
    884+        CAmount txMinFee = GetMinRelayFee(tx, nSize + nSizeDeleted, true);
    885         if (fLimitFree && nFees < txMinFee)
    886             return state.DoS(0, error("AcceptToMemoryPool: not enough fees %s, %d < %d",
    887                                       hash.ToString(), nFees, txMinFee),
    


    petertodd commented at 2:09 pm on July 11, 2015:
    Need to update this error msg.

    sipa commented at 2:17 pm on July 11, 2015:
    Suggestion?

    petertodd commented at 2:29 pm on July 11, 2015:
    Oh, actually I misread it; no changes needed.
  5. in src/txmempool.cpp: in 489726e5d0 outdated
    460+            todo.pop_front();
    461+            if (!stage.count(donow)) {
    462+                stage.insert(donow);
    463+                totalfeeremoved += origTx->GetFee();
    464+                if (totalfeeremoved > maxfeeremove) {
    465+                    return false;
    


    ashleyholman commented at 2:09 pm on July 11, 2015:
    What happens if the lowest fee-rate transaction in the pool is a parent transaction to a chain of high fee transactions? Wouldn’t that prevent any eviction from happening, even though there may be other transactions that could be removed to make room?

    sipa commented at 2:12 pm on July 11, 2015:
    Yup. Fixing that is complicated and/or expensive.

    sipa commented at 2:14 pm on July 11, 2015:
    (Efficient) code that implements CPFP will likely fix this, as I expect that it will maintain caches of sizes/fees of children of transactions.

    petertodd commented at 2:56 pm on July 11, 2015:

    CPFP could work with this by turning individual transactions into “packages” of transactions whenever a child+parent pays more fees/KB than the parent alone. Removal would then remove the whole “package” in one go; there would never be a situation where removal was blocked.

    That said, I’d be inclined to merge this pull-req first, then work on the fixes for the edge cases second.


    sipa commented at 3:10 pm on July 11, 2015:
    Agree.

    ashleyholman commented at 3:13 pm on July 11, 2015:
    You could return to the outer loop and try again on the next tx, up until you hit one that has a feerate >= the one you are trying to add. Although that does add a lot of extra work.

    sipa commented at 3:20 pm on July 11, 2015:
    That risks quadratic work…
  6. in src/main.h: in 489726e5d0 outdated
    49@@ -50,6 +50,8 @@ struct CNodeStateStats;
    50 static const bool DEFAULT_ALERTS = true;
    51 /** Default for -maxorphantx, maximum number of orphan transactions kept in memory */
    52 static const unsigned int DEFAULT_MAX_ORPHAN_TRANSACTIONS = 100;
    53+/** Default for -maxmempool, maximum megabytes of the mempool */
    54+static const unsigned int DEFAULT_MAX_MEMPOOL_SIZE = 300;
    


    petertodd commented at 2:13 pm on July 11, 2015:
    Can you update this comment to clarify if we’re talking about txs or ram?

    sipa commented at 2:18 pm on July 11, 2015:
    Fixed.
  7. sipa force-pushed on Jul 11, 2015
  8. sipa force-pushed on Jul 11, 2015
  9. sipa force-pushed on Jul 11, 2015
  10. sipa force-pushed on Jul 11, 2015
  11. jtimon commented at 4:04 pm on July 11, 2015: contributor
    Nits from #6331 rebased at https://github.com/sipa/bitcoin/compare/limitpool...jtimon:pr-6421-0.11.99 (will force push to jtimon/pr-6421-0.11.99 as this gets rebased).
  12. sipa force-pushed on Jul 11, 2015
  13. sipa force-pushed on Jul 11, 2015
  14. sipa force-pushed on Jul 11, 2015
  15. sipa force-pushed on Jul 11, 2015
  16. sipa commented at 6:45 pm on July 11, 2015: member

    @lapp0 You suggested using “the new transaction’s fees should pay for the size of the new plus removed transaction”. That doesn’t help, as you can create a sequence of transactions that each replace the previous one, and each have enough fees to pay for both. This would give you infinite relay bandwidth at fixed cost.

    The solution is perhaps to remember for each mempool transaction what the size of everything it has replaced is, but that’s a bit more complex than I’m willing to do now. I’ve chosen the conservative approach here which is to look at the fee difference instead.

  17. sipa commented at 6:49 pm on July 11, 2015: member
    @jtimon It’s probably a bit more complicated than just a score function, I now realize. The mempool code is trying to optimize for fee/byte (currently), independently of what sorting is implemented by the index. I think we’ll need a policy-controlled “cost” (as a generalization of size, perhaps corrected for UTXO differences) and policy-controlled “revenue” (as a generalization of fee). The reason is that you can’t compute the “score” of a collection of transactions - you need (revenue1+revenue2)/(cost1+cost2) rather than just score1+score2.
  18. jonasschnelli commented at 6:57 pm on July 11, 2015: contributor

    Slightly tested. I’ve been running this code (git commit tip 3c638fc0ba82a9d9c235f428d098a098fc0b6b16, not the latest tip) for some hours with -maxmempool=100. For the last hour I’ve had a stable dynamic memory size of ~100MB. Graph: http://bitcoin.jonasschnelli.ch/charts/mempool6410/

    Log filtered for the “stored orphan txs” messages: https://gist.githubusercontent.com/jonasschnelli/1f2e89d64887710f6c5b/raw/dba3d68d79cc649cd7e01c992d40da8d46073431/gistfile1.txt

  19. jtimon commented at 9:42 pm on July 11, 2015: contributor
    @sipa answered in #6331: bike-shed the name of the variable, the getter and the comparator, but please don’t make the type CFeeRate so that we have to fix it later. int64_t should work perfectly fine for these changes.
  20. jtimon commented at 3:20 pm on July 12, 2015: contributor
    I believe this could be much simpler (and the end result better) after #5709 (it is 10 commits, but ready to be squashed into the first one), but I doubt people want to read it step by step to be sure behavior is not being changed. At least not more now than when it was opened…
  21. sipa commented at 6:43 pm on July 12, 2015: member

    @jtimon It’s a bit more complicated. The replacement code needs to have a way to know whether replacing a set of transactions with another transaction is a good idea. Contrary to what I first thought, just having a score to compare is not enough - if the index order doesn’t match the feerate well, its search for sets to remove will degrade or fail.

    One way to generalize this to something policy-controllable is to have a “general reward” (typically fee) and a “general cost” (typically bytes) determined by the policy at mempool entry time, and then compare reward/cost ratios (typically feerates), both in the index, and in the replacement code (the limiting code, but also for example CPFP combination code and RBF code). But it’s really not as simple as making the index order configurable - sorry for saying so at first.

  22. jtimon commented at 7:12 pm on July 12, 2015: contributor

    Even in that case, both the “general reward” and the “general cost” indexes can use int64_t instead of CFeeRate and size_t respectively. Can we agree on that first?

    I still don’t understand why this needs transaction replacement. We can add a transaction as normal and, after adding, trim to the desired size. With this, we could have a unified index that is just reward/cost instead of two separate ones.

    For transaction replacement, #6416 is what I had in mind: something more flexible that is independent of the index and of the mempool entries themselves. I just realized that ApproveTxReplacement needs a CCoinsViewCache parameter and that it would be a good idea to call it later. Even a “general reward” and “general cost” index in the mempool may not be enough for certain replacement policies, for example zero-conf-safer-RBF (also known as FSS-RBF, though it seems to me that everything is “first seen safe”). So I don’t think we need to (or can) generally solve replacement here: expiring old transactions before adding a new transaction, and forcing the mempool to a given size just after that (simply by dropping from the bottom of the unified index until it fits), should be enough. In my opinion, adding transaction replacement will unnecessarily complicate things, not only for this PR, but also for later changes in replacement policies (for example, adding an option to use ZCS-RBF instead of first-seen as the replacement policy) and for later stages of your own plan (https://github.com/bitcoin/bitcoin/pull/6331#issuecomment-120464993):

    • Generalize the feerate here to a unified policy-dependent score (effectively removing the priority as it exists today, as in #6405).
    • Implement block creation using this score-based index.
  23. sipa commented at 7:32 pm on July 12, 2015: member

    @jtimon There is a DoS attack possible by mempool limiting, where someone sends a transaction that ends up at the bottom of the mempool, and then sends another transaction with slightly higher feerate, causing the previous one to be evicted later on. This leads to network broadcast bandwidth at much lower cost than the actual network relay fee, as discovered by @lapp0.

    The solution is to treat mempool size limiting as transaction replacement with the mempool bottom (sorted by feerate/score), and require that the new transaction pays in fee for the relay of the old transactions that we kicked out.

  24. dgenr8 commented at 7:37 pm on July 12, 2015: contributor
    @sipa You don’t need to optimize a ratio, if you can represent both the rewards and costs in comparable units. Then you could optimize the difference. I wonder if unit costs could be represented in BTC/byte …
  25. sipa commented at 7:42 pm on July 12, 2015: member
    @dgenr8 Optimizing feerate is what you expect miners to do in their mempool, as it maximizes income given a block size constrained by rule or by propagation economics.

    @jtimon Yes, I agree that instead of feerate and size we can use int64_t. Or even double. But the logic here is already complicated enough. I really don’t think it’s wise to spend more mental power of maintainers and reviewers on understanding how this code will at some point generalize to a configurable policy.
  26. dgenr8 commented at 7:55 pm on July 12, 2015: contributor
    @sipa Miners don’t care about relay cost, which you are now trying to include.
  27. sipa commented at 8:04 pm on July 12, 2015: member
    @dgenr8 Of course. This is DoS protection code for relaying nodes, not for miners. Its primary purpose is preventing people from being able to spam the network in various ways. It aims to build a mempool that is as aligned with miners’ incentives as possible, but is restricted to prevent network actors from causing too much memory consumption, gaining massive flooding bandwidth, or consuming too much CPU power on the nodes their transactions traverse.
  28. sipa force-pushed on Jul 12, 2015
  29. sipa commented at 8:10 pm on July 12, 2015: member
    Pushed a new version which tries more than just the bottom transactions and their dependencies in the mempool, is more efficient, and is better documented.
  30. jtimon commented at 8:20 pm on July 12, 2015: contributor

    @jtimon Yes, I agree that instead of feerate and size we can use int64_t. Or double even. But the logic is already complicated enough here. I really don’t think it’s wise to spend more mental power of maintainers and reviewers to understand how this code will at some point generalize to a configurable policy.

    Whatever. If CFeeRate(nFee, nTxSize) is more readable than nFee / nTxSize and https://github.com/jtimon/bitcoin/commit/00baf3d667b4a9e6641903eab78eed3cd01a2f46 makes things more complicated (I strongly disagree), let’s not waste the time of today’s reviewers and let’s waste the time of future maintainers instead. This doesn’t generalize anything (it is functionally equivalent!); it just avoids introducing unnecessary barriers to generalization at this point. But if thinking about a cleaner history is going to “spend too much mental power”, let’s not do it and let’s do things wrong instead for the sake of preserving such a vaguely defined resource. Since this is urgent, let’s not make perfectly reasonable nits that will “waste” our time; let’s write now what we already know we will have to erase tomorrow. I think we’ve wasted enough time discussing this already, and since you’ve been arguing against the little nit, I’m sure @ashleyholman will not want to incorporate it into #6331. I’ll just write something down to remember to fix up what you’re doing wrong now; that’s fine.

  31. sipa commented at 8:42 pm on July 12, 2015: member

    @jtimon Making score be anything but feerate won’t work, as the replacement code relies on that now. Making it generalizable will require more thinking than we should be doing right now, and calling it score when it can’t just be anything is confusing IMHO - sorry for suggesting this before. But can we please keep the discussion and changes about how mempool policy will be introduced separate from this discussion?

    I don’t care about the usage of CFeeRate though. CTxMemPoolEntry::GetFeeRate() could just return an int64_t and compute nFees / nTxSize on the fly even.

    EDIT: nFees / nTxSize won’t work, as that’s satoshi per byte, which offers very low accuracy. Something with higher accuracy than CFeeRate (even a floating point number…) wouldn’t hurt.

  32. jgarzik commented at 8:43 pm on July 12, 2015: contributor
    Using CFeeRate is sufficient. No need to over-engineer and over-complicate the matter. +1 @sipa
  33. in src/txmempool.h: in 0a917df8e2 outdated
    76+    {
    77+        return entry.GetTx().GetHash();
    78+    }
    79+};
    80+
    81+class CompareTxMemPoolEntryByFee
    


    petertodd commented at 3:52 am on July 13, 2015:

    s/CompareTxMemPoolEntryByFee/CompareTxMemPoolEntryByFeeRate/

    More verbose sure, but let’s not confuse people.


    jtimon commented at 6:47 am on July 13, 2015:
    If we’re only having 1 comparator (and having more than one would be over-complicating this), simply CompareTxMemPoolEntry or TxMemPoolEntryComparator is enough. Of course, both suggestions are just bike-shedding.

    petertodd commented at 11:42 am on July 13, 2015:

    NACK

    Any comparator is ultimately going to be about profit-per-byte. (modulo radical design changes to how the Bitcoin protocol works)


    sipa commented at 2:29 pm on July 13, 2015:
    I’ve renamed it to CompareTxMemPoolEntryByFeeRate, because that is what it currently does, and it can’t be easily changed without breaking the limiter code.

  34. in src/txmempool.h: in 0a917df8e2 outdated
    184+     *  - Removing said list will reduce the DynamicMemoryUsage below sizelimit.
    185+     *  - At most maxfeeremove worth of fees will be removed.
    186+     *  - No transaction whose hash is in prot will be removed.
    187+     */
    188+    bool StageTrimToSize(size_t sizelimit, const CAmount& maxfeeremove, const CFeeRate& repfeerate, const std::set<uint256>& prot, std::set<uint256>& stage, CAmount& totalfeeremoved, size_t& totalsizeremoved);
    189+    void RemoveStaged(std::set<uint256>& stage);
    


    petertodd commented at 4:10 am on July 13, 2015:

    If RemoveStaged() is kept as a separate function, we should document what it does.

    Equally, should it remain a separate function? The code using it could very well be more clear if it just does the removal itself.


    sipa commented at 2:31 pm on July 13, 2015:
    The rationale here is that I would want the staged set to become a separate class, which tracks statistics of what is being removed, etc. That would also make it safe to not expose removeUnchecked (way faster than remove, because the staged set already contains all dependencies), but a RemoveStaged method would need to remain.
  35. laanwj added the label Mempool on Jul 13, 2015
  36. jtimon commented at 7:08 am on July 13, 2015: contributor

    @jtimon Making score be anything but feerate won’t work, as the replacement code relies on that now. Making it generalizable will require more thinking than we should be doing right now, and calling it score when it can’t just be anything is confusing IMHO - sorry for suggesting this before. But can we please keep the discussion and changes about how mempool policy will be introduced separate from this discussion?

    Of course; I haven’t been talking about generalizing for a while now. If only you had read my code instead of arguing with me about a variable name and one line of code… I’m tired of discussing one line of code and the name of a variable, a class and a method. I will pretend #6331 has been merged already and there’s no time to nit it anymore (given that my little 1-line + bike-shedding nit is soooo #@$% controversial).

    Using CFeeRate is sufficient. No need to over-engineer and over-complicate the matter. +1 @sipa

    No, using int64_t and nFees * 1000 / nTxSize is enough; a custom class for this index is over-engineering.
    But sure, let’s introduce mistakes when we have plenty of time to fix them before merging. I may correct this mistake in the future myself (or not, but it’s clear I’m the only one who wants to fix the mistake before introducing it, so why keep wasting everybody’s time with a perfectly reasonable nit for #6331 if it has already been labelled as “too complicated” and nobody wants to read the 1-line change).

    I don’t care about the usage of CFeeRate though. CTxMemPoolEntry::GetFeeRate() could just return an int64_t and compute nFees / nTxSize on the fly even.

    You mean like #6331 (comment) ( https://github.com/bitcoin/bitcoin/commit/f86244ea960cb0a9fabf7c89886061034c46774b ) and later even-simpler versions of the nit?

    EDIT: nFees / nTxSize won’t work, as that’s satoshi per byte, which offers very low accuracy. Something with higher accuracy than CFeeRate (even a floating point number…) wouldn’t hurt.

    nScore = _nFee * 1000 / nTxSize; is completely equivalent and IMO simpler as noted in: #6331 (comment)

    But I’m tired of this; as said, #6331 is closed for me. Does #6421 still accept nits or should I keep them for my version of “phase 3”?

    • Generalize the feerate here to a unified policy-dependent score (effectively removing the priority as it exists today, as in #6405).

    Or should I better wait for 0.15.99 to avoid distracting people and wasting (definitely scarcer than I thought, possibly non-renewable) “mental power”?

  37. in src/main.cpp: in 0a917df8e2 outdated
    865+            protect.insert(in.prevout.hash);
    866+        }
    867+        std::set<uint256> stagedelete;
    868+        CAmount nFeesDeleted = 0;
    869+        size_t nSizeDeleted = 0;
    870+        if (!mempool.StageTrimToSize(GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000, nFees, entry.GetFeeRate(), protect, stagedelete, nFeesDeleted, nSizeDeleted)) {
    


    petertodd commented at 11:08 am on July 13, 2015:
    It’d be better if we accounted for the size of the new tx against the size limit, as right now the limit can be exceeded; I’m sure this is going to result in issues getting raised… GuessDynamicMemoryUsage() could be used for that, if I understand its purpose correctly.

    petertodd commented at 11:12 am on July 13, 2015:
    Equally, maybe change StageTrimToSize() to take a CTransaction()/CTxMemPoolEntry() and figure out all that stuff itself?

    sipa commented at 2:31 pm on July 13, 2015:
    Done.
  38. in src/txmempool.cpp: in 0a917df8e2 outdated
    501+                if (!(stage.count(nexthash) || now.count(nexthash))) {
    502+                    todo.push_back(nexthash);
    503+                }
    504+                iter++;
    505+            }
    506+        }
    


    petertodd commented at 11:39 am on July 13, 2015:
    The worst-case for this while() loop is that it traverses the entire mempool, setting iternow to the number of txs in the mempool; the fails/iternow check should probably be on the while loop instead.

    sipa commented at 1:59 pm on July 13, 2015:

    I’ve considered doing that, but it would prevent any removal of deep chains, effectively letting those clog the mempool. However, we do add up the fees of these transactions, which always have a higher feerate than the ‘hash’ being iterated. Once those accumulated fees exceed the total fees of the replacing transaction, we continue.

    Perhaps an iternow + fails > 100 safeguard would still be useful.

  39. in src/main.cpp: in 0a917df8e2 outdated
    878+
    879         // Don't accept it if it can't get into a block
    880         CAmount txMinFee = GetMinRelayFee(tx, nSize, true);
    881-        if (fLimitFree && nFees < txMinFee)
    882+        if (fLimitFree && nFees - nFeesDeleted < txMinFee)
    883             return state.DoS(0, error("AcceptToMemoryPool: not enough fees %s, %d < %d",
    


    petertodd commented at 12:08 pm on July 13, 2015:

    Change this to nFees < txMinFee + nFeesDeleted and correspondingly change the DoS error message to “%d < %d + %d”; right now the left-hand side can end up greater than the right-hand side in the error message printout.

    That said, a look in my debug.log indicates that this code never gets triggered with this patch, not sure why yet.


    petertodd commented at 12:10 pm on July 13, 2015:
    Of course, another possibility is to change nFees to nDeltaFees = nFees - nFeesDeleted everywhere.

    sipa commented at 2:32 pm on July 13, 2015:
    You don’t want to use modified fees for the “absurdly large fees” check, I think.
  40. jtimon commented at 1:52 pm on July 13, 2015: contributor

    So here are some suggestions that could be incorporated to this or maybe left for later: https://github.com/sipa/bitcoin/compare/limitpool...jtimon:limitpool_nits Here’s the same after further simplification and indentation: https://github.com/sipa/bitcoin/compare/limitpool...jtimon:post_limitpool

    But let’s focus on https://github.com/sipa/bitcoin/commit/c7ef38980ab48db7cc56a6ebdf1ca2c03ef2453c first (although a look to https://github.com/sipa/bitcoin/commit/655810b963a46d4df2ca573926e030beca085500 may actually help understand it better).

  41. sipa commented at 1:57 pm on July 13, 2015: member
    Added a commit that removes CFeeRate and the feerate variable, and replaces it by something more efficient.
  42. sipa force-pushed on Jul 13, 2015
  43. sipa commented at 2:37 pm on July 13, 2015: member
    @jtimon The changes to the free transaction decision logic look good to me - much more readable than the combined ifs all the way. Would you care to rebase?
  44. sipa force-pushed on Jul 13, 2015
  45. in src/txmempool.cpp: in d7a7cc9e8a outdated
    521+                return false;
    522+            }
    523+        }
    524+        it++;
    525+    }
    526+    if ((double)nFeesRemoved * toadd.GetTxSize() > (double)toadd.GetFee() * nSizeRemoved) {
    


    sdaftuar commented at 7:02 pm on July 13, 2015:
    I think we should be doing this check on each “package” (of parent and child transactions) separately. Otherwise I think it’s possible for us to remove a package that has a higher fee rate than the transaction we’re evaluating, because we’re averaging together with lower feerate packages.

    sipa commented at 7:23 pm on July 13, 2015:
    Nice catch, I agree. I’ll move the check.

    sdaftuar commented at 8:06 pm on July 13, 2015:
    I think we should also compare the fee rate of each package of transactions to the feerate of this transaction + any unconfirmed parents of this transaction, to ensure that we’re not removing a package that is economically preferable to the transaction we’re considering.
  46. sipa force-pushed on Jul 13, 2015
  47. laanwj commented at 11:03 am on July 14, 2015: member
    Going to test.
  48. laanwj referenced this in commit 27fc6c70f4 on Jul 14, 2015
  49. mikehearn commented at 1:23 pm on July 14, 2015: contributor

    Miners’ incentives aren’t quite to maximise fees gathered above all else. Miners need Bitcoin to actually work for regular users, even in the case where there is a DoS attack on the network that’s based on paying high fees; otherwise the utility of the coins they’re mining, and thus their profits, goes down. That’s why the notion of priority was invented, and why there’s a region of the block dedicated to high-priority transactions.

    So, having a unified scoring function that takes both priority and fee into account might work better.

  50. in src/main.cpp: in 8dc3a50604 outdated
    875+                                      hash.ToString(), nFees, txMinFee + nFeesDeleted),
    876                              REJECT_INSUFFICIENTFEE, "insufficient fee");
    877 
    878         // Require that free transactions have sufficient priority to be mined in the next block.
    879-        if (GetBoolArg("-relaypriority", true) && nFees < ::minRelayTxFee.GetFee(nSize) && !AllowFree(view.GetPriority(tx, chainActive.Height() + 1))) {
    880+        if (GetBoolArg("-relaypriority", true) && nFees - nFeesDeleted < ::minRelayTxFee.GetFee(nSize) && !AllowFree(view.GetPriority(tx, chainActive.Height() + 1))) {
    


    jtimon commented at 3:00 pm on July 14, 2015:
    Why doesn’t this also require fLimitFree? Why call AllowFree() when AcceptToMemoryPool’s caller is specifying fLimitFree=false? I know this is older than this PR, but I have only asked myself that now…
  51. jtimon commented at 4:04 pm on July 14, 2015: contributor
    Rebased https://github.com/sipa/bitcoin/compare/limitpool...jtimon:post_limitpool again with some changes. If we don’t care about not limiting free transactions that don’t pass AllowFree() check when the caller specifies fLimitFree=false, this change becomes really simple: https://github.com/sipa/bitcoin/commit/68f0129c29543d6e744a73f168627c733cd3d98e
  52. in src/txmempool.cpp: in 8dc3a50604 outdated
    518+            nFeesRemoved += nowfee;
    519+            nSizeRemoved += nowsize;
    520+            expsize -= nowusage;
    521+        } else {
    522+            fails += iternow;
    523+            if (fails == 20) {
    


    morcos commented at 4:30 pm on July 14, 2015:

    Needs to be (fails >= 20) if iternow is the count of all the transactions in the package being considered.

    However, I’m not sure this is a great check; it still seems easy for the lowest-fee transaction to be part of a package of more than 20 txs which fails at the end. Then you’ll never check any other starting transaction. I’d rather have some variance in the starting transaction you check, and put the cap on the number of starting transactions we check. We could also limit, as @petertodd and you suggest above, how far we ever traverse into a package. This would mean that a very large package whose parent sorted at the bottom of the feerate index would never get kicked, but that’s OK if there is some randomness as to which transaction you start with. (And it eventually gets kicked by the janitor if not mined.)


    sdaftuar commented at 8:05 pm on July 14, 2015:

    Agree with @morcos – I’m running various tests of this code, and I think a significant number of transactions are being rejected because of this issue with large transaction packages preventing new transactions from being accepted, even if the new transactions have a large fee.

    I’ll try to quantify this more precisely and carefully as I test further, but as an example, I ran an analysis of this code (modified with the fails >= 20 bugfix mentioned above) on July 7 with a 10MB mempool limit, and I think 32% (roughly 7500 out of almost 23000) of the transactions rejected due to a full mempool were transactions with a feerate greater than twice the average feerate of the whole mempool.


    sipa commented at 8:06 pm on July 14, 2015:
    I’m testing a change in the code that skips entries randomly, to sample a larger subset of transactions for replacement in the long term.
  53. sipa force-pushed on Jul 14, 2015
  54. sipa commented at 10:13 pm on July 14, 2015: member

    @morcos @sdaftuar Now every transaction is only tried with a 1/10 chance, and we bail out after 10 (actually tried) transactions, and after 20 failures even without finishing the current transaction.

    I tried with up to 32 failures, and it seems that 99% of successful replacements happen with <=10 failures.

  55. laanwj referenced this in commit d29ec6c230 on Jul 15, 2015
  56. laanwj referenced this in commit e092f22951 on Jul 15, 2015
  57. morcos commented at 1:04 pm on July 15, 2015: member

    The main concern I have with this approach is that finding the best set of “packages” (transactions and their dependents) to replace in the mempool is a knapsack problem, and it is hard to strike the right balance between doing too much work and rejecting a potential replacement transaction that intuitively should make it into the mempool. Also, any nondeterminism could lead to an unfortunate feedback loop where the p2p network serves as your retry loop, since some of your peers might have accepted a tx you rejected.

    I’m not completely convinced that minRelayFee doubling is an inferior approach on its own, but what about combining the two ideas?

    So suppose you have a soft limit of 200MB for the mempool. If a tx would cause the mempool to surpass this, then first you do a fast check of whether it passes the exponentially increased relay minimum (suppose the min doubles at 200MB and then every 20MB after that). If the tx passes this increased minimum then it’s let into the mempool without evicting anything. If the tx doesn’t pass this, then it gets a shot of evicting some packages from the mempool as in this pull.

    I think the advantage of this is that all txs that are “clearly” better and should make it into the mempool just make it in quickly. We could add a hard cap if desired, but it seems unnecessary since at 600MB you’ll need to pay 1,000,000× the original min relay fee to be fast-path accepted.

  58. laanwj commented at 1:32 pm on July 15, 2015: member
    @morcos I agree, though I do like this approach of having a hard cap in case any kind of fee-based soft limit is somehow bypassed. I think a goal should be to have all of bitcoin core’s internal data structures capped.
  59. dgenr8 commented at 2:16 pm on July 15, 2015: contributor

    @morcos A selfish node can just assign dependent transactions a higher cost than independent ones. They require more space (since parents are required) and they can take longer to confirm than other txes with the same fee rate (since parents may have a lower fee rate).

    It’s impossible to guarantee that you have a spender’s whole “package” anyway, so the rules really need to work at the individual transaction level.

  60. morcos commented at 3:02 pm on July 15, 2015: member
    @dgenr8 If I understand your first paragraph, yes, I think that’s the fundamental problem. But I’m not sure what you mean by rules working at an individual transaction level. You have to kick out dependent transactions, and so if you don’t take into account the fees those transactions paid, you make it easy for someone to get free relay bandwidth by sticking in a long chain and then replacing the parent with one small transaction.

    @laanwj Yes, I think it’s not a problem to have a hard cap as well.

    @sipa Sigh… Even with the extra fee reservoir idea alluded to on IRC, it still seems very difficult to solve this problem: the individual transactions that sort to the bottom of the mempool tend to be part of longer chains which actually aren’t cheap in fee rate and are therefore very hard to kick out. Maybe we’ll take a look at the full package approach from the beginning…
  61. dgenr8 commented at 4:53 pm on July 15, 2015: contributor

    @morcos In what I described, every child “pays” for its parents up-front in reduced mempool/relay attractiveness. Multiple children pay again for the same parent, and there is a recursive effect.

    Unconfirmed chains are expensive to process, have huge DoS risk, and limited usefulness. Replacement complicates everything. I mentioned in a mailing list post an overall approach I took.

  62. jtimon commented at 10:19 am on July 16, 2015: contributor

    It seems everybody is happy with https://github.com/sipa/bitcoin/commit/8adacf100a36d92e263fe6295c767b087168aeaa. Can we merge that first (after rebase) while we discuss the last commit? @morcos I’m not sure I understand your concerns, but if the mempool is capped there must be some replacement criteria, even if it’s the dumb “never replace” we have now (that’s why I think the last commit would be clearer and more forward compatible with https://github.com/sipa/bitcoin/commit/44d29ff871c409670e7b09cb030886d7719680f6 ).

    In fact, capping the mempool with the current first-seen replacement policy (that is, a policy in which all replacements are forbidden) would be the simplest way to cap the mempool (although not precisely the best way to cap it). Anything beyond that (always rejecting new transactions when the mempool is full) must necessarily be more complicated, but also hopefully better than never replacing.

  63. jtimon commented at 10:23 am on July 16, 2015: contributor

    Btw, there’s slightly related optimizations in #6445. The most relevant parts for this PR being in AcceptToMemoryPool:

    • Don’t calculate nValueOut 5 times
    • Don’t calculate nValueIn 3 times
    • Don’t call CCoinsViewCache::HaveInputs 3 times
  64. jtimon commented at 12:11 pm on July 16, 2015: contributor
    Rebased version (with my suggestions on top) in https://github.com/jtimon/bitcoin/commits/post_limitpool
  65. Diapolo commented at 12:30 pm on July 16, 2015: none
    Can this be rebased, the Qt keyword pull sneaked in here ;).
  66. in src/txmempool.cpp: in cf260b617f outdated
    116+    BOOST_FOREACH(const CTxIn& txin, it->GetTx().vin)
    117+        mapNextTx.erase(txin.prevout);
    118+
    119+    totalTxSize -= it->GetTxSize();
    120+    cachedInnerUsage -= it->DynamicMemoryUsage();
    121+    mapTx.erase(it);
    


    jtimon commented at 3:06 pm on July 16, 2015:
    Shouldn’t this be mapTx.erase(hash); ?
  67. in src/txmempool.cpp: in cf260b617f outdated
    479+        std::set<uint256> now; // Set of hashes that will need to be added to stage if 'hash' is included.
    480+        CAmount nowfee = 0; // Sum of the fees in 'now'.
    481+        size_t nowsize = 0; // Sum of the tx sizes in 'now'.
    482+        size_t nowusage = 0; // Sum of the memory usages of transactions in 'now'.
    483+        int iternow = 0; // Transactions we've inspected so far while determining whether 'hash' is acceptable.
    484+        todo.push_back(it->GetTx().GetHash()); // Add 'hash' to the todo list, to initiate processing its children.
    


    jtimon commented at 3:07 pm on July 16, 2015:
    This can be just todo.push_back(hash);
  68. jtimon referenced this in commit 8dcae75bfa on Jul 16, 2015
  69. jtimon referenced this in commit 29f2bf1108 on Jul 16, 2015
  70. in src/txmempool.cpp: in cf260b617f outdated
    463+        if (stage.count(hash)) {
    464+            // If the transaction is already staged for deletion, we know its descendants are already processed, so skip it.
    465+            it++;
    466+            continue;
    467+        }
    468+        if (CompareTxMemPoolEntryByFeeRate()(*it, toadd)) {
    


    morcos commented at 1:24 am on July 17, 2015:
    This test is better placed after we’ve randomly skipped some entries below. Otherwise we might end up evicting something that actually has a better fee rate than the tx being considered.

    sipa commented at 6:34 pm on July 17, 2015:
    Unless you’re talking about an outer-loop transaction being hit which has an earlier (lower-feerate) skipped transaction as a dependency, I think the odds are small. But it won’t hurt; both are very cheap checks.
  71. sipa force-pushed on Jul 17, 2015
  72. jtimon referenced this in commit fe1c480993 on Jul 18, 2015
  73. jtimon referenced this in commit c30c5ad4e0 on Jul 18, 2015
  74. Add uint256 support to CRollingBloomFilter 44974a31c1
  75. Keep track of recently rejected transactions
    Nodes can have divergent policies on which transactions they will accept
    and relay.  This can cause you to repeatedly request and reject the same
    tx after it's inved to you from various peers which have accepted it.
    Here we add a rolling bloom filter to keep track of such rejections,
    clearing the filter every time the chain tip changes.
    
    Credit goes to Alex Morcos, who created the patch that this code is
    based on.
    6c7dfed39b
  76. TxMemPool: Change mapTx to a boost::multi_index_container
    Indexes on:
    - Tx Hash
    - Fee Rate (fee-per-kb)
    b1ba550bce
  77. Move orphan tx handling to a separate log class 550ae8a13d
  78. Implement on-the-fly mempool size limitation. 26d123a028
  79. sipa force-pushed on Jul 18, 2015
  80. jtimon referenced this in commit 3501f19341 on Jul 20, 2015
  81. jtimon referenced this in commit acb6575e3b on Jul 20, 2015
  82. jtimon referenced this in commit 4c51b3fc56 on Jul 20, 2015
  83. sipa commented at 4:39 pm on September 22, 2015: member
    Superseded by a dozen other PRs.
  84. sipa closed this on Sep 22, 2015

  85. MarcoFalke referenced this in commit 44ece5ea83 on Mar 26, 2016
  86. MarcoFalke locked this on Sep 8, 2021

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-12-19 00:12 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me