Limit mempool by throwing away the cheapest txn and setting min relay fee to it #6722
pull TheBlueMatt wants to merge 13 commits into bitcoin:master from TheBlueMatt:mempoollimit changing 10 files +361 −57
TheBlueMatt commented at 3:21 am on September 25, 2015: member
Tests forthcoming, but I felt bad I still hadn't pushed this. See the commit message for more details.
in src/txmempool.h: in 7ef6af92b2 outdated
283@@ -284,6 +284,13 @@ class CTxMemPool
284 uint64_t totalTxSize; //! sum of all mempool tx' byte sizes
285 uint64_t cachedInnerUsage; //! sum of dynamic memory usage of all the map elements (NOT the maps themselves)
286
287+ mutable int64_t lastRollingFeeUpdate;
288+ mutable bool blockSinceLastRollingFeeBump;
289+ mutable double rollingMinimumFeeRate; //! minimum fee to get into the pool, decreases exponentially
290+ static const double ROLLING_FEE_HALFLIFE = 60 * 60 * 24;
jonasschnelli commented at 9:22 am on September 25, 2015: Travis complains about a missing mutable bool blockSinceLastRollingFeeUpdate;
here.
TheBlueMatt force-pushed on Sep 25, 2015
JeremyRubin commented at 12:18 pm on September 25, 2015: contributor
One thing that I think is maybe not great about the behavior of this: let's say we have:
TXs:
A, Fee 10, Size 1
B, Fee 10, Size 1
C, Fee 21, Size 2
If A and B are the min in the set, submitting C should kick them out. Now, let's say B wanted to increase their fee; they would need to go above 21 to get in. As implemented, it doesn't seem to me that two TXs could both raise by 1 to, combined, provide more fee (because it seems txs get added one at a time?)
Perhaps a better compromise between these two behaviors would be to have a two-part mempool, the inclusion set and the to-be-ousted set, and trigger a "GC" with some frequency. The to-be-ousted set can be RBF'd or something.
Lastly, as justification for who might take advantage of such a behavior: perhaps a major exchange with a bunch of settlements out at once would want to make sure they all go through expediently and can coordinate increasing them all a hair.
JeremyRubin commented at 1:52 pm on September 25, 2015: contributor
I think that my earlier comment is not fully needed, because the mempool is currently a large multiple of the block size. Perhaps a more future-proof implementation would allow setting:
- an optional hard memory cap
- a (potentially) dynamic size which is a large multiple of the current block size
in src/txmempool.cpp: in 152dcb6aba outdated
942+ it++;
943+ }
944+
945+ if (expsize <= sizelimit) {
946+ BOOST_FOREACH(const txiter& it, stage)
947+ removeUnchecked(it);
morcos commented at 5:20 pm on September 25, 2015: You can't call this by itself anymore. Use removeStaged.
in src/txmempool.cpp: in 152dcb6aba outdated
947+ removeUnchecked(it);
948+
949+ trackRemovedOrAddFailed(bestFeeRateRemoved);
950+ return true;
951+ } else {
952+ trackRemovedOrAddFailed(CFeeRate(toadd.GetFee(), toadd.GetTxSize()));
morcos commented at 5:23 pm on September 25, 2015:It doesn’t make sense to bump the rolling fee for a tx that didn’t get in. A very high fee tx might not make it in if there are large packages or transactions (even of low fee rate) at the bottom of the mempool. That’s a problem in and of itself for the tx that doesn’t get in, but it’s even worse if you make that the new minimum relay rate.
TheBlueMatt commented at 8:29 pm on September 25, 2015: Hmm? No, a very high fee tx will always evict transactions with a lower feerate, even if it ends up evicting a very large package to do so.
in src/txmempool.cpp: in 152dcb6aba outdated
851+ if (!blockSinceLastRollingFeeBump)
852+ return CFeeRate(rollingMinimumFeeRate);
853+
854+ int64_t time = GetTime();
855+ if (time > lastRollingFeeUpdate + 10) {
856+ rollingMinimumFeeRate = rollingMinimumFeeRate / pow(2.0, (time - lastRollingFeeUpdate) / ROLLING_FEE_HALFLIFE);
morcos commented at 5:26 pm on September 25, 2015: I'd be concerned about the tradeoff here between the one-time cost to stuff the mempool full of very high fee txs, and the length of time that stuffing causes the min relay rate to remain high. Especially with a 100MB mempool, that's only about 30MB of txs. So for example at a 100k sat/kb fee rate, for 30 BTC you can knock the min relay fee up to 100k satoshis and the effect lasts for some time.
TheBlueMatt commented at 8:32 pm on September 25, 2015: Sure, the ROLLING_FEE_HALFLIFE could be dropped a lot. I had originally figured it based on decreasing the mempool right away, but since it now waits at least for one block before it lets the min feerate drop, I think it probably could be dropped a lot. Maybe we don't even want an exponential decrease either.
TheBlueMatt commented at 8:38 pm on September 25, 2015: member
@JeremyRubin No, you're right, this breaks relaying of child-pays-for-parent when the mempool grows large (assuming the package is not already present). The easy solution is to allow fee calculation of packages together when processing orphans, and then you send your package in reverse-dependency order.
TheBlueMatt force-pushed on Sep 25, 2015
morcos commented at 11:45 pm on September 25, 2015: member
@TheBlueMatt re: my comment on high fee txs. I see now, you aren't doing the overall fee check in order to boot a package. I just assumed the StageTrimToSize logic was the same. So how do you think about free relay then? Could you write up a quick intro describing the algorithm, as it would help to know how you think about it. Is the idea that even though the tx causing the eviction hasn't covered the fees to pay for the evicted packages' relay, by boosting the minRelayRate you're essentially forcing all future transactions to do so?
It’s an interesting idea, one question is how big a sweet spot there is between having the half-life too long and worrying about the “cram relayFee high all of a sudden” attack vs having it too low and perhaps having some vague concern about free relay.
Why does your increased relay fee only apply to low priority transactions? I think it has to apply to all.
TheBlueMatt commented at 0:25 am on September 26, 2015: member
@morcos see the description of the main commit: "This limits mempool by walking the lowest-feerate txn in mempool when it goes over -maxmempool in size, removing them. It then sets the minimum relay fee to the maximum fee transaction-and-dependent-set it removed, plus the default minimum relay fee. After the next block is received, the minimum relay fee is allowed to decrease exponentially (with a half-life of one day).
The minimum -maxmempool size is 10*-limitdescendantsize, as it is easy for an attacker to play games with the cheapest -limitdescendantsize transactions.
Note that this effectively disables high-priority transaction relay iff the mempool becomes large.”
As for your specific questions: Yes, the idea is that you can relay some cheap crap for a bit, driving up the min relay fee by the default min relay fee each time (which was always meant as a “this is what it costs to send a transaction around the network” constant, though it hasn’t always done a good job of being accurate there).
The increased relay fee will effectively apply to low priority transactions, as they will be the package selected by the final TrimToSize call. Thus, priority-based relay will effectively remain enabled until people’s mempools fill up.
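For illustration, the bump-on-eviction scheme described above can be sketched roughly as follows (hypothetical names, not the actual patch; the real code lives inside CTxMemPool):

```cpp
#include <algorithm>

// Illustrative sketch only: the floor on acceptable feerates is raised whenever
// TrimToSize evicts a package, and is frozen until the next block is connected.
struct MempoolFeeFloor {
    double rollingMinimumFeeRate = 0;        // satoshis per kB
    bool blockSinceLastRollingFeeBump = true;

    // Called for each package evicted: the floor becomes at least the evicted
    // package's feerate plus the default minimum relay fee (hypothetical args).
    void OnPackageEvicted(double evictedPackageFeePerK, double defaultMinRelayFeePerK) {
        rollingMinimumFeeRate = std::max(rollingMinimumFeeRate,
                                         evictedPackageFeePerK + defaultMinRelayFeePerK);
        blockSinceLastRollingFeeBump = false;  // no decay until a block arrives
    }

    void OnBlockConnected() { blockSinceLastRollingFeeBump = true; }
};
```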
in src/txmempool.cpp: in 0ae46697fe outdated
903+ bool good = true; // Whether including 'rootit' (and all its descendants) is a good idea.
904+
905+ while (!todo.empty()) {
906+ const txiter& itnow = todo.front();
907+ if (now.count(itnow))
908+ continue;
morcos commented at 0:51 am on September 26, 2015: need to pop_front() before continuing, otherwise it's an infinite loop
TheBlueMatt commented at 1:53 am on September 26, 2015: LOL, oops…
morcos commented at 0:57 am on September 26, 2015: member
But in particular the increased relay fee does NOT apply to high priority txs? That's what I don't understand. It seems you could use the same stable of high priority inputs over and over to gain free relay.
TheBlueMatt force-pushed on Sep 26, 2015
TheBlueMatt force-pushed on Sep 26, 2015
TheBlueMatt commented at 1:59 am on September 26, 2015: member
Hmm, indeed, there is an attack there where you can cause lots of relay for free. You can't really get much into the mempool (only up to the max package size) and you do have to increase the feerate each time, but only by one satoshi per kb…
in src/txmempool.cpp: in 22d846f573 outdated
884+ // If the transaction's feerate is worse than what we're looking for, we have processed everything in the mempool
885+ // that could improve the staged set. If we don't have an acceptable solution by now, bail out.
886+ break;
887+ }
888+ txiter rootit = mapTx.project<0>(it.base());
889+ rootit--;
morcos commented at 5:30 pm on September 26, 2015: this is a bug. rootit is an iterator by txid hash, so decrementing it puts you at a completely random transaction. the base iterator needs to be decremented before projecting. @sdaftuar and i didn't like this oddness, so the first commit in #6557 reverses the feerate sort. there was no reason to do it the other way in the first place. maybe you should just grab that?
in src/init.cpp: in 22d846f573 outdated
841@@ -841,6 +842,12 @@ bool AppInit2(boost::thread_group& threadGroup, CScheduler& scheduler)
842 fCheckBlockIndex = GetBoolArg("-checkblockindex", chainparams.DefaultConsistencyChecks());
843 fCheckpointsEnabled = GetBoolArg("-checkpoints", true);
844
845+ // -mempoollimit limits
846+ int64_t nMempoolSizeLimit = GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000;
847+ int64_t nMempoolDescendantSizeLimit = GetArg("-limitdescendantsize", DEFAULT_DESCENDANT_SIZE_LIMIT) * 1000;
848+ if (nMempoolSizeLimit < 0 || nMempoolSizeLimit < nMempoolDescendantSizeLimit * 10)
849+ return InitError(strprintf(_("Error: -maxmempool must be at least %d MB"), GetArg("-limitdescendantsize", DEFAULT_DESCENDANT_SIZE_LIMIT) / 100));
morcos commented at 10:22 pm on September 26, 2015:Keep in mind this is a ratio of 2 different measurements. Serialized transaction size for descendant limit and mempool memory usage for maxmempool. There is about a 3x ratio between those measurements. So a 25MB mempool would actually only fit about 3 maximum sized packages… (I used 4x as a conservative ratio, and similarly wanted a 10x difference so ended up with 40x between the arguments.)
TheBlueMatt commented at 10:38 pm on September 26, 2015: Oops, yea, my notes to fix this from earlier were saying to do something like 100MB, for this reason… Last time I ignore my notes and just do what I think when I'm sick :/
ghost commented at 10:30 pm on September 26, 2015: none
What's wrong with XT's method of discarding a random transaction so that you can't predictably manipulate the mempool?
TheBlueMatt force-pushed on Sep 26, 2015
TheBlueMatt commented at 10:36 pm on September 26, 2015: member
@NanoAkron It makes it trivial to DoS the network, among many other issues.
TheBlueMatt force-pushed on Sep 27, 2015
in src/txmempool.cpp: in 104c63dad6 outdated
943+ it++;
944+ }
945+
946+ if (expsize <= sizelimit) {
947+ RemoveStaged(stage);
948+ trackRemovedOrAddFailed(bestFeeRateRemoved);
morcos commented at 5:36 pm on September 30, 2015: These functions will be called every time through even if the mempool wasn't full to start with.
in src/main.cpp: in 104c63dad6 outdated
888 strprintf("%d < %d", nFees, txMinFee));
889
890- // Require that free transactions have sufficient priority to be mined in the next block.
891- if (GetBoolArg("-relaypriority", true) && nFees < ::minRelayTxFee.GetFee(nSize) && !AllowFree(view.GetPriority(tx, chainActive.Height() + 1))) {
892+ CAmount mempoolRejectFee = pool.GetMinFee().GetFee(nSize);
893+ if (mempoolRejectFee > 0 && nFees < ::minRelayTxFee.GetFee(nSize) + mempoolRejectFee) {
sdaftuar commented at 6:02 pm on September 30, 2015: With the way the half-life calculation works, I believe it would take a very long time before mempoolRejectFee will reach 0 again after an eviction; this in turn would cause us to wait a really long time before being willing to relay low-fee transactions that have high priority. Perhaps the mempool could round the min fee it returns down to 0 at some point so that this doesn't take forever, or we can adjust the way we use it here to allow for the priority calculation to kick in even if the mempoolRejectFee isn't exactly 0?
in src/main.h: in 104c63dad6 outdated
50@@ -51,6 +51,8 @@ static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 900;
51 static const unsigned int DEFAULT_DESCENDANT_LIMIT = 1000;
52 /** Default for -limitdescendantsize, maximum kilobytes of in-mempool descendants */
53 static const unsigned int DEFAULT_DESCENDANT_SIZE_LIMIT = 2500;
54+/** Default for -maxmempool, maximum megabytes of mempool memory usage */
55+static const unsigned int DEFAULT_MAX_MEMPOOL_SIZE = 100;
sdaftuar commented at 6:31 pm on September 30, 2015:I think it’d be better to make this default value as large as we think users can reasonably live with. 100MB of memory is only about 30MB of actual transactions, or 30 full blocks. It seems to me like all the attacks someone could do on a limited mempool involve trying to play games with the effects of eviction, so having a bigger default mempool just causes all attacks to scale up in cost to carry out, because an attacker has to generate more transactions just to trigger eviction.
#6557 has a 500MB default; if we’re concerned that may be too big, how about 250 or 300MB?
morcos commented at 6:32 pm on September 30, 2015: memberI think the
ROLLING_FEE_HALFLIFE
should be 12 hours. Here’s my analysis: The purpose of therollingMinimumFeeRate
is to strike the right balance between two things.- Future transactions should be obligated to pay for the cost of transactions that were evicted (and their own relay fee) otherwise a large package of transactions could be evicted by a small tx with a slightly higher fee rate. This could happen repeatedly for a bandwidth attack.
- It must decay so an attacker can not pack the mempool full of high fee txs one time and peg the effective min relay rate very high for a long time for the cost of stuffing the mempool once.
From the point of view of the bandwidth attack:
Assume the prevailing fee rate at the bottom of the mempool is X times the relay rate. Then a full size 2.5MB package can be evicted from there by paying X+1 on a small 200 byte tx. Effectively you have now paid the minimum relay fee on (200X + 200) bytes, but have relayed 2.5MB + 200 bytes, so you got free relay of 2.5MB - X * 200 bytes.
As soon as the rollingMinimumFeeRate has dropped from X back down to X-1, you can repeat the attack. At a half-life of 12 hours and assuming X = 20, it'll take about 53 mins for that to happen. So you can free relay 47 kB per min. This seems sufficiently small compared to the bare minimum network relay capacity of 100 kB per min (1 block every 10 mins).
Since the decay is exponential, you'll actually take a lot longer than 53 mins to repeat the attack if the prevailing fee rate multiple X is considerably less than 20. However, as the prevailing fee rate climbs, the attack could be considered a bigger concern. This should be addressed by having a default minimum relay rate that is higher. It seems reasonable that over the long term the default minimum relay rate will not be much less than 1/20th of the prevailing fee rate at the bottom of mempools.
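For concreteness, the 53-minute and 47 kB/min figures follow directly from the exponential decay (a back-of-the-envelope restatement of the numbers above, assuming a pure half-life decay):

t_repeat = T_half * log2(X / (X - 1)) = 12 h * log2(20 / 19) ≈ 0.89 h ≈ 53 min
free relay per attack ≈ 2.5 MB - 20 * 200 B ≈ 2500 kB, and 2500 kB / 53 min ≈ 47 kB/min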
From the point of view of stuffing the mempool:
If we imagine a 100MB mempool, then filling it with 30MB of transactions (size-wise = 100MB of usage) at a 100K sat/KB fee rate will cost 30 BTC.
In this case access to the network will be blocked for all txs less than a 100K feerate for 5 hours while those transactions are mined anyway. The additional gain the rollingMinimumFeeRate gives an attacker is another 7 hours until the decay has brought the feerate down to 50K.
Since the attacker could have stopped anything under a 50K feerate anyway for 10 hours by just issuing 60MB worth of transactions at that fee rate, this attack is not significantly worse.
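Spelling those stuffing numbers out (assuming 1MB blocks and a pure 12-hour half-life):

cost = 30,000 kB * 100,000 sat/kB = 3e9 satoshis = 30 BTC
time to mine the 30MB backlog ≈ 30 MB / (1 MB per 10 min) = 300 min = 5 h
decay from 100K down to 50K sat/kB = exactly one half-life = 12 h, i.e. about 7 h beyond the 5 h of mining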
So I think 12 hours strikes about the right balance.
in src/txmempool.cpp: in 104c63dad6 outdated
874+ setEntries stage;
875+ std::set<uint256> protect;
876+ BOOST_FOREACH(const CTxIn& in, toadd.GetTx().vin)
877+ protect.insert(in.prevout.hash);
878+
879+ size_t expsize = DynamicMemoryUsage() + toadd.DynamicMemoryUsage(); // Track the expected resulting memory usage of the mempool.
sdaftuar commented at 6:43 pm on September 30, 2015: I haven't thought about how much this is likely to matter, but I don't think this is the best way to guess the expected size of the resulting mempool – it misses the extra overhead from mapLinks, mapNextTx, and the multi_index pointers themselves.
I think this code here is almost correct: https://github.com/sdaftuar/bitcoin/blob/7008233767bd5e03521d96cde414394975e940d7/src/txmempool.cpp#L797
[There is an error though; the value of “9” that is used in the multi_index memory estimator should actually be a “6” I think in both DynamicMemoryUsage and GuessDynamicMemoryUsage.]
in src/txmempool.cpp: in 104c63dad6 outdated
872+
873+ CFeeRate bestFeeRateRemoved;
874+ setEntries stage;
875+ std::set<uint256> protect;
876+ BOOST_FOREACH(const CTxIn& in, toadd.GetTx().vin)
877+ protect.insert(in.prevout.hash);
sdaftuar commented at 6:58 pm on September 30, 2015: If you change TrimToSize() to take as an argument the ancestors of the entry being considered (which is calculated earlier in AcceptToMemoryPool()), then you can get rid of protect, and instead just check that each package root that you consider isn't an ancestor of the entry being added. (This is what I did in #6557 and I think it helps make the code a lot simpler, especially combined with using CalculateDescendants() to grab all the descendants instead of writing a new loop here.)
jgarzik commented at 7:40 pm on September 30, 2015: contributor
concept ACK - prefer 24-48 hours - will do some testing
sdaftuar commented at 0:19 am on October 1, 2015: member
Reorgs should probably be handled differently – I don't think it makes sense for eviction to take place when calling AcceptToMemoryPool() from DisconnectBlock(); instead perhaps we can just let the mempool grow during a reorg and trim it down to size at the end?
TheBlueMatt commented at 1:52 am on October 1, 2015: member
Addressed a few nits… Things left to do:
- Figure out the decay constant/drop min fee to 0 when it gets near 0 (if we decide to push forward with this tomorrow, we should discuss this value)
- Steal code from #6557 to use CalculateDescendants to make the TrimToSize code simpler
- Write some basic sanity-check test cases
TheBlueMatt force-pushed on Oct 1, 2015
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt commented at 8:48 am on October 2, 2015: member
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt commented at 9:59 am on October 2, 2015: member
Halflife set to:
if mempool is < max_mempool_size / 4: halflife = 3 hours
elif mempool < max_mempool_size / 2: halflife = 6 hours
else: halflife = 12 hours
When the rolling fee rate is < minRelayTxFee (1000 satoshis per kB), it is rounded down to 0 and free relay is re-enabled.
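A minimal sketch of that decay rule as just described (tiered half-life plus rounding down to zero), with stand-in names rather than the code actually pushed:

```cpp
#include <cmath>
#include <cstdint>

// Stand-in constant: 12-hour base half-life, in seconds.
static const double ROLLING_FEE_HALFLIFE = 60 * 60 * 12;

// Returns the decayed minimum fee (sat/kB) given how full the mempool currently is.
double DecayedMinFeePerK(double rollingMinimumFeeRate,   // current floor, sat/kB
                         int64_t secondsSinceLastUpdate,
                         size_t mempoolUsage, size_t maxMempoolSize,
                         double minRelayFeePerK)          // e.g. 1000 sat/kB
{
    double halflife = ROLLING_FEE_HALFLIFE;
    if (mempoolUsage < maxMempoolSize / 4)
        halflife /= 4;   // 3 hours when the pool is under a quarter of its max size
    else if (mempoolUsage < maxMempoolSize / 2)
        halflife /= 2;   // 6 hours when under half of its max size

    double decayed = rollingMinimumFeeRate / std::pow(2.0, secondsSinceLastUpdate / halflife);
    if (decayed < minRelayFeePerK)
        return 0;        // round down to 0: priority-based free relay is re-enabled
    return decayed;
}
```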
morcos commented at 1:57 pm on October 2, 2015: member
+1 on the half-life suggestions.
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt commented at 8:44 pm on October 2, 2015: member
OK, did even better and solved an edge case (thanks again to @sdaftuar for suggestions) by just adding the new tx to the mempool first, and then calling TrimToSize blind and checking if the tx is still in the mempool afterwards.
Also reverted the mempool sorting change after discussion with @morcos on IRC - though it is a win in the “optimize for maximum mempool feerate” metric, it seems better to leave it as is because it may result in a larger ending mempool.
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt force-pushed on Oct 2, 2015
TheBlueMatt commented at 9:52 pm on October 2, 2015: member
Incorporated mempool expiry from @sipa, rebased and squashed. Should be reviewable/testable, but needs test cases.
petertodd commented at 5:15 pm on October 3, 2015: contributor
It'd be good to add some design doc comments explaining the intent of this code. CTxMemPool::GetMinFee() especially is quite mysterious and full of magic constants right now, which is easier to understand when you read @sdaftuar's comments, but that's much harder to discover if you're starting from the source code.
We also should add a way to get the current minimum relay fee from the RPC interface, e.g. through getmempoolinfo
petertodd commented at 5:16 pm on October 3, 2015: contributor
Code looks reasonable so far, though I haven't looked into it in enough detail to give a utACK just yet.
morcos commented at 4:28 pm on October 5, 2015: member
@TheBlueMatt I was just talking with @sdaftuar and now we think the max is required for the sort. I know you reverted back to max, but I just wanted to memorialize that it is actually necessary. Otherwise, it might be possible to purposefully construct packages which will cause a parent to sort down and get evicted, allowing an attacker to control evicting a particular tx.
in src/main.cpp: in f23d7ab4ac outdated
953@@ -954,6 +954,17 @@ bool AcceptToMemoryPool(CTxMemPool& pool, CValidationState &state, const CTransa
954
955 // Store transaction in memory
956 pool.addUnchecked(hash, entry, setAncestors, !IsInitialBlockDownload());
957+
958+ // trim mempool and check if tx is was trimmed
sdaftuar commented at 6:58 pm on October 6, 2015: "is was" -> "was"
in src/txmempool.cpp: in f23d7ab4ac outdated
884+ halflife /= 2;
885+
886+ rollingMinimumFeeRate = rollingMinimumFeeRate / pow(2.0, (time - lastRollingFeeUpdate) / halflife);
887+ lastRollingFeeUpdate = time;
888+
889+ if (rollingMinimumFeeRate < ::minRelayTxFee.GetFeePerK())
sdaftuar commented at 7:02 pm on October 6, 2015:Would it perhaps be better to pass the minRelayTxFee in, so that we’re not needing to access globals inside the mempool?
sdaftuar commented at 8:05 pm on October 6, 2015: On further thought – would it make more sense to just move this code out of the mempool and into main.cpp, to isolate the mempool from relay policy? We could make TrimToSize() return the fee rate of the last package it removes, and then leave AcceptToMemoryPool() responsible for deciding what to do with the prevailing relay fee after eviction (including this logic for decaying things back down).
TheBlueMatt commented at 9:37 pm on October 6, 2015: I see GetMinFee() as a "minimum feerate this mempool reasonably accepts", not a part of your relay policy. You can tweak your relay policy by having a bigger mempool. Someone who wants to refactor all of the relay policy to be separated, later, can do so, but that seems far out-of-scope for this pull.
paveljanik commented at 7:19 pm on October 6, 2015: contributor
In file included from wallet/wallet.cpp:24:
./txmempool.h:291:25: warning: in-class initializer for static data member of type 'const double' is a GNU extension [-Wgnu-static-float-init]
 static const double ROLLING_FEE_HALFLIFE = 60 * 60 * 12;
                     ^                      ~~~~~~~~~~~~
1 warning generated.
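For reference, the conventional way to avoid that -Wgnu-static-float-init warning is to move the initializer out of the class; a sketch of the standard idiom, not necessarily how it was resolved here:

```cpp
// txmempool.h -- declare the static member without an in-class initializer:
class CTxMemPool {
public:
    static const double ROLLING_FEE_HALFLIFE;
    // ...
};

// txmempool.cpp -- define and initialize it once, out of class:
const double CTxMemPool::ROLLING_FEE_HALFLIFE = 60 * 60 * 12;
```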
in src/txmempool.cpp: in f23d7ab4ac outdated
906+ while (DynamicMemoryUsage() > sizelimit) {
907+ indexed_transaction_set::nth_index<1>::type::iterator it = mapTx.get<1>().begin();
908+ setEntries stage;
909+ CalculateDescendants(mapTx.project<0>(it), stage);
910+ RemoveStaged(stage);
911+ trackPackageRemoved(CFeeRate(it->GetFeesWithDescendants(), it->GetSizeWithDescendants()));
morcos commented at 8:58 pm on October 6, 2015: This seems like it has two problems. First, the descendant package information will have been updated by the removal of all the descendants in RemoveStaged. More importantly, won't the iterator be invalid once it has been erased?
TheBlueMatt commented at 9:46 pm on October 6, 2015: member
Comments should all be addressed.
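For illustration, one way to avoid both issues raised above (stale descendant state and the invalidated iterator) is to capture the package feerate before RemoveStaged() erases the entry; a sketch only, not necessarily the committed fix:

```cpp
// Sketch only: read everything needed from 'it' before RemoveStaged() erases it,
// then restart from begin() so descendant state is re-evaluated each pass.
while (DynamicMemoryUsage() > sizelimit) {
    indexed_transaction_set::nth_index<1>::type::iterator it = mapTx.get<1>().begin();
    CFeeRate removedRate(it->GetFeesWithDescendants(), it->GetSizeWithDescendants());
    setEntries stage;
    CalculateDescendants(mapTx.project<0>(it), stage);
    RemoveStaged(stage);              // 'it' is invalid from this point on
    trackPackageRemoved(removedRate); // use the value captured above
}
```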
in src/main.cpp: in 7940a90bcc outdated
888 strprintf("%d < %d", nFees, txMinFee));
889
890- // Require that free transactions have sufficient priority to be mined in the next block.
891- if (GetBoolArg("-relaypriority", true) && nFees < ::minRelayTxFee.GetFee(nSize) && !AllowFree(view.GetPriority(tx, chainActive.Height() + 1))) {
892+ CFeeRate mempoolRejectFee = pool.GetMinFee(GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000);
893+ if (mempoolRejectFee > ::minRelayTxFee && nFees < ::minRelayTxFee.GetFee(nSize) + mempoolRejectFee.GetFee(nSize)) {
morcos commented at 3:53 pm on October 7, 2015:I’m not sure if this is ideal.
- If a bunch of free txs fill the mempool, they can then start evicting each other and continue getting relayed with no effective min relay rate ever getting created.
- If we ignore free txs, then if the mempool is full of minRelayTxFee txs we're reliant on blockSinceLastRollingFeeBump to prevent us from decaying the mempoolRejectFee immediately, and as soon as a block comes in, it'll revert straight to 0.
The free rate limiter in the first case and having to wait for a block in the second case probably prevent either of these problems from being significant, but it seems a bit fragile.
TheBlueMatt commented at 1:41 am on October 8, 2015: Yea, indeed, trying to address suhas' comments about not wanting to call ::minRelayTxFee in mempool made this worse, but I agree that it wasn't ideal to begin with.
in src/main.cpp: in 7940a90bcc outdated
889
890- // Require that free transactions have sufficient priority to be mined in the next block.
891- if (GetBoolArg("-relaypriority", true) && nFees < ::minRelayTxFee.GetFee(nSize) && !AllowFree(view.GetPriority(tx, chainActive.Height() + 1))) {
892+ CFeeRate mempoolRejectFee = pool.GetMinFee(GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000);
893+ if (mempoolRejectFee > ::minRelayTxFee && nFees < ::minRelayTxFee.GetFee(nSize) + mempoolRejectFee.GetFee(nSize)) {
894+ return state.DoS(0, false, REJECT_INSUFFICIENTFEE, "mempool full");
morcos commented at 5:37 pm on October 7, 2015:how about:state.DoS(0, false, REJECT_INSUFFICIENTFEE, "mempool full", false, strprintf("%d < %d + %d", nFees, ::minRelayTxFee.GetFee(nSize), mempoolRejectFee.GetFee(nSize)));
in src/txmempool.cpp: in 7940a90bcc outdated
891+ 892+void CTxMemPool::trackPackageRemoved(const CFeeRate& rate) { 893+ AssertLockHeld(cs); 894+ if (rate.GetFeePerK() > rollingMinimumFeeRate) { 895+ rollingMinimumFeeRate = rate.GetFeePerK(); 896+ blockSinceLastRollingFeeBump = false;
morcos commented at 5:37 pm on October 7, 2015:how about adding:LogPrint("mempool", "Rolling Min Fee bumped to %f\n", rollingMinimumFeeRate);
TheBlueMatt commented at 0:58 am on October 8, 2015: Did it in one print at the end of TrimToSize instead.
in src/txmempool.cpp: in 7940a90bcc outdated
897+ }
898+}
899+
900+void CTxMemPool::TrimToSize(size_t sizelimit) {
901+ LOCK(cs);
902+
morcos commented at 5:37 pm on October 7, 2015:What do you think about tracking the size and number of transactions that are removed here and debug logging it at the end?in src/main.cpp: in 7940a90bcc outdated
961+ if (expired != 0)
962+ LogPrint("mempool", "Expired %i transactions from the memory pool\n", expired);
963+
964+ pool.TrimToSize(GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000);
965+ if (!mempool.exists(tx.GetHash()))
966+ return state.DoS(0, false, REJECT_INSUFFICIENTFEE, "mempool full");
morcos commented at 9:07 pm on October 7, 2015:also log slightly more info here
TheBlueMatt commented at 0:44 am on October 8, 2015: Not sure what else to log here?
in src/main.cpp: in 7940a90bcc outdated
2367@@ -2354,6 +2368,8 @@ bool InvalidateBlock(CValidationState& state, CBlockIndex *pindex) {
2368 }
2369 }
2370
2371+ mempool.TrimToSize(GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000);
morcos commented at 9:30 pm on October 7, 2015:I’m not a fan of having this in InvalidateBlock.
TheBlueMatt commented at 0:53 am on October 8, 2015:I agree, but the alternative seems to be to do it per disconnected block?sdaftuar commented at 9:36 pm on October 7, 2015: memberThere’s an issue with the way the locking is done – with the current code, we need to hold cs_main in order to callTrimToSize()
(eg inActivateBestChain()
) or else it’s possible for there to be a race condition withAcceptToMemoryPool()
and we could, for instance, let an orphan in to the mempool (because the mempool lock isn’t held from theHaveInputs()
check to acceptance).TheBlueMatt commented at 0:52 am on October 8, 2015: member@sdaftuar Argh, you’re right…could switch it to require you check your inputs with mempool.cs, but its probably just easier to stick with cs_main for this one.TheBlueMatt force-pushed on Oct 8, 2015TheBlueMatt commented at 1:50 am on October 8, 2015: memberFixed bugs and changed the 0-setting feerate stuff:
- When a transaction is removed, the minimum mempool fee is set to the max of its package’s feerate and ::minRelayTxFee. This way, when a transaction is removed, feerate is set to ::minRelayTxFee (ie the value under which we consider txn to have effectively “0 fee”) until the next block.
- After that, the feerate is allowed to decay as previously, until it reaches half of ::minRelayTxFee, and then it is set to 0. However, during this decay, before it reaches 0, the returned fee is set to ::minRelayTxFee (it is no longer added to ::minRelayTxFee in ATMP), this way we never require a fee less than ::minRelayTxFee for a tx to be considered “non-0-fee”.
morcos commented at 3:05 am on October 8, 2015: member
@TheBlueMatt huh? You can't remove the addition of ::minRelayTxFee to the rollingMinFee. That's the only thing that makes this mempool limiting work, unless I'm misunderstanding what you did.
TheBlueMatt commented at 3:09 am on October 8, 2015: member
When I was thinking about how to fix it, I was going to add ::minRelayTxFee instead of the max, but it seems wires got crossed and I didn't change the max…
On October 7, 2015 8:05:47 PM PDT, Alex Morcos notifications@github.com wrote:
@TheBlueMatt huh? You can’t remove the addition of ::minRelayTxFee to the rollingMinFee. That’s the only thing that makes this mempool limiting work, unless I’m misunderstanding what you did.
Reply to this email directly or view it on GitHub: #6722 (comment)
TheBlueMatt commented at 3:10 am on October 8, 2015: memberie the ::minRelayTxFee is added, but it’s added to the mempool’s limit, not in ATMP.
On October 7, 2015 8:05:47 PM PDT, Alex Morcos notifications@github.com wrote:
@TheBlueMatt huh? You can’t remove the addition of ::minRelayTxFee to the rollingMinFee. That’s the only thing that makes this mempool limiting work, unless I’m misunderstanding what you did.
Reply to this email directly or view it on GitHub: #6722 (comment)
morcos commented at 3:20 am on October 8, 2015: member
oh ok… but that's really putting the wrong kind of logic in the mempool.
TheBlueMatt commented at 3:23 am on October 8, 2015: member
I'm not sure I agree… The mempool has logic for "this is the fee needed to get into this mempool". The "this fee is effectively 0" parameter is not a mempool policy, indeed, but it's already a parameter. It has some idea of how it wants to decay the fee, which could be better moved outside of the mempool, but putting it in ATMP also feels strange… It's a property of the mempool, not a global one.
On October 7, 2015 8:20:35 PM PDT, Alex Morcos notifications@github.com wrote:
oh ok… but thats really putting the wrong kind of logic in the mempool.
Reply to this email directly or view it on GitHub: #6722 (comment)
TheBlueMatt force-pushed on Oct 8, 2015
TheBlueMatt force-pushed on Oct 8, 2015
Reverse the sort on the mempool's feerate index 78b82f4a16
Add Mempool Expire function to remove old transactions
(note the 9x multiplier on (void*)'s for CTxMemPool::DynamicMemoryUsage was accidentally introduced in 5add7a7 but should have waited for this commit which adds the extra index)
Fix calling mempool directly, instead of pool, in ATMP 9c9b66f771
Track (and define) ::minRelayTxFee in CTxMemPool e8bcdce8a2
Add CFeeRate += operator 241d6078ba
Print mempool size in KB when adding txn e6c7b362ab
in src/main.cpp: in abb36aa255 outdated
960+ int expired = pool.Expire(GetTime() - GetArg("-mempoolexpiry", DEFAULT_MEMPOOL_EXPIRY) * 60 * 60);
961+ if (expired != 0)
962+ LogPrint("mempool", "Expired %i transactions from the memory pool\n", expired);
963+
964+ pool.TrimToSize(GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000, ::minRelayTxFee);
965+ if (!mempool.exists(tx.GetHash()))
instagibbs commented at 2:04 pm on October 12, 2015: Is there a reason why it's not pool.exists rather than mempool?
TheBlueMatt force-pushed on Oct 13, 2015
Implement on-the-fly mempool size limitation.
After each transaction which is added to the mempool, we first call Expire() to remove old transactions, then throw away the lowest-feerate transactions. After throwing away transactions by feerate, we set the minimum relay fee to the maximum fee transaction-and-dependent-set we removed, plus the default minimum relay fee. After the next block is received, the minimum relay fee is allowed to decrease exponentially. Its halflife defaults to 12 hours, but is decreased to 6 hours if the mempool is smaller than half its maximum size, and 3 hours if the mempool is smaller than a quarter of its maximum size. The minimum -maxmempool size is 40*-limitdescendantsize, as it is easy for an attacker to play games with the cheapest -limitdescendantsize transactions. -maxmempool defaults to 300MB. This disables high-priority transaction relay when the min relay fee adjustment is >0 (ie when the mempool is full). When the relay fee adjustment drops below the default minimum relay fee / 2 it is set to 0 (re-enabling priority-based free relay).
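A hedged sketch of how the -maxmempool floor described in this commit message might be enforced at startup (mirroring the earlier init.cpp hunk, with the ratio raised to 40x; parameter handling is illustrative):

```cpp
// Illustrative only: -limitdescendantsize is in serialized kB while -maxmempool is
// memory usage, so require roughly a 40x gap between the two arguments.
int64_t nMempoolSizeMax = GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000;
int64_t nMempoolSizeMin = GetArg("-limitdescendantsize", DEFAULT_DESCENDANT_SIZE_LIMIT) * 1000 * 40;
if (nMempoolSizeMax < 0 || nMempoolSizeMax < nMempoolSizeMin)
    return InitError(strprintf(_("Error: -maxmempool must be at least %d MB"), nMempoolSizeMin / 1000000));
```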
Only call TrimToSize once per reorg/blocks disconnect d355cf4420
Add reasonable test case for mempool trimming 074cb155c2
Drop minRelayTxFee to 1000
There is no exact science to setting this parameter, but 5000 (just over 1 US cent at the time of writing) is higher than the cost to relay a transaction around the network (the new benchmark due to mempool limiting).
TheBlueMatt force-pushed on Oct 13, 2015
TheBlueMatt commented at 8:44 am on October 13, 2015: member
Squashed/rebased and reverted the minRelayTxFee bump.
Undo GetMinFee-requires-extra-call-to-hit-0 8abe0f5658
Fix comment formatting tabs 2bc50187ee
in src/txmempool.h: in 9e93640be6 outdated
418@@ -397,6 +419,20 @@ class CTxMemPool
419 */
420 bool CalculateMemPoolAncestors(const CTxMemPoolEntry &entry, setEntries &setAncestors, uint64_t limitAncestorCount, uint64_t limitAncestorSize, uint64_t limitDescendantCount, uint64_t limitDescendantSize, std::string &errString, bool fSearchForParents = true);
421
422+ /** The minimum fee to get into the mempool, which may itself not be enough
423+ * for larger-sized transactions.
424+ * The minReasonableRelayFee constructor arg is used to bound the time it
425+ * takes the fee rate to go back down all the way to 0. When the feerate
426+ * would otherwise be half of this, it is set to 0 instead.
427+ */
btcdrak commented at 7:34 pm on October 14, 2015: nit: docblock alignment/formatting is squiff
in src/main.cpp: in 9e93640be6 outdated
74@@ -75,7 +75,7 @@ uint64_t nPruneTarget = 0;
75 bool fAlerts = DEFAULT_ALERTS;
76
77 /** Fees smaller than this (in satoshi) are considered zero fee (for relaying and mining) */
78-CFeeRate minRelayTxFee = CFeeRate(5000);
79+CFeeRate minRelayTxFee = CFeeRate(1000);
btcdrak commented at 7:37 pm on October 14, 2015:I think recent events proved this was too low, so I’m not sure we should revert it all the way back.
TheBlueMatt commented at 7:47 pm on October 14, 2015: Oh? Recent events proved it was too low as the only mempool-limiting mechanism (which it is now). But if it doesn't matter for mempool limiting, I'm not sure that's true?
btcdrak commented at 2:42 am on October 15, 2015:Wouldn’t this be better discussed in a separate PR?
TheBlueMatt commented at 3:22 am on October 15, 2015: Changing it to anything but 1000, probably. But the 5000 change was done as a temporary hack to fix the mempool until something better comes along. Don't take this as an update, it's just reverting the now-useless commit :)
jtimon commented at 10:36 am on October 20, 2015: Yes, please, let's leave changing default values for policy parameters to separate PRs.
TheBlueMatt commented at 7:46 pm on October 14, 2015: member
@sdaftuar I was being lazy and not bothering since the behavior either way is just as good. But, OK, I fixed it.
dcousens commented at 9:17 pm on October 14, 2015: contributor
concept ACK
morcos commented at 10:12 pm on October 14, 2015: member
ACK!
btcdrak commented at 2:43 am on October 15, 2015: contributor
ACK
instagibbs commented at 12:52 pm on October 15, 2015: member
ACK
jgarzik commented at 1:10 pm on October 15, 2015: contributor
ut ACK
sdaftuar commented at 4:28 pm on October 16, 2015: member
ACK (apart from sipa's nit)
morcos commented at 1:19 am on October 17, 2015: member
I've done some benchmarking. I ran a historical simulation over 1 million transactions between July 6th and July 14th. And I repeated this for 3 different code bases.
- master as of 7/31 (before multi_index and package tracking)
- master as of 10/16
- this pull
Results are below, I think you can see that the average is within measurement error. The slight increase in the median is to be expected.
All times in ms.

Txns accepted to the mempool | pre-packages | master 10/16 | 6722
Average | 0.786 | 0.790 | 0.782
Median | 0.268 | 0.279 | 0.298
90th percentile | 1.04 | 0.91 | 0.93
99th percentile | 8.03 | 7.91 | 7.71
99.9th percentile | 45.8 | 45.7 | 43.1
max | 286 | 286 | 398

I broke down the timing a little bit more and it's clear that the vast majority of the outliers come from CheckInputs and HaveInputs. For instance, if you look at the 1k transactions between the 99.8th and 99.9th percentile: (EDIT: oops, posted slightly wrong stats the first time, conclusion the same)
AcceptToMemoryPool time in ms
Total | 41.4
CheckInputs | 32.5
HaveInputs | 7.1
remaining | 1.8

Although I did see 3 (out of 1M) transactions where the addUnchecked call took > 50ms in this pull.
These measurements were made with libsecp256k1 merged and default dbcache size. 6722 used a 100 MB mempool limit.
sipa commented at 1:29 am on October 17, 2015: member
Those numbers look reasonable and are probably consistent with what I've been seeing. I temporarily can't benchmark easily to verify.
Fix stale comment in CTxMemPool::TrimToSize. 58254aa3bc
TheBlueMatt force-pushed on Oct 19, 2015
rubensayshi commented at 9:56 am on October 19, 2015: contributor
utACK
sipa commented at 4:24 pm on October 19, 2015: member
ACK.
in src/main.cpp: in 58254aa3bc
887 return state.DoS(0, false, REJECT_INSUFFICIENTFEE, "insufficient fee", false,
888 strprintf("%d < %d", nFees, txMinFee));
889
890- // Require that free transactions have sufficient priority to be mined in the next block.
891- if (GetBoolArg("-relaypriority", true) && nFees < ::minRelayTxFee.GetFee(nSize) && !AllowFree(view.GetPriority(tx, chainActive.Height() + 1))) {
892+ CAmount mempoolRejectFee = pool.GetMinFee(GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) * 1000000).GetFee(nSize);
jtimon commented at 10:49 am on October 20, 2015: I would prefer that this new GetArg("-maxmempool", DEFAULT_MAX_MEMPOOL_SIZE) global was initialized in only one place: init.cpp
sipa commented at 11:03 am on October 20, 2015:I agree, but I don’t care strongly. It’s already violated in many places, though.
TheBlueMatt commented at 9:48 pm on October 20, 2015: Seems like an easy thing to do in a separate PR.
in src/main.cpp: in 58254aa3bc
953@@ -954,6 +954,17 @@ bool AcceptToMemoryPool(CTxMemPool& pool, CValidationState &state, const CTransa
954
955 // Store transaction in memory
956 pool.addUnchecked(hash, entry, setAncestors, !IsInitialBlockDownload());
957+
958+ // trim mempool and check if tx was trimmed
959+ if (!fOverrideMempoolLimit) {
960+ int expired = pool.Expire(GetTime() - GetArg("-mempoolexpiry", DEFAULT_MEMPOOL_EXPIRY) * 60 * 60);
jtimon commented at 10:50 am on October 20, 2015: GetArg("-mempoolexpiry", DEFAULT_MEMPOOL_EXPIRY) could also be initialized in init.cpp
in src/main.h: in 58254aa3bc
50@@ -51,6 +51,10 @@ static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 900;
51 static const unsigned int DEFAULT_DESCENDANT_LIMIT = 1000;
52 /** Default for -limitdescendantsize, maximum kilobytes of in-mempool descendants */
53 static const unsigned int DEFAULT_DESCENDANT_SIZE_LIMIT = 2500;
54+/** Default for -maxmempool, maximum megabytes of mempool memory usage */
55+static const unsigned int DEFAULT_MAX_MEMPOOL_SIZE = 300;
56+/** Default for -mempoolexpiry, expiration time for mempool transactions in hours */
57+static const unsigned int DEFAULT_MEMPOOL_EXPIRY = 72;
jtimon commented at 10:56 am on October 20, 2015:Maybe these belong to txmempool.h ?
sipa commented at 11:02 am on October 20, 2015:IMHO the logic for deciding what goes into the mempool doesn’t belong in txmempool, only what is necessary to maintain its efficiency and correctness.
jtimon commented at 11:09 am on October 20, 2015:The logic is in txmempool already, these are just default values for a couple of new policy globals.
sipa commented at 11:12 am on October 20, 2015:The logic for enforcing it is, but not the logic to decide what the limits are.
Please don’t encourage moving more policy decisions to the mempool…
jtimon commented at 11:24 am on October 20, 2015: Well, my general approach is not moving more policy to main. I had encapsulated policy in txmempool (including all the uses of the global minTxRelayFee), decoupled policy/fee from txmempool (done in master thanks to @morcos), and also decoupled txmempool from policy/fee once. Of course it doesn't make sense to redo that work while waiting for one of these mempool-limit PRs to be merged, since they are clearly going to break that effort again.
What about moving them to policy/policy.h instead of putting them on main?
TheBlueMatt commented at 9:48 pm on October 20, 2015: Can we do this in a separate PR? It seems a bit late to be bikeshedding on where to put constants.
laanwj added the label Mempool on Oct 20, 2015
ABISprotocol commented at 9:25 pm on October 20, 2015: none
Since #6557 was closed (in favor of opening this pull request, according to @sdaftuar from a comment in #6557), I am again coming back to evaluate whether this new pull request does the following:
a) addresses my concerns as expressed in comments here and here in #6201 ~ these comments related to the ability of people in the developing world to access bitcoin and the problems inherent with taking a bitcoin-development approach that would not include the bulk of people in the developing and underdeveloped world. (See full comments for details.)
b) Has code that effectively affirms the principle that “mempool limiting and dynamic fee determination are superior to a static parameter change”
c) Incorporates a floating relay fee, such as this https://github.com/sipa/bitcoin/commit/6498673f99da9976df170052648ff2f0026210d2 or its equivalent.
Reasoning for this request: See history on #6201 #6455 #6470 #6557 (history of closed pull requests leading to this one (#6722) shown above)
TheBlueMatt commented at 9:47 pm on October 20, 2015: member
@ABISprotocol I'm not actually sure if you're asking a question or what, but I think the answer is "yes". Yes, it does the things in (b) and (c), I guess? As for what you're talking about in (a), I have no idea. Please make specific criticisms.
ABISprotocol commented at 10:31 pm on October 20, 2015: none
@TheBlueMatt I have been very specific since #6201 when I raised these issues; the pull requests since then which have led to this one have in part been responsive to the issues I raised, but have been closed.
When you say “I think the answer is “yes” (…) it does the things in (b) and (c), I guess?” I would appreciate a better answer, where you cite how it does so. If you believe that it does the things in (b) and (c) please cite a commit / change as part of this pull request that would do either (b) or (c) and describe in layman’s terms how it would do so.
With respect to (a), please see the comments cited in (a) for details, as suggested in my prior comment. The issues raised in my comments remain a serious and valid concern. @morcos @laanwj
sipa commented at 10:43 pm on October 20, 2015: member
@ABISprotocol This page is for discussing technical issues. Please take the philosophical considerations elsewhere.
ABISprotocol commented at 11:04 pm on October 20, 2015: none
@sipa I believe I have raised substantial technical issues in my past and present comments. I think it is unfair of you to attempt to diminish my participation. Instead, please let me know if the issues I have raised have been addressed, and if so, please cite a basic message as to how they have been addressed. Thank you for your consideration of my comments.
sipa commented at 11:10 pm on October 20, 2015: member
(a) If you're talking about accessibility of on-blockchain transactions: no. We can't guarantee that every possible useful transaction will have a negligible fee for every person on earth. If that were the case, DoS attacks that interfere with everyone's ability to use it would also be negligible in cost for every person on earth. This is a philosophical question, and not something that changes in this pull request. Miners are already incentivized to choose the transactions that grant them the highest profits, and this PR merely extends that behaviour to the mempool.
(b) Yes, see (c)
(c) Yes, read the title of this pull request please.
You're very welcome to discuss these issues, but not here, as I don't think they are related to this pull request. This is about dealing with active problems on the network in line with existing behaviour.
ABISprotocol commented at 11:29 pm on October 20, 2015: none
laanwj commented at 6:50 am on October 21, 2015: member
utACK
laanwj merged this on Oct 21, 2015
laanwj closed this on Oct 21, 2015
laanwj referenced this in commit 3b20e239c6 on Oct 21, 2015
dcousens commented at 10:29 pm on October 21, 2015: contributor
@ABISprotocol what specific question do you want to ask?
I’ll try to answer, paraphrasing you:
[does this PR affect the] ability of people in the developing world to access [to] bitcoin?
If you classify access as the ability to run a full node, then, this PR, which will allow users to adjust the software to meet the capabilities of their hardware, does increase access.
b) Has code that effectively affirms the principle that “mempool limiting and dynamic fee determination are superior to a static parameter change”
If there isn’t spam, why should we maintain the higher static parameter? My understanding of this algorithm could be summarised as:
sort mempool transactions by fee in descending order, then filter/reduce the resultant collection such that when the maximum memory size is reached, drop any remaining transactions
The implementation isn’t so straightforward due to complications that arise when you account for CPFP and descendants, but, that is the base concept AFAIK. @TheBlueMatt would that be correct?
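As a rough illustration of that summary (deliberately ignoring CPFP and descendant packages, so purely conceptual):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct SimpleTx { double feeRatePerK; size_t memUsage; };  // toy stand-in for a mempool entry

// Conceptual only: keep the highest-feerate transactions that fit within the memory
// budget and drop the rest. The real patch evicts whole descendant packages instead.
std::vector<SimpleTx> TrimToBudget(std::vector<SimpleTx> pool, size_t maxUsage) {
    std::sort(pool.begin(), pool.end(),
              [](const SimpleTx& a, const SimpleTx& b) { return a.feeRatePerK > b.feeRatePerK; });
    std::vector<SimpleTx> kept;
    size_t used = 0;
    for (const SimpleTx& tx : pool) {
        if (used + tx.memUsage > maxUsage)
            break;                      // budget reached: drop everything cheaper
        kept.push_back(tx);
        used += tx.memUsage;
    }
    return kept;
}
```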
TheBlueMatt deleted the branch on Oct 21, 2015
ABISprotocol commented at 4:28 am on October 22, 2015: none
Hello,
By access, I didn’t mean necessarily the ability to run a full node. I was more concerned with the ability of users to make a transaction at all (regardless of what software they might be using) and not be squeezed out by the upward creep of fees.
See, for example, this, cited previously here in (a), which discusses the issue in more detail: #6201 (comment)
As stated there, “In observing this sad trend of gradual fee increases and what I see as censorship of small transactions, in a year’s time, given what happened in 2013 with #2577 and what is now happening with this issue here in 2015, it is entirely likely that further transaction and fee policies will be adopted which will edge out even those who are trying to make BTC transactions equivalent to 0.20 USD. Sharp currency declines (in the USD, euro, other currencies) and increases in value of BTC would create situations in which one might need to purchase small quantities of BTC, but paradoxically such policies as those proposed in this pull request might stymie entry-level buyers in the marketplace. In addition, the potential for microgiving in bitcoin is reduced by these kind of development proposals, and microgiving is one of the most significant developments to come to finance. It is one that cannot be adequately implemented by legacy systems in no small part due to their burdensome fees, which up to this point, bitcoin has not had. However, this appears to be changing rapidly.
As a consequence, a large number of people in the developing and underdeveloped world will be edged out by policies created by people who create and develop this new economic system without consideration of the voices of those who are least likely to be heard here. This implies that the billions who potentially could have been helped by this technology, now, will not.”
This is the concern which focuses on access, and it has to do with people being driven out as fees go up and up. Development direction should thus ultimately orient itself towards finding a way to support both on-chain and off-chain micro-transactions. How these are defined is important as well, because there is a certain level of micro-transaction in the context of bitcoin which can be supported by the network. As @gmaxwell has pointed out, #6201 (comment) "It's important to be specific in what you're talking about when you say microtransactions. In some contexts it means value transfers under "$10" in others, under "$1" in others under "$0.01" and in yet other under "$0.00001". There is some level under which just simply cannot be supported: because a single attack at moderate cost could saturate the bandwidth of a substantial portion of the network (keep in mind Bitcoin is a broadcast system, and any system that can't keep up can't participate)."
Note here that there are a large number of persons in the world getting by on the equivalent of 1 to 2 USD per day if salaried. At one time I lived abroad for several years for less than fifty USD per month (and for a period of time lived in the USA with much less than that). This is much of the world. These are statements of fact which cannot be ignored and which are as relevant to the discussion as subsidy, cost of mining, and other vital factors. The trend of upward cost of transacting in the bitcoin network is not going to reverse if the status quo continues, but developers do have a choice in how they proceed right now and moving forward.
Thus the inclusion or exclusion of persons in the developing world when it comes to the bitcoin network and access is not an issue which can be dismissed, nor can developers suggest that these points are not technical enough and must be discussed elsewhere, because the substance of the pull requests mentioned in this thread directly impacts whether or not billions of people in the developing world (and in particular those who might only be making, at most, a few dollars per day) will be able to transact in the bitcoin system, or whether they will ultimately be excluded from it completely as the use of it spreads.
So, does this PR affect the ability of people in the developing world to access bitcoin, in that context? I would submit that this PR, while it does provide the ability to run a full node, does not increase access substantially within the context of the issues raised above (see also section (a) in my original comment to this pull request), and thus I would submit that there is more to be done. I will not assume that off-chain solutions are the only way to address these issues, as we see from looking at BlockCypher’s on-chain microtransaction API for values between 2,000 satoshis (~$0.005) and 4,000,000 satoshis (~$9.50). http://dev.blockcypher.com/#microtransaction-api I encourage you to look at my own project for some ideas as well: http://abis.io
Daniel Cousens:
@ABISprotocol what specific question do you want to ask?
I’ll try to answer, paraphrasing you:
[does this PR affect the] ability of people in the developing world to access [to] bitcoin?
If you classify access as the ability to run a full node, then, this PR, which will allow users to adjust the software to meet the capabilities of their hardware, does increase access.
b) Has code that effectively affirms the principle that “mempool limiting and dynamic fee determination are superior to a static parameter change”
If there isn’t spam, why should we maintain the higher static parameter? My understanding of this algorithm could be summarised algorithmically as:
sort mempool transactions by fee in descending order, then filter/reduce the resultant collection such that when the maximum memory size is reached, drop all remaining transactions
The implementation isn’t so straightforward, but, conceptually, AFAIK. @TheBlueMatt would that be correct?
— Reply to this email directly or view it on GitHub: #6722 (comment)
http://abis.io ~ "a protocol concept to enable decentralization and expansion of a giving economy, and a new social good" https://keybase.io/odinn
in src/txmempool.cpp: in 58254aa3bc
305@@ -305,15 +306,18 @@ void CTxMemPoolEntry::UpdateState(int64_t modifySize, CAmount modifyFee, int64_t
306 }
307 }
308
309-CTxMemPool::CTxMemPool(const CFeeRate& _minRelayFee) :
310+CTxMemPool::CTxMemPool(const CFeeRate& _minReasonableRelayFee) :
311 nTransactionsUpdated(0)
312 {
313+ clear();
jonasschnelli commented at 10:44 am on October 26, 2015:This change produces a crash on osx.
jonasschnelli$ ./src/bitcoind --regtest
libc++abi.dylib: terminating with uncaught exception of type boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::lock_error> >: boost: mutex lock failed in pthread_mutex_lock: Invalid argument
Stacktrace goes back to cxx_global_var_initXX(). I think calling LOCK() from global var init (CTxMemPool mempool(::minRelayTxFee);) through this new clear() (which LOCKs the mempool) is problematic.
rubensayshi commented at 1:04 pm on November 18, 2015: contributor
this is going to be in v0.12.0? it's not in the release-notes yet for v0.12.0?
TheBlueMatt commented at 10:57 pm on November 20, 2015: member
Yes, this should be added to the release-notes.
furszy referenced this in commit eb00d0f62f on Jun 14, 2020
in src/txmempool.cpp:887 in 58254aa3bc
882+ if (rollingMinimumFeeRate < minReasonableRelayFee.GetFeePerK() / 2) {
883+ rollingMinimumFeeRate = 0;
884+ return CFeeRate(0);
885+ }
886+ }
887+ return std::max(CFeeRate(rollingMinimumFeeRate), minReasonableRelayFee);
rebroad commented at 9:21 am on May 16, 2021: why is it limited to minReasonableRelayFee here, but on line 869 it isn't?
rebroad commented at 11:49 am on May 16, 2021: contributor
@JeremyRubin No, you're right, this breaks relaying of child-pays-for-parent when mempool grows large (assuming the package is not already present). The easy solution is to allow fee calculation of packages together when processing orphans, and then you send your package in reverse-dependency order.
Did this end up being implemented?
str4d referenced this in commit ae9768c8b7 on Aug 4, 2021
zkbot referenced this in commit 12af999d42 on Aug 10, 2021
zkbot referenced this in commit 44f7d7e4ea on Aug 11, 2021
zkbot referenced this in commit 7fa7d5700b on Aug 12, 2021
zkbot referenced this in commit fd462fd8c4 on Aug 13, 2021
DrahtBot locked this on Aug 16, 2022