Require that free transactions (in addition to being rate-limited and having size constraints) also have a sufficiently high priority so they can be mined in the next block.
Require sufficient priority for relay of free transactions #5535
pull sipa wants to merge 1 commit into bitcoin:master from sipa:relaypriority changing 3 files +8 −2
sipa commented at 10:53 PM on December 23, 2014: member
- sipa force-pushed on Dec 23, 2014
- sipa force-pushed on Dec 24, 2014
- sipa force-pushed on Dec 24, 2014
-
sipa commented at 3:35 PM on December 24, 2014: member
Added a -relaypriority option to disable the priority requirement, and use it as a workaround in pulltester (which is mainly intended to test block validity rules anyway).
-
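For context, the priority rule being discussed can be sketched as follows. This mirrors the AllowFree() threshold used by Bitcoin Core in this era (a 1 BTC, 1-day-old input in a 250-byte transaction), but the helper names and structure here are illustrative, not the actual code:

```cpp
// Sketch of the free-relay priority threshold (illustrative, not the real
// main.cpp code). Constants match Bitcoin Core ~0.10: COIN = 1e8 satoshis,
// 144 blocks ~ 1 day, 250 bytes = reference transaction size.
#include <cstdint>

static const int64_t COIN = 100000000;

// Priority = sum(input_value_in_satoshis * input_age_in_blocks) / tx_size_bytes
double ComputePriority(int64_t nSumInputValueTimesAge, unsigned int nTxSize)
{
    if (nTxSize == 0) return 0.0;
    return (double)nSumInputValueTimesAge / nTxSize;
}

// A transaction may be relayed for free only if its priority exceeds that of
// a 1 BTC input aged one day (144 blocks) in a 250-byte transaction.
bool AllowFree(double dPriority)
{
    return dPriority > COIN * 144 / 250;
}
```

With -relaypriority enabled (the default this PR adds), a free transaction failing this check is no longer accepted into the mempool or relayed.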
petertodd commented at 8:36 AM on December 25, 2014: contributor
utACK
-
laanwj commented at 4:19 AM on December 27, 2014: member
Thinking of this, was the reason for including too-low-priority transactions in the mempool that they may gain sufficient priority over time?
It has always been possible to submit transactions that would not immediately be mined, but would likely be included in a block eventually.
This change is for the better, but rejecting transactions that cannot be mined the next block changes this behavior. Maybe "in the next N blocks" would be a better criterion? (though this change only applies to free transactions, of course, so maybe this concern is ungrounded...)
-
gmaxwell commented at 7:52 AM on December 27, 2014: contributor
@laanwj I had the same thought about increasing 1 to N, though it does add a lot more state-space you can consume with chains of unconfirmed transactions. Really it's ungrounded for another reason: they're already very unreliable because of the limiter.
To some extent not relaying and not mempooling a very low priority transaction is doing the user a favour because it's easier to get a conflict mined.
ACK. (I'd also be okay with upping 1, but I don't think it's needed)
-
gmaxwell commented at 9:17 AM on December 27, 2014: contributor
That this adds another error(), which will generate "ERROR" in the logs and show up as a frequent user complaint, might be a bit concerning.
-
Require sufficient priority for relay of free transactions 1c52aad540
- sipa force-pushed on Dec 30, 2014
-
sipa commented at 1:04 AM on December 30, 2014: member
Updated, and removed the error message.
I think we should generally reserve ERROR for actually failed user requests or necessary behaviour, not for normal network activity.
-
gmaxwell commented at 4:41 AM on December 30, 2014: contributor
(still) ACK.
-
petertodd commented at 4:40 AM on January 3, 2015: contributor
utACK
-
jgarzik commented at 3:16 PM on January 6, 2015: contributor
ACK
Nit: Considering that the error is REJECT_INSUFFICIENTFEE, I think the description of "rate limited free transaction" seems inadequate. Perhaps "insufficient fee for low priority" or somesuch.
-
sipa commented at 3:21 PM on January 6, 2015: member
@jgarzik well we're limited to the existing reject codes (and I don't really care much about those; I consider REJECT_INSUFFICIENTFEE to generally mean 'rejected by local relay policy', with the string clarifying the actual reason). That "rate limited free transaction" means: your transaction had a too low fee to be considered as a non-zero-fee one, so it is considered as a free one, but we've exceeded our throttler protection for such transactions. "insufficient fee for low priority" seems entirely wrong; more fee would make the priority irrelevant, the priority is not the problem, and the reason for rejection is rate limiting...
-
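The throttler sipa refers to is an exponentially decaying byte counter. A sketch modeled on the -limitfreerelay logic in main.cpp (the struct and parameter names here are illustrative, not the actual code):

```cpp
// Sketch of the -limitfreerelay throttle (illustrative; modeled on the
// exponentially decaying counter in Bitcoin Core's main.cpp).
#include <cmath>
#include <cstdint>

struct FreeRelayLimiter {
    double dFreeCount = 0.0;   // decayed running total of free-tx bytes
    int64_t nLastTime = 0;     // last update time, unix seconds

    // -limitfreerelay=<n> allows an initial burst of n*10*1000 bytes, with an
    // amortized rate of roughly n*1000 bytes per minute (10-minute decay).
    bool AllowFreeBytes(unsigned int nSize, int64_t nNow, int nLimitKB = 15)
    {
        // Exponential decay with a ~600-second time constant.
        dFreeCount *= std::pow(1.0 - 1.0 / 600.0, (double)(nNow - nLastTime));
        nLastTime = nNow;
        if (dFreeCount >= nLimitKB * 10 * 1000.0)
            return false;  // throttled: the "rate limited free transaction" reject
        dFreeCount += nSize;
        return true;
    }
};
```

This is why sipa argues the reject reason really is the rate limiter, not the fee as such: a free transaction that passes the (new) priority check can still be dropped here when the decayed counter is saturated.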
sipa commented at 3:24 PM on January 6, 2015: member
Better clarification string suggestions welcome of course.
-
jgarzik commented at 3:24 PM on January 6, 2015: contributor
Well the reason for rejection is ultimately low fee. The error constant does reflect that, but the string does not.
-
sipa commented at 3:27 PM on January 6, 2015: member
Sure, all 3 checks that can result in failure to relay here are ultimately "too low fee", as more fee would make the transaction not subject to the free transaction policy. A more specific string doesn't hurt though.
I still don't understand "insufficient fee for low priority" - if the fee was more, the priority wouldn't matter.
Maybe "rate-limited low-fee transaction"?
- laanwj added this to the milestone 0.10.0 on Jan 6, 2015
-
laanwj commented at 12:00 PM on January 7, 2015: member
As this is a blocker for 0.10, I'm not going to hold merging this up on discussion about the error message, if you have a better suggestion you can change it later.
- laanwj merged this on Jan 7, 2015
- laanwj closed this on Jan 7, 2015
- laanwj referenced this in commit d79adc1ab1 on Jan 7, 2015
- sipa referenced this in commit 3022e7df2a on Jan 7, 2015
-
in src/init.cpp:None in 1c52aad540

@@ -324,6 +324,7 @@ std::string HelpMessage(HelpMessageMode mode)
     if (GetBoolArg("-help-debug", false))
     {
         strUsage += " -limitfreerelay=<n> " + strprintf(_("Continuously rate-limit free transactions to <n>*1000 bytes per minute (default:%u)"), 15) + "\n";
+        strUsage += " -relaypriority " + strprintf(_("Require high priority for relaying free or low-fee transactions (default:%u)"), 1) + "\n";
Diapolo commented at 2:10 PM on January 7, 2015:
Also, we once chose to sort the options in alphabetical order; (at least) that is now no longer true -_-.
mikehearn commented at 12:59 PM on January 12, 2015: contributor
This change will almost certainly break existing software that generates free transactions and doesn't know about this new rule. The assumption that you can generate a free transaction and it will get mined at some point has been valid since the start of Bitcoin.
I don't see any measurements of how much breakage this will cause (the cost), or what the rationale is, what the benefits are or why this is being changed with no discussion anywhere. The commit message provides no enlightenment - why is a backwards incompatible policy change being inserted between release candidate 1 and release candidate 2? Isn't the point of release candidates that only critical bug fixes get included until release?
jgarzik commented at 1:02 PM on January 12, 2015: contributor
@mikehearn eh? Current versions of bitcoind will similarly neglect to relay low priority -- spam -- transactions.
mikehearn commented at 2:57 PM on January 12, 2015: contributor
That's what this change does. Previously there was a rate limiter and size constraints only. You could post a low priority transaction and it'd hang around until it reached a high enough priority.
The idea here is that it gets a little bit closer to a size limited mempool. But you can still fill up memory by attaching a fee: 1 BTC buys you about 100 MB of mempool space. Adding another special case for free transactions doesn't solve the root concern, it just forces a (small?) financial commitment to exploiting it. Given that the size of the mempool can be easily measured these days (it's in the RPC), it'd be nicer if it worked differently:
if (mempool_size > flags[-mempoollimit] && priority < lowest_priority_tx) { drop }
Free transactions are already pretty useless because of the rate limiter and arbitrary amount of block space allocated to them, but ideally we'd be trying to slowly unify them rather than adding more ways they are special.
gmaxwell commented at 3:05 PM on January 12, 2015: contributor
"Previously there was a rate limiter and size constraints only. You could post a low priority transaction and it'd hang around until it reached a high enough priority."
Previously the behavior was non-deterministic, depending on where you hit the rate limiters. You'd announce something and it would have very poor propagation. The fact that it reached a few nodes would not help you if those nodes weren't miners, since it's not like they'd send it again when rate limiters permitted it; worse, it would hurt you, because any that did take the transaction would block your own retransmissions of the transaction through them to the peers behind them.
You can, of course, still continue to retransmit, and the network will take the transaction once it has sufficient priority. The priority calculation is also deterministic, so you can make a much better estimate of where you stand with respect to it than guesses about a very small rate limiter on far away nodes. And even if you don't know, at least your propagation wouldn't be thwarted when your immediate peers take the transaction and it stops there.
mikehearn commented at 3:25 PM on January 12, 2015: contributor
Yes, that's true. Perhaps the rate limiter can be removed then, as this new rule would supersede it.
morcos commented at 9:52 PM on January 15, 2015: member
I ran a little study over the transactions my node received in the last 6 weeks of 2014. There were 14,600 transactions that would have been rejected by this code, on average about 1 transaction every 5 minutes. But maybe of greater concern, 126 of these transactions (or about 3 a day) were for over 1000 BTC. Transactions for that much value had to depend on inputs in the mempool to not immediately pass the priority test. It makes me think we should broadcast this change a little louder. It's not really called out in the release notes, and it's important for other users of the network whose transactions might now get stuck while some of the nodes have upgraded and some haven't, which will also make it hard to rebroadcast with a fee.
sipa commented at 10:38 PM on January 15, 2015: member
Alex, do you have a way to find out whether a higher limit (e.g. "would confirm in 6 blocks" rather than in 1 block) would have eliminated a significant number of these?
sipa commented at 10:46 PM on January 15, 2015: member
That this rejects some transactions is indeed expected (though I wasn't expecting such numbers, to be honest). The question is really whether that had any effect on them confirming (would they have been rebroadcast, for example).
morcos commented at 11:07 PM on January 15, 2015: member
I'm rerunning the analysis now to answer about 6 (or other block numbers). But part of the problem is that sometimes (especially for higher value txs) they depend on unconfirmed inputs and so may not increase in priority even if you project a few blocks ahead.
My concern was that if the network is a mix of upgraded and not, rebroadcasting might not work very well. Does it matter whether you are rebroadcasting the exact same transaction (that now has priority) or a double spend that adds a fee in thinking about that?
morcos commented at 1:41 AM on January 16, 2015: member
Projecting out to 6 blocks doesn't help: still 12,700 txs rejected, and at 25 blocks it's still 11,800. Worse yet, it makes almost no difference for high value txs (of over 100 or 1000 BTC).
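The projection morcos describes can be sketched as follows: confirmed inputs gain one block of age per projected block, while unconfirmed (in-mempool) inputs contribute no priority no matter how far ahead you look, which is why the high-value transactions in his data never pass. A hypothetical helper, not code from the analysis:

```cpp
// Illustrative sketch of projecting a transaction's priority N blocks ahead.
// Inputs still in the mempool (nDepth == 0) never contribute, so a
// transaction spending unconfirmed outputs gains nothing from projection.
#include <cstdint>
#include <vector>

struct Input {
    int64_t nValue;  // satoshis
    int nDepth;      // confirmations; 0 = input is itself unconfirmed
};

double ProjectedPriority(const std::vector<Input>& inputs,
                         unsigned int nTxSize, int nBlocksAhead)
{
    double dPriority = 0.0;
    for (const Input& in : inputs) {
        if (in.nDepth <= 0)
            continue;  // unconfirmed input: no age, no priority, ever
        dPriority += (double)in.nValue * (in.nDepth + nBlocksAhead);
    }
    return dPriority / nTxSize;
}
```

Under this model, raising "mineable in the next block" to "mineable in the next N blocks" only rescues transactions whose inputs are already confirmed, consistent with the 6-block and 25-block numbers above.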
gmaxwell commented at 1:47 AM on January 16, 2015: contributor
@morcos These transactions are already subject to the rate limiter, which gives them very intermittent propagation (I frequently see the rate limiter being hit on my nodes). They're also not mined by default. Fair enough to have a release note on it; I'm not sure there is any reason for concern beyond that point.
mikehearn commented at 2:40 PM on January 16, 2015: contributor
A simpler way to address the underlying concern without breaking things was already proposed up thread. After inserting a new transaction into the mempool, run:
while (mempool_size > 100mb) { drop lowest priority transaction; }
We know the mempool wasn't at excessive sizes during the period Alex analysed (otherwise we'd have noticed nodes falling over), so this rule would have had no impact on the transactions that this change will break, whilst actually being a more direct fix. A rule like "must confirm within N blocks" is inherently indirect.
I'd much prefer to see this change backed out (what is the point of release candidates if unanalysed breaking changes are put into them at the last minute). It could go into 0.10.1 as a matter of priority.
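mikehearn's alternative, evicting by priority against a byte cap, might look roughly like this (hypothetical data structures; this is a sketch of his proposal, not what the PR implements):

```cpp
// Sketch of a size-capped mempool with lowest-priority-first eviction,
// as proposed in the thread. Structures are illustrative only.
#include <cstddef>
#include <map>

struct MempoolSketch {
    // priority -> tx size in bytes; multimap iterates in ascending priority,
    // so begin() is always the cheapest entry to evict.
    std::multimap<double, size_t> entries;
    size_t nTotalBytes = 0;

    void AddTx(double dPriority, size_t nTxBytes, size_t nMaxBytes)
    {
        entries.emplace(dPriority, nTxBytes);
        nTotalBytes += nTxBytes;
        // while (mempool_size > limit) { drop lowest priority transaction; }
        while (nTotalBytes > nMaxBytes && !entries.empty()) {
            auto lowest = entries.begin();
            nTotalBytes -= lowest->second;
            entries.erase(lowest);
        }
    }
};
```

Note that a newly added transaction can itself be the one evicted, which is one source of the non-determinism gmaxwell objects to: whether your transaction survives depends on what everyone else's mempool happens to contain.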
gmaxwell commented at 2:45 PM on January 16, 2015: contributor
@mikehearn as explained up thread: that fix causes problems by making the behavior less deterministic, leaving you with no way to tell whether the transaction has actually propagated, creating difficulty rebroadcasting, and creating incentives for people to flood the mempool to push transactions out of it. It also doesn't address the relay/rate-limiter capacity getting chewed up by very low priority transactions which will likely not confirm soon.
I'm not sure why you're saying "unanalysed": I ran the code for 5 days, recording the transactions it rejected, and didn't see anything concerning before ACKing above. This is also software written almost a month ago.
mikehearn commented at 3:16 PM on January 16, 2015: contributor
Alex's analysis provides data which quantifies the amount of breakage.
I think the reason my proposed fix may seem less deterministic than this change is that this change doesn't really address the underlying issue. For instance, can it be bypassed by just creating lots of free transactions that are all of sufficient priority to be mined in the next block? I think yes, so it's hard to have confidence it really fixes the problem; indeed, since there is no quantification of the problem, it's impossible to say either way. It may also interact oddly with changes made elsewhere, e.g. introducing new rules like the "inputs are free" rule could change the economics of bypassing this check.
So I feel it's much easier to have confidence in an actual test against the resource limit in question. It's much more direct and less likely to be subtly impacted by changes elsewhere. There can always be hard-coded minimums in place to let people with copies of the mempool make predictions about where in it they will fall.
laanwj commented at 3:21 PM on January 16, 2015: member
There needs to be a maximum mempool size as well. Let's not pretend it's one or the other.
What is nice about this change, though, is that it gives immediate feedback to the sender that the transaction is rejected.
gmaxwell commented at 3:22 PM on January 16, 2015: contributor
The problem this is addressing is two-fold: it's possible to use capacity (in the form of relay bandwidth and mempool space) on the entire network with no cost to the attacker, and the existing ineffective protections (which are actually inconsistent with the public documentation of the behavior; see https://en.bitcoin.it/wiki/Transaction_fees) are non-deterministic and result in stuck transactions. Both fees and priority impose a cost on the attacker. Testing against mempool size does not achieve that end: at best it just trades the resource being contended for a different one (slots in a finite space), and it's more non-deterministic.
(Technically this change also implies a maximum size, though it's not big enough to replace an actual size limit; I'd prefer, though, to commence with that limiting only after we are storing the mempool on disk, in order to avoid pressure to make the limit problematically low.)
There can always be hard-coded minimums in place
Sure, that's what this is.
jgarzik commented at 4:06 PM on January 16, 2015: contributor
Related: #3753 (mempool periodic sweep).
In implementation, it is easier to have a separate sweep than to add a delete-tx-from-mempool loop to the add-tx-to-mempool functionality.
The main blocker is tagging transactions as "significant to my local wallet" so that they are not tossed out with the bathwater.
- reddink referenced this in commit af49eb5def on May 27, 2020
- DrahtBot locked this on Sep 8, 2021

Milestone: 0.10.0
This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-04-14 15:15 UTC