Option -limitunconfdepth to reject transactions dependent on a long chain of unconfirmed ones #6403
luke-jr wants to merge 1 commit into bitcoin:master from luke-jr:limitunconfdepth, changing 3 files, +36 −0
luke-jr commented at 3:54 am on July 9, 2015: member
Not sure if there’s any long-term benefit to this policy. Thoughts?
-
petertodd commented at 4:08 am on July 9, 2015: contributor
ACK concept.
Long unconfirmed chains are inefficient - use RBF to replace txs with ones with more outputs - and pose problems for future scalability improvements, as they reduce parallelism opportunities in block processing.
I’d implement it differently though, as an additional depth value recorded for each tx in the mempool - nMaxDepth - where nMaxDepth = max(nMaxDepth of all parent txs). This is efficient to calculate, requiring only a single O(n) pass every time a block is found. There are lots of other use cases for that kind of architecture too.
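A minimal sketch of the architecture petertodd describes, using hypothetical names (Entry, SetDepthOnAccept) rather than the real CTxMemPool API, and assuming a child’s depth is one more than its deepest unconfirmed parent:

```cpp
// Hypothetical sketch, not actual Bitcoin Core code: cache an
// unconfirmed-depth value on each mempool entry.
#include <algorithm>
#include <vector>

struct Entry {
    std::vector<const Entry*> parents; // in-mempool (unconfirmed) parents only
    unsigned int nMaxDepth = 0;        // 0 = spends only confirmed outputs
};

// On acceptance: depth is assumed to be one more than the deepest
// unconfirmed parent, so this is O(#inputs) per transaction.
void SetDepthOnAccept(Entry& e)
{
    for (const Entry* p : e.parents)
        e.nMaxDepth = std::max(e.nMaxDepth, p->nMaxDepth + 1);
}
```

On each new block, confirmed parents drop out and the remaining entries can be recomputed in a single topological pass, the per-block O(n) cost mentioned above.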
-
wizkid057 commented at 3:18 pm on July 9, 2015: none
ACK concept and functionality.
I’ve been testing this commit since last night and it seems to work as intended.
-
in src/main.cpp: in fe520922c6 outdated
744@@ -745,6 +745,32 @@ int64 GetMinFee(const CTransaction& tx, bool fAllowFree, enum GetMinFee_mode mod
745 }
746
747
748+static
luke-jr commented at 2:47 am on July 10, 2015:
Sorry, it’s a good habit I have…

in src/main.cpp: in fe520922c6 outdated
744@@ -745,6 +745,32 @@ int64 GetMinFee(const CTransaction& tx, bool fAllowFree, enum GetMinFee_mode mod
745 }
746
747
748+static
749+unsigned int TxSearchForLongChain(const CTransaction& tx, const CTxMemPool& pool, unsigned int nDepthLimit, unsigned int nDepth = 0)
750+{
751+    unsigned int nMaxDepth = nDepth;
752+    for (unsigned int i = 0; i < tx.vin.size(); ++i)
Diapolo commented at 3:26 pm on July 9, 2015:
Nit: size_t?

in src/main.cpp: in fe520922c6 outdated
823@@ -798,6 +824,14 @@ bool AcceptToMemoryPool(CTxMemPool& pool, CValidationState &state, const CTransa
824             break;
825         }
826     }
827+
828+    const int nDepthLimit = GetArg("-limitunconfdepth", 0);
829+    if (nDepthLimit > 0) {
830+        const unsigned int nMaxDepth = TxSearchForLongChain(tx, pool, nDepthLimit);
831+        if (nMaxDepth >= (unsigned int)nDepthLimit) {
Diapolo commented at 3:28 pm on July 9, 2015:
Couldn’t you directly save from GetArg as unsigned int, or cast there?
luke-jr commented at 2:44 am on July 10, 2015:
Not until after the non-negative check two lines up…?

in src/main.cpp: in fe520922c6 outdated
823@@ -798,6 +824,14 @@ bool AcceptToMemoryPool(CTxMemPool& pool, CValidationState &state, const CTransa
824             break;
825         }
826     }
827+
828+    const int nDepthLimit = GetArg("-limitunconfdepth", 0);
Diapolo commented at 3:28 pm on July 9, 2015:
Is it intended to not be mentioned in the help message?

btcdrak commented at 6:34 pm on July 9, 2015: contributor
Concept ACK

matthieu commented at 8:39 pm on July 9, 2015: none
Just worried this patch will increase the incentive for output spam (especially in light of the current attack): shred outputs on each block and spend them with more transactions until you hit the dust limit, then repeat.

bencxr commented at 9:23 pm on July 9, 2015: contributor
NACK, this will exacerbate the effect of full blocks / long confirmation wait times on legitimate users.
It impacts scenarios where the wallet software must send out several transactions without knowing the recipients in advance, e.g. users with one unspent output sending to multiple payees, or exchange wallets that start with one or few unspents (e.g. after refilling a hot wallet). Full RBF to replace txs with more outputs is not a solution today.
Also not sure the benefits are clear; spammers can simply send to more outputs.
luke-jr force-pushed on Jul 10, 2015
luke-jr force-pushed on Jul 10, 2015

matthieu commented at 3:51 am on July 10, 2015: none
@luke-jr there doesn’t seem to be a lot of consensus (heh) around what a miner’s job is. And telling people what their job is rarely helps in making them do it (believe me, I tried).
Furthermore, if it’s a mining policy, why make it influence mempool composition instead of how blocks are built?
matthieu commented at 4:12 am on July 10, 2015: none
@luke-jr There are use cases that involve long chains that aren’t spammy. One example is fee addresses, where a service pays the fees for its users. And if said service can use multiple addresses to avoid the chain, so can a spammer.
Not questioning that the patch can make sense in a certain light, just questioning that this specific criterion is one that can help identify spam.
On Thu, Jul 9, 2015 at 8:52 PM, Luke-Jr notifications@github.com wrote:
@matthieu It’s also a relay policy. I’m just not pushing relay nodes to “do their job” - if they relay spam, it mostly just hurts themselves. :p
Mirobit commented at 3:23 pm on July 10, 2015: contributor
@luke-jr You know that most node operators and miners will never change a single setting. For node operators that is OK, since they run nodes for free and may not have the time/knowledge to follow all the recent developments. But even miners mostly run the default policy (e.g. the block size increase from 0.75 to 1 MB took quite a while and a lot of pushing).
@petertodd If this setting is disabled by default, the miner who enables it is the one who needs more time for block processing. If he receives a block from a miner that hasn’t enabled it, or that chose a high number for n, he will not have these txs in his mempool. Thus block verification takes longer.
That’s why the default shouldn’t be 0; it would hurt miners with a custom policy. Why not set 1 as the default (which still works with CPFP)?
Option -limitunconfdepth to reject transactions dependent on a long chain of unconfirmed ones f0a4b90242
luke-jr force-pushed on Jul 11, 2015
laanwj added the label TX fees and policy on Jul 17, 2015

jgarzik commented at 7:15 pm on July 23, 2015: contributor
Concept ACK, and I agree rolling this out default-disabled is reasonable.
Main reservation: enabling this can potentially create an expensive search, adding CPU cost at new-TX entry when the mempool is large.
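For a sense of where that cost comes from: the search quoted in the diff above recurses into every in-mempool input separately, so inputs that share ancestors can be revisited once per path. A hedged sketch of a memoized variant, with hypothetical types rather than the patch’s actual code:

```cpp
// Hypothetical illustration, not the patch's code: memoizing per-tx
// depth means each in-mempool ancestor is computed once, even when
// many inputs share the same ancestry.
#include <algorithm>
#include <unordered_map>
#include <vector>

struct Tx {
    std::vector<const Tx*> mempoolParents; // unconfirmed parents
};

unsigned int Depth(const Tx* tx, std::unordered_map<const Tx*, unsigned int>& memo)
{
    auto it = memo.find(tx);
    if (it != memo.end())
        return it->second;                 // already computed
    unsigned int d = 0;
    for (const Tx* p : tx->mempoolParents)
        d = std::max(d, Depth(p, memo) + 1);
    return memo[tx] = d;
}
```

With the memo table, each in-mempool ancestor is visited once, keeping the per-transaction cost linear in the size of its ancestor set.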
MarcoFalke commented at 8:45 am on August 7, 2015: member
NACK. Can someone explain to me what problem is being solved by this PR? The only one I can think of is “weak” nodes (not miners) trying to be less susceptible to DoS. But then the attacker just switches to multi-output txs instead of long tx chains…
Also, triggering this with sendtoaddress gives me conflicting transactions. Is this wanted?

jgarzik commented at 1:23 pm on September 16, 2015: contributor
A roll of the dice.
Best to put this in the category of “don’t unnecessarily restrict usage” because there may be some edge case utility we don’t see immediately.
Weak NAK based on that reasoning.
dcousens commented at 2:29 pm on September 16, 2015: contributor
edit: [from IRC] @sipa I think you referred to the wrong issue?
NACK.
The DDoS caused by an unlimited depth can easily be replicated by an unlimited horizontal scaling factor.
Consider the case of an attacker choosing to submit 1000 transactions as a form of spam. Before this pull request, he could simply submit the transactions one after another, each spending the previous one, and achieve the desired result. This chain would have a depth of 1000.
In the case of a -limitunconfdepth of 20, the same attacker would [instead] just submit 1 transaction splitting into ~52 outputs, each extended by a chain of ~19 transactions, one spending another, again achieving the desired result of ~1000 transactions while adhering to the limitunconfdepth of 20.
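Spelling out the arithmetic (an illustrative check, not part of the original comment): the fan-out transaction sits at depth 1, leaving ~19 further levels for each of the ~52 chains hanging off it, so the total is
\[ 1 + 52 \times 19 = 989 \approx 1000. \]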
gmaxwell commented at 2:41 pm on September 16, 2015: contributor
@dcousens With respect, horizontal scaling is not the same thing:
- Horizontal scaling is limited by access to spendable outputs. There is no limit to vertical depth.
- Sufficiently deep chains will (likely) never have their later fees paid (esp as few miners do CPFP and any sane CPFP will have a finite horizon)
- Algorithms which consider descendant or ancestor costs will usually have super-linear cost in depth.
These points might still support fairly deep chain tolerance, but I do not believe it is correct to say that it is equal to horizontal.
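To put a rough figure on the third point (an illustrative count, not from the thread): if accepting a transaction at unconfirmed depth k means touching all k of its in-mempool ancestors, then one chain of depth d costs on the order of
\[ \sum_{k=1}^{d} k = \frac{d(d+1)}{2} = O(d^2), \]
whereas d transactions fanned out horizontally from confirmed outputs cost only O(d).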
dcousens commented at 3:09 pm on September 16, 2015: contributor

Horizontal scaling is limited by access to spendable outputs. There is no limit to vertical depth.
Vertical depth is limited by spendable outputs too? I’m not sure I understand how this is different; every transaction input must spend a previous output (coinbases excepted)?
Sufficiently deep chains will (likely) never have their later fees paid (esp as few miners do CPFP and any sane CPFP will have a finite horizon)
Thanks to @sipa I’m aware of CPFP, but could you explain why deep chains would somehow avoid fees?
Algorithms which consider descendant or ancestor costs will usually have super-linear cost in depth.
Could you explain why?
These points might still support fairly deep chain tolerance, but I do not believe it is correct to say that it is equal to horizontal.
Understandably they have differences, but my point was that, for the purposes of a DDoS attack, they would pose a similar threat for almost the same cost/effort? However, my assertion may have some missing assumptions (such as above).
MarcoFalke commented at 7:28 am on September 22, 2015: member
As #6654 is merged, what is the status of this PR?

luke-jr commented at 5:06 pm on September 22, 2015: member
Probably makes sense to close this now.

luke-jr closed this on Sep 22, 2015
MarcoFalke locked this on Sep 8, 2021