Option -limitunconfdepth to reject transactions dependent on a long chain of unconfirmed ones #6403

pull luke-jr wants to merge 1 commit into bitcoin:master from luke-jr:limitunconfdepth changing 3 files +36 −0
  1. luke-jr commented at 3:54 am on July 9, 2015: member
    Not sure if there’s any long-term benefit to this policy. Thoughts?
  2. petertodd commented at 4:08 am on July 9, 2015: contributor

    ACK concept.

    Long unconfirmed chains are inefficient - use RBF to replace txs with ones with more outputs - and pose problems for future scalability improvements, as they reduce parallelism opportunities in block processing.

    I’d implement it differently though, as an additional depth value recorded for each tx in the mempool - nMaxDepth - where nMaxDepth = max(nMaxDepth of all parent txs). This is efficient to calculate, requiring only a single O(n) pass every time a block is found. There are lots of other use-cases for that kind of architecture too.
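    A minimal self-contained sketch of that cached-depth idea (DepthEntry, DepthOnEntry, TxId and the map-based pool are illustrative stand-ins, not Bitcoin Core's actual CTxMemPool types): at acceptance time a transaction's depth is one more than the deepest of its in-mempool parents, so no recursive search is needed.

    // Sketch only: stand-in types, not Bitcoin Core's mempool structures.
    #include <algorithm>
    #include <map>
    #include <string>
    #include <vector>

    using TxId = std::string;              // stand-in for a uint256 txid

    struct DepthEntry {
        std::vector<TxId> parents;         // txids of in-mempool parents
        unsigned int nMaxDepth = 0;        // longest unconfirmed ancestor chain ending here
    };

    // At acceptance time: depth = 1 + max(depth of in-mempool parents), O(#inputs).
    unsigned int DepthOnEntry(const std::vector<TxId>& parents,
                              const std::map<TxId, DepthEntry>& pool)
    {
        unsigned int nDepth = 0;
        for (const TxId& p : parents) {
            const auto it = pool.find(p);
            if (it != pool.end())
                nDepth = std::max(nDepth, it->second.nMaxDepth + 1);
        }
        return nDepth;
    }

    Since a transaction is only accepted after its parents, the parents' cached depths are already available here; the single O(n) pass mentioned above would re-derive the values after a block confirms some ancestors.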

  3. wizkid057 commented at 3:18 pm on July 9, 2015: none

    ACK concept and functionality.

    I’ve been testing this commit since last night and it seems to work as intended.

  4. in src/main.cpp: in fe520922c6 outdated
    744@@ -745,6 +745,32 @@ int64 GetMinFee(const CTransaction& tx, bool fAllowFree, enum GetMinFee_mode mod
    745 }
    746 
    747 
    748+static
    


    Diapolo commented at 3:26 pm on July 9, 2015:
    How often do I need to tell you to NOT do this in 2 lines ;)? @laanwj Also did this…

    luke-jr commented at 2:47 am on July 10, 2015:
    Sorry, it’s a good habit I have…
  5. in src/main.cpp: in fe520922c6 outdated
    744@@ -745,6 +745,32 @@ int64 GetMinFee(const CTransaction& tx, bool fAllowFree, enum GetMinFee_mode mod
    745 }
    746 
    747 
    748+static
    749+unsigned int TxSearchForLongChain(const CTransaction& tx, const CTxMemPool& pool, unsigned int nDepthLimit, unsigned int nDepth = 0)
    750+{
    751+    unsigned int nMaxDepth = nDepth;
    752+    for (unsigned int i = 0; i < tx.vin.size(); ++i)
    


    Diapolo commented at 3:26 pm on July 9, 2015:
    Nit: size_t?
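    For context, a self-contained sketch of the recursive search the snippet above begins (SimpleTx and the std::map pool are simplified stand-ins, not the PR's exact code): walk each input's in-mempool parent, keep the deepest chain found, and stop early once the limit is reached.

    #include <algorithm>
    #include <map>
    #include <string>
    #include <vector>

    struct SimpleTx {
        std::vector<std::string> vin;      // txids of the outputs this tx spends
    };

    static unsigned int SearchForLongChain(const SimpleTx& tx,
                                           const std::map<std::string, SimpleTx>& pool,
                                           unsigned int nDepthLimit,
                                           unsigned int nDepth = 0)
    {
        unsigned int nMaxDepth = nDepth;
        for (const std::string& prevTxid : tx.vin) {
            const auto it = pool.find(prevTxid);
            if (it == pool.end())
                continue;                  // parent already confirmed: adds no unconfirmed depth
            nMaxDepth = std::max(nMaxDepth,
                                 SearchForLongChain(it->second, pool, nDepthLimit, nDepth + 1));
            if (nMaxDepth >= nDepthLimit)
                break;                     // already at the limit, no need to search further
        }
        return nMaxDepth;
    }

    Without memoization such a search revisits shared ancestors, which is the CPU-cost concern jgarzik raises below.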
  6. in src/main.cpp: in fe520922c6 outdated
    823@@ -798,6 +824,14 @@ bool AcceptToMemoryPool(CTxMemPool& pool, CValidationState &state, const CTransa
    824             break;
    825         }
    826     }
    827+
    828+        const int nDepthLimit = GetArg("-limitunconfdepth", 0);
    829+        if (nDepthLimit > 0) {
    830+            const unsigned int nMaxDepth = TxSearchForLongChain(tx, pool, nDepthLimit);
    831+            if (nMaxDepth >= (unsigned int)nDepthLimit) {
    


    Diapolo commented at 3:28 pm on July 9, 2015:
    Couldn’t you directly save from GetArg as unsigned int or cast there?

    luke-jr commented at 2:44 am on July 10, 2015:
    Not until after the non-negative check two lines up…?
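    The pattern being discussed, sketched with an illustrative helper (ParseDepthLimit is not part of the PR): GetArg's integer overload returns a signed value, so the non-negative check has to happen before the value can safely be treated as unsigned.

    #include <cstdint>

    // 0 (the default) means "no limit"; negative values are treated the same way.
    static unsigned int ParseDepthLimit(int64_t nConfiguredValue)
    {
        if (nConfiguredValue <= 0)
            return 0;
        return static_cast<unsigned int>(nConfiguredValue);
    }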
  7. in src/main.cpp: in fe520922c6 outdated
    823@@ -798,6 +824,14 @@ bool AcceptToMemoryPool(CTxMemPool& pool, CValidationState &state, const CTransa
    824             break;
    825         }
    826     }
    827+
    828+        const int nDepthLimit = GetArg("-limitunconfdepth", 0);
    


    Diapolo commented at 3:28 pm on July 9, 2015:
    Is it intentional that this isn’t mentioned in the help message?
  8. btcdrak commented at 6:34 pm on July 9, 2015: contributor
    Concept ACK
  9. matthieu commented at 8:39 pm on July 9, 2015: none
    Just worried this patch will increase the incentive to have output spam (especially in light of the current attack): shred outputs on each block and spend them with more transactions until you hit the dust limit, then repeat.
  10. bencxr commented at 9:23 pm on July 9, 2015: contributor

    NACK, this will exacerbate the effect of full blocks / long confirmation wait times on legitimate users.

    It impacts scenarios where the wallet software must send out several transactions without knowing the recipients in advance, e.g. users with one unspent sending to multiple payees, exchange wallets that start with one/few unspents (e.g. after refilling a hot wallet). Full RBF to replace txs with more outputs is not a solution today.

    Also not sure the benefits are clear; spammers can just send to more outputs.

  11. luke-jr commented at 9:40 pm on July 9, 2015: member
    @bencxr It won’t do anything unless explicitly enabled. Finding the ideal limit to use is an exercise for relay/miner nodes.
  12. bencxr commented at 9:54 pm on July 9, 2015: contributor
    @luke-jr what limit would you recommend? 10-20? Many relay/miner node operators do not have the resources to determine the best limit or understand the implications.
  13. luke-jr force-pushed on Jul 10, 2015
  14. luke-jr force-pushed on Jul 10, 2015
  15. luke-jr commented at 2:54 am on July 10, 2015: member
    @bencxr Part of the reason it is defaulting to zero is that we do not know what limit to recommend yet. It will take some experimentation. Furthermore, determining proper policy is part of a miner’s job, so that excuse is not relevant.
  16. matthieu commented at 3:51 am on July 10, 2015: none

    @luke-jr there doesn’t seem to be a lot of consensus (heh) around what a miner’s job is. And telling people what their job is rarely helps in making them do it (believe me, I tried).

    Furthermore, if it’s a mining policy, why make it influence mempool composition instead of how blocks are built?

  17. luke-jr commented at 3:52 am on July 10, 2015: member
    @matthieu It’s also a relay policy. I’m just not pushing relay nodes to “do their job” - if they relay spam, it mostly just hurts themselves. :p
  18. matthieu commented at 4:12 am on July 10, 2015: none

    @lukejr There are use cases that involve long chains that aren’t spammy. One example is fee addresses, where a service pays the fees for its users. And if said service can use multiple addresses to avoid the chain, so can a spammer.

    Not questioning that the patch can make sense in a certain light, just whether this specific criterion is one that can help identify spam.


  19. Mirobit commented at 3:23 pm on July 10, 2015: contributor

    @luke-jr You know that most node operators and miners will never change a single setting. For node operators that is ok, since they do it for free and maybe don’t have the time/knowledge to follow all the recent developments. But even miners mostly rely on the default policy (e.g. the block size increase from 0.75 to 1 MB took quite a while and a lot of pushing).

    @petertodd If this setting is disabled by default, the miner who enables it is the one that needs more time for block processing: if he receives a block from a miner that hasn’t enabled it (or chose a high number for n), he will not have those txs in his mempool, so block verification takes longer.

    That’s why the default shouldn’t be 0; it would hurt miners with a custom policy. Why not set 1 as the default (that still works with CPFP)?

  20. Option -limitunconfdepth to reject transactions dependent on a long chain of unconfirmed ones f0a4b90242
  21. luke-jr force-pushed on Jul 11, 2015
  22. laanwj added the label TX fees and policy on Jul 17, 2015
  23. jgarzik commented at 7:15 pm on July 23, 2015: contributor

    concept ACK, and I agree rolling this out default-disabled is reasonable.

    Main reservation: Enabling this can potentially create an expensive search, adding CPU cost for a large mempool during new-TX-entry.

  24. MarcoFalke commented at 8:45 am on August 7, 2015: member

    NACK. Can someone explain to me what problem is being solved by this PR? The only one I could think of is “weak” nodes (not miners) trying to be less susceptible to DoS. But then the attacker just switches to multi-output txs instead of long tx chains …

    Also, triggering this with sendtoaddress gives me conflicting transactions. Is this wanted?

  25. jgarzik commented at 1:23 pm on September 16, 2015: contributor

    A roll of the dice.

    Best to put this in the category of “don’t unnecessarily restrict usage” because there may be some edge case utility we don’t see immediately.

    Weak NAK based on that reasoning.

  26. sipa commented at 1:25 pm on September 16, 2015: member
    #6554 has better reasoning for this, and should be far more efficient.
  27. dcousens commented at 2:29 pm on September 16, 2015: contributor

    edit: [from IRC] @sipa I think you referred to the wrong issue?

    NACK.

    The DDoS caused by an unlimited depth can easily be replicated by an unlimited horizontal scaling factor.

    Consider the case of an attacker choosing to submit 1000 transactions as a form of spam. Before this pull request, he could submit 1 transaction following another, and achieve the desired result. This would have a depth of 1000.

    In the case of a -limitunconfdepth of 20, the same attacker would [instead] just submit 1 transaction splitting into ~52 outputs, with ~19 subsequent chains of 1 transaction following another, again achieving the desired result of ~1000 transactions, while adhering to the -limitunconfdepth of 20.
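    (Working through those figures: with a depth limit of 20, the splitting transaction sits at depth 1 and each of the ~52 branches carries a chain of 19 descendants, so nothing exceeds depth 20 while the total is 1 + 52 × 19 = 989, i.e. roughly the same ~1000 transactions.)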

  28. sipa commented at 2:30 pm on September 16, 2015: member
    Sorry, I mean #6654 (thanks @dcousens).
  29. gmaxwell commented at 2:41 pm on September 16, 2015: contributor

    @dcousens With respect— horizontal scaling is not the same thing:

    • Horizontal scaling is limited by access to spendable outputs. There is no limit to vertical depth.
    • Sufficiently deep chains will (likely) never have their later fees paid (esp as few miners do CPFP and any sane CPFP will have a finite horizon)
    • Algorithms which consider descendant or ancestor costs will usually have super-linear cost in depth.

    These points might still support fairly deep chain tolerance, but I do not believe it is correct to say that it is equal to horizontal.

  30. dcousens commented at 3:09 pm on September 16, 2015: contributor

    Horizontal scaling is limited by access to spendable outputs. There is no limit to vertical depth.

    Vertical depth is limited by spendable outputs too? I’m not sure I understand how this is different; every transaction input must have a corresponding output (coinbases excepted)?

    Sufficiently deep chains will (likely) never have their later fees paid (esp as few miners do CPFP and any sane CPFP will have a finite horizon)

    Thanks to @sipa I’m aware of CPFP, but could you explain why deep chains would somehow avoid fees?

    Algorithms which consider descendant or ancestor costs will usually have super-linear cost in depth.

    Could you explain why?

    These points might still support fairly deep chain tolerance, but I do not believe it is correct to say that it is equal to horizontal.

    Understandably they have differences, but my point was that, for the purposes of a DDoS attack, they would pose a similar threat for almost the same cost/effort? However, my assertion may have some missing assumptions (such as above).

  31. sipa commented at 3:13 pm on September 16, 2015: member
    Read #6654 please. It introduces more useful limits, which are inspired by actual performance problems encountered in the mempool limiting work.
  32. petertodd commented at 8:39 pm on September 16, 2015: contributor
    NACK in favor of #6654. (but thanks for getting the ball rolling on this!)
  33. MarcoFalke commented at 7:28 am on September 22, 2015: member
    As #6654 is merged, what is the status of this PR?
  34. luke-jr commented at 5:06 pm on September 22, 2015: member
    Probably makes sense to close this now.
  35. luke-jr closed this on Sep 22, 2015

  36. MarcoFalke locked this on Sep 8, 2021
