Policy: Lower default limits for tx chains #6771

pull morcos wants to merge 1 commit into bitcoin:master from morcos:lowerLimits changing 2 files +15 −12
  1. morcos commented at 4:18 pm on October 6, 2015: member

    Reduce the default limits on maximum number of transactions and the cumulative size of those transactions in both ancestor and descendant packages to 25 txs and 100kb total size.

    Please see email to bitcoin-dev for background.

    I looked at some recent data and 2.6% of transactions in my node’s mempool in the first 5 days of October would have been restricted by these new lower limits.
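
    For concreteness, a minimal sketch of the check these defaults imply (the struct and function names are illustrative, not the actual AcceptToMemoryPool code): a transaction is accepted only if its in-mempool ancestor package stays within the count and cumulative-size limits, and no ancestor’s descendant package would exceed its limits either.

        // Simplified sketch of the proposed package-limit check (illustrative names,
        // not the actual Bitcoin Core code). Values match this proposal: 25 txs / 100 kB.
        struct PackageStats {
            unsigned int count;      // transactions in the package, including the new tx
            unsigned int sizeBytes;  // cumulative serialized size of the package
        };

        constexpr unsigned int ANCESTOR_LIMIT = 25;                // txs
        constexpr unsigned int ANCESTOR_SIZE_LIMIT = 100 * 1000;   // bytes
        constexpr unsigned int DESCENDANT_LIMIT = 25;              // txs
        constexpr unsigned int DESCENDANT_SIZE_LIMIT = 100 * 1000; // bytes

        bool WithinPackageLimits(const PackageStats& ancestors, const PackageStats& descendants) {
            return ancestors.count       <= ANCESTOR_LIMIT &&
                   ancestors.sizeBytes   <= ANCESTOR_SIZE_LIMIT &&
                   descendants.count     <= DESCENDANT_LIMIT &&
                   descendants.sizeBytes <= DESCENDANT_SIZE_LIMIT;
        }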

  2. laanwj added the label TX fees and policy on Oct 6, 2015
  3. gmaxwell commented at 8:57 pm on October 6, 2015: contributor
    Concept ACK.
  4. Mirobit commented at 9:57 pm on October 6, 2015: contributor
    Are there any use cases, other than excessive gambling and spam, that require more than 3 unconfirmed ancestors? This seems like a good opportunity to fight spam without downsides for regular users.
  5. ghost commented at 1:10 am on October 7, 2015: none
    So if you’re only saving 2.6%, why bother? Why not increase the max block size to 1.026Mb instead? @Mirobit Who are we to decide how bitcoin is used? What do you want to ‘fight’ next - porn? Donations to unsavoury nations? People who we don’t agree with?
  6. morcos commented at 1:36 am on October 7, 2015: member
    @NanoAkron, to be clear, the goal here is not to restrict transactions. I wish the 2.6% number was 0.0%. Nor is the goal to prevent any specific type or kind of transaction, spam or otherwise. The goal is only to prevent particularly constructed chains of transactions which would serve to break critical functionality of the network. Such chains are likely only constructed for the particular purpose of attacking the entire Bitcoin system; however, the limits proposed to prevent these attacks may inadvertently restrict non-attack transactions. I’m just trying to be as transparent as possible by acknowledging that, had these limits already been in place, 2.6% of recent transactions would either have had to be delayed until some of their ancestors were included in blocks or constructed from different utxos.
  7. in src/main.h: in 6362ed4c48 outdated
    43@@ -44,13 +44,13 @@ static const bool DEFAULT_ALERTS = true;
    44 /** Default for -maxorphantx, maximum number of orphan transactions kept in memory */
    45 static const unsigned int DEFAULT_MAX_ORPHAN_TRANSACTIONS = 100;
    46 /** Default for -limitancestorcount, max number of in-mempool ancestors */
    47-static const unsigned int DEFAULT_ANCESTOR_LIMIT = 100;
    48+static const unsigned int DEFAULT_ANCESTOR_LIMIT = 25;
    49 /** Default for -limitancestorsize, maximum kilobytes of tx + all in-mempool ancestors */
    50-static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 900;
    51+static const unsigned int DEFAULT_ANCESTOR_SIZE_LIMIT = 100;
    


    petertodd commented at 1:01 pm on October 7, 2015:
    Note that this means a max-size tx (100KB) can’t be respent while unconfirmed. This could cause problems, e.g. with CPFP.

    morcos commented at 6:42 pm on October 7, 2015:

    @petertodd I was assuming that if people cared about that they could limit their transactions to 99.5KB and still chain a child or two off of it to pay extra fees. I suppose alternatively we could raise the default to 101?

    The size limits would only have restricted approx 0.01% of transactions that passed the count limits over the last several months.


    TheBlueMatt commented at 0:32 am on October 8, 2015:
    I would agree with @petertodd here; we should set the size limit to something like 101 or 105.
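
    (For the numbers behind this: the standardness limit on a single transaction is 100,000 bytes, so under a 100 kB package limit a max-size parent cannot have any unconfirmed child at all, while 101 kB leaves roughly 1 kB of headroom. A minimal sketch, assuming a made-up 300-byte fee-bumping child:)

        // Rough arithmetic behind the 100 kB vs. 101 kB discussion (a sketch; the
        // 300-byte child is hypothetical, 100,000 bytes is the single-tx standardness limit).
        constexpr unsigned int MAX_STANDARD_TX_SIZE = 100000; // bytes
        constexpr unsigned int CHILD_SIZE = 300;              // small CPFP child (illustrative)

        static_assert(MAX_STANDARD_TX_SIZE + CHILD_SIZE > 100 * 1000,
                      "a max-size parent plus any child breaches a 100 kB ancestor package limit");
        static_assert(MAX_STANDARD_TX_SIZE + CHILD_SIZE <= 101 * 1000,
                      "a 101 kB limit leaves ~1 kB of headroom for a fee-bumping child");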
  8. jgarzik commented at 11:30 pm on October 7, 2015: contributor

    Example use case - bitcoin business(es) that depend on tx chains: https://twitter.com/taariqlewis/status/651160362629230592

    (this is notably not an endorsement of that activity, as you’ll gather from reading responses)

  9. matthieu commented at 2:56 am on October 8, 2015: none

    The largest number of unconfirmed transactions on a single address we’ve seen at BlockCypher was 164. That particular customer had a burst of activity from its end-users during a period of full blocks and long block gaps. Those 164 transactions were unevenly distributed on 3 UTXOs, leading to a ~100-transaction chain. While I can’t name them, a fair amount of people here have used this service, so definitely a legitimate use case.

    Based on this I’d keep the ancestor package at current limits and reduce the descendant package only by a factor of ~10x instead of ~40x.

    At a higher level, while I believe this is a useful metric to have in Core, I’d avoid clamping down too hard on it. I’m worried it’s an over-optimization without considering pool side as a whole.

  10. TheBlueMatt commented at 3:07 am on October 8, 2015: member

    You stated that you saw a long transaction chain in production, but no reason was given as to why your customer had to chain transactions together instead of using other utxos. Are you suggesting that this customer did not otherwise have funds available to pay out when they needed to and, thus, had to chain their own transactions?

    Calling this a “useful metric” is a bit of an understatement. These limits, together, directly govern the complexity of mempool limiting algorithms and, to a slightly lesser extent, their effectiveness.

  11. matthieu commented at 4:10 am on October 8, 2015: none
    Ideally they’d have shredded their UTXOs a little better but didn’t. Funds were available, but people don’t anticipate worst-case situations well. They could have been more diligent, but on the other hand, the more limits people can hit on transacting, the more complex developing around these limits becomes.
  12. TheBlueMatt commented at 5:46 am on October 8, 2015: member

    Hmm, so if they had available UTXOs, were they spending UTXOs which they received from others or just making their own chains? The Bitcoin Core tx selection of “first pick txn which are already confirmed” is generally a good idea for hygiene. If they were spending UTXOs others had sent them, then it’s something we should actively discourage anyway.

  13. taariq commented at 6:08 am on October 8, 2015: none

    We, at Serica and DigitalTangible, actively use unspent tx chains to let our customers speed up their bitcoin user experience without the need for them to wait on blockchain confirmations. These transactions are usually sequential and must happen between our customers and our marketplace of merchants and other customers. For example, a user agrees to place an order to purchase bitcoin and then make a bitcoin payment for a product or service with that bitcoin, should their desired price be met while they are unable to actively monitor their transaction.

    We currently do not have a need to chain more than 5 unspents given our current use cases for onboarding new customers into the bitcoin ecosystem. Given this PR, we agree with its principle, since the proposal aims to limit to MAX_ANCESTORS = 25 and MAX_DESCENDANTS = 25, which we think is reasonable. We have not yet seen a use case for chains of more than 25 unconfirmed transactions in our ecosystem.

    However, we would like to publish our viewpoint that we wish to avoid a slippery slope of restrictions on unspents falling from 25 to 2 or even 0. The limits imposed should not prevent, at minimum, 5-step chains of transactions that are needed to give a customer temporary control over funds that they would otherwise not have access to unless they waited for a confirmation before conducting another transaction. In these situations, where an instant purchase must be made with customer control, that btc must be sent to a customer’s address and then quickly relayed to a merchant or another party in a transaction to create a seamless experience. All of this must happen within 0 confs because our customers will not wait for a whole confirmation and we do not wish to lose the opportunity to make Bitcoin more available and useful to a broader audience with higher performance demands.

    Zero confirmations, when done properly, help increase adoption of Bitcoin and make it more competitive against other forms of payments. However, we do think it’s good to prevent abuse of the system with reasonable constraints for the current ecosystem of applications and wallets.

  14. matthieu commented at 6:14 am on October 8, 2015: none
    @TheBlueMatt their own chain. And confirmed-first was used; there was just a faster stream of incoming transactions than confirmations, with not enough UTXOs in that period of time.
  15. TheBlueMatt commented at 6:27 am on October 8, 2015: member
    @taariq yea, I wouldn’t be worried about that. No one wants to set a package size limit; it’s only a requirement for the mempool limiting and tracking algorithms to have reasonable upper limits on runtime. Still, I think 5 is more than anyone else had seen a use-case for, so thanks for the feedback :)
  16. TheBlueMatt commented at 6:38 am on October 8, 2015: member
    @matthieu Ahh, ok, thanks for the clarification. That is quite…annoying. I’m not sure I’d trust the code with much higher limits yet, but maybe it warrants more study instead of just a belt-and-suspenders limit, then.
  17. pstratem commented at 6:39 am on October 8, 2015: contributor
    @matthieu it sounds like they could have been using sendmany
  18. matthieu commented at 6:45 am on October 8, 2015: none
    @pstratem there are several things they could have done better. How much do you want people to have to know to be able to transact without bad surprises, though? Making it more complex is good for services like ours that swallow the complexity, but not for the ecosystem in general. Odd that I should be making that point.
  19. pstratem commented at 6:47 am on October 8, 2015: contributor
    @matthieu eh? the sendmany rpc call is right there…
  20. TheBlueMatt commented at 6:51 am on October 8, 2015: member

    I think the point was they’re not using Bitcoin Core.

  21. matthieu commented at 6:57 am on October 8, 2015: none
    The point is also that they had no prior knowledge they would either need to batch their transactions or ever hit a transaction peak that high. Send-as-it-comes is what everyone starts with; batching can get tricky fast.
  22. morcos commented at 10:24 am on October 8, 2015: member

    It would be helpful to know whether there are other use cases like this that would be hampered. I think if it’s only once in a blue moon that something which used to work would now need some more careful transaction preparation, then maybe this change is worth it on balance.

    To Matt’s point on the reasoning behind this change: it’s also an understatement to say it’s just needed to govern the complexity and effectiveness of mempool limiting. There are attacks which can cause the mempools of full nodes to become almost permanently unusable, which would prevent transaction relay and basically bring the network to a halt. How low we have to make these limits to prevent that may not be an exact science. The good thing is that if we do decide to start with a higher limit than 25, and these attacks are seen in the wild, it’s only a command line argument to change it. But as we’ve seen with the minimum relay fee and the stress tests, it’s still an issue if people have to bring down their nodes and restart them with different parameters to respond to the attack.

  23. MarcoFalke commented at 11:40 am on October 8, 2015: member

    NACK based on my reply below.

    @pstratem they could have been using sendmany

    You can’t bundle customers’ requests for transactions which have to be sent out immediately.

    @morcos But as we’ve seen with the minimum relay fee and the stress tests, its still an issue if people have to bring down their nodes and restart them with different parameters to respond to the attack.

    If you don’t have the resources to run a node with the default settings, you can still use the -limit*count params to adjust them to your available resources. Also, this PR doesn’t touch the release notes to educate users about the change and possible issues (and how to solve them). This is yet another instance of the too-broad and misleading “-4 error message” [1]. Abusing the mempool to reject transactions created by the wallet is the wrong way to go. If there is an issue with how the wallet creates transactions, this should be tackled instead. However, the wallet already prefers confirmed inputs. If there are no confirmed inputs but only unconfirmed change, there is nothing wrong with the transaction and the mempool should not intervene.

    [1]: Error: The transaction was rejected! This might happen if some of the coins in your wallet were already spent, such as if you used a copy of wallet.dat and coins were spent in the copy but not marked as spent here. (code -4)
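
    (For reference, the limits discussed here are per-node policy and adjustable at startup; a node operator who needs larger packages could raise them in bitcoin.conf. The values below are purely illustrative, not recommendations:)

        # bitcoin.conf (illustrative values only)
        limitancestorcount=50       # max number of in-mempool ancestors allowed per tx
        limitancestorsize=202       # max kB of a tx plus all of its in-mempool ancestors
        limitdescendantcount=100    # max number of in-mempool descendants allowed per ancestor
        limitdescendantsize=202     # max kB of an ancestor plus all of its in-mempool descendants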

  24. morcos commented at 12:28 pm on October 8, 2015: member

    @MarcoFalke Just to reemphasize this point. This is not only about resources. Very powerful nodes can still be attacked if they have a mempool with some size limit.

    I do agree that better wallet UI is needed, but that problem is not introduced by this pull.

    If we don’t make the default small enough to be safe, then most people will have to configure their own policy. I suspect that will lead to a great many nodes limiting chains to much smaller counts.

  25. matthieu commented at 4:12 pm on October 8, 2015: none
    @morcos Given this isn’t a consensus change, any issue with lowering it gradually (over a few releases) rather than cranking it down all of a sudden? Try with 50 ancestors and 100 descendants for a bit and then go lower if no real difficulty has been found?
  26. dthorpe commented at 5:58 pm on October 8, 2015: none

    +1 to Taariq’s use case.

    I also have a transaction system implemented which involves multi-step transactions that spend from UTXOs that are known to us but may not yet be confirmed on the blockchain. Completely independent of taariq’s use case, but similar in that chains of txs are created and submitted to the mempool together without waiting for sequential block confirmation of each tx in the chain.

    Due to signature constraints, our multiple steps cannot be coalesced into a single transaction for publication. (Coalescing could be possible with improvements to sigHash capabilities) Therefore, we end up with multiple transactions in a chain which are all submitted to the peer network together.

    In our current implementation, these chains can be 3 to 4 levels deep per “user transaction”. However, if the blockchain is taking 20 minutes to produce the next block, a user initiating multiple “user transactions” (buy x, then buy y, then buy z) in that 20 minutes could build on that unspent chain. The additional operations are not strictly linear - subsequent operations would spend the change outputs of the previous operation’s root transaction, forming a tree of chained txs. A set of 3 operations would form a tree with a max depth of 7 and a total node count of 12.

    The “sendmany” RPC function is of no use to me because it only works within the limits of the reference implementation wallet - single signature outputs. Our system is built on multisignature wallets. We run off-the-shelf bitcoind full nodes, but we never use the wallet or address functions.

  27. petertodd commented at 6:07 pm on October 8, 2015: contributor
    @dthorpe Can you explain more what that application actually is? Are you sure you’re not vulnerable to malleability?
  28. petertodd commented at 6:08 pm on October 8, 2015: contributor
    Note that for the case where you can’t use ‘sendmany’ because you need to send funds immediately, you could instead use replace-by-fee to rewrite the txout set, splitting the change txout into the new payee and change. This style of payment uses significantly less blockchain space than chained transactions.
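
    (A rough illustration of that pattern, with made-up names and amounts: rather than spending the unconfirmed change in a child transaction, the unconfirmed payment is replaced by one with the same inputs and a higher fee, splitting the change between the new payee and the remaining change.)

        // Illustrative only: outputs of an unconfirmed payment before and after an
        // RBF replacement that adds a second recipient by splitting the change.
        struct TxOut { const char* payee; long long amountSat; };

        // Original tx: spends a 1,000,000-sat input, pays alice, returns change (fee 1,000).
        TxOut original[] = { {"alice", 100000}, {"change", 899000} };

        // Replacement tx: same input, pays alice and bob, smaller change, higher fee (4,000).
        // One replacement uses less block space than the original plus a chained child.
        TxOut replacement[] = { {"alice", 100000}, {"bob", 50000}, {"change", 846000} };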
  29. dthorpe commented at 6:35 pm on October 8, 2015: none
    @petertodd I’d be happy to discuss in more detail offline. first.last@gmail.com
  30. morcos commented at 6:36 pm on October 8, 2015: member
    @matthieu I agree it would be good to give people time to adjust their applications to a lower limit, but that will happen naturally as more and more of the network upgrades to 0.12. But yes, if there is too much resistance to 25/25, we might have to start with higher limits. I think node performance should be reasonable with 50/100, but I worry about attacks. Maybe there is some middle ground where some nodes run with higher limits and some lower. Highly chained txs would only be relayed through the nodes with higher limits, but in the event of an attack the lower-limit nodes would keep relay open for the vast majority of txs.
  31. petertodd commented at 6:37 pm on October 8, 2015: contributor
    @dthorpe I can’t take on any more consulting for the next few weeks, but you might find this link post about replace-by-fee use-cases useful: http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg07813.html
  32. petertodd commented at 6:39 pm on October 8, 2015: contributor
    @dthorpe I’ve got an opt-in full-RBF patch ready BTW, and will propose it as a pull-req once this mempool stuff settles down. (hopefully in time for v0.12!) Code is reasonably well tested, as f2pool uses (most of) it.
  33. dthorpe commented at 7:08 pm on October 8, 2015: none

    @petertodd I’ve been following the CPFP and RBF discussions with interest. Not so much to entirely replace the tx chains we have, but to reduce the number of outputs necessary to pay network fees across the chain, and improve the all-or-none granularity of the tx set for inclusion in a block.

    I gave a slightly more specific example of my use case on the bitcoin-dev list a few weeks ago: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010569.html Basically, a brokered escrow transaction involving 3 parties where the seller and the buyer never communicate directly.

    Yes, I am aware of and concerned about transaction malleability. I’m counting on BIP62 to significantly reduce that risk. I believe the txs we generate are already BIP62 compliant. We also compute a signature-free hash of every transaction as an alternate txid so that we can at least recognize our txs in the blockchain even if their scripts have been tweaked in transit. Any issued txs downstream from that altered tx would have to be redone, of course, which is a pain in the neck and extremely inconvenient to all involved.

  34. pstratem commented at 7:24 pm on October 8, 2015: contributor

    This limit should be much lower than 25.

    Indeed the only reason to relay chains of unconfirmed transactions is CPFP.

  35. dexX7 commented at 8:53 pm on October 8, 2015: contributor

    Indeed the only reason to relay chains of unconfirmed transactions is CPFP.

    I assume this statement is based on the assumption that there are unspent confirmed outputs available, in case there is need to send many transactions in a short timeframe? If so, it seems like a trade off, and it may result in situations where a lot of unspent outputs are prepared, just to have some available.

    I can imagine that the reduction of the package/chain size might be useful, but I’m wondering whether there is any indication that it really solves, or would have mitigated, the previous/ongoing/potential mempool bloat [in the future]?

  36. dthorpe commented at 9:47 pm on October 8, 2015: none

    Indeed the only reason to relay chains of unconfirmed transactions is CPFP.

    Certainly, CPFP would provide great benefits for asynchronous brokered exchange scenarios (A => B => C as a pseudo “atomic” set of transactions), but until CPFP is actually deployed and usable (and/or sigHash allows merging of independent fully signed txs), we’re stuck with more primitive techniques such as zero-conf spending chains.

    And even if/when CPFP is available for general use, why would that indicate that the tx chain limit should be much lower than 25? If anything, availability of CPFP would likely encourage use of such pseudo atomic transaction sets.

    Frankly, I feel support for some form of merging signed transactions into an atomic blob would be a better solution for my use case than CPFP, providing true atomicity by coalescing all actors into one bitcoin transaction. Until something along those lines can be sorted out, we work with what is available (and complain loudly at every opportunity! :> )

  37. morcos force-pushed on Oct 14, 2015
  38. morcos commented at 6:16 pm on October 14, 2015: member

    I’ve updated the size limits to 101kB.

    I understand there is still some objection to 25 as a count limit, but I’d like to err on the side of what will make the network usable for the vast majority of participants. Even 25 is a compromise in this sense, because until we have better mechanisms for dealing with large chains of transactions, abuse is still possible within the 25 limit.

  39. sdaftuar commented at 6:33 pm on October 14, 2015: member
    ACK. It doesn’t yet seem like there are any common use cases anyone has brought up that would be hindered by these limits. Agree with @MarcoFalke about updating the release notes but I don’t think that should hold up this pull (there’s much more to explain about mempool changes than just this particular set of limits).
  40. TheBlueMatt commented at 9:27 pm on October 14, 2015: member

    ACK. I don’t like doing this because it does make some things more annoying for developers, but, honestly, as long as they retransmit transactions as their chains are confirmed, 25 txn/block isn’t so bad.

  41. petertodd commented at 4:26 pm on October 23, 2015: contributor

    utACK

    #6871 gives high-volume applications a good way to avoid long chains.

  42. dcousens commented at 12:02 pm on October 26, 2015: contributor
    concept ACK
  43. MarcoFalke commented at 8:42 pm on October 26, 2015: member

    Needs rebase due to #6722.

    “for the particular purpose of attacking the entire Bitcoin system”

    Is this attack still possible after #6722 got merged?

  44. TheBlueMatt commented at 0:29 am on October 27, 2015: member
    @MarcoFalke this pull was primarily done to make #6722 have stricter CPU-usage bounds to prevent DoS, so, yes, this is still something we want after #6722.
  45. sipa commented at 1:55 am on October 28, 2015: member
    Needs rebase.
  46. Lower default policy limits
    Reduce the default limits on maximum number of transactions and the cumulative size of those transactions in both ancestor and descendant packages to 25 txs and 101kb total size.
    971a4e6b86
  47. morcos force-pushed on Oct 28, 2015
  48. morcos commented at 4:02 am on October 28, 2015: member
    sorry, rebased
  49. sdaftuar commented at 8:01 pm on November 5, 2015: member

    I’ve generated a lot of historical data on blocks to look at the length of chains we see within blocks. For each block I looked at, I calculated for every transaction the number of in-block ancestors and in-block descendants, along with size of those ancestors and size of those descendants. Detailed data is available here:

    https://www.dropbox.com/s/kkh0g4kl9qwt1cx/alldata2.gz?dl=0

    Top line of the file is a header identifying each column. (The blocks are not in any strict ordering; it’s based on the order they’re written to my blk*.dat files in the data dir on the machine I ran on, and this includes all blocks, including ones that may have been reorged out.)
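
    (For the curious, a minimal sketch of how such per-block statistics could be computed; this is illustrative, not the actual script used. For each transaction it collects the set of in-block ancestors by walking inputs that spend other transactions in the same block; the cumulative-size columns, and the descendant columns by inverting the relation, would be accumulated the same way.)

        // Illustrative sketch: in-block ancestor sets for each transaction in a block.
        // Transactions in a valid block are topologically ordered, so a single pass works.
        #include <map>
        #include <set>
        #include <string>
        #include <vector>

        struct Tx {
            std::string txid;
            std::vector<std::string> spends; // txids referenced by this tx's inputs
            unsigned int sizeBytes;          // serialized size (for the size columns)
        };

        std::map<std::string, std::set<std::string>> InBlockAncestors(const std::vector<Tx>& block) {
            std::set<std::string> inBlock;
            for (const Tx& tx : block) inBlock.insert(tx.txid);

            std::map<std::string, std::set<std::string>> ancestors;
            for (const Tx& tx : block) {
                std::set<std::string>& anc = ancestors[tx.txid];
                for (const std::string& parent : tx.spends) {
                    if (!inBlock.count(parent)) continue;  // parent confirmed in an earlier block
                    anc.insert(parent);
                    const std::set<std::string>& parentAnc = ancestors[parent];
                    anc.insert(parentAnc.begin(), parentAnc.end());
                }
            }
            return ancestors;  // ancestors[txid].size() is the in-block ancestor count
        }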

  50. gmaxwell added this to the milestone 0.12.0 on Nov 5, 2015
  51. ABISprotocol commented at 5:49 am on November 6, 2015: none
    NACK. Please see the running commentary in this thread for details as to why, for example the thoughts contributed by @matthieu and @taariq. I’d like to see, in particular, BlockCypher’s concerns addressed (having this PR changed accordingly) or this PR closed (not merged).
  52. MarcoFalke commented at 7:24 am on November 6, 2015: member

    stricter CPU-usage bounds

    Is there an estimation that gives the CPU-usage as a function of the limits? Is it non-linear?

  53. jgarzik referenced this in commit 38ed190eef on Nov 12, 2015
  54. laanwj commented at 9:38 pm on November 12, 2015: member
    utACK
  55. jgarzik merged this on Nov 12, 2015
  56. jgarzik closed this on Nov 12, 2015

  57. gmaxwell commented at 11:09 pm on November 12, 2015: contributor
    @dthorpe Didn’t see your above comment until now: “I’m counting on BIP62 to significantly reduce that risk.” BIP62 cannot remove malleability from multisignature transactions, and I believe it is unlikely that BIP62 will ever be deployed on the Bitcoin network due to its invasiveness and necessary incompleteness. (Well, pedantically, BIP62 as is will not be deployed since it’s outdated; a revised version could be, but I’m doubtful of it due to the limitations in the approach.) Though I’m hopeful about more complete things being done in the future, they wouldn’t be BIP62. I bring it up not to invite a debate over it, but just to level-set: you shouldn’t be expecting anything to just happen there.
  58. dthorpe commented at 5:03 pm on November 16, 2015: none
    @gmaxwell understood, thanks.
  59. nopara73 commented at 6:53 pm on December 7, 2018: none

    Since you were struggling to find legitimate use cases, I’d like to contribute to the conversation: in Wasabi Wallet’s coinjoin chains I’m starting to hit these limits in various ways.

    Background

    Before I discuss the limits individually I’d like to provide some background:

    Wasabi launched 3 months ago and the coinjoin volume has been growing exponentially since then. An average Wasabi CJ today is 4 kB vsize. There are 37 participants in each round and 10-20-ish rounds happen daily. Usually peers only submit 1-2 inputs; I allow a maximum of 7 inputs. I use fee target 144, since this seems to result in the fewest “too high fees and too slow confirmations” complaints.
    I only allow unconfirmed coinjoin outputs to be registered for a round. This is a sanity check. There are way too many things to consider about which unconfirmed transactions I should let in if I wanted to extend this to various unconfirmed utxos. While I plan to allow the registration of unconfirmed utxos, it’s unlikely I’ll do that in the near future. Anyway, whenever a limit here is hit, I refuse the registration of the unconfirmed transaction.

    Individual Limit Discussions

    • DEFAULT_ANCESTOR_LIMIT = 25; I have not hit this limit yet. While I’ll hit it more and more often as rounds get faster, I don’t mind this much. I need a sanity check on how large an unconfirmed CJ chain I want to build at max anyway.
    • DEFAULT_ANCESTOR_SIZE_LIMIT = 101; I have not hit this limit yet. As more and more peers get to participate, it’ll be hit faster and faster. For example, a 100-participant transaction would cost 15-20 kB. This means I can have a chain of only about 5 transactions at most. And this assumes that users will only register 1-2 inputs.
    • DEFAULT_DESCENDANT_LIMIT = 25 and DEFAULT_DESCENDANT_SIZE_LIMIT = 101. These are the real pain points for me, since I have no control over them. I know at registration that no user can create more ancestors, but I cannot control whether they create descendants in later phases, which can cause the final transaction to fail. DEFAULT_DESCENDANT_LIMIT is being hit already.

    Question

    Also, it is not clear to me whether the descendant limits apply to the sum of all descendants of all inputs’ prevouts or just the individual descendants of each input’s prevout. The error message “too many descendants for tx …” seems to imply the latter. Which one is it?

  60. furszy referenced this in commit eb00d0f62f on Jun 14, 2020
  61. DrahtBot locked this on Sep 8, 2021
