Increase DEFAULT_ANCESTOR_LIMIT and DEFAULT_DESCENDANT_LIMIT to 100 #11152

pull RHavar wants to merge 1 commit into bitcoin:master from RHavar:limits changing 2 files +9 −9
  1. RHavar commented at 1:41 AM on August 26, 2017: contributor

    I have intentionally avoided touching DEFAULT_ANCESTOR_SIZE_LIMIT to err on the side of caution, but this is something I'd like to see lifted again.

    Due to the ridiculous transaction fees required these days, it's not really economically sane to rely on Bitcoin Core's wallet for processing deposits and withdrawals. So I have been helping a company come up with a pretty low-tech solution with a simple structure:

    • Deposits go into wallet A
    • They are periodically swept to wallet B, with an extremely low fee
    • Withdrawals are processed from wallet C. When extra funds are needed, they are immediately sent from wallet B.

    However, with wallet C continually sending payments, this structure results in very long transaction chains. Even when being responsible and batching into 1-minute sends, it'll very often hit the current limits, which is fun for no one.

    For a background of why the limit was previously lowered from 100 to 25:

    https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-October/011401.html

  2. Increase DEFAULT_ANCESTOR_LIMIT and DEFAULT_DESCENDANT_LIMIT to 100 0bf07635ba
  3. gmaxwell commented at 1:48 AM on August 26, 2017: contributor

    @RHavar you've said nothing to address the original reason for lowering them other than that it would be useful for your particular pattern of usage. A few weeks ago you were asking that the allowed unconfirmed chaining depth be made 0. :(

  4. luke-jr commented at 2:13 AM on August 26, 2017: member

    If you're making long chains of unconfirmed transactions, you're using it wrong. Just use sendmany with all your desired outputs, at most once per block. If you find out your fee was too low, abandontransaction and recreate it (including any new outputs you might wish to send to now as well) - be sure you use the raw transaction API to add to the original transaction, to ensure the same inputs get used in the new one.
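The replace-and-extend workflow described above can be sketched in a few lines. This is illustrative only: `build_replacement` and its data shapes are hypothetical helpers, not Bitcoin Core APIs, though the resulting inputs/outputs would feed the real `createrawtransaction` / `signrawtransaction` / `sendrawtransaction` RPCs.

```python
# Hypothetical sketch of the suggested workflow: after abandoning a stuck
# transaction, rebuild a replacement that reuses the exact same inputs
# (so the old and new versions can never both confirm) while folding in
# any new payments and paying the fee bump out of the change output.

def build_replacement(original_inputs, original_outputs, new_outputs,
                      extra_fee_btc, change_addr):
    """Merge old and new outputs, deducting the extra fee from change.

    original_inputs:  list of {"txid": ..., "vout": ...} from the stuck tx
    original_outputs: dict of {address: amount_btc} from the stuck tx
    new_outputs:      dict of {address: amount_btc} to add now
    """
    outputs = dict(original_outputs)
    for addr, amount in new_outputs.items():
        outputs[addr] = outputs.get(addr, 0) + amount
    # A real implementation must also handle the change output dropping
    # below the dust threshold, or the inputs no longer covering the fee.
    if outputs.get(change_addr, 0) < extra_fee_btc:
        raise ValueError("change too small to absorb the fee bump")
    outputs[change_addr] = round(outputs[change_addr] - extra_fee_btc, 8)
    # Same inputs, updated outputs: ready for createrawtransaction.
    return original_inputs, outputs
```

Reusing the original inputs is the key safety property here: it guarantees the abandoned transaction and its replacement conflict, so at most one can ever confirm.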

  5. RHavar commented at 2:13 AM on August 26, 2017: contributor

    @RHavar you've said nothing to address the original reason for lowering them other than it would be useful for your particular pattern of usage.

    I'm not sure the original motivation is nearly as relevant as it was two years ago. The mempool being full and/or spammed is not nearly as problematic thanks to a lot of improvements, not to mention significantly more expensive to do. And for safety I've kept DEFAULT_ANCESTOR_SIZE_LIMIT unchanged. Two years ago there was also not nearly as much need to chain transactions as there is now. I'm providing a legitimate use case for transaction chaining, which is what was asked for in the original mailing list post.

    FWIW: This isn't even for me. I'm just trying to help a company send transactions in an affordable way, without going down the path of spending thousands of dollars on writing custom coin selection that avoids this limit and can manage large unspent sets properly (like I've had to do). But I strongly feel bitcoin should be trying to minimize the harmful effects of centralization that the high fees have caused. Allowing someone to fire a hundred transactions in a row without having to write extremely complex logic is a marked improvement.

    A few weeks ago you were asking that the allowed unconfirmed chaining depth be made 0. :(

    Was that really necessary? It's a clever one-liner to try to undermine my credibility -- but it's a rather blatant and unfair mischaracterization. I suggested that only in the very specific case of bip125 transactions, for the express purpose of stopping the receiver from making it infeasible for you to fee bump. And as I recall, you were against such limits because of their paternalistic nature.

  6. luke-jr commented at 2:21 AM on August 26, 2017: member

    @RHavar It sounds like your efforts might be better spent improving bumpfee to enable adding additional outputs.

  7. RHavar commented at 2:22 AM on August 26, 2017: contributor

    If you're making long chains of unconfirmed transactions, you're using it wrong. Just use sendmany with all your desired outputs, at most once per block.

    There are legitimate business reasons to want to make sends more than once per block.

    If you find out your fee was too low, abandontransaction and recreate it (including any new outputs you might wish to send to now as well) - be sure you use the raw transaction API to add to the original transaction, to ensure the same inputs get used in the new one.

    Ignoring the fact that you can't abandontransaction on a transaction that's still in the mempool, this style is possible using RBF, but it suffers from a ridiculous amount of complexity:

    • You need to handle the possible case that your inputs aren't enough
    • You need to handle the case of your change output dropping too low
    • You need to handle the fee logic (including the increasing requirements of bip125)
    • The total fees for a batch become O(N^2) for N replacements of increasing size, as you need to continually pay for the relay costs of the replacements
    • And the most complex: you then need to monitor the blockchain to see which of the N transactions confirmed, to know which outputs to recreate and send
    • <probably a lot more I haven't thought of>

    In short, this isn't a feasible solution for a company that just wants to send bitcoin transactions and not reimplement half of a wallet.
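The O(N^2) fee claim above can be sanity-checked with back-of-the-envelope arithmetic. This sketch assumes illustrative numbers (each replacement adds roughly one output of ~34 vbytes, and, per BIP 125 rule 4, each replacement must pay at least the replaced transaction's fee plus the incremental relay fee times its own size); the function name and parameters are hypothetical.

```python
# Each replacement must pay the old fee plus ~1 sat/vB (Bitcoin Core's
# default incremental relay fee) for its own, ever-growing size, so the
# cumulative fee is a sum over growing sizes -- i.e. quadratic in N.

INCREMENTAL_RELAY_SATS_PER_VB = 1  # Bitcoin Core default

def total_fee_after_replacements(base_size_vb, base_fee_sats,
                                 n_replacements, added_vb_per_output=34):
    size, fee = base_size_vb, base_fee_sats
    for _ in range(n_replacements):
        size += added_vb_per_output          # batch grows by one output
        fee += INCREMENTAL_RELAY_SATS_PER_VB * size  # pay relay cost again
    return fee
```

With these assumptions, doubling the number of replacements roughly quadruples the total fee once N dominates the base size, which is the quadratic growth the bullet points complain about.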

  8. sdaftuar commented at 2:24 AM on August 26, 2017: member

    @RHavar I think the main lens through which I look at this is whether this is better for miners or not. The downside to increasing the limits is a slowdown in mempool operations and, specifically, CreateNewBlock. Quantifying that slowdown should be part of any argument in favor of raising it, along with motivation for miner income going up to compensate for that.

    Even putting aside arguments that a different workflow would make more sense -- from the use case described, it's not clear to me that the chain limits need to be increased in order for the transactions to confirm. What's the problem with generating the long chain of transactions and just rebroadcasting them as the ancestors confirm? That could be done just by using higher local policy limits and/or -walletrejectlongchain=0 (or whatever that command line option is called).
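The rebroadcast approach sdaftuar describes can be sketched as a small piece of release logic. Everything here is illustrative: the chain is pre-built locally (parent before child), and a wrapper around the real `sendrawtransaction` RPC would broadcast the returned slice; `txs_to_release` and its parameters are hypothetical names.

```python
# Build the whole chain locally, but feed it to the network at most
# CHAIN_LIMIT deep at a time, releasing more transactions as ancestors
# confirm so the unconfirmed portion never exceeds the policy limit.

CHAIN_LIMIT = 25  # the default ancestor/descendant limit being discussed

def txs_to_release(chain, confirmed_count, broadcast_count):
    """Return the slice of the pre-built chain that is safe to broadcast.

    chain:           ordered list of raw transactions (parent before child)
    confirmed_count: how many of them have confirmed so far
    broadcast_count: how many have already been broadcast
    """
    # confirmed_count + CHAIN_LIMIT caps the unconfirmed depth at the
    # default policy limit; broadcast_count avoids resending.
    upper = min(len(chain), confirmed_count + CHAIN_LIMIT)
    return chain[broadcast_count:upper]
```

Called once per new block (or on a timer), this keeps the node's view of the chain within the default limits while the full chain drains out as confirmations arrive, which matches the workflow RHavar says he ended up using.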

  9. RHavar commented at 2:32 AM on August 26, 2017: contributor

    @sdaftuar Thanks for the well reasoned reply.

    Quantifying that slowdown should be part of any argument in favor of raising it, along with motivation for miner income going up to compensate for that.

    I'll close the issue as I do not have the time or skill to do so; but if anyone wants to take up the torch I think it's a useful change :D

    What's the problem with generating the long-chain of transactions and just rebroadcasting them as the ancestors confirm?

    This was actually my solution, and it works well. The main limitation is that the site gives out txids that aren't yet on the network (a little confusing), and the transaction unfortunately has no chance of confirming in the next block (even though it pays enough fee), which is a bit unfortunate. There's also a bit of a risk that you continually outpace the network, but that can be solved by just sending in 1-minute batches or so.

  10. RHavar closed this on Aug 26, 2017

  11. gmaxwell commented at 3:49 AM on August 26, 2017: contributor

    FWIW, I wasn't trying to get you to close the issue-- but you're arguing for a change and not even trying to answer why blowing the cpu time way up again wouldn't be a concern. I thought maybe you knew something you weren't mentioning.

    A few weeks ago you were asking that the allowed unconfirmed chaining depth be made 0. :(

    Was that really necessary? It's a clever one-liner to try undermine my credibility

    Come on now, that is really unfair to me. I am not trying to undermine your credibility. We know who you are here and your participation is welcome and appreciated. I wasn't thinking about it in terms of 'just BIP125' (also because I expect virtually all transactions to be BIP125), only that you seemed to be proposing diametrically opposed things here. You're a smart guy, and so when I see you saying things that seem at odds I want to understand why.

    I apologize for making you feel unwelcome, it wasn't my intent.

    As far as the replacing logic, we'd like to implement that ourselves in the wallet; but prior to segwit activation there were corner cases where there was just no clean solution that we could find. I hope you'll provide feedback on efforts to implement that in the wallet.

  12. RHavar commented at 4:46 AM on August 26, 2017: contributor

    but you're arguing for a change and not even trying to answer why blowing the cpu time way up again wouldn't be a concern.

    Yeah, I agree the onus should be on me to show it doesn't cause problems. I'm pretty sure it doesn't for nodes, but I didn't check the mining functionality like @sdaftuar mentioned. I was kind of hoping it was just a forgotten-about thing that could safely be restored due to all the improvements that have happened in the meantime.

    Come on now, that is really unfair to me. I am not trying to undermine your credibility.

    I apologize, I should've been more charitable in my reading. I see bip125 as a special case for when you're specifically planning on replacing transactions (e.g. doing what Luke suggested earlier) which would mean that if someone spent from it before it confirmed it would strictly be a nuisance and greatly complicate your code (as well as make you pay to orphan it).

    I hope you'll provide feedback on efforts to implement that in the wallet.

    All the batching and replacement stuff is a bit of a nightmare of complexity to manage for an end user. There's just a surprising amount of cases that can happen, and they can all cost you a lot of money if you ignore them or handle them badly.

    Later this year I'm hoping to release a software bundle to make it sane for businesses to accept/send bitcoin, and a pretty big aspect of it is decoupling "sends" from actual bitcoin transactions. The actual wallet does the logic of bundling into a single transaction (or not) and resending (e.g. monitoring to see if a previous transaction got confirmed, and then creating a new transaction for the parts that didn't get sent, etc.).

    From the end user's perspective, it really should be "fire and forget", and they get some internal identifier to be able to query the current state of things if they're interested.

    I'm not really sure how much of that is suitable for a consumer-level wallet, though. I'm not sure there are many people who send bitcoin transactions with a frequency that would justify that sort of thing.

  13. RHavar deleted the branch on Oct 2, 2018
  14. DrahtBot locked this on Sep 8, 2021

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-04-13 15:15 UTC
