Bump minrelaytxfee default #6793

pull laanwj wants to merge 2 commits into bitcoin:master from laanwj:2015_10_bump_minrelaytxfee changing 2 files +2 −2
  1. laanwj commented at 5:42 pm on October 9, 2015: member

    Bump minrelaytxfee default to bridge the time until a dynamic method for determining this fee is merged.

    This is especially aimed at the stable releases (0.10, 0.11) because full mempool limiting, as will be in 0.12, is too invasive and risky to backport.

    The specific value (currently 0.00005) is open for discussion.

    Ping @gmaxwell @morcos Context: https://github.com/bitcoin/bitcoin/blob/v0.11.0/doc/release-notes.md#transaction-flooding

  2. Bump minrelaytxfee default
    To bridge the time until a dynamic method for determining this fee is
    merged.
    
    This is especially aimed at the stable releases (0.10, 0.11) because
    full mempool limiting, as will be in 0.12, is too invasive and risky to
    backport.
    28e3249e53
  3. laanwj added the label TX fees and policy on Oct 9, 2015
  4. gmaxwell commented at 6:01 pm on October 9, 2015: contributor
    ACK both bumping and value (though I’m not dead committed on a particular value). This is what we release-note recommended on 0.11.
  5. btcdrak commented at 6:02 pm on October 9, 2015: contributor
    Agreed.
  6. paveljanik commented at 6:05 pm on October 9, 2015: contributor
    ACK for 0.10 and 0.11. I hope this is not needed for 0.12…
  7. morcos commented at 6:42 pm on October 9, 2015: member
    ACK. I think I prefer a lower number (2000 or 2500), but have no objection to 5000 if that’s more popular.
  8. TheBlueMatt commented at 7:05 pm on October 9, 2015: member
    I don’t have any specific comments on the number, and generally agree for 0.10 and 0.11, but NACK for 0.12. #6722 should change what this is used for, so unless you want it in 0.12 to meet backport policy and then revert it in #6722, NACK.
  9. morcos commented at 7:16 pm on October 9, 2015: member
    @TheBlueMatt ahh, that was part of my objection to the higher number. We might want higher than 1000 as just a relay fee anyway though… ugh, we need some better way than just hardcoding it.
  10. gmaxwell commented at 7:28 pm on October 9, 2015: contributor

    @TheBlueMatt Why would you care about carrying this in git master until the dynamic stuff is merged?

    Dynamic PR when rebased on this can simply set it back.

  11. TheBlueMatt commented at 7:29 pm on October 9, 2015: member
    Oops, maybe I wasn’t clear - I have no objections to carrying it in master until #6722 is merged, just noting that I would push to revert it in #6722 (and if there was disagreement there, I would have a problem).
  12. laanwj commented at 7:36 pm on October 9, 2015: member
    I’d hope to get rid of the entire hardcoded value before 0.12
  13. TheBlueMatt commented at 7:38 pm on October 9, 2015: member
    @laanwj #6722 changes its definition, mostly. I think it’s doable to nearly get rid of its usage as a default min relay fee, but we should discuss that on #6722.
  14. laanwj commented at 7:44 pm on October 9, 2015: member
    @TheBlueMatt Right, if you want a different meaning for it in 0.12, probably best to rename it too. But yes, discussion belongs there. @morcos No problem changing it to 2500.
  15. morcos commented at 8:32 pm on October 9, 2015: member
    @laanwj Let’s stick with 5000. To be slightly pedantic, you could view raising it to 5000 so mempools don’t blow up as changing the meaning of it. 1000 is perfectly sufficient as a relay fee for now. Almost every transaction that has ever been transmitted with a fee over 1000 has been mined, so that implies it’s almost too HIGH of a fee as a min relay fee. But I like raising it to 5000 and then revisiting in the context of #6722, possibly even lowering it again in backports in the future.
  16. paveljanik commented at 6:03 am on October 10, 2015: contributor
    test/transaction_tests.cpp(349): error in "test_IsStandard": check IsStandardTx(t, reason) failed

        t.vout[0].nValue = 601; // not dust
        BOOST_CHECK(IsStandardTx(t, reason));
    
  17. NicolasDorier commented at 12:15 pm on October 10, 2015: contributor

    Does this mean this change will make every 600-satoshi TxOut (except OP_RETURN) dust?

    If so, this is a very big change for every colored coin protocol, since every colored coin wallet uses 600-satoshi outputs to carry the color (at least OA does). Is it possible to decouple the minrelaytxfee from the dust definition?

    I understand this is not your priority, but the goal of this PR, I think, is to protect against spammy transactions that do not pay enough fees, yet it impacts much more than that. :(

  18. laanwj commented at 12:29 pm on October 10, 2015: member

    @NicolasDorier No one really likes doing this, but as nodes are crashing we have to act in some way, and this is the only knob that can be quickly adjusted to bring down the transaction spam. It is meant to be temporary.

    Decoupling dust from minRelayTxFee would be possible (what would the other consequences of this be?) but we really want to get a release out soon (due to the upnp vulnerability), so if there is no better solution today, I’m going to merge this one.

    Edit: decoupling the dust threshold from mintxfee makes no sense as long as dust is defined as “an output that is expected to cost more to spend than it transfers”, so that would be no easy change; it would need a complete redefinition of what dust means.

  19. tests: update transaction_tests for new dust threshold 4e2efb3c5f
  20. dexX7 commented at 1:01 pm on October 10, 2015: contributor

    Yes @NicolasDorier, and this also affects other meta-layer protocols such as Counterparty, Omni, ChanceCoin, NewbieCoin, ….

    However, I believe this change is mostly intended to avoid mempool bloat, and as long as there are still a few nodes (and miners!) accepting the transactions, it hopefully has no “blackout” effect.

    I acknowledge the need for a solution like this, at least temporarily, but I was kind of missing a justification (it’s not really mentioned why 5000 is better than 1000, or whether this actually solves anything), so I checked the impact of changing the minRelayTxFee from 1000 to 5000 over the last 10000 blocks, to see which (mined) outputs would have been considered dust under the new default:

    [chart: mined outputs per block that would have been considered dust with minRelayTxFee=5000]

    Raw CSV: http://bitwatch.co/uploads/minrelaytxfee5000.csv

    It’s not a great chart, but it shows that there were massive spikes of outputs that would have been rejected, so even if I don’t really like it either, it seems to work. I haven’t checked whether this matches the times of the “attacks” though.

  21. laanwj commented at 1:09 pm on October 10, 2015: member

    It works because it makes filling your mempool with spam five times as expensive. This answers the “why is it better than 1000” part. In practice quite a few people have been using this value and it has apparently kept their mempool sizes from getting out of hand.

    It may be too high though - more than is needed to prevent the current floods; as said, I’m not wedded to the specific value.

    Again, if you have a better solution quickly, that may be preferable to this one. If not, I’m going to merge this.

  22. dexX7 commented at 1:28 pm on October 10, 2015: contributor

    It may be too high though - more than is needed to prevent the current floods; as said, I’m not wedded to the specific value.

    Given that these protocols use very-close-to-dust values, from that perspective it probably doesn’t matter whether it’s 1500 or 5000, so 5000 seems to be as good (or bad) as any other value above the old default.

    Again, if you have a better solution quickly, that may be preferable to this one. If not, I’m going to merge this.

    Unfortunately I don’t, but as mentioned, I acknowledge the need for a temporary solution, so no objections from my side.

  23. dexX7 commented at 1:32 pm on October 10, 2015: contributor

    Just in case anyone is wondering, the new dust thresholds with -minrelaytxfee=0.00005 are:

    • Pay-to-pubkey-hash (34 byte): 0.00002730 BTC
    • Multisig, two compressed public keys (80 byte): 0.00003420 BTC
    • Multisig, one compressed, one uncompressed public key (112 byte): 0.00003900 BTC
    • Multisig, three compressed public keys (114 byte): 0.00003930 BTC
    • Multisig, one uncompressed, two compressed public keys (146 byte): 0.00004410 BTC
  24. ptschip commented at 3:58 pm on October 10, 2015: contributor

    I just put out PR #6803 if anyone wants to take a look. It automatically adjusts minrelaytxfee and limitfreerelay up or down depending on the mempool size.


  25. NicolasDorier commented at 1:25 am on October 11, 2015: contributor

    @laanwj I understand the need for this. What I am saying is that every time you change dust it requires an enormous amount of change/build/update work for every meta protocol. This is worse for us than a hard fork, because for a hard fork you only need to update the node.

    One alternative is to hardcode the “dust” limit to what it was before; I think you are raising this limit temporarily anyway. The worst part is that it is temporary! Which means that we will have to go through the pain again soon. If it is temporary, can’t you change dust to a hardcoded value temporarily as well?

    Dust also does not change much for spamming! You can just send those 600 satoshis to yourself so it does not cost you anything. Really, this is causing us unnecessary pain for something temporary which could be fixed very easily by a hardcoded value in IsDust() that we can revert later :(

  26. luke-jr commented at 1:32 am on October 11, 2015: member
    Prefer 0.0001 BTC for the value, but utACK. @NicolasDorier This is just node policy. Nothing should ever be hard-coded to assume anything. I’m also unaware of any BIP (even a draft) that does. For both of these reasons individually, there is no reason to consider it a concern.
  27. NicolasDorier commented at 1:35 am on October 11, 2015: contributor
    @luke-jr, let me know a solution for fixing all those protocols that do not care about nValue, so that we don’t get hit collaterally by the nuke next time.
  28. luke-jr commented at 1:42 am on October 11, 2015: member
    @NicolasDorier What protocols? Like I said, I am not aware of any BIP draft that cares about this.
  29. NicolasDorier commented at 1:43 am on October 11, 2015: contributor
    Counterparty, Omni, ChanceCoin, NewbieCoin, OpenAsset, EPOBC, Colu, in fact anything that does not care about “nValue”.
  30. luke-jr commented at 1:48 am on October 11, 2015: member
    Where are the BIP drafts for those? If they don’t have one, I assume they’re pre-alpha and can change on a dime. If they don’t care about nValue, this change shouldn’t affect them anyway.
  31. NicolasDorier commented at 1:54 am on October 11, 2015: contributor

    It does affect them. They use 600 satoshis for every TxOut. When v0.12 gets out, they’ll need to go through the whole rebuild/redeploy process, which is way more painful than a hard fork.

    Well, I don’t think I’ll make you change your mind anyway; we’ll take the blow, not knowing when it will happen again.

  32. luke-jr commented at 2:02 am on October 11, 2015: member
    BTW, I think you’re very much underestimating the effect of hardforks… generally, they require modifying every piece of software in the ecosystem.
  33. NicolasDorier commented at 2:10 am on October 11, 2015: contributor

    They do not; most likely people depend on either libconsensus or Bitcoin Core to do validation for them, so redeploying Bitcoin Core (or using a new DLL) is generally enough.

    I’m just a bit down about being hit collaterally by a good idea to prevent spam, especially since raising IsDust does not prevent spam at all. It is like we get hit without benefit to any party: not spammers, not you, not us, not our deployed customers; pure pain without pleasure. :(

  34. NicolasDorier commented at 2:16 am on October 11, 2015: contributor
    btw, does someone have a link on the rationale of the dust limit? If we take the blow now, I’d at least like to find a solution and maybe submit a BIP for next time.
  35. dexX7 commented at 2:16 am on October 11, 2015: contributor

    @NicolasDorier: actually the setting for counterparty-lib can be set via a config file, and Omni Core calculates the values on the fly, based on -minrelaytxfee. The trouble I see here is that it has to be communicated to users/integrators, but this seems like the lesser evil compared to requesting that all other Bitcoin Core users change their settings manually.

    I’m not too familiar with the others, and if hardcoded values are used, then it’s rather unfortunate. Probably a good time to change that, though.

    we’ll take the blow, not knowing when it will happen again

    I’d recommend not assuming that node policies (e.g. default fees, dust thresholds, standard scripts etc.) never change. #3737 or #5231 are probably good examples to name in this context.

  36. NicolasDorier commented at 2:28 am on October 11, 2015: contributor

    I think I will index the dust amount based on the fees from now on; that should be more solid in the future. I am only assuming that min relay fees are lower than min tx fees, which seems reasonable.

    It should not require communicating a new default value.

  37. dexX7 commented at 2:44 am on October 11, 2015: contributor
    @NicolasDorier: I’m not sure if it helps, but if OA/colorcore uses Bitcoin Core as a backend, then you may just use the relay fee directly as the basis for the dust threshold calculation. It’s exposed via RPC ("getnetworkinfo").
  38. jgarzik commented at 2:52 am on October 11, 2015: contributor
    ACK 2500 or 5000 as emergency measure, as long as it is clearly release-noted + mentioned on social media [pending better mempool management].
  39. btcdrak commented at 3:49 am on October 11, 2015: contributor
    ACK on 5000
  40. CodeShark commented at 3:57 am on October 11, 2015: contributor
    ACK pending better mempool management
  41. laanwj commented at 8:31 am on October 11, 2015: member
    My nodes are shutting down due to outrageous mempool memory usage. I can’t be the only one. We need to merge this and deploy new versions now.
  42. laanwj merged this on Oct 11, 2015
  43. laanwj closed this on Oct 11, 2015

  44. laanwj referenced this in commit 4ca6ddec4d on Oct 11, 2015
  45. laanwj referenced this in commit 842c48dba3 on Oct 11, 2015
  46. laanwj referenced this in commit e7bcc4aac3 on Oct 11, 2015
  47. dexX7 commented at 11:41 am on October 11, 2015: contributor
    @NicolasDorier: as an additional note, the next step from here is likely floating relay fees (see #6722 for example), which is far worse in my opinion (from the perspective of using it to derive a dust threshold), because it basically breaks the assumption that there is a quasi-static relay fee at a larger (network) scale. It would probably be best to either use a very high value that is certainly over the limit, or to continuously update the threshold.
  48. NicolasDorier commented at 2:13 pm on October 11, 2015: contributor

    @dexX7 I want NBitcoin users to be able to make valid transactions easily. They can already set the dust amount themselves, but most use the default value. I need to find a default policy that is good enough and solid, without dependencies.

    I think I’ll base the dust value on the fee rate, which should evolve nicely even with a floating relay fee. I’ll work on that now.

  49. actioncrypto commented at 1:49 am on October 12, 2015: none
    For the record, I’ve had multiple nodes running 24/7 throughout the spam attacks: one with minrelaytxfee at 0.00003 and a few others at 0.00002 - all my nodes are running fine and seem to be unaffected. Mempool sizes of 41,289,086 and 38,068,607 bytes, for example - so you don’t need it at 0.00005 IMHO. A ~40 MB mempool is pretty reasonable…
  50. taariq commented at 2:16 am on October 12, 2015: none
    Many thanks @dexX7 for the analysis on both costs and impact. This was extremely helpful to understand the impact on Counterparty type transactions. Also, thanks as well for the quick notes on updating the Counterparty-lib settings for this change.
  51. bharathrao commented at 5:26 am on October 12, 2015: none

    I personally have no complaints about 5000 or even 1000 satoshis. However, I think this is a band-aid that does not address the real problem, which is that mempools will keep getting filled by transactions using the lowest allowed fees.

    The right solution for this is to treat near-dust transactions as “fill or kill”: if a transaction is not confirmed within n blocks (say 5), it is dropped from the mempool (for non-high-priority transactions). This requires the sender to re-send.

    The number of blocks can be a function of the fee, e.g. 1 satoshi = 1 block, 10 satoshis = 2 blocks, 100 satoshis = 3 blocks, 1000 satoshis = 4 blocks; 4000 satoshis can be one day.

    Anything over 4000 can be treated as business as usual.

  52. NicolasDorier referenced this in commit b1121781ab on Oct 12, 2015
  53. voisine commented at 6:00 am on October 12, 2015: none
    Changing network relay rules suddenly is a drastic measure (although not necessarily unwarranted in drastic times). Many wallets will be forced to rush out updates, and even more users will find their software broken until they update. Getting 100% of users to update or reconfigure their software is a long process. Thankfully we have an API hook in place for breadwallet to bump fees remotely (within a hard-coded range limit for security).
  54. luke-jr commented at 6:29 am on October 12, 2015: member
    @voisine There are no relay rules, just relay policy, which has always been decided on a per-node basis by the node operator and should never be relied on to be consistent.
  55. rubensayshi commented at 6:54 am on October 12, 2015: contributor

    But as a wallet operator, if you offer users the option to choose a (calculated) “low priority” fee, then knowing that a big part of the network runs a certain default means you’ll get it relayed.

    0.00005 will be higher than bitcoin-cli estimatefee 3, for example…

    utACK, but I also agree that hardcoding dust or otherwise documenting the reasoning behind it would be very helpful for the meta protocols.

  56. voisine commented at 7:30 am on October 12, 2015: none
    @luke-jr, indeed, it’s up to each node operator to decide whether they want to follow standard policy and help, or to diverge and hinder the functioning of the p2p gossip network. For the network to be useful for wallets, nodes need to follow a predictable relay policy that wallets can adhere to. Otherwise wallets will need to find some alternate method of getting transactions to miners.
  57. mikehearn commented at 4:06 pm on October 12, 2015: contributor
    This will break many wallets that do not fetch fee data from a central server, as BreadWallet does.
  58. bharathrao commented at 4:29 pm on October 12, 2015: none

    Even if we raise the fee, with enough volume mempools will be full of transactions that are not confirmed in a block. These transactions will languish in mempools indefinitely and the bitcoins in them are in a state of limbo for days. The solution can’t be to keep raising the fee until we are more expensive than PayPal.

    Bitcoins hanging in mempools for an indefinite duration is bad for meta protocols, micropayments, and possibly hundreds of innovations in progress right now. Fill-or-kill is a solution from the fiat world that deterministically releases these coins and allows the original sender to retry with a larger fee. It frees us from having to hard-code a fee limit and keep changing it with every fluctuation in tx volume and the value of bitcoin.

    We should also consider that the class of solutions encompassing simply changing a min fee, dust threshold, etc. are band-aids. We need to acknowledge that the current implementation is not addressing a given use case and find a solution for it.

  59. dthorpe commented at 7:37 pm on October 12, 2015: none
    Does the relay penalty on transactions containing a low value (“dust”) output apply to transactions that pay a minimum or better network fee?
  60. dexX7 commented at 7:59 pm on October 12, 2015: contributor

    @dthorpe:

    Does the relay penalty on transactions containing a low value (“dust”) output apply to transactions that pay a minimum or better network fee?

    Yes. A node that considers an output dust won’t relay the transaction, even with high transaction fees.

  61. dthorpe commented at 9:02 pm on October 12, 2015: none

    @dexX7

    Yes. A node that considers an output dust won’t relay the transaction, even with high transaction fees.

    Thanks for the info.

    Doesn’t that seem a little overbearing? If a transaction pays a sufficient network fee, it seems to me it should be exempt from dust spam filtering, because the network fee puts a real cost on spammy transactions.

  62. luke-jr commented at 9:36 pm on October 12, 2015: member
    @dthorpe Just because it has a real cost doesn’t mean it should be relayed or mined. Also remember the standard fees today do not nearly cover the cost of transactions, nor are they paid to the people affected by them.
  63. dexX7 commented at 9:37 pm on October 12, 2015: contributor

    @dthorpe: actually it’s pretty interesting in my opinion.

    The general idea behind “dust” is that no output should be “uneconomic”, i.e. it should not cost more to spend an output than the output is worth (whether this is really satisfied on a larger scale is a different question, but anyway…).

    Consider for example a transaction with a high fee, but outputs of only 1 satoshi. This would be fine if transaction fees were zero, but otherwise probably no one would be inclined to spend the 1-satoshi outputs (which would ultimately lead to UTXO bloat).

  64. NicolasDorier commented at 2:53 am on October 13, 2015: contributor

    @dexX7 So the main rationale for dust is only to prevent UTXO bloat?

    Saying that dust is uneconomic to spend is untrue for colored coin protocols.

    It seems there is massive complexity in preparation for a floating minRelayTxFee, when really I doubt it gives any advantage, except against potential UTXO bloat. But UTXO size is negligible compared to the blocks.

    But let’s admit that “saying that dust is uneconomic to spend is untrue for colored coin protocols” is true. If that is the case, dust should not be bound to minRelayTxFee but to the current fee rate, whatever it is at the moment of broadcast. This would simplify the code a bit, would be consistent with the goal of not having “uneconomic outputs”, and would spare us yet another magic default that can have a dramatic impact. (Yes, I know people should not rely on the default in theory, but they do in practice.)

    As I said, this rushed decision has more impact than a hard fork for wallets. If you have a coin of X BTC and the user wants to send (X - 1000 satoshis) BTC, then before your modification the change of 1000 satoshis would be sent back to the user, but after your modification the change would prevent the transaction from spreading. So it requires everyone to rebuild and redeploy for all of their users.

    The interesting thing to note is that it is impossible in theory to know whether an output will cost more than its value to spend, because the price to spend depends on the fee rate at the time of spending, which can’t be predicted. But, well, I guess using minRelayTxFee for that is no better an approximation than using the fee rate.

  65. dthorpe commented at 7:07 am on October 13, 2015: none

    @luke-jr

    Just because it has a real cost doesn’t mean it should be relayed or mined.

    True, there are always other factors in play. I was referring to the network fee as an unrecoverable real cost acting as a spam deterrent.

    Also remember the standard fees today do not nearly cover the cost of transactions, nor are they paid to the people affected by them.

    I’m well aware of that. That seems like a sustainability problem, no? My understanding is that network fees exist primarily to provide tx prioritization and spam deterrence.

    The economic model of my business-built-on-the-Bitcoin-blockchain (which uses OpenAsset colored coins, btw) is prepared to pay higher fees to support the network. I can only hope that such fees will eventually go towards supporting the network as a whole (validating full nodes) rather than just the miners.

  66. dexX7 commented at 10:17 am on October 13, 2015: contributor

    @NicolasDorier:

    So the main rationale for dust is only to prevent UTXO bloat?

    That’s my understanding, yes, and to some degree that coins are not “lost” (e.g. consider what might happen if there were millions of uneconomic outputs, hehe). There was quite a bit of discussion which may be interesting to read: #2351, #2577 (also discusses colored coins).

    But UTXO size is negligible compared to the blocks.

    Sorry, can you please clarify? Mined outputs, or blocks, can be pruned, while you can’t just remove entries from the UTXO set, which makes UTXO space more “expensive” than block space.

    But let’s admit that “saying that dust is uneconomic to spend is untrue for colored coin protocols” is true.

    I agree, but I’m not sure this can be generalized. It’s an exceptional case.

    dust should not be bound to minRelayTxFee but to the current fee rate, whatever it is at the moment of broadcast

    I see where you’re going: since the economic value of an output, when spending it, is affected by the transaction fees, these should be the basis for determining the economic value. As such, this sounds reasonable to me. It probably doesn’t make sense for the thresholds to be based on the fees of the sending transaction (e.g. think about a transaction with a very high fee for fast confirmation - how does this relate to the spender at all?), but rather on the context when spending (e.g. mempool size etc.). It seems to overlap once mempool space becomes limited based on transaction fees, though.

    I know people should not rely on the default in theory, but they do in practice

    You may assume certain defaults (or widely used values) and leverage that, but relying on them doesn’t seem solid, and breaks once the values change: either locally, say when a single node has a different policy, or more globally, if default values change and a large part of the network uses the new default.

    Since you’re mostly concerned about OA, I’m actually wondering how this is an issue for you, which is not meant to sound offensive, as your points cover more than OA. OA isn’t a pure colored coins protocol that uses output values to represent asset amounts, but more like a meta protocol that uses outputs to track the flow of assets.

    Let’s say there were a network-wide default static transaction fee of 0.00001 BTC, and you built a wallet which always sends transactions with a 0.00001 BTC fee. At some point the default fee is raised to a static value of 0.00005 BTC. Or let’s say at some point the default fee is no longer static, but floating. Wouldn’t you agree that a wallet that assumes transaction fees of 0.00001 BTC should be overhauled at this point? Your situation is similar, in my opinion.

  67. NicolasDorier commented at 6:30 pm on October 13, 2015: contributor

    @dexX7

    Let’s say there were a network-wide default static transaction fee of 0.00001 BTC, and you built a wallet which always sends transactions with a 0.00001 BTC fee. At some point the default fee is raised to a static value of 0.00005 BTC. Or let’s say at some point the default fee is no longer static, but floating. Wouldn’t you agree that a wallet that assumes transaction fees of 0.00001 BTC should be overhauled at this point? Your situation is similar, in my opinion.

    I agree on this point, but dynamic fees are something people have had time to adapt to (even faster thanks to the spam attacks), so there are lots of services which provide a reasonable fee rate to use (I am using blockcypher). The min relay fee has always been static, and this change has been rushed. I’ll give you an example of how it breaks wallets, even without OA.

    Say you want to send 1000 satoshis to Alice, but have a coin of 2100 satoshis. A wallet built against a MinTxRelayFee of 1000 will send the 1100 satoshis of change. But if you abruptly change the MinTxRelayFee to 5000 like that and I don’t update the wallet software, the transaction will now be ignored because the change is considered dust.

    So my points are the following:

    • Is the “dust” concept really useful to protect against some kind of attack?
    • If you think it is, its definition should be based on the transaction fee, to be consistent with the rationale of “not creating an output uneconomical to spend”.

    The fee rate is a widely available value nowadays, contrary to MinTxRelayFee, and it has also been widely tested.

    But UTXO size is negligible compared to the blocks. Sorry, can you please clarify? Mined outputs, or blocks, can be pruned, while you can’t just remove entries from the UTXO set, which makes UTXO space more “expensive” than block space.

    I was talking about non-pruned nodes, where the UTXO set can’t be bigger than the block directory anyway.

    For pruned ones, this is a whole other discussion, but briefly: you can’t download blocks or merkle blocks from pruned nodes (the NODE_NETWORK service bit is not set). Any service which depends on a bitcoin node will always use non-pruned nodes. For now I see pruned nodes as a way to keep the node count up artificially by making them less useful, which is why I disregard the “you can prune blocks, but not the UTXO set” argument. Note that I do not follow closely what is planned next for pruned nodes, but as they are now, they help no one but the person who just wants a wallet without SPV. Whether this is enough reason to justify dust is a question with no easy answer, sadly.

    It probably doesn’t make sense, if the thresholds were based on the fees of the sending transaction (e.g. think about a transaction with very high fee for fast confirmation - how does this relate to the spender at all?), but instead based on the context when spending (e.g. mempool size etc.)

    If I understand what you mean, you are saying it makes sense to block the spending of an uneconomical output. If that is so, I don’t agree because spending an output always reduce UTXO size and should never be denied. (also you don’t want user not knowing whether they will be able to spend their coins in the future)

    What I meant is really doing just like now : blocking at the creation of uneconomic output, except based on the fee of the transaction instead of MinTxRelayFee. So we don’t have external state to get to know what the dust is, and the dust value evolve nicely with how much it would cost to spend the output. (you can’t really know how much it will cost to spend in the future, but at least you have the best guess you can get by using the fee of the transaction)

    It makes sense to use the transaction fee instead of the fee rate, because the decision whether the transaction is accepted or not is primarily based on the fee rate.

    There was quite a bit of discussion, which may be interesting to read: #2351, #2577 (also discusses colored coins)

    I will check that.
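    To make the proposal above concrete, here is a rough Python sketch (not Bitcoin Core code; function names are made up for illustration) contrasting the current dust rule, which is pinned to the hard-coded minimum relay fee, with the proposed variant, which would derive the threshold from the creating transaction’s own fee rate. The sizes and 3x multiplier mirror Core’s existing dust formula (an output is dust if its value is below three times the cost to create and later spend it).

    ```python
    # Illustrative sketch only, not Bitcoin Core code. Sizes approximate a P2PKH
    # output (34 bytes) plus the input later needed to spend it (148 bytes),
    # which reproduces the familiar 546-satoshi threshold at 1000 sat/kB.

    P2PKH_OUTPUT_BYTES = 34
    P2PKH_INPUT_BYTES = 148
    DUST_MULTIPLIER = 3  # dust if value < 3x the cost to create and spend

    def dust_threshold_static(relay_fee_per_kb: int) -> int:
        """Current rule: threshold pinned to the hard-coded relay fee (sat/kB)."""
        size = P2PKH_OUTPUT_BYTES + P2PKH_INPUT_BYTES
        return DUST_MULTIPLIER * (size * relay_fee_per_kb // 1000)

    def dust_threshold_from_tx(tx_fee: int, tx_size: int) -> int:
        """Proposed rule: threshold follows the creating tx's own fee rate."""
        fee_per_kb = tx_fee * 1000 // tx_size
        size = P2PKH_OUTPUT_BYTES + P2PKH_INPUT_BYTES
        return DUST_MULTIPLIER * (size * fee_per_kb // 1000)
    ```

    With the static rule the threshold only moves when the hard-coded default changes (as in this PR); with the fee-based variant it would track what the sender actually paid, which is the “best guess” argument made above.
    
    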

  68. sipa commented at 6:34 pm on October 13, 2015: member
    The dust rule is about creating uneconomical outputs, not about spending them.
  69. sipa commented at 6:49 pm on October 13, 2015: member

    In the long term it is unmaintainable that every node has every block readily available, and thankfully that is also unnecessary. Pruning right now is black or white (have nothing, or have everything available for the network), but that is not expected to remain the case. There is much more demand on the network for recent blocks than for historical ones, so it makes sense that far more nodes can offer recent blocks. Furthermore, offering historical data is simple: it’s a few gigabytes of sequential large data blobs.

    Being able to independently verify that nobody in the network is cheating is something else, and requires fast access to the UTXO set. It’s very important if you don’t want to outsource trust in what transactions are valid, but that’s not the only reason. Even to someone who does not run a fully validating node himself, knowing that many independent entities are running one and using one gives confidence that the rules of the network will be maintained, as it will require attacking and/or convincing a significant portion of them to violate or change the rules. This is not the case in a world where a large set of the ecosystem relies on big hosted providers of validation services.

    Note that it is not just about storage. To validate blocks quickly, you don’t just need the UTXO on disk, but you need to be able to look up entries in it quickly. Loading it in an indexed form in RAM already requires several gigabytes. When a large fraction of frequently-needed entries don’t fit in RAM anymore, block relay on the network will suffer significantly.

    So, no, keeping the UTXO set small is absolutely in the best interest of everyone who wants to maintain the properties of the system. And operators of full nodes have good reasons to try to prevent uneconomic use of this precious resource - one for which they have to pay without being rewarded.

  70. dexX7 commented at 7:12 pm on October 13, 2015: contributor

    If I understand what you mean, you are saying it makes sense to block the spending of an uneconomical output.

    Sorry, no, this was probably a bit unclear: spending should not be blocked, but to determine whether an output is uneconomic or not, the context when spending the output comes into play. It’s a given that only a) historical data can be used, and b) the data that is available when sending. You mentioned that the threshold may be based solely on the transaction fees of the sending transaction (which tackles b, but not a), and my point basically was that this seems like too limited a perspective, which is not necessarily related to the actual economic value at all.

    Is the “Dust” concept really useful to protect against some kind of attack?

    Are you referring to the bump of the PR? I’m not sure if this will turn out to be effective, and it doesn’t seem like a good longer-term solution, but it may make spamming more difficult, and the higher threshold would have buffered the larger spikes (see #6793 (comment)).
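    For scale, a back-of-the-envelope sketch of what the bump means for the per-transaction cost floor (assuming the 0.00005 BTC/kB value proposed in this PR and a typical ~250-byte transaction; the helper name is made up for illustration):

    ```python
    # Illustrative only: the cost floor a relay fee rate imposes on one tx.
    COIN = 100_000_000  # satoshis per BTC

    def min_relay_fee(tx_size_bytes: int, fee_rate_btc_per_kb: float) -> int:
        """Minimum fee in satoshis to relay a tx of the given size."""
        return round(fee_rate_btc_per_kb * COIN) * tx_size_bytes // 1000

    old_fee = min_relay_fee(250, 0.00001)  # old default: 250 satoshis
    new_fee = min_relay_fee(250, 0.00005)  # proposed bump: 1250 satoshis
    ```

    A 5x increase per transaction; whether that is enough to deter flooding depends entirely on the attacker’s budget, which is why it can only be a stopgap until fees are determined dynamically.
    
    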

  71. NicolasDorier commented at 3:52 am on October 14, 2015: contributor

    Ok, thanks sipa, I see where pruning is going now; it convinces me that dust is useful. I was reflecting on the current state, which is black or white. @dexX7 You are already using historical data b) indirectly if you use only the transaction fee as the base for the dust definition, because nowadays the transaction fee already takes historical data into account. And you are also sure that it will be above minTxRelayFee, so this is already a net improvement compared to today, and would not make us depend on defaults.

    I opened #6824 to prevent hijacking the PR discussion.

  72. bokobza referenced this in commit 7deeb22c3b on Nov 5, 2017
  73. MarcoFalke locked this on Sep 8, 2021

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-07-05 19:13 UTC
