Increase DEFAULT_BLOCK_MAX_SIZE to 1MB #6231

pull chriswheeler wants to merge 1 commit into bitcoin:master from chriswheeler:master, changing 1 file +1 −1
  1. chriswheeler commented at 11:21 am on June 4, 2015: contributor
    It seems a number of miners are using the default 750kb block size limit, so why not set the default to 1MB? I realise it’s not going to solve the current block size limit debate, but every little helps and, unless I’m mistaken, it’s a very simple/safe change?
  2. Increase DEFAULT_BLOCK_MAX_SIZE to 1MB 8bc7a10f49
  3. luke-jr commented at 11:46 am on June 4, 2015: member
    Evidence suggests the current ideal soft limit would be 400k, and that even 750k may be too high. Part of why I am neutral on the max block size limit is that this soft limit exists: so NACK any increase here, at least until we get to a comparable day-by-day volume.
  4. XertroV commented at 12:10 pm on June 4, 2015: none

    @luke-jr Can you elaborate on that evidence? The only thing I could find on google was a May 7 email from Alan Reiner on the dev mailing list:

    I think Gavin said that his simulations showed 400 kB - 600 kB worth of transactions per 10 min (approx 3-4 tps) is where things start to behave poorly for certain classes of transactions. In other words, we’re very close to the effective limit in terms of maintaining the current “standard of living”, and with a year needed to raise the block size this actually is urgent.

  5. chriswheeler commented at 12:21 pm on June 4, 2015: contributor

    I believe Luke is referring to this reddit post http://www.reddit.com/r/Bitcoin/comments/38giar/analysis_graphs_of_block_sizes/ - which is actually what made me look at the default block size limit, as it’s clear from the ‘raw’ graph that miners are producing blocks limited to 250kb (in the early days) and now 750kb. I assume they are just not bothering to set a custom limit.

    The problem is that if we are currently using 400kb, and the number of transactions continues to increase at the current rate, it won’t be long before that number increases, and it’s going to take a while for miners to get up to date. So I feel it’s better to increase it a bit now, while we wait for consensus around the ‘hard’ block size limit increase, or for other viable solutions to scalability to come along.

    Also worth noting that the data Luke has used removes ‘Microtransactions’ and transactions which Luke has classified as ‘Inefficient/data use of the blockchain’, but which nevertheless are still occurring.

  6. btcdrak commented at 12:34 pm on June 4, 2015: contributor

    “Also worth noting that the data Luke has used removes ‘Microtransactions’ and transactions which Luke has classified as ‘Inefficient/data use of the blockchain’, but nevertheless are still occurring.”

    @chriswheeler Nothing has been removed, they are just categorised.

  7. luke-jr commented at 12:35 pm on June 4, 2015: member
    (also, the 400k I suggested here is with the microtx & inefficient included)
  8. ghost commented at 12:45 pm on June 4, 2015: none

    And he removes blocks that took more than “approximately 10 minutes”. That data is evidence of nothing but luke-jr bias.

    Back to the point though, I think it is a good idea to set the default block size soft limit to the same value as the hard limit. Miners can make use of it if it is useful to them.

  9. btcdrak commented at 12:57 pm on June 4, 2015: contributor
    @BitcoinBettingGuide Quite the opposite: since evidence shows miners are just using the default soft limit, a lower default would actually be more useful, because if miners needed and wanted to increase it, they would. As it stands, the default is big enough that they haven’t needed to tweak it.
  10. chriswheeler commented at 1:02 pm on June 4, 2015: contributor
    Apologies Luke - I misinterpreted your post. The current 400kb usage does indeed include all transactions.
  11. gavinandresen commented at 1:07 pm on June 4, 2015: contributor

    NACK from me. I think we need to continue to move away from making policy decisions like this.

    I think we should either keep the 750K soft limit and see how high transaction fees must rise before miners realize they’re “leaving money on the table” and raise the -blockmaxsize themselves.

    Or we should replace the 750K limit with a “go along with the crowd” rule that means any miner that doesn’t care will create blocks that neither increase nor decrease the average block size.

  12. chriswheeler commented at 3:30 pm on June 4, 2015: contributor

    @gavinandresen - Thanks for your comments.

    Out of interest, is there a reason for the soft limit to be 75% of the hard limit? I can’t seem to find any discussion around moving it from 500KB to 750KB in ad898b40aaf06c1cc7ac12e953805720fc9217c0.

    If we did move to a 20MB hard limit would you recommend increasing the soft limit to 15MB, leaving at 750KB or something else?

  13. luke-jr commented at 4:18 pm on June 4, 2015: member
  14. killerstorm commented at 8:52 pm on June 4, 2015: none

    @luke-jr Are you saying that if we are “barely reaching 400k blocks today” (on average), then 400k soft limit is optimal?

    This argument is just asinine…

    Imagine a phone company finds that on average 100 phone lines are used at the same time during peak hours, but only 5 are used at the same time at night. Computing the average for the whole day, they get 20, so in order to optimize costs, they reduce their switching hardware capacity to 20 parallel phone calls. Does it make sense?

    Obviously, no. If people find that they can’t use the phone during the day, they will believe that the quality of this company’s service is extremely poor and the phone is pretty much useless.

    There is in fact a branch of operations research called queueing theory which studies things like that, and which can be used, among other things, to find optimal capacity. But who needs these complexities when you can use the average, the pinnacle of mathematical thought (according to luke-jr)?
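
    For readers who want to make the queueing point concrete, here is a small sketch of the classic Erlang B formula from queueing theory, using the hypothetical call-volume numbers from the analogy above (an editorial illustration; the traffic figures are assumptions, not measurements):

```python
# Erlang B blocking probability: the probability that a new call is rejected
# when `lines` circuits face `offered` erlangs of traffic.
# Traffic figures below come from the hypothetical phone-company analogy above.

def erlang_b(offered: float, lines: int) -> float:
    """Iterative Erlang B recursion: B(0) = 1,
    B(m) = offered * B(m-1) / (m + offered * B(m-1))."""
    b = 1.0
    for m in range(1, lines + 1):
        b = offered * b / (m + offered * b)
    return b

peak_load = 100.0  # ~100 simultaneous calls at peak (assumed)

# Capacity sized to the whole-day average (20 lines): most peak calls blocked.
print(round(erlang_b(peak_load, 20), 3))

# Capacity sized for the peak with headroom: blocking becomes negligible.
print(round(erlang_b(peak_load, 140), 6))
```

    Sizing to the average gives roughly 80% blocking at peak, which is exactly the failure mode the analogy describes.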

  15. btcdrak commented at 9:21 pm on June 4, 2015: contributor
    @killerstorm Your analogy doesn’t work because callers need to be answered immediately, whereas Bitcoin transactions have varying priority and you can compete with higher fees if you need priority, quite unlike a PBX.
  16. killerstorm commented at 10:49 pm on June 4, 2015: none

    @btcdrak Imagine your cell network operator introduced a 300% surcharge for calling during business hours; how would you feel about it? Wouldn’t you leave it for an operator which has enough capacity to handle all phone calls during peak hours with no surcharges?

    Now imagine that before doing a call, you need to choose a tariff: business, premium, standard, economy. If there is an overload, economy users’ calls will be dropped, then standard users and so on. Business tariff has highest priority. But also costs more, even when network is below its capacity. I’m sure you’d fucking love such a system.

    Even better: the calls aren’t dropped, just delayed… so you might get connected 5 hours later.

  17. dgenr8 commented at 10:52 pm on June 4, 2015: contributor

    ACK. Not doing this now amounts to conducting an uninformed experiment on the ecosystem. @luke-jr excluded blocks that took over 10 minutes – exactly when bigger blocks are needed – because they influenced the result? That is quite a plain statement of bias.

    Here’s an unbiased chart with all blocks included.

    The statement that we are barely reaching 400K blocks today is utterly ridiculous.

  18. ghost commented at 12:07 am on June 5, 2015: none
    Yet another discussion that could have improved the block size situation hijacked by luke-jr’s asinine trolling.
  19. chriswheeler commented at 12:22 pm on June 5, 2015: contributor

    @luke-jr - Thanks for the link, not sure how I missed that.

    Plugging 130,000 txns per day into Mike’s calculations from #3326 gives 1080kb as the soft limit.

    Has anything technical changed since then, or is this now purely political?

  20. davout commented at 2:56 pm on June 5, 2015: none

    “NACK from me. I think we need to continue to move away from making policy decisions like this.”

    Pretty hilarious.

  21. luke-jr commented at 3:24 pm on June 5, 2015: member
    @chriswheeler I’m not sure how plugging arbitrary numbers into an arbitrary formula is relevant… In any case, the people you need to convince are miners, not developers.
  22. laanwj added the label TX fees and policy on Jun 9, 2015
  23. maaku commented at 12:08 am on June 18, 2015: contributor

    NACK.

    We need to fix infrastructure (wallets, services) to work in the face of confirmation delays due to block size limits, which is tested by the soft limit. Raising the default block size before it is hit is exactly how we got into this mess.

  24. dgenr8 commented at 12:15 am on June 18, 2015: contributor
    @maaku Not just the 750K soft limit, but 1MB, is currently being hit every 104 minutes based on tx flow.
  25. morcos commented at 12:41 am on June 18, 2015: member
    NACK, agree with @maaku
  26. mikehearn commented at 1:54 pm on June 18, 2015: contributor

    +1, looks good to me.

    Wallets already work in the face of confirmation delays. The problem is not “wallets not working”. The problem is “users hate it and complain and stop using Bitcoin”… and why shouldn’t they complain?

  27. petertodd commented at 2:00 am on June 30, 2015: contributor
    NACK, agree with @maaku
  28. davout commented at 9:09 am on June 30, 2015: none
    @maaku Not forcing them to fix their stuff sets exactly the wrong incentives.
  29. laanwj commented at 10:17 am on July 21, 2015: member

    NACK from me. I think we need to continue to move away from making policy decisions like this.

    Seems that I agree with you for a change. 100 people, 100 opinions. Miners that want to change this option, that feel an incentive to change it, can (and do) already change it.

  30. laanwj closed this on Jul 21, 2015

  31. jgarzik commented at 2:41 pm on July 21, 2015: contributor

    Note that closing this is endorsing a fee increase, because this PR is in line with what users in the field have been experiencing over the years.

    “let the fee market develop” is an economic policy change which increases fees for existing users.

    This new policy should be communicated to the Bitcoin Core users.

  32. davout commented at 2:43 pm on July 21, 2015: none
    @jgarzik this is by no means a “change” in anything, except maybe in the hopes and expectations of the Bitcoin freeloaders.
  33. jgarzik commented at 2:51 pm on July 21, 2015: contributor

    @davout Wanting to stop “freeloading” is an opinion. It is however a fact that the current years-long behavior has been to bump this whenever pressure approaches.

    First we must care about users in the field who see behavior A (bump if pressure).

    Second, we must be honest with our users that Bitcoin Core is changing to behavior B (fee pressure).

    It is an utter failure of project management and an enormous disservice to users to suddenly stop merging changes like this without communicating it to users.

    The actual user experience, seen by users in the field, is that fees will go up. Where is the blog post or release note informing the users of the $3B economy about this change?

    Sandbagging to achieve policy change is the single worst way to run a software project, regardless of the actual change in question.

  34. davout commented at 3:11 pm on July 21, 2015: none

    I do not “want” to “stop” freeloading, you are the one trying to get a change pulled-in, I’m simply pointing out the obvious here.

    If the “users” need that stuff so bad, they’re more than welcome to create their own altcoin; that’s kind of the principle in open source: “scratch your own itch”. Don’t start white knighting around based on the supposed hopes and expectations of the “people” that have some problem that, in point of fact, resides entirely within your own head.

    Bitcoin isn’t changing; the code has always been open for everyone to audit, inspect, and ultimately decide about using it or not. Likewise, the code is open for anyone to fork if so desired. The narrative that somehow there was a “social contract” that is being broken by not allowing bloat is disinfo at its finest.

    There are no “users of the $3B economy”, there are grown-up actors of this economy, and if they poured $3B in there in the first place, they can probably be trusted to tip the scale of the Bitcoin/Jeffcoin exchange rate once their mutual fungibility is destroyed, without needing you to try to convince everyone about what “they need”.

    Fork it.

  35. petertodd commented at 3:17 pm on July 21, 2015: contributor

    @jgarzik The easiest way to inform users would be to create a reddit thread; you might want to use caps lock to get your point across.

    750KB to 1MB is an insignificant change, and anyway, lots of miners use non-default choices, which further makes any change to this constant insignificant.

    Let’s not waste our time further on trivialities that smell like trolling.

  36. davout commented at 3:21 pm on July 21, 2015: none
    @petertodd @jgarzik tbh I was mostly referring to the max block size increase @jgarzik is proposing; the 750kb -> 1mb change is pretty insignificant indeed.
  37. petertodd commented at 3:34 pm on July 21, 2015: contributor

    @davout Yeah, sorry, I was only replying to Jeff’s comment, not yours. Though, that said, bigger-picture discussion is better off on the mailing list and/or forums like reddit.

    Re: default blocksize, what should actually happen is it should be removed and replaced by a calculation that takes orphan risk into account. Basically, every transaction you add increases the chance of orphaning your block by some amount due to the larger size. If you know that marginal increase you can then keep a running tally of total block reward and stop adding transactions when marginal cost > marginal reward.

    As it turns out, that constant can be calculated from the orphan rate and your own hash rate relatively easily, and the orphan rate can be observed. (IIRC the constant basically has bytes/second as its units.) There is the nuance that much of the hashing power has special peering setups that push the orphan rate to near zero; your blocks don’t necessarily get to participate in those peering setups.

    Of course, actually writing that patch and setting the default marginal orphan rate constant appropriately would be a great backdoor way of increasing the default max blocksize to 1MB. :)
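
    The stopping rule sketched in the comment above can be written down in a few lines. This is an illustrative toy only: the subsidy, propagation rate, and block interval constants are editorial assumptions, a real miner would measure its own orphan rate, and this is not an actual Bitcoin Core patch:

```python
# Toy model of the marginal-cost block template rule described above.
# Adding tx_size bytes delays propagation by tx_size / PROP_RATE seconds,
# which raises the orphan probability by roughly that delay / mean interval.
# All constants are illustrative assumptions.

SUBSIDY = 25 * 100_000_000   # block subsidy in satoshis (2015-era, 25 BTC)
PROP_RATE = 1_000_000        # assumed effective propagation rate, bytes/second
INTERVAL = 600               # mean block interval, seconds

def marginal_orphan_cost(reward_so_far: int, tx_size: int) -> float:
    """Expected satoshis lost by adding tx_size bytes: the extra orphan
    probability times the reward that would be forfeited."""
    return reward_so_far * tx_size / (PROP_RATE * INTERVAL)

def build_block(mempool):
    """Greedily add (fee, size) transactions by feerate, stopping once the
    marginal orphan cost exceeds the marginal fee."""
    reward, size, template = SUBSIDY, 0, []
    for fee, tx_size in sorted(mempool, key=lambda t: t[0] / t[1], reverse=True):
        if fee <= marginal_orphan_cost(reward, tx_size):
            break  # marginal cost >= marginal reward: stop filling the block
        template.append((fee, tx_size))
        reward += fee
        size += tx_size
    return template, size

# Under these assumed constants, a 250-byte tx must pay more than ~1042
# satoshis to beat the orphan risk it adds to a subsidy-only block.
template, size = build_block([(10_000, 250), (500, 250)])
```

    Note that this effectively yields a minimum feerate rather than a fixed size cap, which is the point of the suggestion.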

  38. jgarzik commented at 4:05 pm on July 21, 2015: contributor

    @davout You fail to understand that I agree with your goal.

    However it is a delta, a change from current user experience.

    Any major change in user experience should include a transition plan and communication with users that their fees are going up.

    Responsible software projects do not radically change on-going policies towards users without up-front communication.

    “Deny all further changes similar to old-policy A, thus switching to policy B” is the worst way to run a software project. Users and market hate abrupt, unannounced changes.

  39. petertodd commented at 4:32 pm on July 21, 2015: contributor

    @jgarzik Any announcement like that would give the impression that we’re in control of the system, which simply isn’t true. The Bitcoin protocol itself has always obviously had a transaction rate limit and associated economic consequences. We’ve done a bad job of communicating that fact to the general public, but doing so correctly requires a very different message than “we’re changing a policy which previously meant txs were cheap, and now will mean they’re expensive”; we simply don’t have that level of control, because Bitcoin is a decentralized system.

    tl;dr: Don’t try to blame the Bitcoin Core developers for a fundamental property of the Bitcoin protocol itself.

  40. jgarzik commented at 5:12 pm on July 21, 2015: contributor

    @petertodd Any lack of announcement amounts to an unannounced fee increase, and unannounced divergence from a years-long experience in the field.

    Words do not change the very real impact on user experience that closing these changes has.

    Ignoring impact on users is a failure of the first order for any software project.

  41. petertodd commented at 5:32 pm on July 21, 2015: contributor

    @jgarzik That’s just silly. I can confirm that over 90% of the hashing power has changed the default blocksize - almost every single pool. Changing the default from 750KB to 1000KB when almost everyone is ignoring the defaults is an insignificant change that will have almost no effect. (<3% change to total capacity at best)

    We’re just maintainers of software implementing a protocol - we have little if any control over the implications of how that protocol is designed. The divergence is not something we control or can prevent as Bitcoin Core maintainers - it’s a natural outcome of the Bitcoin protocol design that we have no direct control over.
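
    The “<3% change to total capacity at best” figure above can be checked with back-of-envelope arithmetic. The 10% default-following share and the assumption that non-default miners already fill blocks to the hard limit are editorial illustrations, not measured values:

```python
# Rough arithmetic behind the "<3% change to total capacity" estimate.
# Shares and sizes below are illustrative assumptions.

default_share = 0.10        # assumed hash power still on the 750KB default
old_default = 750_000       # bytes, current soft-limit default
hard_limit = 1_000_000      # bytes, consensus maximum

# Capacity before: default miners cap at 750KB, the rest assumed at 1MB.
before = default_share * old_default + (1 - default_share) * hard_limit
# Capacity after the bump: every miner at the 1MB hard limit.
after = hard_limit

increase = after / before - 1
print(f"{increase:.1%}")  # a bit under 3%
```

    Even with every remaining default-follower jumping straight to 1MB, the aggregate change stays under the 3% bound.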

  42. JustinDrake commented at 6:38 pm on July 21, 2015: none
    @petertodd Can you share your calculations? It looks like more than 3% on this image.
  43. petertodd commented at 6:51 pm on July 21, 2015: contributor

    @JustinDrake I did it by going through the list of pools and checking the size of blocks they were producing, counting any pool that made blocks >750KB as definitely not using the default. Secondly, I used my knowledge of pool policy from various sources - like talking to pool owners - to add pools that had chosen to reduce the limit, or make the limit meaningless. (e.g. Eligius actively changes their limits in response to many factors)

    The graph you’re looking at is drawn such that a relatively small number of pools using a 750KB limit shows up as a clear line - rather misleading.

  44. JustinDrake commented at 7:13 pm on July 21, 2015: none
    @petertodd Hum, not convinced. Look at Slush pool here. Blockchain.info says it alone has 4% of the hash rate, and it seems Slush is using the default.
  45. petertodd commented at 7:17 pm on July 21, 2015: contributor
    @JustinDrake Slush was mining >750KB blocks prior; they’ve actively chosen the limit they’re using right now.
  46. laanwj commented at 7:22 pm on July 21, 2015: member

    Even if they happen to be using the current default, that does not guarantee that they will switch to a new default, or on what schedule. They may have a reason to stick with the current value. They may be using an old patched version of bitcoin core that will never get this patch, or completely different software.

    If your goal is to have miners create larger blocks I’d say it’s faster and surer to ask them, instead of hoping to change it under them by nudging a default.

  47. JustinDrake commented at 7:22 pm on July 21, 2015: none
    @petertodd Sure, they may now have actively chosen to stick with the default. Change the default, and it may change Slush.
  48. killerstorm commented at 7:39 pm on July 21, 2015: none

    @laanwj If you want to have fewer arbitrary numbers in the code, make the soft limit the same as the hard limit by default. 750k is just an arbitrary number and it’s quite meaningless, so just remove it.

    Having a soft limit default which is different from the trivial values (0 or the hard limit) makes it look like you are making policy decisions.

  49. davout commented at 7:51 pm on July 21, 2015: none

    @jgarzik

    “However it is a delta, a change from current user experience.”

    Bitcoin makes no promise. Just because most of the time an RDBMS makes writes to the database without errors doesn’t mean that it should print a big warning “o hey, you’re not using transactions, your user experience may change any minute”.

    Systems have constraints and guarantees, it’s up to users to RTFM or use an abstraction layer that dumbs things down sufficiently to match their intellectual capabilities.

    “Any major change in user experience should include a transition plan and communication with users that their fees are going up.”

    Just because things worked some way until now doesn’t mean every single other thing should be dumbed down to the level of what “consumers have come to expect”.

    “Responsible software projects do not radically change on-going policies towards users without up-front communication.”

    No policy is being changed, constraints and guarantees have always been crystal clear to whoever takes a minute to learn about how things work, and how they don’t.

    “Deny all further changes similar to old-policy A, thus switching to policy B” is the worst way to run a software project. Users and market hate abrupt, unannounced changes.

    When exactly was the hard cap bumped in the past? You keep asserting this.

  50. mjamin commented at 7:54 pm on July 21, 2015: none

    When exactly was the hard cap bumped in the past? You keep asserting this.

    He’s talking about the softlimit/default, which was bumped a few times.

  51. davout commented at 7:54 pm on July 21, 2015: none
    @mjamin I see.
  52. davout commented at 7:59 pm on July 21, 2015: none

    To be perfectly clear, I don’t think changing the default has any kind of significance one way or another. I just find the rhetoric that “keeping a number a certain way is a drastic change in policy” a bit weird.

    But like I said, it’s really not significant, as this is merely an implementation concern, not a protocol one, so I’ll refrain from commenting further. The 1mb vs. bloat-blocks debate doesn’t really belong here.

  53. greenaddress commented at 8:23 pm on July 21, 2015: contributor

    NACK

    agree with @maaku @petertodd

    Ideally miners would be forced to set up their own soft limit.

  54. jtimon commented at 9:51 pm on July 21, 2015: contributor

    Back to the point though, I think it is a good idea to set the default block size soft limit to the same value as the hard limit. Miners can make use of it if it is useful to them.

    I strongly disagree with this; what’s the point of having two limits if they are the same? At the very least, if the consensus limit is 1000k, the default policy limit should be 950k.

    I think we should either keep the 750K soft limit and see how high transactions fees must rise until miners realize they’re “leaving money on the table” and raise the -blockmaxsize themselves.

    +1. I wouldn’t oppose making it configurable, but these long discussions about changing policy defaults are annoying.

  55. ghost commented at 1:21 am on July 22, 2015: none

    Ideally miners would be forced to setup their own soft limit.

    That suggests you would support a default of 0mb, so miners had to set something to even get the block reward, or a default of the max, 1mb. An arbitrary number somewhere in between which works OK is the opposite of forcing miners to set their own limit.

    Choosing a number between the min and the max which devs think is best is the most political, value judgmental and interventionist policy setting possible.

    Raising the soft limit to the max of 1mb would be the most sensible, logical and non-controversial setting possible.

  56. laanwj commented at 7:45 am on July 22, 2015: member

    I wouldn’t oppose making it configurable, but these long discussions about changing policy defaults are annoying.

    It has been configurable through -blockmaxsize=<bytes> since time immemorial. I fully agree that long discussions about policy defaults are annoying, all the more so for a policy default that only applies to miners, not to other full nodes.

    Which is the entire reason that I closed the issue.

    Again, lobby miners to increase the above option value. They can, and they are aware of it. There is no reason at all for this to be a decision for Bitcoin Core development. Everyone has their own responsibility in this decentralized network. Don’t try to play it through me; I’m just here for bugs and issues related to the implementation.

    (and yes, the current default value was grandfathered in, not as a “policy” but as a working default value - maybe @luke-jr is right and these defaults should be removed completely, forcing miners to set them. I’ve always pushed back against that because it makes upgrading harder for miners, and they are already a conservative bunch)

  57. chriswheeler commented at 7:48 am on July 22, 2015: contributor

    I agree with bitedgego - the default should be either 0 or the max; anything in between implies a policy recommendation from the core developers.

    If miners wish to create a ‘fee market’ by reducing their own limits they are able to do so.

    I also agree that now that most miners appear to have changed from the default (due to increased transactions and being ‘called out’ on it) this isn’t really a significant change. However, there will still be some miners running the 750k default because they haven’t known about it or bothered to change it, and accept it as the recommended setting. @maaku (and those that agree) - now that miners (90% of hash power according to @petertodd) have increased their limits past 750k, does your objection still stand?

  58. Newar417 commented at 9:13 am on July 22, 2015: none

    “Slush was mining >750KB blocks prior; they’ve actively chosen the limit they’re using right now.”

    @petertodd Could you point out one? I failed to spot one using https://www.blocktrail.com/BTC/pool/slush/1

  59. MarcoFalke locked this on Sep 8, 2021

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-12-22 03:12 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me