Speed limit / throttle network usage #273

slothbag opened this issue on May 26, 2011
  1. slothbag commented at 11:14 am on May 26, 2011: none

    I noticed the other day Bitcoin was maxing out my upload bandwidth on ADSL. Probably due to sending the block chain to fellow bitcoin users ( I had about 65 connections at the time )..

    The ability to limit / throttle the network usage like most other p2p programs would be beneficial. Otherwise I have to ensure I close the Bitcoin application to keep it from killing my upload.

  2. m4rcelofs commented at 7:20 pm on June 16, 2011: none

    +1 to that!

    Bitcoin is currently using all my upload speed, which actually means sometimes I can’t even browse! Closing BitCoin solves the issue, but it’s not really beneficial to the network…

  3. NvrBst commented at 8:05 pm on August 6, 2011: none

    +1 as well.

    I have a 60KB/s upload limit on my ADSL, and bitcoin.exe is killing my upload so badly at times that I cannot even browse the internet very well. Some extra information:

    I usually have ~32 connections in bitcoin. Receive seems to be fine. After 10 mins the current bitcoin.exe connections have downloaded about 100kB each, but download is usually not as limited as upload.

    After about 10 mins the current bitcoin.exe connections use about 100kB of upload each, but there are ~3 which use ~1MB+. This comes to over 10kB/s on average (not much, average-wise). But I suspect that while those 1MB+ connections are going they use 100% of the upload, and it just drops back down to a 7.5kB/s average after that part is done.

    Preferably I’d like to limit upload to 2kB/s to 5kB/s on my current connection.

    EDIT: I’m using bitcoin 0.3.24-beta

  4. Trupik commented at 7:46 pm on April 10, 2012: none
    I have hosted a public bitcoin node 24/7 for over a year now on my home Linux server. It is causing considerable packet loss on the line, and it is getting worse over time as the chain grows. Please add bandwidth limiting to bitcoind. Otherwise I will have to shut down the node, because my internet connection is becoming unusable.
  5. luke-jr commented at 3:23 am on April 26, 2012: member
    QoS is really a router job, but I guess reliable routers aren’t too common :/
  6. neofutur commented at 3:24 am on April 26, 2012: none

    one more +1, also needing this, or a way to have bitcoind use a shared blockchain (see http://bitcoin.stackexchange.com/questions/3199/read-only-blockchain-in-bitcoind-patch-ideas and https://bitcointalk.org/index.php?topic=71542), even better than just throttling imho

    also, while waiting for an option in bitcoind, you could have a look at three userspace bandwidth limiting tools: http://monkey.org/~marius/pages/?page=trickle , http://klicman.org/throttle/ and http://stromberg.dnsalias.org/~strombrg/slowdown/
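
    For example, trickle can wrap bitcoind directly. A rough sketch (the 50/200 KB/s figures are arbitrary placeholders, and trickle only works on dynamically linked binaries since it relies on an LD_PRELOAD shim):

        # run bitcoind with upload capped at ~50 KB/s and download at ~200 KB/s,
        # using trickle's standalone (-s) userspace shaping
        trickle -s -u 50 -d 200 bitcoind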

  7. laanwj commented at 6:42 am on April 26, 2012: member

    No +1s here please. I think we all agree it is a good idea for a P2P program to have configurable limits.

    It’s not realistic to require people to move this to a router. Especially on a VPS you don’t really have much control over routing.

    However, this feature currently has no priority for the core developers. If you want to speed this issue up, help with the implementation.

    Edit: and I suggest trying out thin clients such as Electrum, which claim to use much less bandwidth and ‘share’ a block chain on a supernode.

  8. Diapolo commented at 12:22 pm on April 26, 2012: none
    Perhaps unrelated, but you guys could (for Windows) try out cFosSpeed if your connection gets laggy because of uploads.
  9. rebroad commented at 10:09 pm on May 3, 2012: contributor
    For MS Windows, I can suggest NetLimiter as a nice piece of software, albeit commercial. I’d have thought linux has various QoS options available, doesn’t it?
  10. nadrimajstor commented at 9:12 pm on May 18, 2012: none
    When you use a P2P application (e.g. a torrent client) you are always presented with some form of information and/or customization options regarding the nature of P2P communication and its bandwidth consumption. With the Bitcoin client this is not so obvious, which led me to the false belief that Bitcoin is not heavy on network traffic. Sporadic internet hogging was always attributed to a flaky ISP until I hunted down that the Bitcoin client was actually the culprit (filling up all of my upload bandwidth). For the sake of less tech-savvy users, please make an obvious and easy option to, at least, add the -nolisten argument when starting the client.
  11. heynando commented at 11:58 am on December 11, 2012: none

    This issue was created 2 damn years ago. What is going on, dev guys? Why has this not been patched yet? Don't you guys see the urgent necessity of such a feature?

    It's so stupid not to implement this ASAP, cuz it gives a bad look to the whole app and community, since it fucks up people's internet, and a lot of them are not into technical stuff, which basically means: bad program -> uninstall.

  12. laanwj commented at 1:15 pm on December 11, 2012: member

    You misunderstand how open source development works. We work on this in our spare time, so we decide for ourselves what is important and what we want to spend time on.

    If you want something to be implemented very badly, it is your responsibility to make that happen, not ours.

    Can you contribute to solving this issue?

    Or are you willing to pay to have the feature implemented? You could offer a bounty.

  13. gmaxwell commented at 2:34 pm on December 11, 2012: contributor

    I don’t, fwiw, see an urgent need for such a feature — and until some extra components are developed, such a feature would be harmful for the network: if you set a low throttle, your peers need to switch to pulling blocks from someone else when you are slow. (This is also an issue absent a throttle, but many people setting throttles would make it worse.)

    Right now it’s better for traffic constrained nodes to just disable listening. This will largely have the desired effect, doesn’t risk adding problems, and is an already available option.

  14. sipa commented at 2:46 pm on December 11, 2012: member
    I think the issue here is much more that it is not obvious to a new user that by default your node will provide blocks to the network. Disabling listening would indeed mostly fix that, but it's far from obvious.
  15. lucb1e commented at 10:12 pm on January 23, 2013: none
    If I don't forward my port, can't other clients download from me? The client makes outgoing connections in order to receive blocks. In BitTorrent, outgoing connections are apparently also used for uploading, and it makes sense because this keeps the network much more alive. Does Bitcoin do the same, uploading blocks to connected nodes even if you didn't port forward?
  16. gmaxwell commented at 10:14 pm on January 23, 2013: contributor
    Nodes don’t announce themselves as accepting connections until they are mostly caught up… so you’d only be forwarding on new blocks, which isn’t a tremendous amount of data.
  17. lucb1e commented at 10:25 pm on January 23, 2013: none
    I am caught up with the block chain, that’s not the issue here. But I would be forwarding new blocks to connected peers regardless of whether my port is forwarded? Because that might result in an upload job of 1.5MB for a new block, if all connected nodes don’t have the new block yet.
  18. cjastram commented at 4:10 pm on March 3, 2013: none

    “Nodes don’t announce themselves as accepting connections until they are mostly caught up… so you’d only be forwarding on new blocks, which isn’t a tremendous amount of data.”

    Nevertheless, it saturates my uplink. I am on DSL with no other options, and it is a solid 50-70 kilobytes (not kilobits) per second of upload, making the connection completely unusable for anything else. Someday Google Fiber will bring me a giant fat pipe and this won't be a concern, but since I live in a fairly rural area, that is not likely for a while.

    This lack of upload control harms the network because I turn the whole node completely off unless I am actually engaging in a transaction. Wouldn’t it be better to have a slow node than no node?

  19. heynando commented at 6:39 pm on March 3, 2013: none

    Seems like the devs think everybody has Google Fiber. This is a shame, really; there are X types of internet services around the world with Y upload/download limits. Forcing users to do something that will harm their own internet is an outrageous decision.

    Not to mention that, once users find out what is hammering their internet, the most common decision is to kill the program that is causing the problem and abandon it, or use it as little as possible; it becomes a bad thing instead of a good thing. This is bad marketing. This disappoints me in so many ways.

    How am I going to recommend this to a friend knowing that later the same friend will come to me and say, "Hey, remember that program you told me to use? It was messing with my internet, making it very slow, I couldn't even watch a YouTube video in peace. Bad call, dude." That's the least that could happen, and this is just a simple example.

    Like cjastram just pointed out, better to HAVE a slow node than NO node at all. If the devs were intelligent enough to create such futuristic and advanced code, I'm sure they already know everything we are pointing out here. No more words, I'm done.

  20. gmaxwell commented at 7:06 pm on March 3, 2013: contributor
    @XcaninoX so, you’re saying you turned off listening as advised and it’s using 70kbytes/sec outbound?
  21. sipa commented at 8:48 pm on March 3, 2013: member

    Personally, I think upload throttling as-is is a bad idea, at least until we improve the block sync mechanism. If you happen to hit a throttled node when syncing, syncing will be slow.

    Instead, I think we need a “non server” mode (some checkbox, perhaps asked on first startup), which disables serving (historic) blocks, disables listening (by default) and disables NODE_NETWORK (so nodes don't announce themselves as full nodes). This means people can still do the validation and relaying part of being a full node, but without the bandwidth implications of serving the full chain.

    Later, such a non-server mode could be turned into a pruning mode, where historic blocks aren’t even kept on disk.

  22. cjastram commented at 12:59 pm on March 4, 2013: none

    Personally, I think upload throttling as-is is a bad idea, at least until we improve the block sync mechanism. If you happen to hit a throttled node when syncing, syncing will be slow.

    Of course, but isn't the sync slow anyway if you hit a node that has limited bandwidth? You don't magically get a fast sync just because you don't give nodes the ability to throttle. The problem is that if you don't allow throttling, you get no node at all, because people just turn it off.

    After a while of people just turning it off, then you start getting massive sync load on the system because people need to sync hundreds (or thousands) of blocks whenever they want to do a BTC transaction. Instead of the lightweight sync load that happens in realtime, you have a small (but significant) number of people that require loads of bandwidth all at once.

    Hence the problem. If we could just leave BitCoin turned on at low bandwidth, then we wouldn’t have the sync issue.

    Instead, I think we need a “non server” mode (some checkbox, perhaps asked on first startup), which disables serving (historic) blocks, disables listening (by default) and disables NODE_NETWORK (so nodes don't announce themselves as full nodes). This means people can still do the validation and relaying part of being a full node, but without the bandwidth implications of serving the full chain.

    Maybe. I don’t know what my upload bandwidth saturation actually is, whether it is transaction confirmations or other peoples’ block syncs.

    Later, such a non-server mode could be turned into a pruning mode, where historic blocks aren’t even kept on disk.

    Let's try not to over-engineer what could be a simpler solution?

  23. sipa commented at 1:36 pm on March 4, 2013: member

    @cjastram That is actually my point: I think it's better for people who cannot or don't want to serve historic blocks not to do so at all, and not even advertise on the network that they do. If we enable bandwidth throttling without that, the chance of accidentally hitting a slow node would increase, while if we remove those nodes from the pool altogether, that chance decreases.

    People fully shutting down their nodes because it saturates their data link is of course bad. Throttling or disabling serving would improve that, but if you're that close to the limit of the resources you're willing to spend, perhaps you shouldn't run a full node in the first place.

    Just relaying new transactions and blocks typically uses quite little bandwidth - it's only when you get hit by a node that is syncing from scratch that you suddenly get an upload burst.

    About over-engineering: the ability to run a pruned node is an inevitable evolution somewhere in the future, in my opinion, and it would also imply a solution to your problem anyway.

  24. jgarzik commented at 4:06 pm on March 4, 2013: contributor

    FWIW, I respectfully disagree, and think the easiest solution is to offer an optional upload (i.e. send or write syscall) throttle. A download throttle is less useful: you cannot really control the amount of incoming remote data; ceasing read(2) will cause good guys to throttle eventually, but bad guys always have a technique or three that will flood incoming anyway. And in the field, users care less about limiting their download speed than they do about limiting their upload speed.

    It is a common knob on other P2P data serving apps, so I would ACK a properly implemented upload throttle w/ knob.

  25. sipa commented at 4:30 pm on March 4, 2013: member
    @jgarzik My point is that we're currently better off with fewer serving nodes, if that means those nodes are faster. Once we have headers-first and parallel chain fetching, things are different, and probably anyone willing to contribute a bit of upload will be useful.
  26. slothbag commented at 10:04 am on March 6, 2013: none

    OP here. It's been a while since I created this issue, and for the past two years my bitcoin-qt client has very rarely maxed out my upload, so I didn't think too much about it.

    But recently, in the last month or two, I have noticed that my upload is quite often completely maxed! I have a decent connection, something like 300-500KB/s upload. Luckily I've managed to catch it most times: I open up a network monitoring tool and kill the connections in question (always bitcoin-qt). I have to be careful, because if it ran at that speed for a long period of time, not only would my net be slow, it would also quickly consume my data quota.

    So now instead of running a full node for ~8 hours a day I let it sync with the network and then shut it down.

    If I had a throttle option, I would set it to something like 50KB/s upload.. this is faster than I ever achieve when downloading the latest blocks (which is CPU bound).

    I think this is a problem. I would like to run a full node, but I will not while it consumes 100% of my upload the majority of the time.

  27. gmaxwell commented at 10:06 am on March 6, 2013: contributor
    @slothbag Have you disabled listening?
  28. slothbag commented at 10:13 am on March 6, 2013: none

    No I haven't. I actually would like to contribute as a full listening node. If I'm going to disable listening then I might as well not bother, and just let it sync and then shut it down.

  29. gmaxwell commented at 10:36 am on March 6, 2013: contributor
    @slothbag As a non-listening node you are still a contributor to the network — validating and forwarding transactions and new blocks, preventing partitioning… but you won't use much bandwidth. A rate limit while serving historic blocks would be very damaging to the network right now; it is actually better that you do not run a node at all than serve historic blocks with a rate limit… but better still to run without listening.
  30. rebroad commented at 9:54 am on March 7, 2013: contributor

    I’d understood that whether a node listened or not was not related to whether it uploads blocks or not. Even a non-listening node will upload blocks if a connected node requests them, won’t it?

    Rather than limiting the speed of block upload, what might be better would be for nodes to identify whether they are block-providers or not upon first connection. A speed limiter isn’t a bad option as long as the speed limit isn’t too low - after all, there are still going to be nodes on the network on slow networks and that’ll always be the case.

  31. rebroad commented at 2:31 am on March 30, 2013: contributor

    For anyone wanting to reduce the impact of their bitcoin-qt client, I've created a quick fix, which minimizes the traffic while I'm using a wifi network that charges per megabyte. This patch adds a command line option so that the client downloads only blocks and not transactions, and doesn't upload anything other than the transactions that you create. The downside is that this isn't healthy for the network as a whole, won't display transactions that you're monitoring from elsewhere until they're included in a block, and reduces anonymity in that you'll only broadcast your own transactions and not anyone else's. The alert feature is also disabled, so you won't see alerts for urgent upgrades (not that this was reliable anyway in the case of the recent 0.8 hardfork!).

    Ideally this would be toggleable from within the GUI or even able to automatically switch on and off depending on which ISP you want it to activate with, IMHO.

    Oh, here is the abovementioned branch: https://github.com/rebroad/bitcoin/commits/MinimizeTraffic

    I still maintain that nodes like this (that don't relay txs and blocks) ought to have a way to advertise themselves as such upon connection, so that other nodes can make an informed decision about whether or not to connect to nodes that are relaying. Without this advertising, nodes can only guess based on the frequency of txs and blocks received from such nodes.

  32. rebroad commented at 1:59 am on April 1, 2013: contributor
    I think a system of upload throttling would be useful, with nodes advertising their available bandwidth, other nodes validating this, and this information being included in the address information that is relayed. This way, nodes can prefer peers with greater bandwidth, and a node can calculate the bandwidth available based on how many peers are currently uploading or downloading from it. The bandwidth per connection can continue to be monitored, and connections changed as and when needed. This is already something I've been coding in my parallelblocksdownload branch, which chooses how many nodes to download from based on the bandwidth it knows is available. If it finds the upload bandwidth is saturated, it simply stops sending invs (to new nodes) so that those nodes don't request data from it.
  33. slothbag commented at 11:05 am on April 1, 2013: none

    I went and tried a bunch of different options to throttle bitcoin-qt externally without any success.

    I tried using trickle on Linux, however it seg-faulted on startup.

    I tried using iptables QoS to traffic-shape port 8333 inbound & outbound and had limited success; sometimes it would work and other times not at all.

    I tried using NetLimiter on Windows as mentioned above, however it doesn't handle bitcoin-qt very well; the high-traffic connections are shown as system UDP connections, so the throttle doesn't activate.

    I'm just a little suspicious that the high-traffic connections are actually malicious, either simple flooding or perhaps a buffer exploit.. When viewing the traffic in Wireshark all the packets looked identical??

    Anyway, will continue to investigate as time permits.

  34. simeonpilgrim commented at 8:57 am on April 7, 2013: none

    I just discovered why my 150GB monthly bandwidth allowance is used up: 85GB was upstream bitcoin traffic. An average of 4GB a day for a neat idea.

    Now I see there is no support for limiting, so it's not getting turned back on. I just really hope to be rate-limited to 64kb for the rest of the month rather than paying overage fees.

  35. memetica commented at 8:57 pm on April 10, 2013: none

    Where do you think the blockchain lives, do you trust its keeper?

    Don’t throttle the blockchain, it’ll die.

    If you do, don’t complain about increasing fees on transactions, or the time it takes for transactions to be confirmed or confirmed at all.

    1LwRoFi19fUWw4jtxEuQVXFNr29kd4mjmB Thank you!

  36. simeonpilgrim commented at 9:09 pm on April 10, 2013: none

    @memetica I understand your point, but I’m not running the client because I can’t afford to have it use all my bandwidth.

    So is it better to have some nodes on the network that are slow or upload-limited, or to have only "fast" nodes?

    Maybe I shouldn't be running the client. When I was in the USA we had unlimited bandwidth, so the load was meaningless. But out in the "rest of the world" bandwidth is not free, and if I don't have the ability to limit my exposure, I'm not prepared to run the client. And I'm quite sure the idea is to not have all the "fast" servers in a few countries, as that makes the network weaker.

  37. memetica commented at 0:38 am on April 11, 2013: none

    Just by saying "And I'm quite sure the idea is to not have all the "fast" servers in a few countries (…)" you make my point.

    By limiting /your/ bandwidth you offload the work to others, who will also limit theirs, &c., until you end up in the scenario which, you're quite sure, is not the idea. Kudos to you.

    If you throttle the blockchain, you don’t trust bitcoin. (you think of it as stealing, think about it)

    All this is nice and such, but the client is still interfering with your download speed.

    In answer to the original poster’s : “I noticed the other day Bitcoin was maxing out my upload bandwidth on ADSL. Probably due to sending the block chain to fellow bitcoin users ( I had about 65 connections at the time )..”

    The real answer has already been given by “luke-jr” who posted: “QoS is really a router job, but I guess reliable routers aren’t too common :/”

    Cynics never seem to want to give a straight answer :/

    He probably meant to say that one should rather complain about routers not adhering to, or even implementing, traffic shaping standards. Just checking a box and clicking apply doesn't necessarily mean it happens, not even on /open/ hard- or software. Bugs, anyone? Although in the latter case they have a higher chance of being found and fixed.

    (somewhat poetic methinks: bitcoins changing the world by people complaining to the network makers that their bitcoins aren't going through fast enough)

    This works on the premise that the bitcoin-qt client adheres to QoS standards. That was the original poster's client. (I mistook him for a troll at first.)

    Which it seems to have done for a long time. Even though traffic has recently increased, my router, at least, graciously lets me watch my "TED talk" at full speed. Throttling bitcoin traffic? Yes! But according to the QoS bit, i.e. my bandwidth is mine.

    Instead of pointing out to the developers that for some reason or other his bitcoin traffic had increased (an act in vain I suppose; they already know, or he failed to check) and supplying them with at the very least operating system details and the client version used… Nooooo…

    The immediate functionality proffered: throttling, choking. How human.

    This "solution" has already been applied to bittorrent clients. uTorrent (to name one) implemented its own traffic shaper instead of complaining to the router makers.

    If there are no, or only a few, seeders for the "TED talk" you just wished to see right now (or within certain limits thereof), then no "TED talk" for you right now (or within certain limits thereof). Or maybe never; the seed is dead.

    Now be honest, what's /your/ overall share ratio? Did you cap the upload? Do you still seed the "TED talk"? Do you cycle ports? Accept only encrypted connections, so your provider can't sniff the bittorrent protocol, to keep bandwidth up? Have you used TOR? (Although that's ridiculous: torrents over Tor.)

    That's the consequence. Bittorrent, beautiful as it is, is crippled just because of that. Not because it's censored. (I'm in Holland and cannot access piratebay.com, but DHT seems to work very well, though I do get a lot of fake "TED talks".)

    Every bitcoin is traceable, back to the first 50 (just look up block 0) and every other bitcoin after that (if you've got the time; computers have). There are no copy- or other rights on the protocol or the numbers that are sent. Furthermore, BTC traffic may not, and cannot, be capped or blocked. It's legal, or at least not illegal.

    I cannot but admire the stance of the btc-core devs in going all Douglas Adams-y: "It is somebody else's problem".

    Ok, and now for the real issue: nobody has /obliged/ you to keep the client running! Hogs your bandwidth? Stop it! But expect a delay to check recent transactions, so you know where your coins are coming from and can thus trust them.

    You don't walk around with your wallet open all day, do you? Registering where all your money comes from?

    Oh, and ask yourself this question: "bitcoins to money, or vice versa?" Either way, stop complaining, learn how to program, and implement this feature in your fork of bitcoin-qt. Sit back, and see it implemented… or not.

    Then and only then do you have the right to complain. Be ready for a battle though; you'll have to be really convincing and able to explain /why/ throttling is any good.

    As "nadrimajstor" put it: "(…) add -nolisten argument"

    If you don't listen, there's no dialogue. Therefore your transactions are void.

    :/

    Einstein could have said:

    • “If you can explain the problem to your grandmother you understand the problem,”

    If you think you learned something: 1LwRoFi19fUWw4jtxEuQVXFNr29kd4mjmB

    If you think I should shut up: 15B4oqE6e2skviM67kSmAB3yp77GChjPnL

    Thank you!

  38. memetica commented at 0:55 am on April 11, 2013: none

    simeonpilgrim quoth: “in the USA we had unlimited bandwidth so the load was meaningless. But out in the “rest of the world” bandwidth is not free,”

    Bitcoins aren’t free, nor meaningless. Read the news.

    Why do you put quotes around “rest of the world”?

  39. simeonpilgrim commented at 1:43 am on April 11, 2013: none

    To quote half a point: ["Just by saying: "And I'm quite sure the idea is to not have all the "fast" servers in a few countries (…)" You make my point.

    By limiting /your/ bandwidth you offload the work to others, who will also limit theirs, &c. Until you end up in the scenario of which you’re quite sure, is not the idea. Kudos to you.”]

    You make a point while missing my point: if I'm not running a server, you're in the same position. I repeated your argument to help make my point, and you just saw your argument and appeared to stop reading for comprehension.

    So is it better to have fast and slower servers, or only fast?

    I agree QoS is the solution. The fact that my ISP-provided router does not do QoS, and that I don't want to buy a second router to nest behind it to do the QoS, is my problem.

    Maybe the real question is: Do you want many people running the client?

    If so make it easier for them. If not, carry on.

    I have stopped the client because a) I don't have QoS, b) I am busy with my own OS projects, and c) I don't care that much about this project; the interest-to-cost ratio has become too high.

    I get the whole open-source "solve your own problems or be quiet" thing. I assumed Bitcoin wanted a network effect, thus feedback might be wanted.

  40. memetica commented at 2:48 am on April 11, 2013: none

    I’m sorry you missed my point.

    But here is, an answer to yours.

    Although I’m unsure if these discussions are allowed in a thread like this.

    It does not matter if you run the client or not. If you want to trade bitcoins, you have to run the client. If you don’t want to trade bitcoins, don’t run the client.

    You ask: “Maybe the real question is: Do you want many people running the client?”

    What do you not understand? I want everybody running the client; even more, I want everything running the client! I want to be able to pay with bitcoins, and I want immediate confirmation of my payments.

    Whether or not you're the client, I want you to confirm my transaction, as if you were there. What is so difficult about that?

    Seeing the rise of bitcoin, a lot of people seem to want that too. So why are you postponing or even denying (-nolisten) that?

    You ask: "So is it better to have fast and slower servers, or only fast?"

    And I have repeatedly explained there is no difference; you don't seem to grasp the concept.

    Sometimes you get congestion, when there are a lot of transactions going on. Why are the roads only congested between 9 and 5? (Well, I can only speak for Holland of course.) The usual solution is bigger roads, but that doesn't work; they fill up too. Over here the solution is choking the inlet of vehicles, but that only displaces the problem to the inlets. So now the congestion is in the villages (where it shouldn't be, children and all that), and it takes forever to get on the freeway... to use an American colloquialism, "You know what I mean?"

    I’m sure you’ve noticed the absurd rise in bitcoin value?

    There are only about 21,000,000 bitcoins.

    The issue of bandwidth resolves itself. After 2020 it's only mining for lost wallets.

    No, the real fight is the size of the blocks….

    If you still don't understand me, then explain to me why you want your wallet online 24/7 without paying for it.

    But take this in mind.

    Bitcoins cost money, have you ever traded a bitcoin?

  41. gmaxwell commented at 3:29 am on April 11, 2013: contributor

    I don't think this discussion is productive at this point. The best technical solutions find a way to address everyone's needs and, even if I'm sometimes guilty of it myself, it's hard to get there with an argument that their wants are stupid or wrong. What's needed here isn't more debate; what's needed is more implementation and testing, even if some of the "implementation" is just good copy to help people understand how they can play an important role in keeping the network healthy.

    (Edit: I deleted a post by mimetica asking for funds to shut up)

  42. robbak commented at 3:40 am on April 11, 2013: contributor

    This really cannot be solved until the history download system is fixed, so that a single client can download from multiple peers, bittorrent style, or at least can detect slow peers and search for better ones.

    Perhaps a simple measure, such as preventing a client from uploading to more than one peer at a time by default, might make things less of a problem.

    All the mental effort in this thread needs to shift to developing a true P2P download protocol extension.

  43. memetica commented at 3:43 am on April 11, 2013: none
    Oh, did I fall for the troll, or do you not understand bitcoin at all?
  44. memetica commented at 3:54 am on April 11, 2013: none

    Perhaps a simple measure, such as preventing a client from uploading to more than one peer at a time by default, might make things less of a problem.

    …. I have nothing more to say to such stupidity

    You are in a meadow; exits are n, e, s, w. There is a wizard. He says: "Go North"

    (your move)

    I am not an American, but I know that Jefferson said: "He who sacrifices freedom for safety deserves neither." And I agree with that!

  45. memetica commented at 3:55 am on April 11, 2013: none
    sorry for shouting (off topic)
  46. memetica commented at 4:02 am on April 11, 2013: none

    (Edit: I deleted a post by mimetica asking for funds to shut up)

    My word! CAN YOU EVEN READ!

    mEmetica

  47. memetica commented at 4:11 am on April 11, 2013: none

    (Edit: I deleted a post by mimetica asking for funds to shut up)

    How does "If you like my arguments: 12345554335"

    differ from 1dice9wVtrKZTBbAZqz1XiTmboYyvpD3t?

    I just posted a number; when was I asking for funds?

    A reply could be “yeah man, I just sent zillions to that”.

  48. memetica commented at 4:13 am on April 11, 2013: none

    See? Wrong group, wrong people.

    Ye who would not understand real freedom even if it hit you in the face!

    Well! Fornicate you!

  49. memetica commented at 4:17 am on April 11, 2013: none

    (Edit: I deleted a post by mimetica asking for funds to shut up)

    Whoohooo

    So what is it, no funds for me to shut up? In that same post I asked for funds if you thought you learned something.

    I guess you didn't.

  50. memetica commented at 4:31 am on April 11, 2013: none

    All the mental effort in this thread needs to shift to developing a true P2P download protocol extension.

    Thanks robbak!

    But what exactly do you mean?

    What's a "true p2p download protocol extension"?

    The one you bully onto others? MS has been doing that for years, and it seems to work, in a way. You know the buzzwords, but fail to grasp their meaning, sadly.

    All the mental effort in this thread needs to go into stopping this thread! No more sudden rise in outgoing traffic; it's just bitcoins! Let them go!

  51. soulhunter commented at 7:47 pm on April 11, 2013: none

    Funny… I came into this thread because I noticed that my node is using a lot of upload bandwidth… I understand p2p tends to do that, especially as more people "need" the information from the network… but this also happens when you have fewer people in the network (because there are fewer places to download from). I have a monthly bandwidth quota on the machine where I'm running bitcoind.

    From my understanding, the more nodes, the more reliable this network is (or the less likely it is that some individual or institution can take over the bitcoin network by controlling the majority of nodes). I can add QoS to limit bitcoind (and I will, you bet I will); however, the average user just doesn't know how to do that. Adding a simple setting that limits bitcoind's bandwidth usage would just encourage people to run the client… For now, the daemon goes down until I get the time to configure QoS; however, if there were a simpler way to throttle it now, it would continue to run 24/7.

    I'm not very interested in coins at all (I first heard about this project several years ago), I just found the idea interesting, and assumed that adding more "honest" nodes to the network would be good… apparently you don't really care about that; you just want "a few fast nodes" instead of many not-so-fast nodes. I think "the more the better" for this kind of system… but maybe I'm wrong.

  52. sipa commented at 7:56 pm on April 11, 2013: member

    @soulhunter There seems to be a bit of confusion here.

    Of course we need many nodes if we want a decentralized system. But we mainly want many nodes that validate - Bitcoin's primary advantage over other currencies (in my opinion) is that I don't need to trust anyone to know nobody is cheating. The easier it is to run such a node, the fewer reasons people will have not to do so.

    However, to bootstrap a new node, it needs to be shown the history leading to the current state, so it can independently assess that nothing in the past was fraudulent. Thus, we need a way of feeding it this history, and it's provided by other nodes in the network that at some point had to download that history anyway. Ideally, we also have as many of these as possible. But reality is not perfect. Not everyone wants to sacrifice their upstream for doing this, while others have plenty of upstream to spare. And, with the current buggy implementation, if you accidentally hit a slow peer to download from, you'll have a slow download. We don't have a mechanism to search for good peers or parallelize the download. Thus, as long as the sync mechanism is not improved, yes, everyone is better off with a few faster uploaders than many slower ones. Note that this is just about serving historic blocks to others - you can perfectly well validate all blocks and transactions, and otherwise participate in the network, without providing that service.

  53. cjastram commented at 1:28 am on April 12, 2013: none

    Do I understand correctly that history is NOT downloaded peer-to-peer? Did I read correctly that it is downloaded from ONE peer? If so, that explains the whole problem. It really needs to be downloaded peer-to-peer: what does it matter if you only have 10kbit upload if there are 100 others inside a reasonable network radius to pull from?

    Am I understanding correctly? Because that makes all the difference in the world.

  54. gmaxwell commented at 1:34 am on April 12, 2013: contributor

    @cjastram I'm not sure if you're understanding correctly; if you are, you're using a very weird definition of peer-to-peer. The blocks are downloaded from a random peer, but they're currently all downloaded from that one peer (because the way the process works is basically "tell me what I don't have" rather than "here is what I'm missing").

    And yes, absolutely this needs to be improved and that is why you have Pieter and I saying that while we support having all kinds of knobs to control resource usage, the fetching behavior needs to change first.

  55. cjastram commented at 1:58 am on April 12, 2013: none
    @gmaxwell I see, I guess I did read that correctly. Is there some way to add that issue as a blocker for this issue, so it is clearer that the other must be done first?
  56. davidmwest commented at 1:15 am on April 20, 2013: none
    +1 for adding throttling. Bitcoin-qt just killed my home wifi network and it took me 20 minutes to figure out what was going on. The client was using all my available upload bandwidth (1mb/s) and I was unable to use the web. That is bananas. I’m happy to contribute some of my bandwidth, but not all of it. There’s no reason it has to be all or nothing.
  57. soundasleep commented at 4:11 am on May 3, 2013: none
    I can no longer run the full client except a few hours a week because of this issue. This issue will prevent home or novice users from ever running a full client. Something as simple as (upload limit of X kb/sec) should suffice.
  58. gmaxwell commented at 4:20 am on May 3, 2013: contributor
    @soundasleep Have you turned off listening for incoming connections? If not, that should enable you to run it with less impact until there are other facilities available.
  59. soundasleep commented at 4:35 am on May 3, 2013: none
    @gmaxwell from what I understand that will just reduce the likelihood of my connection being saturated, and with only 0.8 Mbps upload (which is poorly shaped, poorly provisioned and throttled as well) I doubt it will solve anything since one peer could easily saturate that. I’ll give it a go but I still can’t run the client fulltime.
  60. heynando commented at 4:37 am on May 3, 2013: none

    luke-jr is absolutely right when he says "QoS is really a router job, but I guess reliable routers aren't too common"; this is simply a fact.

    Developers should realise this and patch the issue ASAP so that all types of user (very low-end to very high-end) can still use the program without getting out of their comfort zone. It will take at least 20 years, so maybe by then, for everyone on Earth to have a router with decent QoS built in and also fiber internet to deal with all the throughput.

    But right now, the way it is, it's like a suicide mission IMHO. This gets me so sad and mad and disappointed simultaneously that I cannot even explain in words how I feel about it. All I know is that the decision not to patch it and to leave it the way it is, is an outrageous one.

  61. soulhunter commented at 4:37 am on May 3, 2013: none

    Well, I actually have no problems running it at home; some QoS is enough… However, my problem is on the server. I have a monthly BW limit there, and the client will just use the full 100Mbps upload speed whenever it wants (!). I would be happy to give it 3 to 7Mbps upload (that way it can't put me over quota).

    Ildefonso.

  62. gmaxwell commented at 4:42 am on May 3, 2013: contributor
    @XcaninoX Your response is confused and insanely disrespectful. I am not your slave. Having more options for this in the future is good, and planned, as has been explained over and over again, but it is not trivial to implement and so it will wait for someone who has the time to work on it.

    @soulhunter Almost all the bandwidth is used by feeding historical data to newly started nodes, hence the advice to disable listening if you're bandwidth constrained, so that you won't do this. Without feeding new nodes, the maximum average utilization is on the order of 100kbit/sec or so.
  63. heynando commented at 5:02 am on May 3, 2013: none
    @gmaxwell I'm sorry if I insulted you by any means; it definitely wasn't and isn't my intention, though I never really saw nor treated you as my slave. I apologize if you thought so. My opinion regarding this issue stands. To be clearer, I am only expressing my points of view, saying what is on my mind, sharing how I see things right now and how I think they should be instead, putting the cards on the table and discussing them, that's it.
  64. gmaxwell commented at 5:21 am on May 3, 2013: contributor
    @XcaninoX OKAY. I don't know what more needs to be addressed; basically everyone agrees that there should be better resource controls. But agreeing that they should be there doesn't make them happen instantly.
  65. soulhunter commented at 1:44 pm on May 3, 2013: none

    Yes, but feeding historical data is still necessary, because otherwise how would new nodes be added? (This is part of the reason why I'm running a client on a server: to keep it up 24/7 and help the network.)
    Also, disabling "incoming" doesn't magically fix that, because you can still end up connected to an incomplete node via an outgoing connection (although the probability is greatly reduced). Anyway, so far the amount of BW it has eaten is not that huge (just "spiky"). I'll keep it running for a few more weeks and let's see (it looks like it will be using ~500GB/mo; I can afford that, for now).

    Why not implement something simple?

    1. Add a basic upload BW limit option (there are several ways of doing this; I'll not elaborate now).
    2. When a client is downloading and has more than one peer, it should download from a given peer for N seconds (say N=300), and then move on to the next one in a round-robin (or even random) scheme. This way a given client will not be "stuck" on a single slow peer, but will likely move between fast and slow peers while it downloads, spreading the download load.

    What do you think?

    Ildefonso.

  66. Rooke commented at 3:08 pm on May 10, 2013: none

    If one imagines the blockchain as a 7GB+ torrent file, is there anything stopping us from using bittorrent architecture ideas to bootstrap clients? @jgarzik already implemented this idea independent of the client over here. I’m guessing that bloating the client by linking to libtransmission or its ilk is not acceptable (or would it be?). It might not be secure either. It probably does solve several problems associated with bandwidth and data cap management.

    Perhaps as an independent experiment someone (I?) should try implementing a patch to respond only to historical data requests beyond a certain date or block number. I'm guessing most peers are either totally out of date (requiring the entire history) or else only a week or two out of date. I'm absolutely making this up, so I could be grossly wrong, but it seems like a reasonable guess. Assuming this is true, most peers could be served by setting a date filter a couple of weeks in the past, and requiring brand new peers to resort to Jeff Garzik's torrent or whatever other mechanism might be available. It's not helpful to new clients, but it might ease bandwidth use.

  67. rebroad commented at 4:38 am on May 12, 2013: contributor
    IMHO, this should be a simple GUI configuration option similar to the options for configuring tor - i.e. deciding whether to provide full network functionality, or limited functionality. If it doesn’t become part of the “main” client, then someone will likely provide a fork that does, and then the “main” client will need to cater for the existence of these alternative clients - e.g. catering for ignored getblocks/getheaders requests.
  68. halfawake commented at 5:38 am on May 24, 2013: none

    I too am having this problem. I personally do not at all mind handling this on my end rather than having it be part of the bitcoin protocol. If anyone knows what port bitcoin uses, I'll happily throttle it. I use Tomato on my WiFi router and I'm sure it's full-featured enough to provide the functionality to do this - just so long as the port isn't port 80, of course, because I'm not going to throttle standard web traffic just for the sake of bitcoin, but I'd be surprised if that were the case.

    Never mind, I just googled it and found the answer to my own question in the bitcoin FAQ. For anyone else who is interested, the port Bitcoin uses is port 8333. If there's anyone else here using Tomato, I'll happily post instructions for throttling this port once I figure out how to do it. Let me know if there's any interest.

  69. slothbag commented at 7:39 am on May 24, 2013: none

    The problem is bitcoin will make outgoing connections to any port that a peer may say it's using, so 8333 will catch most of the traffic but not all.

    Might be possible to detect the ports using uPnP or something… Too hard, I’m waiting for the throttle functionality :)

  70. gmaxwell commented at 7:43 am on May 24, 2013: contributor

    bitcoin will make outgoing connections to any port that a peer may say it's using

    No it won't. It will only try non-8333 peers if it's having no luck connecting on 8333.

  71. halfawake commented at 7:49 am on May 24, 2013: none
    bitcoin will make outgoing connections to any port that a peer may say it's using

    No it won't. It will only try non-8333 peers if it's having no luck connecting on 8333.

    I haven't read the code, so I can't comment on what the core bitcoin client does myself, but this would make much more sense than what slothbag said. Especially since the port being used is what I got straight out of the bitcoin wiki; I'd think they'd have said a bit more if they meant "uses port 8333 most of the time."

  72. earthmeLon commented at 1:46 pm on June 5, 2013: none
  73. ghost commented at 0:43 am on March 18, 2014: none

    Funny and sad that you guys don't get that running a node with an upload bandwidth limit is better than not running a node. Not running a Bitcoin node seems to be the alternative the developers prefer, and I'm fine with not doing that.

    Consider this: you have fiber or ADSL and you run tor, i2p, a bittorrent client and perhaps some other stuff. All these things - basically all SANE P2P software - let you set an upload bandwidth limit so you can browse the web and do other things with your connection normally. Then you have bitcoind, which basically tries to use 100% of your connection and makes everything else crawl to a halt. Most people will simply see bitcoind as something that lacks the most basic features all other P2P software has (the ability to set the port, limit the number of connections, bandwidth, etc) and not run it.

    I see this bug is 3 years old. I'll check back in 3 years to see if the devs arguing against bandwidth limiting have figured out by then that having a lot of nodes with a bandwidth limit is better than having only a few high-bandwidth nodes with dedicated connections. I'm guessing they won't, but one can hope.

  74. gmaxwell commented at 0:46 am on March 18, 2014: contributor
    @oyvinds I seem to have misplaced your patch. Can you resend? FWIW (and I don't know if you actually care about the technology or are just here to gripe), no one was at all opposed to having a limit. There were changes needed as prerequisites to having a limit and they are now almost done (headers-first has had some preparatory changes in 0.9 and should be finished in the next version). If you're having problems with the usage immediately, you can set listen=0 in bitcoin.conf and shunt 95% of the bursty outbound traffic immediately with no adverse impact on the network; this is the recommendation for a short-term workaround, not failing to run a node.
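
    For reference, a minimal bitcoin.conf along those lines might look like this (a sketch of the workaround described above, not a recommendation of exact values; the maxconnections line is optional):

        # bitcoin.conf - short-term workaround until proper rate limiting exists
        # don't accept incoming connections, so we never serve historic blocks to syncing peers
        listen=0
        # optionally keep just the default number of outbound peers
        maxconnections=8
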
  75. davidmwest commented at 6:04 pm on March 18, 2014: none

    @gmaxwell Snark isn’t productive. But it has inspired me to reply.

    I don’t understand why this is so hard to patch. Here, let me attempt to be helpful and pragmatic.

    Fine-grained controls on upload bandwidth aren't needed. Just a slider that would let us add a "sleep" between each sending of a packet; the slider would let us increase the length of the sleep. This would provide crude but effective rate-limiting without requiring a lot of development time. What is that, like one hour of development to fix the #1 issue preventing people from running bitcoind?

  76. gmaxwell commented at 6:11 pm on March 18, 2014: contributor
    @dwest-trulia Please read the messages by myself and the other people experienced with the Bitcoin protocol in the thread, this has been covered previously. Thanks.
  77. davidmwest commented at 6:16 pm on March 18, 2014: none
    @gmaxwell I searched the thread for the word “sleep” and was unable to find my idea or its refutation. I did find some philosophical arguments against rate-limiting in general and the half-baked idea of turning off listening (which is not reliable enough for people to confidently enable bitcoind). Perfect is the enemy of the good. I will continue not running bitcoind until it is either good or perfect.
  78. gmaxwell commented at 6:29 pm on March 18, 2014: contributor
    Turning off listening is quite reliable, as the nodes we're connecting out to already have the blockchain. There weren't any philosophical arguments at all; no one is opposed to rate limiting on a philosophical basis. But because of the way the Bitcoin protocol currently works, a rate limit severely DoS-attacks peers. It's better to just not have them connecting to you right now, though that's being fixed.
  79. davidmwest commented at 6:33 pm on March 18, 2014: none
    @gmaxwell I wasn’t aware there was a potential DOS issue. Interesting.
  80. sipa commented at 6:36 pm on March 18, 2014: member

    Nobody is against rate-limiting connections as such.

    The problem is that the current fetching mechanism is quite stupid, and generally deals very badly with being throttled. So until that is fixed, yes, I believe not running a (reachable) node is better than running a throttled one.

    When we have fixed the synchronization algorithm (see #3077, #3083, #3087, #3276, #3370, #3514, #3884 for work towards that, as well as the old version #2964), I'll gladly ACK any patch to improve bandwidth shaping in bitcoind.

  81. lucb1e commented at 9:30 pm on March 20, 2014: none

    @gmaxwell

    If you’re having problems with the usage immediately you can set listen=0

    Doesn’t help. On an 8/1mbps (down/up) connection bitcoin-qt would DoS itself by uploading so much stuff that the system had no internet anymore. None at all. Pings would not be delayed; they just timed out. Services like LogMeIn reported there was no longer internet connectivity and browsing became basically impossible.

    Right now, a year after my previous comments here, I have a fiber connection and even with listen=1 it all works well, but it wasn't nice to have to run bitcoin-qt (to catch up with the chain) only while the rest of my family was asleep.

  82. gmaxwell commented at 10:38 pm on March 20, 2014: contributor
    @lucb1e It's unfortunate that you didn't offer that observation while you still had the setup for reproducing it. I'm unable to reproduce it; I have several listen=0 nodes for testing and none see high bandwidth usage.
  83. lucb1e commented at 1:03 am on March 21, 2014: none

    @gmaxwell Yes, I should have pointed it out earlier. I thought I remembered commenting that bitcoin-qt killed its own connection, but it seems I misremembered. Sorry about that. What I said still holds up though: even distributing blocks as they arrive is a lot of data.

    Looking at blockchain.info right now, 20 minutes ago there was a block of 0.9MB. Times 7 (one peer must already have it) makes a roughly 6MB upload. On my previous connection that'd cause just over a minute of no internet.

    Going over these numbers it's pretty clear that running a full bitcoin node on such a connection is just not going to help anyone. But I don't need to be a full node; I just want to run the official (or perhaps I should say "generally leading") bitcoin client with my own block chain. Having a switch for not uploading blocks and/or not uploading transactions would be helpful. This is said to cause DoS issues, but I'd say that's a problem in this client and not a network issue: anyone can set up and advertise a lot of rogue nodes that never do anything. Non-listening nodes should disconnect peers that seem to be only leeching, because they might be trying to slow down or disturb the network.

  84. RobFisher commented at 0:04 am on April 4, 2014: none
    Instead of rate limiting incoming connections (which would make for a bad syncing experience for other users), would it be feasible to set a daily limit and stop accepting new connections once the limit is hit? This would let me be a fully participating node some of the time, and avoid triggering my ISP's "fair use" throttling policy.
  85. nanpanman commented at 6:13 pm on April 26, 2014: none

    My preference would be something like this:

    Full node: Serves all historic blocks
    Open node: Serves the last 1008 blocks
    Closed node: Does not serve blocks

  86. laanwj added the label P2P on May 9, 2014
  87. Ratief commented at 0:34 am on May 28, 2014: none

    I got tired of going over my 300 GB per month data limit serving bitcoind blocks and so I looked into a way to rate limit bitcoind. I never did find a good solution.

    Since the devs have yet to give us the ability to rate limit in the daemon, I decided to use iptables to enforce a limit. In case it helps anyone else, the rules below are what I used. It's very painful to update after being offline for a while, but it works well once you are synced.

    Rate limit outbound bitcoind (we have to match both source and destination port 8333):

        -A OUTPUT -p tcp --sport 8333 -m state --state ESTABLISHED -m limit --limit 30/sec --limit-burst 150 -j ACCEPT
        -A OUTPUT -p tcp --sport 8333 -m state --state ESTABLISHED -j DROP
        -A OUTPUT -p tcp --dport 8333 -m state --state ESTABLISHED -m limit --limit 30/sec --limit-burst 150 -j ACCEPT
        -A OUTPUT -p tcp --dport 8333 -m state --state ESTABLISHED -j DROP
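
    An alternative sketch along the same lines, shaping with tc instead of dropping packets (this is just an illustration, not a script from the Bitcoin repo; it assumes eth0 is the outbound interface, and the 1mbit ceiling is an arbitrary placeholder):

        # put outbound traffic to/from TCP port 8333 into an HTB class capped at 1 Mbit/s;
        # everything else stays unclassified and unshaped
        tc qdisc add dev eth0 root handle 1: htb
        tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
        tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 8333 0xffff flowid 1:10
        tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 8333 0xffff flowid 1:10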

  88. gmaxwell commented at 0:40 am on May 28, 2014: contributor
    You’re causing that same painful performance to any other user that happens to sync from you, so please be sure to set listen=0 when running that way.
  89. laanwj commented at 5:32 am on May 28, 2014: member
    @Ratief You’re reinventing the wheel; we’ve had a script for that in contrib/qos/tc.sh for a year.
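
    For anyone who doesn’t want to dig up the script, the general shape of a tc-based limit looks roughly like the sketch below (this is an illustration, not the actual contents of contrib/qos/tc.sh; the interface name eth0 and the 1mbit rate are placeholders to adapt):

    # create an HTB root qdisc; traffic that matches no class stays unshaped
    tc qdisc add dev eth0 root handle 1: htb
    # add a class capped at roughly 1 Mbit/s
    tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit
    # steer traffic sourced from local port 8333 (bitcoind) into the limited class
    tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip sport 8333 0xffff flowid 1:10
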
  90. darkhosis commented at 0:50 am on September 29, 2014: none
    If there were a toggle to disable serving blocks to anyone more than 20 blocks behind, wouldn’t that work? Then peers would never try to sync from a slow node, and a slow node wouldn’t get its upstream saturated.
  91. luke-jr commented at 1:16 am on September 29, 2014: member

    @darkhosis Right now, if someone chooses a sync node that refuses to serve blocks, the node trying to sync will just fail to sync entirely. We need code that moves on to the next peer for syncing. I think @sipa is working on that in headers-first (but I could be wrong).

    Perhaps it would be good for peers with throttles set to report that, and for peers without one to measure and report their effective speed. Then peers would have a bit more information to use when choosing a sync node…

  92. sipa commented at 8:59 pm on September 30, 2014: member
    @luke-jr headersfirst just fetches from wherever it can and should deal relatively well with peers that stall, so after HF is deployed I’m fine with a feature like this.
  93. dexX7 referenced this in commit a55f5d2bb4 on Jan 24, 2015
  94. CryptAxe commented at 0:12 am on February 7, 2015: contributor

    @sipa The idea of only serving blocks not considered “historic” seems like a good solution. I’ll call such nodes light-servers. Perhaps, based on the desired bandwidth limits of a light-server, the operator could set how far back in block time it will upload to syncing peers.

    For example: User A, with ’light-server’ selected and wanting to dedicate a minimal amount of their upstream bandwidth, would only go back to blocks whose block time is within 7 days of their latest block.

    User B, with ’light-server’ also enabled but willing to donate a little more bandwidth (without serving the entire blockchain), could serve blocks back to a block time of 30 days before the current block time.

    That way there is no actual connection speed limit causing a node to get stuck syncing from a slow peer.

    Update: this would require the ability for nodes to switch to another sync node without failing.

  95. arthurbouquet commented at 1:23 am on May 24, 2015: none

    The problem is that the current fetching mechanism is quite stupid, and generally deals very badly with being throttled. So until that is fixed, yes, I believe not running a (reachable) node is better than running a throttled one.

    You’re causing that same painful performance to any other user that happens to sync from you, so please be sure to set listen=0 when running that way.

    Isn’t a node running on a slow connection like a “throttled” one? Let’s say we have 2 nodes: A running on a 1 Mbps connection and B on a 1 Gbps one. Even with a bandwidth limit on B (say 100 Mbps), it’s better to sync from B than from A! Also, wouldn’t it be better for a node to limit its upload to 90% of the link capacity in order to avoid saturation (ACK packets etc.)? Maybe node A would run better with a 900 Kbps throttle (and other users would be able to surf the web)?

    Lastly, having this kind of setting would be a way to “announce” your node’s speed to others and let them decide whether or not to sync from you.

  96. sipa referenced this in commit 0350c7e4b8 on Jul 28, 2015
  97. sipa referenced this in commit 7863d105ea on Aug 4, 2015
  98. sipa referenced this in commit 780d5b3c7b on Aug 23, 2015
  99. jonasschnelli commented at 4:52 pm on August 27, 2015: contributor

    Throttling at the very basic level of network commands would probably be a bad idea, and can already be done with software and hardware (router / iptables, etc.). Throttling would also be uncomfortable for the node on the other end.

    What would make most full node operators happy is a way of not serving historical blocks once certain preconditions are reached. I’m pretty sure the complaints where users got very high outbound traffic were because their nodes were serving lots of historical blocks to nodes in initial sync.

    I’d like to implement a feature that would allow setting a total cap per day; once it is reached, the node would no longer serve peers requesting blocks < self.height-100 (TBD). The connected node might then switch to a different peer. Throttling block responses would be a bad idea IMO.

    I’m also searching for a good way of limiting merkleblock responses.

  100. TheBlueMatt referenced this in commit a671356e1f on Oct 20, 2015
  101. sipa referenced this in commit 9177950c74 on Oct 21, 2015
  102. sipa referenced this in commit f4787d1caf on Oct 21, 2015
  103. sipa referenced this in commit 6557a8cd46 on Oct 26, 2015
  104. sipa referenced this in commit ea06490d14 on Oct 27, 2015
  105. dexX7 referenced this in commit 90d4032428 on Nov 1, 2015
  106. sipa referenced this in commit 003bb87153 on Nov 5, 2015
  107. sipa referenced this in commit bfd83199c3 on Nov 11, 2015
  108. sipa referenced this in commit b437ea7ec9 on Nov 12, 2015
  109. sipa referenced this in commit 1d84107924 on Nov 12, 2015
  110. laanwj removed the label Brainstorming on Feb 16, 2016
  111. laanwj removed the label Priority Low on Feb 16, 2016
  112. laanwj commented at 12:43 pm on February 16, 2016: member
    The first step toward this was implemented by @jonasschnelli in #6622, which adds an upload limit per time span. Once the new P2P code by @theuni is finished, work toward throttling can be started.
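
    The option added there is -maxuploadtarget; a minimal usage sketch (the 5000 MiB figure is just an illustrative value):

    # try to keep total upload below ~5000 MiB per 24h window; once the target is reached,
    # blocks older than about a week are no longer served to regular peers
    bitcoind -maxuploadtarget=5000
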
  113. laanwj added the label Resource usage on Feb 16, 2016
  114. rebroad commented at 3:25 am on November 13, 2016: contributor
    @laanwj An upload limit per ASN might also be a useful feature; it would help distribute the blockchain more fairly rather than allowing one ASN to use up all of the allowance. Also, using ASNs to calculate eviction of peers would be a good step towards protecting against bogons.
  115. laanwj commented at 9:21 am on November 21, 2016: member
    Yes, using ASNs makes sense for a few things, including eviction and throttling. The practical problem that came up last time, though, is that the ASN database is too large to include as-is. Maybe it could be approximated or encoded in some efficient way for querying. Or maybe leave downloading it up to the user and make it optional.
  116. rebroad commented at 9:47 am on November 21, 2016: contributor
    @laanwj Oh, I had thought the outbound node selection logic was already using ASNs, but perhaps not. There is something there to “spread” the net wider though; perhaps it’s worth a closer look at the logic to see whether it’s suitable in the absence of an actual ASN database.
  117. rebroad referenced this in commit 40ead34fbe on Dec 7, 2016
  118. CodeShark referenced this in commit 35372512c6 on Jan 18, 2017
  119. deadalnix referenced this in commit 31d0c1fd12 on Jan 19, 2017
  120. ptschip referenced this in commit d38ad187b0 on Feb 17, 2017
  121. earonesty commented at 10:34 pm on May 19, 2017: none
    I think the new sync code does a better job of distributing the load among peers. Throttling should be less of an issue now.
  122. classesjack referenced this in commit 3e8f9847c9 on Jan 2, 2018
  123. andrelam commented at 9:55 pm on February 1, 2018: none
    @earonesty it seems it still is.
  124. jlopp commented at 9:38 pm on March 4, 2018: contributor
    Technically, this issue has been addressed with the addition of the “maxuploadtarget” config parameter. Not sure if that parameter has been exposed in the GUI, though.
  125. jonasschnelli commented at 2:34 am on March 5, 2018: contributor
    maxuploadtarget has not yet been exposed in the GUI. Also, once we have libevent for the p2p layer, network throttling should be low-hanging fruit. Though I personally think network throttling should be done a layer deeper than the application layer (at the router level), since the application does not know what else running on the same network requires bandwidth.
  126. pabloarod commented at 4:00 am on February 9, 2019: none
    I’m using v5.9.6 and there is still no peer control. As someone said, there is software and there are routers that can control traffic on a network, but I found that the number of connections seems to be the problem: Bitcoin appears to open too many connections, so not even Internet Explorer can display a web page. Looking at the tabs, I only found General View, Send, Receive and Transactions. Is there a thread about linking the program with a torrent client, or integrating something like that, in order to control this kind of thing that isn’t really of interest to Bitcoin itself?
  127. andronoob commented at 2:17 pm on February 21, 2019: none
    @jlopp maxuploadtarget doesn’t solve the bandwidth spike problem. Solutions for this already exist, for example NAFC, which eMule has supported for years. But sure, NAFC has its own limitation: it cannot know what other devices sharing the same Internet connection are doing.
  128. Ratief commented at 4:22 pm on February 21, 2019: none

    Long ago I started watching this thread hoping that some day someone would add options for this. In the meantime I found my own solution: I use iptables and rate limit the connections. This works well for me.

    I’m on Ubuntu. I added this to /etc/ufw/before.rules. Tweak as you see fit.

    # Rate limit bitcoin

    -A ufw-before-output -p tcp --sport 8333 -m state --state ESTABLISHED -m limit --limit 30/sec --limit-burst 150 -j ACCEPT
    -A ufw-before-output -p tcp --sport 8333 -m state --state ESTABLISHED -j DROP
    -A ufw-before-output -p tcp --dport 8333 -m state --state ESTABLISHED -m limit --limit 30/sec --limit-burst 150 -j ACCEPT
    -A ufw-before-output -p tcp --dport 8333 -m state --state ESTABLISHED -j DROP
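
    One thing to note (assuming a standard ufw setup): edits to /etc/ufw/before.rules only take effect after ufw re-reads them, e.g.:

    sudo ufw reload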

  129. Hypocritus commented at 2:50 am on April 7, 2019: none
    (Unfortunately, without being able to understand the full context of what I had written, and without being able to verify the significant contributions which I have made and continue to make to open source projects, my username is labeled as demanding, free-loading, and rude in the following post. An attempt to use humor to dispel the awkwardness of receiving a personal rebuke to a non-personal inquiry failed to win the hearts of the over-controllers of this eight-year-old open github issue. I now wonder if this update I am presently making will also be condemned or censored. There are many stellar examples to be considered of developers responding gracefully and without presumption to the lowly, poorly-written inquiries of new users who have zero posts logged with a given project; of which I have been the personal beneficiary, and therefore left with the option a) to expound, b) to appropriately respond or c) to withdraw without feeling personally attacked)
  130. laanwj commented at 8:01 am on April 7, 2019: member
    @Hypocritus It’s not been implemented because no one did the implementation work, simple as that; that’s how it works in open source projects. You just can’t demand that others spend their time, for free, implementing the things you want. It’s incredibly rude.
  131. bitcoin deleted a comment on Apr 8, 2019
  132. egghead314 commented at 2:26 pm on July 8, 2019: none
    Wow, such an old thread, I kind of feel stupid to even post, but it’s still the same issue here: my internet is being disturbed by running a bitcoin node. I feel pretty strongly about contributing back to bitcoin, but at the same time I don’t want to give up significant performance of my internet connection, so with reluctance another one has set listen=0. I wish there were options :(
  133. Hypocritus commented at 5:32 pm on July 10, 2019: none

    Agreed. A key issue is that when a full node runs fully unrestricted, the client and its peers make a number of different calls to the system or to the bitcoin network, any one of which, or any series of which, could cause a performance decrease, stall or crash on the host machine in an almost endless variety of ways.

    Should a generally unobtrusive and intuitive solution to this issue be found, it would be nice if a) this issue were closed (for the benefit of the denser of us laymen who can’t read between the lines), and b) that solution were regularly cited, for the relief of those of us who would genuinely and enthusiastically appreciate being able to spend more time building or creating, rather than dwindling, searching, or defending (albeit poor) attempts to draw attention to a long-lingering, impactful issue.

    And should there be strong reason to maintain this feature as “necessary”, such as sound arguments about bitcoin integrity or security, then it would benefit us to have those reasons regularly cited, posted, or linked, and for this issue to be closed.

  134. attilaaf referenced this in commit 89bbc3dd77 on Jan 13, 2020
  135. rajarshimaitra referenced this in commit 744ad752c3 on Aug 5, 2021
  136. droid192 commented at 9:38 am on November 6, 2021: none
    Assuming most people run their node on a dedicated device: relax, buy a managed switch for $20-30, set the ingress/egress rate, and be done. Example: https://youtu.be/x-Pq27fDfLc?t=764
  137. verdy-p commented at 2:01 am on July 28, 2022: none

    I think that the best thing to do is to run the node inside a guest VM (for example a CLI-only version of Ubuntu under WSL if you’re running on Windows, or an even thinner CLI distribution), and tune the hypervisor to place a cap on its bandwidth.

    And don’t forget to also cap its memory usage: 512 MiB for the maximum mempool size should be more than enough.

    This is also the fastest way to shut down your node rapidly and safely, and to restart it (the hypervisor can suspend the VM and create a restartable snapshot much faster than shutting down and restarting Bitcoin Core in its guest OS, something you’ll only do occasionally when your Linux VM needs maintenance for system/security updates). As a bonus, the hypervisor and its host OS also offer additional security protection.
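
    As an illustration of the hypervisor-side cap (assuming VirtualBox; the VM name “bitcoin-node”, the group name “netcap” and the 1m limit are placeholders, and the exact unit semantics should be checked against the hypervisor’s documentation):

    # create a network bandwidth group capped at roughly 1 Mbit/s
    VBoxManage bandwidthctl "bitcoin-node" add netcap --type network --limit 1m
    # attach the VM's first network adapter to that group
    VBoxManage modifyvm "bitcoin-node" --nicbandwidthgroup1 netcap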

  138. willcl-ark commented at 2:12 pm on October 28, 2022: contributor

    This issue was created in 2011, when global average internet speeds, according to Akamai, were 2.3 MBit/s.

    Since then, global average speeds have increased substantially (according to M-Lab and cable.co.uk) to 34.79 MBit/s, with many countries having well over 100 MBit/s or even 1000 MBit/s available to consumers, in conjunction with the proliferation of 4G and 5G high-speed wireless networks with equivalent speeds.

    In the meantime the Bitcoin blockchain has grown from 0.19 GB to the current (Oct 2022) ~450 GB, which puts more strain on network nodes during the IBD phase, something that cannot really be sidestepped in any meaningful way.

    Technologies such as compact blocks (BIP152) have been introduced which, for tx-relaying nodes, the authors claim can relay blocks 90% of the time without having to request any missing transactions, sharply reducing the bandwidth spikes reported above which used to occur at the moment of new block relay. Note that if a user has configured their node to run in -blocksonly mode (to reduce total bandwidth by something in the order of ~80% by disabling transaction relay) they will not be able to benefit from compact blocks’ peak-bandwidth reduction during block relay, as they have no mempool to reconstruct the block from.

    In addition to this, Erlay is edging towards completion and also claims to reduce bandwidth significantly, although it’s currently unclear whether it will meet its initial claim of 40%. More details can be found here.

    It is made clear in the comments above that bandwidth limitations on nodes serving historical blocks (listening nodes) are not desirable for the health of the network as a whole, regardless of whether the node operator wants to feel altruistic by “giving something back to the network”.

    Therefore I conclude that, unless there is a clear proposal here detailing exactly how bandwidth should be further limited in Bitcoin Core, we should finally close this issue. In addition to having, on average, significantly more bandwidth available today, node operators have the following options to configure depending on their specific circumstances:

    | Measure                    | Notes                                               | Reduce total | Limit block spikes |
    | -------------------------- | --------------------------------------------------- | ------------ | ------------------ |
    | Compact blocks (enabled)   | block tx set reconciliation from mempool            | ✔️           | ✔️                 |
    | maxuploadtarget            | limit total upload bandwidth per 24h                 | ✔️           |                    |
    | -listen=0                  | no incoming connections                              | ✔️           |                    |
    | -blocksonly=1              | total bandwidth reduction of ~80%                    | ✔️           |                    |
    | Reduce maxconnections=     | go lower than the default of 10 + 2 outbound peers   | ✔️           |                    |
    | Software bandwidth monitor | e.g. Trickle, per process or interface               | ✔️           | ✔️                 |
    | Router bandwidth monitor   | affects all host traffic                             | ✔️           | ✔️                 |
    | (Future) Erlay             | reduces duplicate relay                              | ✔️           |                    |

    Whilst not all the options are compatible with each other, I believe that there is a working combination here for most operators.

    As I understand it, there will remain one case that cannot be covered by Bitcoin Core program options alone (i.e. without using software or router bandwidth monitors), which is limiting the peak rate during IBD. Bitcoin Core will still see your node use up its maxuploadtarget quota as quickly as it can before limiting itself.
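
    As a rough illustration of how several of these fit together, a low-bandwidth bitcoin.conf might look something like the following (the specific values are placeholders to tune, and as noted, blocksonly trades away the compact-block spike reduction):

    # cap total upload at ~500 MiB per 24h (historical blocks stop being served once hit)
    maxuploadtarget=500
    # allow fewer simultaneous peers than the default
    maxconnections=8
    # disable transaction relay (~80% less total bandwidth, but no compact-block benefit)
    blocksonly=1
    # optionally refuse inbound connections entirely
    #listen=0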

  139. maflcko commented at 2:25 pm on October 28, 2022: member

    Thanks for the summary, going to close for those reasons.

    Just some nits:

    • -blocksonly=1 is incompatible with compact blocks, so it reduces the average usage, but not spikes
    • the trickle software is likely unmaintained and broken
    • There is also the setnetworkactive RPC to completely shut down or restart the connection manager, as in the example below
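
    A quick sketch of that RPC via bitcoin-cli (assuming a running node with RPC access):

    # stop all p2p networking without shutting the node down
    bitcoin-cli setnetworkactive false
    # re-enable networking later
    bitcoin-cli setnetworkactive true
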
  140. maflcko closed this on Oct 28, 2022

  141. willcl-ark commented at 3:20 pm on October 28, 2022: contributor
    Thanks Marco, I think I had that info re. blocksonly in the text, but it’s not clear from the table :)
  142. bitcoin locked this on Oct 28, 2023
