Introduce -maxuploadtarget #6622

pull jonasschnelli wants to merge 2 commits into bitcoin:master from jonasschnelli:2015/09/maxuploadtarget changing 7 files +394 −1
  1. jonasschnelli commented at 7:39 pm on September 2, 2015: contributor

    This is the first PR in a planned series of bandwidth / DoS prevention PRs. The focus for now is a simple, not over-complex solution to start with.

    The -maxuploadtarget (in MiB) is a configuration value that makes bitcoind try to limit total outbound traffic. Currently there is no guarantee that the target will not be exceeded.

    If the target-in-bytes - (time-left-in-24h-cycle) / 600 * MAX_BLOCK_SIZE is reached, stop serving blocks older than one week and immediately stop serving filtered blocks (SPV).

    The timeframe for the measurement is currently fixed to 24h.
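
    The check described above can be sketched as a small standalone function. This is an illustrative sketch only – the function name and the 1 MB MAX_BLOCK_SIZE constant are assumptions for this example, not the exact code from the PR:

```cpp
#include <cstdint>

// Illustrative constants: pre-segwit 1 MB block size, 10-minute block interval.
static const uint64_t MAX_BLOCK_SIZE = 1000000;

// True once the remaining budget no longer covers relaying one full block
// per expected 10-minute (600 s) interval for the rest of the 24 h cycle,
// i.e. once target - (time_left / 600) * MAX_BLOCK_SIZE has been used up.
bool HistoricalServingLimitReached(uint64_t targetBytes,       // -maxuploadtarget, in bytes
                                   uint64_t bytesSentInCycle,  // outbound bytes sent so far
                                   uint64_t timeLeftInCycle)   // seconds left in the 24 h window
{
    if (targetBytes == 0)
        return false; // 0 = no limit
    uint64_t buffer = timeLeftInCycle / 600 * MAX_BLOCK_SIZE;
    if (buffer >= targetBytes)
        return true;  // target too small to ever leave room for historical blocks
    return bytesSentInCycle >= targetBytes - buffer;
}
```

    With a 300 MB target, a freshly started cycle reserves 144 MB for new-block relay, so historical serving stops once roughly 156 MB have been sent.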

    This is an effective method of reducing traffic and might prevent node operators from getting expensive “traffic exceeded” bills.

    Currently the limit also takes effect for whitebind peers.

    Via getnettotals one can get some upload-target statistics:

     ./src/bitcoin-cli getnettotals
     {
       "totalbytesrecv": 0,
       "totalbytessent": 0,
       "timemillis": 1441222000173,
       "uploadtarget": {
         "timeframe": 86400,
         "target": 300000000,
         "target_reached": false,
         "serve_historical_blocks": true,
         "bytes_left_in_cycle": 300000000,
         "time_left_in_cycle": 86400
       }
     }
    

    Needs documentation; needs unit/RPC tests.

  2. jonasschnelli force-pushed on Sep 2, 2015
  3. jonasschnelli force-pushed on Sep 2, 2015
  4. jonasschnelli force-pushed on Sep 2, 2015
  5. casey commented at 2:47 pm on September 3, 2015: contributor

    -maxuploadtarget is in MB (1000000), not MiB (1048576). It’s not currently documented, but noting this in case it does get documented.

    If a user wants to control the total amount transferred, then I don’t think MB per day is a good unit to use. If they have a transfer cap, then it is most likely tied to a monthly billing cycle, so MB/month is likely better. (For example, I think Comcast in the US has a 250GB/month cap, so a Comcast customer might like to allow 100GB/month, or something like that.)

  6. MarcoFalke commented at 2:54 pm on September 3, 2015: member

    MB/month is likely better

    Which could result in all of the 100GB being eaten up on the first few days of the timeframe. Maybe the owner restarts the server mid-month and then serves 200GB of historic blocks that month.

  7. laanwj commented at 2:57 pm on September 3, 2015: member

    Which could result in all of the 100GB being eaten up on the first few days of the timeframe. Maybe the owner restarts the server mid-month and then serves 200GB of historic blocks that month.

    Once combined with connection throttling, which would make the speed that the 100GB can be ’eaten up’ configurable, (which @cfields is working on ) it’s less of an issue.

  8. laanwj added the label P2P on Sep 3, 2015
  9. jonasschnelli commented at 3:09 pm on September 3, 2015: contributor

    -maxuploadtarget is in MB (1000000), not MiB (1048576). It’s not currently documented, but noting this in case it does get documented. @casey Thanks. Will fix it.

    Not sure about a limit per month or per day. Per month can be difficult because not every month has the same number of days (28/29/30/31). Also it’s unclear how one could get “in sync” with the provider’s monthly rhythm (it need not run from the first to the last day of the month). Though, it would still be possible to keep a daily measurement cycle (break the monthly limit down into days, keeping the focus on a daily limit).

    For now I think keeping the daily limit makes things easier and more controllable.

  10. luke-jr commented at 3:48 pm on September 3, 2015: member
    I need some kind of goal set in kB/second, as I am often forced to shut down my node for reliable phone calls. :(
  11. ghost commented at 0:02 am on September 4, 2015: none
    Agree with @luke-jr here, we think about available bandwidth in terms of kB/sec rather than MB/day because that’s how it’s always reported to us - adverts from ISPs, speed checking websites, P2P torrenting apps etc.
  12. jonasschnelli force-pushed on Sep 4, 2015
  13. jonasschnelli force-pushed on Sep 4, 2015
  14. jonasschnelli commented at 7:20 am on September 4, 2015: contributor

    @NanoAkron: not sure whether @luke-jr is really arguing for a kB/sec bandwidth limit. What is the most annoying thing that can happen when you download something over torrent? Probably if you are connected to and download from a throttled peer.

    Bandwidth-throttled peers are not something we should encourage within bitcoin-core. If a node operator likes to limit bandwidth, he can use different tools. What we really should do is reduce bandwidth without impairing network quality and speed. My stats showed me that most traffic is outbound, and most of the outbound traffic is consumed by serving historical blocks (helping other nodes do an initial sync of the chain).

    This PR can reduce bandwidth significantly while not impairing network quality. An initial sync for a new peer is not something that is very time-critical.

    There was a discussion on #bitcoin-dev about -maxuploadtarget (gmaxwell, wumpus, me): http://bitcoinstats.com/irc/bitcoin-dev/logs/2015/09/02#l1441180902.0

  15. MarcoFalke commented at 8:47 am on September 4, 2015: member

    download from a throttled [torrent] peer

    I don’t think you can compare torrent to bitcoin-core serving blocks… Using torrent you are happy to get any transmission speed because it is a parallel download anyway.

    On the contrary, bitcoin-core does a moving-window download (cf. #4468). So you always want to fetch blocks from peers that can upload to you not much slower than your download speed?

    This PR makes the full node disconnect from initial-download peers instead of serving them at lower speed, which sounds like the better solution to me.

  16. jonasschnelli commented at 9:05 am on September 4, 2015: contributor

    This PR makes the full node disconnect from initial-download peers instead of serving them at lower speed, which sounds like the better solution to me.

    Two questions:

    • What is the benefit for a peer in initial-sync state of being served with throttled bandwidth (let’s assume 10 kB/s) compared to a disconnect?
    • What is the benefit for a bandwidth-limited peer of serving historical blocks with limited bandwidth rather than disconnecting?
  17. sipa commented at 9:12 am on September 4, 2015: member
    Since 0.10 it does not really hurt much anymore to be served by a throttled peer, as the download algorithm will download in parallel from many, and if one is significantly slower than others, disconnect it itself.
  18. MarcoFalke commented at 9:20 am on September 4, 2015: member
    @jonasschnelli To answer your two questions: No, I don’t see a benefit. My wording was confusing, so to be clear: I am supporting this PR/commit. (Just as @sipa said, throttling won’t help as it results in disconnect anyway)
  19. luke-jr commented at 12:30 pm on September 4, 2015: member
    Having an “instant” bandwidth limit to work with can enable Core to make intelligent-ish decisions, like not inv’ing/uploading new blocks to more than one peer at a time unless the current-upload peer is downloading at a slow rate.
  20. ghost commented at 2:27 pm on September 5, 2015: none

    Surely we need some sort of scaling to our bandwidth limitations/throttling here - serving historic blocks to unsynced peers could ’trickle-feed’ in the background at a few 10s of kB/sec, and anything within the last 60 blocks or so could then be served with greater priority.

    This would need the unsynced peer to know with certainty the true current block height so that it didn’t drop slow inbound connections, because this could be a new attack vector.

  21. gmaxwell commented at 7:25 am on September 6, 2015: contributor

    @casey of course, any longer window cap can be divided down. Breaking it up into smaller groups has a side effect of allowing us more freedom to ignore the resource scheduling issue.

    Imagine that a low monthly cap were widely deployed on the network. Then the whole network might blow through its cap in a few days at the start of the month, and be unwilling to serve blocks for the rest of the month! The daily limit (especially with the arbitrary start point) is less of an issue.

    @luke-jr Your phone call issue is largely orthogonal to what this is addressing. Good behavior with respect to bufferbloat and general rate restriction is another matter from overall usage. Those need to be addressed too, but in other PRs. :)

  22. gmaxwell commented at 8:10 am on September 7, 2015: contributor
    With the above mentioned comparison bug fixed, I can confirm that the basic functionality works once the limit is crossed: new blocks are relayed without issue, old blocks result in disconnect.
  23. jonasschnelli force-pushed on Sep 7, 2015
  24. jonasschnelli force-pushed on Sep 7, 2015
  25. jonasschnelli force-pushed on Sep 7, 2015
  26. jonasschnelli commented at 3:47 pm on September 7, 2015: contributor
    @gmaxwell: Thanks for the review! Updated the time check (block older than a week) and did some testing. Seems to work like this.
  27. in src/init.cpp: in 32f685b7d5 outdated
    334@@ -335,6 +335,7 @@ std::string HelpMessage(HelpMessageMode mode)
    335     strUsage += HelpMessageOpt("-whitebind=<addr>", _("Bind to given address and whitelist peers connecting to it. Use [host]:port notation for IPv6"));
    336     strUsage += HelpMessageOpt("-whitelist=<netmask>", _("Whitelist peers connecting from the given netmask or IP address. Can be specified multiple times.") +
    337         " " + _("Whitelisted peers cannot be DoS banned and their transactions are always relayed, even if they are already in the mempool, useful e.g. for a gateway"));
    338+    strUsage += HelpMessageOpt("-maxuploadtarget=<n>", strprintf(_("Tries to keep outbound traffic under the given target, 0 = no limit (default: %d)"), 0));
    


    sipa commented at 5:36 pm on September 7, 2015:
    Specify units (bytes? megabytes?) and window size (seconds, days?)
  28. gmaxwell commented at 6:40 pm on September 7, 2015: contributor
    Might want to add a test so that it won’t drop the last connection? I’m not sure, but imagine e.g. someone using -connect and having several peers down already – then again, is the peer really useful if it’s fetching old blocks?
  29. jonasschnelli commented at 8:26 pm on September 7, 2015: contributor

    Not sure how useful a peer with height < now-1week is. But it could catch up and then be helpful. Though, I think we should respect the upload target in the first place.

    Excluding -whitebind looks more important to me (could be implemented in a follow up PR).

  30. gmaxwell commented at 11:15 pm on September 7, 2015: contributor
    With respect to forcing a minimum on the argument. Perhaps it should just whine in the logs if you’ve asked for a value that is too small? One consequence of the current setup is that every restart forces another 144MB of history transfer. (I removed that code in my own copy because it makes the software much easier to test.)
  31. jonasschnelli force-pushed on Sep 8, 2015
  32. jonasschnelli force-pushed on Sep 8, 2015
  33. jonasschnelli commented at 12:42 pm on September 8, 2015: contributor
    • Fixed @sipa’s nit (mention MiB as unit per 24h cycle).
    • Dropped forcing a minimum target of 24 hours of blocks * MAX_BLOCK_SIZE; only warn in such a case.
  34. in src/rpcnet.cpp: in 93f3de1953 outdated
    385+    outboundLimit.push_back(Pair("target_reached", CNode::OutboundTargetReached(false)));
    386+    outboundLimit.push_back(Pair("serve_historical_blocks", !CNode::OutboundTargetReached(true)));
    387+    outboundLimit.push_back(Pair("bytes_left_in_cycle", CNode::GetOutboundTargetBytesLeft()));
    388+    outboundLimit.push_back(Pair("time_left_in_cycle", CNode::GetMaxOutboundTimeLeftInCycle()));
    389+    obj.push_back(Pair("uploadtarget", outboundLimit));
    390     return obj;
    


    jgarzik commented at 11:21 pm on September 15, 2015:

    UniValue nit: I wonder if we should add new code with Pair(). The ideal is to eliminate Pair() usage and simply use object.pushKV().

    Maybe yes (don’t use an obsolete interface), maybe no (stay consistent with surrounding code).


    jonasschnelli commented at 12:22 pm on September 17, 2015:
    Agreed. A KV push method for UniValue would be nice. Out of scope for this PR, and probably a relatively big changeset if all current push_back(Pair()) calls were converted.

    jgarzik commented at 12:44 pm on September 17, 2015:

    pushKV method already exists.

    Agreed it is out of scope for this PR, and a huge cleanup in terms of LOC to remove Pair() usage tree-wide.

  35. jgarzik commented at 11:21 pm on September 15, 2015: contributor
    concept and quick code review ACK
  36. dcousens commented at 8:28 am on September 16, 2015: contributor
    utACK and ACK if tests are added
  37. jonasschnelli force-pushed on Sep 18, 2015
  38. jonasschnelli commented at 1:49 pm on September 18, 2015: contributor

    Rebased. I have tried (for several hours) to write an RPC test for this feature, but didn’t succeed. A solution would be to make the timeframe (currently a static 24h) configurable and move the static timeframe that determines whether a block is historical (static 1 week) to chainparams. But somehow this looks like a bad solution. Playing around with setmocktime also didn’t help (it seems not to affect getBlockTime()).

    If anyone has an approach for how to test this automatically: speak up.

  39. sdaftuar commented at 2:01 pm on September 18, 2015: member
    @jonasschnelli I haven’t reviewed this code carefully but from a quick glance, I think I can write up a test in the python p2p framework – it looks to me like we can use setmocktime to trigger clearing the 24 hour measurement windows and we can send getdata messages over and over to test what happens when the limit is hit (and we can construct blocks with varying timestamps to exercise the 1 week limit). I’ll see if I can put something together and report back…
  40. jonasschnelli force-pushed on Sep 18, 2015
  41. jonasschnelli commented at 3:02 pm on September 18, 2015: contributor
    @sdaftuar: Great! Thanks for having a look at the tests.
  42. sdaftuar commented at 8:14 pm on September 18, 2015: member
  43. jonasschnelli commented at 3:01 pm on September 19, 2015: contributor
    @sdaftuar Nice! Looks good. I did run into some errors. Will extend the test and add some getnettotals calls (during the next week).
  44. mikehearn commented at 8:24 pm on September 19, 2015: contributor

    If the target-in-bytes - (time-left-in-24h-cycle) / 600 * MAX_BLOCK_SIZE is reached, stop serving blocks older than one week and immediately stop serving filtered blocks (SPV).

    What?

    You start out by saying nearly all bandwidth usage is serving historical blocks to full nodes. You then write code that limits SPV clients, which don’t request full blocks. In fact the whole point of this mode is to NOT download full blocks.

    How does this PR make any sense at all?

  45. jonasschnelli commented at 8:01 am on September 20, 2015: contributor

    I agree that limiting filtered blocks is not very important for upload bandwidth limiting. My thought was: if one sets a -maxuploadtarget, which commands can be dropped without harming the p2p network? Uploading historical blocks, and disconnecting SPV nodes. It could make sense to remove the part where maxuploadtarget also limits filtered blocks.

    I hope I can generate some statistics soon that show the bandwidth consumption of SPV.

  46. gmaxwell commented at 8:15 am on September 20, 2015: contributor

    When a node has exhausted its capacity it should stop doing anything to use more capacity beyond keeping itself minimally participating. I don’t think there is anything unreasonable about that.

    I think a case could be made for non-historic filtered results– as at least no worse than serving to other peers, though really if a node is out of outbound capacity it probably shouldn’t have any clients at all, and should instead be shunting them off onto hosts that do have capacity left.

  47. jonasschnelli commented at 8:20 am on September 20, 2015: contributor

    Agree with @gmaxwell. I think we should consider that people shut down nodes because nodes create uncontrollably high amounts of outbound traffic. If we can keep nodes alive – even if they don’t serve historical and filtered blocks for a limited timeframe – that’s much better than seeing nodes get shut down because of missing traffic-limiting options.

    Not saying that this is the ultimate solution. But I think it’s effective and can be merged without taking high risks.

  48. mikehearn commented at 9:40 am on September 20, 2015: contributor

    I’d think you want the opposite - if a node literally exhausted a transfer quota, it needs to stop serving any data at all, stop listening and start using SPV mode itself.

    As is, you will continue to upload at least 288mb of data per day even if you stop serving historical blocks entirely.

    But if quota caps are the issue (as opposed to steady state limiting) then you should just make the node shut itself down and stop serving RPCs if it runs out of bandwidth. Otherwise the cap doesn’t mean much…. people will set it expecting it to work and then be surprised when Core continues using up their (now presumably very expensive) over-quota.

  49. mikehearn commented at 9:43 am on September 20, 2015: contributor
    Oh, and by the way, the idea that disconnecting SPV wallets doesn’t harm the P2P network is a pretty dangerous one. Those clients make up the bulk of all connects and users. Protecting them is very important!
  50. gmaxwell commented at 9:46 am on September 20, 2015: contributor
    It goes into limp mode early enough that it shouldn’t need to shut down completely but will still usually avoid going over; this is obviously a very early first step and isn’t perfect yet. But it is already very useful and can be incrementally improved.
  51. jonasschnelli commented at 3:44 pm on September 20, 2015: contributor

    […] As is, you will continue to upload at least 288mb of data per day even if you stop serving historical blocks entirely.

    Not exactly. After each 24h cycle your counter is reset. The minimum buffer reserved for serving new blocks is 144 blocks * MAX_BLOCK_SIZE. But if a node starts requesting blocks 12h into your 24h cycle, only a minimum of 72 MB is left. Also: not all blocks are full.

    As said. It’s an easy and effective solution that allows progressive improvements.

    We should not mix up SPV with BF. It’s only because most SPV clients use a low BF false-positive rate that we see low SPV traffic. Also, SPV clients do not contribute to p2p network health.

    I agree that SPV clients are important! That’s why I’d like to see bandwidth limiting without throttling instead of operators shutting down valuable nodes.
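
    The 144-block / 72 MB figures above can be checked with a couple of lines. This is an illustrative sketch; the function name is made up for this example, and it assumes the pre-segwit 1 MB block size and the fixed 24 h cycle:

```cpp
#include <cstdint>

static const uint64_t MAX_BLOCK_SIZE = 1000000; // bytes, pre-segwit limit
static const uint64_t CYCLE = 86400;            // seconds in the 24 h window

// Bytes reserved for relaying new blocks: one full block per expected
// 10-minute interval for the remainder of the cycle.
uint64_t BlockRelayReserve(uint64_t secondsIntoCycle)
{
    return (CYCLE - secondsIntoCycle) / 600 * MAX_BLOCK_SIZE;
}
```

    At the start of a cycle the reserve is 144 blocks (144 MB); halfway through, 72 blocks (72 MB), matching the numbers above.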

  52. mikehearn commented at 10:35 am on September 21, 2015: contributor

    OK, so firstly, do we agree on this statement

    This patch solves the problem of users with capped transfer quotas, probably measured in gigabytes per month. The other patch we’re discussing on the XT repository solves the problem of interference with consumer services at home due to saturating the limited uplink bandwidth.

    Because it seems to me that these are two independent problems that are both being called ’throttling’, and the two approaches can actually be combined, or rather, would not interfere if both were implemented as-is.

    Tor has a hibernate mode that this sounds very similar to. The way it works is essentially shutting down your node when you go over the transfer quota. Similarly, this patch is equivalent to shutting down part of the node when quota is exhausted, i.e. the part that provides useful services to other peers.

    It might be worth separating the two terms. We could call this one hibernation, and the other patch throttling, as that’d align terminology with Tor and “throttling” in network engineering normally means restricting the instantaneous bandwidth usage rather than disconnecting after reaching a daily quota.

  53. jonasschnelli commented at 11:34 am on September 21, 2015: contributor

    Real “throttling” (reducing the bandwidth to a static value) regardless of which node and which command/service is being called is bad and ineffective IMO. If you are connected to a throttled Tor node or a throttled node in a BitTorrent network, it just results in a bad overall user experience.

    Bitcoin p2p is different from Tor and BitTorrent. It does not serve or consume uncontrollable and ungroupable/hidden data streams (Tor), nor does it serve high-bandwidth data streams over a longer period of time (torrent).

    Carefully and logically rejecting services when a certain level of consumption is reached is a much better approach than just throttling Bitcoin’s overall network activity. The latter can already be done with some network wrappers/tools, and I strongly recommend against adding an overall “dumb” network bandwidth limiter to Core (or XT).

    Because SPV clients can produce uncontrollable amounts of traffic (BF FPR) without any benefit to the p2p network (they don’t pass data to other nodes, they just consume), it makes sense to stop that service if network capacity gets critical.

    IMO: the “throttling”/“capping”/“limiting” topic is very important in terms of “scaling bitcoin” and it should get the required attention.

  54. mikehearn commented at 11:44 am on September 21, 2015: contributor

    Hm, so we don’t agree on that statement then?

    When a peer is uploading the block chain to freshly installed nodes/nodes that were offline for a long time, it does look quite a bit like BitTorrent ….. heavy upload traffic that goes as fast as possible for a long period of time.

    The goal of the bandwidth throttler is not to have huge swings in available serving bandwidth, but rather to avoid delaying/blocking the TCP ACKs being used by other services like Netflix and causing user visible disruption. Ideally, a user would contribute most of their uplink if they aren’t actually using it (which is most people most of the time), so the difference between unthrottled and throttled wouldn’t actually be very large.

    As streaming video can be disrupted by any kind of upload, the most important criterion is that the throttle actually works. How a peer schedules bandwidth users into that limited window is a different problem entirely.

  55. jonasschnelli commented at 11:53 am on September 21, 2015: contributor

    When a peer is uploading the block chain to freshly installed nodes/nodes that were offline for a long time, it does look quite a bit like BitTorrent ….. heavy upload traffic that goes as fast as possible for a long period of time.

    Agreed that IBD serving is similar to torrent. This is the reason why this PR addresses that part once the upload target has been reached.

    The rest of your points are QoS topics. IMO this is a router job (or a lower-level OS network-stack thing). Reducing bandwidth via a throttling feature would reduce bandwidth regardless of what else is going on on the system. It would also reduce bandwidth – which would otherwise be available – while the user is creating a backup or writing a letter in Word.

  56. mikehearn commented at 1:12 pm on September 21, 2015: contributor
    Well, the issue is people want to configure it at the app level as many consumer wifi routers don’t offer the right features, or people don’t know how to use them.
  57. in src/net.cpp: in 30922c33bf outdated
    2150+        uint64_t timeLeftInCycle = GetMaxOutboundTimeLeftInCycle();
    2151+        uint64_t buffer = timeLeftInCycle / 600 * MAX_BLOCK_SIZE;
    2152+        if (buffer >= nMaxOutboundLimit || nMaxOutboundTotalBytesSentInCycle >= nMaxOutboundLimit - buffer)
    2153+        {
    2154+            return true;
    2155+        }
    


    btcdrak commented at 10:19 am on October 21, 2015:
    Braces not required.
  58. btcdrak commented at 1:38 pm on October 21, 2015: contributor
    Looks good to me. The OP said there is some documentation + tests to complete.
  59. laanwj commented at 2:48 pm on October 21, 2015: member
    utACK
  60. in src/net.cpp: in 30922c33bf outdated
    2095+    LOCK(cs_totalBytesSent);
    2096+    uint64_t recommendedMinimum = (nMaxOutboundTimeframe / 600) * MAX_BLOCK_SIZE;
    2097+    nMaxOutboundLimit = limit;
    2098+
    2099+    if (limit < recommendedMinimum)
    2100+        LogPrintf("Max outbound target very small (%s) and are very unlikely to be reached, recommended minimum is %s\n", nMaxOutboundLimit, recommendedMinimum);
    


    laanwj commented at 9:20 am on October 22, 2015:
    I don’t understand this message. The target is too small, thus unlikely to be reached? Isn’t it the other way around?

    jonasschnelli commented at 9:25 am on October 22, 2015:
    What about just using "Warning: max outbound target is very small, recommended minimum is %s"?

    MarcoFalke commented at 11:54 am on October 22, 2015:
    @jonasschnelli I think you can still print nMaxOutboundLimit, just remove or clarify the unlikely to be reached as no one will understand this as “likely to overshoot”.

    gmaxwell commented at 3:09 am on October 26, 2015:
    “Unlikely to be reached” sounds very much like it will use less bandwidth than specified; so I think this should be changed to “will be overshot” or “very likely to be exceeded” or similar.
  61. gmaxwell commented at 3:10 am on October 26, 2015: contributor
    @jonasschnelli Interest in performing a squash+rebase+message nits? I’d like to move to merge this.
  62. laanwj commented at 12:13 pm on October 26, 2015: member
    Agree @gmaxwell
  63. jonasschnelli commented at 12:17 pm on October 26, 2015: contributor
    I’m currently rebasing and fixing @sdaftuar’s RPC test. I plan to have everything fixed by tomorrow.
  64. jonasschnelli force-pushed on Oct 26, 2015
  65. Introduce -maxuploadtarget
    * -maxuploadtarget can be set in MiB
    * if <limit> - ( time-left-in-24h-cycle / 600 * MAX_BLOCK_SIZE ) has been reached, stop serving blocks older than one week and filtered blocks
    * no action if the limit has been reached; no guarantee that the target will not be surpassed
    * add outbound limit information to rpc getnettotals
    872fee3fcc
  66. jonasschnelli force-pushed on Oct 26, 2015
  67. Add RPC test for -maxuploadtarget 17a073ae06
  68. jonasschnelli force-pushed on Oct 26, 2015
  69. jonasschnelli commented at 4:01 pm on October 26, 2015: contributor
    Rebased. Added @sdaftuar rpc test. Addressed nits. Passes travis. Ready for merge.
  70. laanwj merged this on Oct 26, 2015
  71. laanwj closed this on Oct 26, 2015

  72. laanwj referenced this in commit 7939164d89 on Oct 26, 2015
  73. in src/rpcnet.cpp: in 17a073ae06
    385+    outboundLimit.push_back(Pair("target", CNode::GetMaxOutboundTarget()));
    386+    outboundLimit.push_back(Pair("target_reached", CNode::OutboundTargetReached(false)));
    387+    outboundLimit.push_back(Pair("serve_historical_blocks", !CNode::OutboundTargetReached(true)));
    388+    outboundLimit.push_back(Pair("bytes_left_in_cycle", CNode::GetOutboundTargetBytesLeft()));
    389+    outboundLimit.push_back(Pair("time_left_in_cycle", CNode::GetMaxOutboundTimeLeftInCycle()));
    390+    obj.push_back(Pair("uploadtarget", outboundLimit));
    


    MarcoFalke commented at 8:11 am on November 6, 2015:
    You’d need to update help getnettotals as well.
  74. zkbot referenced this in commit fd0d435f72 on Feb 18, 2021
  75. zkbot referenced this in commit 1d378b1eb0 on Feb 18, 2021
  76. MarcoFalke locked this on Sep 8, 2021


This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-11-17 12:12 UTC
