Add a -maxoutbound option for use on pool servers to increase bitcoin network connectivity. #6014

pull jameshilliard wants to merge 2 commits into bitcoin:master from jameshilliard:options-maxoutboundconnections changing 3 files +30 −4
  1. jameshilliard commented at 1:19 am on April 15, 2015: contributor

    I’ve brought a previous pull request (#4687) up to date and defined a new use case for it.

    This option is useful for mining pools that want faster block propagation.
    Outbound connections will statistically have better overall network connectivity
    than incoming connections, because most nodes that accept these
    outbound connections support a maximum of 125 connections.
    Inbound connections are often NAT restricted and will then only support 8
    connections. These NAT-restricted connections are less desirable for pools.
    This can be demonstrated using the following example of a pool with 125 connections.
    The example assumes that inbound connections are NAT restricted and
    limited to 8 connections, while outbound connections go to nodes with the
    default connection limit of 125.

    Connection capacity within one hop of the pool with default settings:
    (8 outbound connections * 125 connections per node) +
    (117 incoming connections * 8 connections per node) =
    1936 connections within one hop of the pool server

    Connection capacity within one hop of the pool with -maxoutbound=125:
    (125 outbound connections * 125 connections per node) =
    15625 connections within one hop of the pool server

    Based on these calculations, a pool server can have about 8 times the
    overall network connectivity within one hop while establishing
    the same number of connections itself. In reality it will likely be even more,
    because the bitcoin network has far more available connection slots on nodes
    than there are nodes trying to connect to those slots.
    This setting should only be used by pool operators.
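    The arithmetic above can be sketched in a few lines (a hypothetical helper; the per-node limits of 8 and 125 are the example's stated assumptions, not measured values):

```python
# One-hop connection capacity of the pool server under the example's
# assumptions: NAT-restricted inbound peers allow 8 connections each,
# outbound peers allow the default maximum of 125 each.

def one_hop_capacity(outbound, inbound, per_out=125, per_in=8):
    """Connections reachable within one hop of the pool server."""
    return outbound * per_out + inbound * per_in

default = one_hop_capacity(outbound=8, inbound=117)   # 8*125 + 117*8 = 1936
maxout = one_hop_capacity(outbound=125, inbound=0)    # 125*125 = 15625
print(default, maxout, round(maxout / default, 1))    # 1936 15625 8.1
```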
    
  2. jameshilliard force-pushed on Apr 15, 2015
  3. Add new command line option '-maxoutbound'
    While there is a command line option to limit the total number of
    connections ('-maxconnections'), the number of outbound connections is
    controlled by a hard-coded constant 'MAX_OUTBOUND_CONNECTIONS=8'.
    This number (8) of connections has a bad impact on user's privacy. Let's
    keep the default number of outbound connections (of 8) but allow a user
    to have more privacy by reducing this number (to 3 or 4) using
    '-maxoutbound'.
    
    Explanation: transactions that are first relayed by these 8 entry nodes
    most probably belong to the same user. In fact, even a subset of these 8
    entry nodes can uniquely identify a user. There is a cheap way for an
    attacker to learn this set of entry nodes and the user's public IP and
    use it to deanonymize the user (note that users by default advertise
    their public IP addresses even when behind NAT).  If the user has 3-4
    outbound connection the success rate of the attack becomes quite low.
    
    Conflicts:
    	src/init.cpp
    e1756564a4
  4. jameshilliard force-pushed on Apr 15, 2015
  5. jameshilliard force-pushed on Apr 15, 2015
  6. The option -maxoutbound should only be used by mining pools in order to increase their overall network connectivity. By using higher-quality outbound connections instead of inbound connections, pools can propagate blocks faster and reduce orphans. 502e2da0da
  7. jameshilliard force-pushed on Apr 15, 2015
  8. sipa commented at 4:40 am on April 15, 2015: member

    Have you measured that this improves your propagation time more than just connecting to a few fast peers? By the time you have sent out your block to 100 well-connected peers, the last ones to get it likely could already have gotten it from each other faster than from you, since more fast peers means you suddenly need to do a lot more work to keep up with the sudden burst of network packets needed to broadcast the block.

    In any case, I do not think this serves the best interests of users of the software. We have had several cases of people thinking that more connections means faster confirmations of their transactions, or “better validated” blocks.

    In the past we have also seen problems with the network running out of connectable slots, so we have actively avoided options for people to overwhelm the network with connections. I’m very aware an attacker could do this anyhow, but this is more about preventing people from hurting the network while incorrectly believing it benefits them (for non-miners, a single connection to a non-compromised node is as good as any higher number).

    I believe this equally would not actually benefit you.

  9. jameshilliard commented at 5:13 am on April 15, 2015: contributor

    Pool servers typically are run from datacenter grade gigabit connections or better so I don’t think bandwidth is that big of an issue most of the time. My pool server with nearly 6000 active miner connections is normally running at under 5% CPU usage total for all applications at nearly any time, this includes bitcoind, stratum server, database, web front end and relay node. I don’t really see what type of resource bottleneck I would be running into.

    Unless a pool server is expected to run with 8 or fewer connections total, I think that increasing the max outbound to match the max connections would help ensure that the pool is only connecting to well-connected nodes rather than nodes behind NAT that would propagate more slowly (more hops required). The other issue is that even when allowing inbound connections it is difficult to reach the default max of 125. I haven’t been able to test this myself, but I don’t see why this change wouldn’t help. When starting up a pool server it takes a long time to get more connections than the initial 8.

    I agree this option could cause problems if too many people enabled it, but I think we need a better solution than just hard-coding the outbound connection count to 8 (I think hard-coding arbitrary variables is a bad idea). Do you have any ideas on how one might make this option hard to enable for people not running pools? I don’t want to maintain a fork of bitcoin just because someone decided to hard-code one variable. Maybe force the user to add “-isweariamactuallyrunningaminingpool” to their config in order to enable this option.

  10. laanwj commented at 8:08 am on April 15, 2015: member

    Making it easily possible to increase the default outgoing connections above 8 would put undue load on the network. See the discussion in #4687 as well as many previous discussions.

    If you want lots of connections, raise the maximum number of incoming connections instead. I have no trouble on my longer running nodes to get lots of incoming connections. “NAT restricted” doesn’t change much, I’d hope that the larger pool servers won’t be running on someone’s home network (but even on my home network the router’s port forwarding doesn’t seem to impose a low connection limit).

  11. laanwj added the label P2P on Apr 15, 2015
  12. jameshilliard commented at 8:35 am on April 15, 2015: contributor

    Would it really cause an issue if it was only pool servers that were increasing this limit? It seems this option can be beneficial to pool operators, but it’s not being considered because nobody trusts the users. I’m not a fan of crippling software because someone might abuse a feature. The comment in that discussion is only talking about regular users and regular nodes and doesn’t seem to apply to this use case.

    From what I’ve been reading, it’s been historically common for pool operators to override this option in order to improve connectivity. I’m also wondering who came up with 8 as the default for outbound connections and 125 as the default max connection limit? 8 and 125 are pretty specific numbers, and it would be good to know how people arrived at them.

    One other thing I’m wondering is why bitcoind can’t just establish thousands of connections like other p2p protocols can? Is the core networking code just in really bad shape or is it something else?

  13. sipa commented at 10:14 am on April 15, 2015: member
    bitcoind’s network code can deal with around 1000 connections, due to the file descriptor limits of select().
  14. laanwj commented at 11:43 am on April 15, 2015: member

    The reason for the default restriction of 125 is to

    • cope with shitty routers that crash with lots of connections
    • bound the memory usage; every active connection will have send/receive buffers and some other associated data

    If these are not concerns for you, you can increase the number of incoming connections up to the select limit (as @sipa mentions).

  15. laanwj commented at 11:46 am on April 15, 2015: member

    Anyhow, if you trust yourself enough to not abuse this feature you can remove the cap in your local builds. This is open source and there’s nothing ‘we’ can do to prevent you from that (although there are some ideas with regard to banning mass connectors*) . But this can not be merged unless you cap the maximum to 8.

    *: not because of you, but the other, malicious use case for mass connecting are ’listening nodes’ that try to subvert privacy by correlation of transaction origins (see #3828)

  16. jameshilliard commented at 10:35 pm on April 15, 2015: contributor

    @sipa it is common for pool servers and other servers with lots of connections to override the OS file descriptor limitation, maybe moving to something libevent based for networking would help scalability. @laanwj so that more or less explains why 125 is set as the default for inbound but I have yet to hear where the number 8 comes from and why it is hard coded.

    One other use case may be for pools that are NAT restricted for reasons outside of their control such as being on farm sites where only wireless internet is available in countries with unreliable backbones. If a pool gets NAT restricted then it gets stuck at the 8 outbound connection limitation which seems far less than ideal. In addition during start-up incoming connections can take a very long time to get anywhere close to 125 even on pools without NAT restrictions.

  17. TheBlueMatt commented at 11:10 pm on April 15, 2015: member
    There was a long discussion on IRC yesterday about why it’s probably bad even for a pool server to increase the limit significantly. I’m assuming that was you? If not, you should go read it. http://bitcoinstats.com/irc/bitcoin-dev/logs/2015/04/14#l1429040973
  18. jameshilliard commented at 11:14 pm on April 15, 2015: contributor
    @TheBlueMatt Yes that was me, how would you go about calculating the ideal number of connections a pool server should make in cases where manually establishing connections to trusted nodes is not possible?
  19. TheBlueMatt commented at 11:19 pm on April 15, 2015: member
    A pool server should run its mining bitcoind behind a firewall with only connections to other bitcoinds, which are making 8 outgoing connections and accepting other connections, as well as a connection to the relay network (and maybe other similar networks…if someone set them up). But, really, this is not the place to discuss pool architecture.
  20. jameshilliard commented at 0:12 am on April 16, 2015: contributor

    @TheBlueMatt In areas with poor internet connectivity such as China, where mining sites and operators aren’t able to reliably set up their own offsite nodes, I think a better solution such as this is often needed. The outbound connection limit of 8 just seems very arbitrary. Your suggestion of running multiple nodes seems to itself be a workaround for getting more than 8 outbound connections to the bitcoin network, since it’s basically running more nodes as a way around bitcoind’s arbitrary connection limitation.

    For example I don’t see how running 5 nodes with the default limitations of 8 outbound connections each and 125 inbound connections each for a total of 40 outbound and 625 inbound connections would be all that different from just running a single node with 40 outbound and 625 inbound connections allowed. The current bitcoind design encourages people to run a whole lot of small nodes on low resource systems instead of using larger more powerful systems that could perform much better by providing a high performance backbone for the bitcoin network.

  21. gmaxwell commented at 11:38 pm on April 16, 2015: contributor

    NAK.

    Increasing the connection count is abusive to the network; it consumes third-party resources without actual reason. Measurements I performed with p2pool users about a year and a half ago showed that cranking the outbound count also increased the propagation time. (But even if it decreased it, it wouldn’t hold a candle to the improvement from simply better relay systems such as the block relay network.)

    Exposing a pool server’s bitcoin node directly to the outside world is a known-bad-practice which will pretty reliably result in DOS attacks. Continuing to ignore professional advice based on years of experience is at your own peril.

    Existing users are already actively banning single nodes that are mass connecting to users and I expect this practice to become more common in the future (especially once I publish the private set intersection tools that enable people to safely perform such blocking at scale and without centralization).

    Matt’s suggestion about multiple nodes is very much not a suggestion about the connection limits, its a suggestion about establishing a trusted topology that gives you a reliably exponentially growing cone of propagation as far as possible and minimizing risks from stalled connections blocking the pipeline.

  22. jameshilliard commented at 0:43 am on April 17, 2015: contributor

    @gmaxwell I don’t really see why increasing outbound connections would increase propagation time unless some sort of resource was being constrained, such as CPU, RAM, disk I/O, or network capacity, or unless there was some sort of flaw or scalability limitation in the networking code.

    This would seem to be an issue with bitcoind’s networking code more than anything; if someone is able to block data transfers by, say, keeping connections open, that should be fixed in bitcoind, possibly by redesigning the networking to function asynchronously. Allowing more outbound connections would actually make firewalling easier, since you wouldn’t need an open port on a pool server in order to have more than 8 connections (I still have yet to hear how the number 8 was calculated). In any case bitcoind can usually be run off an alternate IP address even if it’s on the same server.

    I really don’t see this accomplishing much for a number of reasons; for one thing, IPv6 will make it more difficult to target-ban IPs, and it’s also not that hard to get a lot of IPv4 addresses cheaply on a datacenter connection. Right now there doesn’t seem to be any way to properly make use of high-throughput or high-resource servers.

    This sounds more like a design flaw in bitcoind’s networking code; you shouldn’t be able to stall connections like that in a properly designed web application server. It would be really bad for a pool server if a stalled stratum connection blocked all the others, but pool software is designed not to let that happen. It’s the same with modern web servers like nginx.

  23. sipa commented at 8:54 am on April 17, 2015: member

    Even with a 10 Gbit/s connection, when serving 100 connections, at least one of them will only receive a 1 Mbyte block after 100ms (because it takes that long to pump 100*1 Mbyte through your line). In 100 milliseconds, the first of your peers could already have relayed it to a dozen peers.

    That’s under perfect conditions, where there is no software and operating system overhead from the sudden load of packets.
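    sipa's estimate can be checked with a quick calculation (a sketch assuming the sender's uplink is the only bottleneck and ignoring TCP and OS overhead; the exact figure comes out slightly under the ~100ms cited):

```python
def last_peer_delay_ms(n_peers, block_mbyte, link_gbit_s):
    """Milliseconds until the last of n_peers receives the full block,
    if the sender's uplink is the only bottleneck."""
    total_bits = n_peers * block_mbyte * 8e6     # 1 Mbyte = 8e6 bits
    return total_bits / (link_gbit_s * 1e9) * 1000

# Serving a 1 Mbyte block to 100 peers over a 10 Gbit/s line:
print(last_peer_delay_ms(100, 1, 10))  # 80.0
```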

  24. jameshilliard commented at 9:45 am on April 17, 2015: contributor

    @sipa In your calculation you are ignoring a lot of real-world factors. The first is that receiving 1 Mbyte of data in 100ms requires a connection with a minimum of about 80 megabits a second of bandwidth, which the vast majority of connected peers will not have. In addition, most of those peers will probably be on residential-grade connections where upload speed is generally significantly slower than download, so their relay speeds will be far slower than that of a pool server. Those peers will also have a validation step themselves, which adds delay. Most of those peers will also only have 8 connections themselves, so more intermediary validation and relay steps will be needed.

    In any case this could all be optimized around by having bitcoind not upload to more peers than there is bandwidth for at any one time, in order to ensure the block gets out to as many peers as quickly as possible before sending to the next set of connections. I don’t see link saturation being much of a bottleneck on datacenter-grade connections unless there are issues with bitcoind’s networking code.

    The problem right now is that after a pool server sends out blocks to its very small list of directly connected peers, all of its connections go idle and any excess bandwidth gets wasted. Keeping the connections near max throughput is a good thing, since blocks can propagate to far more nodes far faster than just uploading to a few and then doing nothing after. Even a 100-megabit symmetric datacenter connection will probably have 20 times the upload capacity of a typical residential cable connection. Using your example, and assuming an average upload speed of 8 megabits a second per connected peer, with 8 connections to the pool server it would take each peer 8 seconds to propagate a 1 Mbyte block to 8 of their own connected peers, for about 64 peers total after 8 seconds. This is still far less than a single server connected to 100 peers on a high-bandwidth line could do: 100ms to 100 peers in your example on a 10 Gbit/s line, versus 64 peers in 8 seconds if you only send to 8 peers. It’s this initial propagation step that seems to be the bottleneck for block propagation.
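    The comparison in this paragraph can be reproduced numerically (a sketch using only the numbers stated above: an 8 Mbit/s residential uplink and a 1 Mbyte block):

```python
BLOCK_MBIT = 8  # 1 Mbyte block = 8 megabits

# A residential peer with an 8 Mbit/s uplink relaying to 8 of its own peers:
per_peer_seconds = 8 * BLOCK_MBIT / 8    # 8 copies at 8 Mbit/s = 8.0 s
peers_reached = 8 * 8                    # 64 peers reached after ~8 s

# A datacenter server on a 10 Gbit/s line pushing directly to 100 peers:
server_seconds = 100 * BLOCK_MBIT / 10_000   # 0.08 s

print(per_peer_seconds, peers_reached, server_seconds)  # 8.0 64 0.08
```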

    If propagation speeds can’t be optimized, pools are encouraged to mine fewer transactions into their blocks to reduce their size and speed up propagation. This is something that has been done before.

  25. sipa commented at 10:01 am on April 17, 2015: member

    I understand your reasoning, and I agree that in theory the network topology could be improved by having nodes with better connectivity connect to more nodes (which they already do) and peer preferentially with each other (which you also already can do, with -addnode). With my numbers I tried to show that there are cases where you’re better off relaying to fewer peers; I agree they’re overly simplistic though.

    However, I don’t think it will matter in practice. You’re already using a system which relays far faster than the P2P protocol because it was specifically optimized for it (the block relay network), through which you’ll likely reach a significant portion of miners faster anyway. And more work is being done to research faster relay protocols with lower bandwidth usage (look up @gavinandresen’s IBLT-based relay, or @gmaxwell’s block network coding), which may perhaps end up being part of the P2P protocol.

    I’m personally interested in seeing what propagation speeds and/or orphan rate changes you’d see from connecting out to more nodes, and you’re free to run a private patch for it. I don’t think we want it in Bitcoin Core, though, sorry.

  26. jameshilliard commented at 10:33 am on April 17, 2015: contributor

    @sipa One major issue with the block relay network right now, I think, is China and its unreliable network topology, where you sometimes have to rely exclusively on bitcoind’s ability to establish connections automatically. While efforts like IBLT and other similar optimizations may help, I think they are just delaying a lot of needed improvements in bitcoind’s core p2p networking code.

    The C10k problem has been solved in plenty of server applications before, and I think that fixing it in bitcoind should be a priority. If high-end servers can support thousands of connections, I don’t think we need to enforce connection caps or ban nodes that make too many connections. Solving network connection scalability sounds to me like an easy first step, since there are already well-known examples of how it can be done.

    Unfortunately the pools I run (non-public private pools) don’t find enough blocks for me to do any meaningful statistical analysis on orphan rates. Is there currently any way to measure propagation speeds reliably on the bitcoin network? I’m also really wary of maintaining my own fork of bitcoind just so that an arbitrary static variable can be overridden, mainly since my application programming skills are very limited, which is why I would like some way to set the outbound connection limit without having to recompile bitcoind.

    I don’t think this is much of an issue for western based pools that use the block relay network but I’m far more worried about China where these types of things don’t always get set up or work properly due to their network issues.

  27. gmaxwell commented at 11:06 am on April 17, 2015: contributor

    I think they are just delaying

    What is the basis of this belief?

    I’m also really wary of maintaining my own fork of bitcoind

    You should be– your patch is incomplete.

    One major issue with the block relay network right now I think is China

    What is the basis of this belief?

    have to rely exclusively on bitcoind’s ability to establish connections

    No, they’re not limited to bitcoind’s connectivity; e.g. block relay network, manual topology configuration.

    Unfortunately the pools I run

    If you’re concerned about the orphan rates, you should be running the relay network client.

    can support thousands of connections

    It’s not a question of supporting thousands of connections. Parallelism reduces the goodput of the network due to redundant transmissions as bitcoin is a flooding network.

    As was explained on IRC, adjusting the first node’s connection count is at best optimizing the linear part of an exponential. It has diminishing returns; and in reality, both due to attack exposure and because blocks take time to transmit and don’t move further (and are not preferred) until they are complete, running high connection counts on nodes has been observed in practice to increase orphan rates (as would also be expected from the theory).
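    The “linear part of an exponential” point can be illustrated with an idealized flood model (a sketch that ignores overlap between peers’ connection sets, which in reality makes the gain even smaller):

```python
def hops_to_reach(n_nodes, fanout, first_fanout=None):
    """Hops until an idealized flood (no overlapping peers) covers n_nodes."""
    reached = first_fanout if first_fanout is not None else fanout
    hops = 1
    while reached < n_nodes:
        reached *= fanout   # every later hop multiplies coverage by the fanout
        hops += 1
    return hops

# 6000-node network, everyone relaying to 8 peers:
print(hops_to_reach(6000, 8))        # 5
# Only the first node pushes to 125 instead: a constant-factor head start
print(hops_to_reach(6000, 8, 125))   # 3
```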

  28. sipa commented at 11:29 am on April 17, 2015: member

    I think they are just delaying

    See #5307, #5971, #5941, #5976, #5875, #5989, #5820, #5662, #4468 for P2P improvements in the past few months.

  29. jameshilliard commented at 11:29 am on April 17, 2015: contributor

    What is the basis of this belief?

    The outbound connection limit hasn’t been changed from 8 for years, and there doesn’t seem to have been much progress towards enabling nodes to support very large numbers of connections. I think this would be a better area to focus effort on rather than just trying to block nodes that establish many connections.

    You should be– your patch is incomplete.

    This is exactly the reason I want this to be merged in properly.

    What is the basis of this belief?

    Firewall delays, in addition to contact with pool operators there. It would be interesting to do some statistical analysis on orphan rates by region by analyzing which regions are involved in each race and which region ultimately finds the block that ends it.

    No, they’re not limited to bitcoind’s connectivity; e.g. block relay network, manual topology configuration.

    The firewall can be unpredictable, which is why in certain cases manual configuration and the relay network are not always possible.

    If you’re concerned about the orphan rates, you should be running the relay network client.

    I do run it myself but I think this would improve connectivity in China for pools which would require them to have this option.

    It’s not a question of supporting thousands of connections. Parallelism reduces the goodput of the network due to redundant transmissions; bitcoin is a flooding network.

    This option is most important for the initial propagation of a freshly generated block, but I think there are still a lot of things that can be done to improve connectivity overall. Methods to reduce redundant transmissions may be needed, but I don’t think that’s all that revolutionary, as it’s something other p2p networks employ.

  30. jameshilliard commented at 11:35 am on April 17, 2015: contributor

    See #5307, #5971, #5941, #5976, #5875, #5989, #5820, #5662, #4468 for P2P improvements in the past few months.

    I meant more along the lines of connection count scalability rather than the protocol scalability improvements.

  31. sipa commented at 11:37 am on April 17, 2015: member

    You keep arguing that the way to optimize propagation is by having more connections. No, we are not working on increasing that, or being able to increase that, because we don’t believe it matters in practice - or may even make matters worse (increasing misunderstanding, risking people believing they help while hurting, and increasing wasted bandwidth and connections).

    There is a lot of work being done to optimize propagation, which does not rely on blowing up resource usage (I consider available network connections as a resource).

  32. gmaxwell commented at 11:41 am on April 17, 2015: contributor

    The outbound connection limit hasn’t been changed from 8 for years

    Increasing it is undesirable. Doing so would harm throughput, capacity, and cost of the network without any corresponding improvement. The connection count is necessary to reduce the probability of partitioning; which is exponentially related to the count.

    The firewall can be unpredictable

    So you have no basis for your beliefs– they’re just conjecture?

    which would require them to have this option

    This would harm their propagation.

    but I think there are still a lot of things that can be done to improve connectivity overall

    Sure. But this is the wrong direction.

    as its something other p2p networks employ

    I’m not aware of any comparable with similar requirements or mechanisms.

    You appear to just be interested in arguing, but you are doing so from a position of ignorance. All this is accomplishing is irritating the folks who’ve spent a fair amount of time trying to assist and educate you.

  33. jameshilliard commented at 11:41 am on April 17, 2015: contributor

    You keep arguing that the way to optimize propagation is by having more connections. No, we are not working on increasing that, or being able to increase that, because we don’t believe it matters in practice at this point.

    There is a lot of work being done to optimize propagation, which does not rely on blowing up resource usage (I consider available network connections as a resource).

    Improving propagation via other means is a good thing overall, but I don’t think it’s a good idea to completely ignore connection-count scalability. Each can benefit the other.

  34. gmaxwell commented at 11:44 am on April 17, 2015: contributor

    I don’t think its a good idea to completely ignore connection count scalability. Each can benefit the other.

    Let me try repeating this with different but technically accurate language.

    “I think it’s a good idea for every link in the network to repeat the same data, so that each peer ends up with as many redundantly transmitted copies as they have connections, and to have every peer maintain a TCP connection to every other, so the whole network has N^2 traffic; and I expect the increase of traffic from nearly linear in node count to quadratic to somehow improve propagation”.
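    The quadratic-traffic point in concrete numbers (a sketch counting one transmission per link per block, the minimum a flooding network sends):

```python
def per_block_transmissions(n_nodes, degree=None):
    """Links in the network = minimum transmissions per flooded block.
    degree=None models a full mesh (every peer connected to every other)."""
    if degree is None:
        return n_nodes * (n_nodes - 1) // 2   # quadratic in node count
    return n_nodes * degree // 2              # roughly linear in node count

print(per_block_transmissions(6000, degree=8))  # 24000
print(per_block_transmissions(6000))            # 17997000
```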

  35. jameshilliard commented at 11:57 am on April 17, 2015: contributor

    Let me try repeating this with different but technically accurate language.

    “I think it’s a good idea for every link in the network to repeat the same data, so that each peer ends up with as many redundantly transmitted copies as they have connections, and to have every peer maintain a TCP connection to every other, so the whole network has N^2 traffic; and I expect the increase of traffic from nearly linear in node count to quadratic to somehow improve propagation”.

    No properly designed p2p system should be transmitting redundant data; merely having lots of established connections shouldn’t, in and of itself, create any significant traffic.

    So you have no basis for your beliefs– they’re just conjecture?

    They change it all the time and do packet manipulation as well… this isn’t exactly new.

    I’m not aware of any comparable with similar requirements or mechanisms.

    Systems like bittorrent and other file transfer protocols generally don’t duplicate data transmissions.

    You appear to just be interested in arguing, but you are doing so from a position of ignorance. All this is accomplishing is irritating the folks who’ve spent a fair amount of time trying to assist and educate you.

    I’m not as familiar with bitcoind’s networking design as some, but I’m trying to get as clear a picture as possible of why certain design decisions were made and what improvements can be made.

    Increasing it is undesirable. Doing so would harm throughput, capacity, and cost of the network without any corresponding improvement. The connection count is necessary to reduce the probability of partitioning; which is exponentially related to the count.

    I suspect there may already be some of this going on in China: not a full partition, but a situation where Chinese pools have a higher chance of confirming blocks that originate in China, with the same thing happening for pools outside of China. It would be interesting to do a statistical analysis on this by comparing the outcomes of block races.

  36. sipa commented at 12:22 pm on April 17, 2015: member

    I’m not aware of any comparable with similar requirements or mechanisms.

    Systems like bittorrent and other file transfer protocols generally don’t duplicate data transmissions.

    Bittorrent is not a flood network, and does not have latency reduction as a goal.

    Block propagation in Bitcoin must reach every node, and must minimize the time to reach a majority. That means that there is a fundamental compromise to be made between latency and bandwidth. You can do more negotiation about who is going to send what to whom (at the cost of more roundtrips), or you can choose to just send things, and have the receiver drop redundant data.

    Bitcoin’s P2P network today advertises new blocks by their hash in an inv message, and has the peer query the data if they need it. This means that over every link, 1 or 2 invs for every block will be sent (2 if they cross each other). Invs are cheap, but just blowing up the number of links will inevitably lead to a proportional number of packets being sent (and processed) over the network for every block.

    The relay network does not have this extra advertisement step, and thus avoids an extra round trip + processing step, at the cost of potential duplicate block transmission (which is lower, due to it not sending blocks in full), and requiring central control (a hub) to make it efficient.
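    The inv/getdata advertisement step sipa describes can be sketched in miniature (a simplified model, not Bitcoin Core’s actual implementation; it shows how invs cross every link while full blocks are transferred at most once per node):

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.known = set()   # block hashes this node already has
        self.peers = []

    def announce(self, block_hash):
        """Send an inv to every peer; a peer fetches the block only if new."""
        invs, transfers = 0, 0
        for peer in self.peers:
            invs += 1                         # an inv crosses every link
            if block_hash not in peer.known:  # peer replies with getdata...
                peer.known.add(block_hash)    # ...and receives the block once
                transfers += 1
        return invs, transfers

a, b, c = Node("a"), Node("b"), Node("c")
a.peers, b.peers = [b, c], [a, c]
a.known.add("blk1")
r1 = a.announce("blk1")
r2 = b.announce("blk1")
print(r1, r2)  # (2, 2) (2, 0): invs scale with links, blocks do not
```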

  37. jameshilliard commented at 4:58 pm on April 17, 2015: contributor

    @sipa I’m guessing invs can be considered negligible resource-wise, since they are just message advertisements, at least in the case of a block announcement where all of them should be roughly uniform. From working with stratum servers, it seems very few system or network resources are needed to transmit small messages if they all come from the same in-memory data source, as stratum has been shown to easily scale to over 10k connections.

    So it sounds like invs are already being used as a de-duplication method, so I’m not entirely sure what the issue there is.

    From the looks of it IBLT and block network encoding would make nodes themselves function in a manner similar to the relay network, am I understanding that correctly?

    I still see the option to increase outbound connections as useful since propagation is exponential and getting the initial block to as many peers as possible as fast as possible can lower overall propagation times even with other improvements such as IBLT or block network encoding.

  38. laanwj commented at 2:39 pm on April 24, 2015: member
    Closing this; there is clear consensus that increasing the number of outgoing connections beyond 8 is not a good idea and should not be supported without patching the code.
  39. laanwj closed this on Apr 24, 2015

  40. MarcoFalke locked this on Sep 8, 2021

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-12-18 12:12 UTC
