WIP: remove script checking dependency on checkpoints v2 #9180

pull mruddy wants to merge 1 commit into bitcoin:master from mruddy:isburied changing 1 file +56 −8
  1. mruddy commented at 11:42 PM on November 17, 2016: contributor

    This is a follow-up to #9175 (it's sufficiently different that I closed the other PR and opened this one instead). @gmaxwell does this version implement approximately what you were suggesting? This version compares the block header with respect to both current equivalent proof of work and time.

  2. gmaxwell commented at 11:55 PM on November 17, 2016: contributor

    Yes! This is indeed what I was referring to. Thanks!

    I'd rather the flag be a boolean ("Skip validation of buried blocks"), as right now it's trivial to (accidentally) set it to 0 and not check anything, which isn't a configuration we should operate on; other people may also have opinions there. One thing that will likely be requested is a unidirectional latch, similar to how IsInitialBlockDownload works, so that in a reorg the signatures will still be validated. (Rationale: a reorg of buried blocks should never happen, so we don't care if it's slow. Having it ends any concern of "zomg what if all the hashpower goes rogue for a month!"-- no need to debate how unlikely an attack is when we can instead make it so that it would only impact newly installed nodes).

    I'll give your patch more review soon.

  3. mruddy commented at 1:26 AM on November 18, 2016: contributor

    Cool!

    About the flag: I'm flexible. I kind of like the ability to choose a time/depth/amount of work beyond just buried validation being allowed or not. It does make it easier for people to be extra conservative and increase it. Although, any non-boolean usage would be only for super-power-users and it might not be prudent to use values much lower than the default. I could make the minimum acceptable value be two weeks (nPowTargetTimespan) instead of zero. Less than one re-target interval kinda seems risky when considering possible sudden increases in rate of work. The default of being equivalent to 30 days worth of current work is longer than two re-targets, so I figured that would mitigate some risk. I guess we'd need to provide some kind of guidance on usage if it's not a boolean flag too.

    About the latch: Your rationale seems to make sense, but I don't see right now how a block would have to be re-validated. A theoretical ginormous re-org could re-org the buried block off the active branch, or layer different work on top. But a re-org could not make the buried block be covered by less total work. I'll dig into this more in the morning.

  4. gmaxwell commented at 7:42 AM on November 18, 2016: contributor

    The latch isn't about checking the same block twice; it's the idea that if there is a big reorg, our definition of "buried" must be wrong, and so we shouldn't skip validation of anything new that shows up.

    E.g. you have blocks "A B C D ..... Y Z" where B, C, D had their scriptchecks skipped because they were buried: then later there is a reorg to "A B C' D' .... Y' Z' ZZ". C' and D' should still get scriptchecked even though they are buried. This limits the exposure to attack to just initialization, and doesn't harm performance because we are already assuming such a large reorg will never happen. (Also: there is less reason for miners to ever try it when it won't let them bypass validation.)

    I think the latch could work like this: remember the greatest height the function has been called on so far, and only return buried for blocks at a greater height. Make sure to init it at the current height at start. Alternatively, I believe the same could be done with total work instead, which would be less subject to shenanigans: a malicious peer feeds you 100k fake early blocks to cause you to have to run scriptchecks when you reorg to the real chain. Another way to avoid those kinds of shenanigans would be to guard the function with a check that the header tip has total work greater than nMinimumChainWork.

    Actually, this last point is a protection you should put in regardless of the latching: do not return buried while the header tip has less than nMinimumChainWork. This protects against the case where I have network-isolated you and I fork the chain early and give you 'buried' junk which is all at low difficulty.

    As far as configuration goes: every configuration option has a large maintenance cost. We need to test it-- what happens when it's set to crazy values?-- and decide what to do when the logic changes and the old setting can't really be applied-- e.g. we realize 'age' is a bad metric and want to use total work differences. It also has a direct cost to the user: one more setting to worry about, and some users will misunderstand it and set it in ways that are contrary to their own interests and expectations. There are, indeed, differences in use cases-- at least a few-- but generally we're in a better position to pick settings: we have a wider view of the system, and we can conduct extensive tests, gather peer review, etc. So in principle we should think carefully before adding more than the minimally necessary options. I hope we can find settings here that are good enough for performance that the defaults will work for everyone who could otherwise run Bitcoin Core-- and any setting would just be a 'paranoid' mode that primarily exists for auditing and software-testing purposes, like the checkpoints=0 setting today. :)

  5. gmaxwell commented at 7:43 AM on November 18, 2016: contributor

    @petertodd @maaku I recall both of you specifically having thoughts about this kind of functionality.

  6. remove script checking dependency on checkpoints - pow version dfc5ed5e07
  7. mruddy force-pushed on Nov 18, 2016
  8. mruddy commented at 7:47 PM on November 18, 2016: contributor

    Thanks for the explanation @gmaxwell. I think I understand. Updates made to incorporate all of your feedback. I removed the new config option and just left it gated by -checkpoints. On the latch: I call it a high water mark in the code. It's not done yet (see the TODO), but I figured I'd put up what I had so far to make sure I was on the right track. Also, a minor note: I added a new GetAncestor check before GetBlockProofEquivalentTime in IsBuried. I added it for completeness, although I haven't found it actually necessary while running some regtest scenarios.

  9. gmaxwell commented at 10:43 AM on November 19, 2016: contributor

    I'm going to try to prod people who are likely to oppose this, and let's see if we can satisfy whatever concerns they have.

  10. petertodd commented at 1:31 PM on November 19, 2016: contributor

    So, if I understand this correctly, this pull-req would define a certain amount of work at which point script validation is skipped?

    I don't think this is a good idea, as you're changing the system to quite clearly give miners the ability to override the rules of the system. This has political and legal implications. For example, if miners can override the rules of the system, it becomes tempting to force them to do things like confiscate funds that authorities believe should belong to different owners.

    Here's an alternative: Known-Good Blocks. The idea here is your client would come with a set of block hashes that the developers asserted correctly followed all the Bitcoin protocol rules, and thus were known to be valid. Unlike checkpoints, in the event of a reorg you would still accept the reorg, but because the blocks in the reorg don't match the known-good block hashes, you'd validate them fully against the protocol rules.

    Unlike this pull-req, Known-Good Blocks don't change the trust model of Bitcoin. Like the rest of the codebase, they're easily audited: anyone with a copy of the relevant parts of the blockchain can verify that those block hashes do in fact refer to valid blocks. If the developers maliciously or otherwise add an invalid known-good block to the codebase, it's easy to prove to the rest of the world that they have done so. Similarly, if the community fails to properly audit changes to the codebase, it's quite possible for the developers to insert changes into it that cause invalid blocks to be accepted as valid - a fake known-good hash is just one of many ways this could be done. Finally, unlike checkpoints, known-good blocks don't change the protocol: they're just an implementation detail, and different implementations can have different sets of known-good blocks with no effect on consensus, so long as the blocks picked are in fact valid.

    Finally, it's important to note that the fact that the Bitcoin protocol requires blocks to be valid against a large set of protocol rules is an optimization needed by SPV clients - it's not an inherent requirement for Bitcoin to function. I explained this in detail a few years back in my article Disentangling Crypto-Coin Mining: Timestamping, Proof-of-Publication, and Validation.

  11. gmaxwell commented at 8:33 PM on November 19, 2016: contributor

    @petertodd

    Thanks for taking the time to comment. I strongly believe that if we don't do something prudent here, people will either do something more foolish, or there won't be full nodes around for us to worry about anymore.

    Let's define work-equivalent days (WED) as a function of two blocks that returns the number of days of hashing that would be required to mine from the lower block to the upper block, given the hashrate implied by the difficulties.

    Let's define a "buried block". To be buried, a block must be below the current best header by at least a certain amount of WED, must be an ancestor of the current best header, and must have a timestamp below the best known tip by a corresponding amount (the same as the WED threshold).

    This patch skips validation during the initial sync of buried blocks when the best header chain has more work than a hardcoded amount known to be in the best chain. Note the "initial sync": if there is a reorganization that disrupts our concept of "buried", the newly connected blocks should be checked.

    As a result, this exposure only exists for newly initializing nodes and ones that had been offline and fallen behind. An attacker who attempted to rewrite the state would find their efforts ignored by all preexisting nodes. I believe this largely mitigates your 'override' concern.

    In particular, an attacker that can replace >100 blocks can replace the coinbase transactions in them without any rule violation. A 30-day reorg alone would grant 54,000 BTC. An attacker who was technically able to mine back 30 days and catch back up would almost certainly also be technically able to mine back to 0 and catch back up; it would just take longer. At 30 days there is also enough coinbase intermixing into the transaction flow that most people who transacted during that interval would have their transactions reversed even if the attacker would prefer that they weren't.

    With respect to known-good, I would be incredibly hesitant to ship some long hardcoded list of blocks: it is very easily misunderstood as actually fixing the consensus state. It also arguably carries your 'easy to force' concern (yes, it would be 'visible', but a huge reorg is also visible and that didn't eliminate it from your concern); though I think that risk always exists, it is strongly preferable not to amplify it with easily misunderstood functionality. If not for the constant negative experiences with checkpoints I'd be more prone to agree with you, but I do not think the distinction between pinning the chain and not pinning it really rises to people's minds. The bad experiences with checkpoints often take the form of "we're already trusting these people to validate the chain, let's also trust these authorities to claw back stolen coins". That second class of action is also "reviewable" in some sense, and I feel that the difference between a slow, highly objective review process and something highly subjective and ill-suited to public review is a distinction too fine for many.

    The next consideration is that known-good will suffer constant "bitrot": the PR as-is skips up to 30 days' worth of work back-- which would already require a phenomenal, system-ending reorg-- but a known-good check would be stuck at the last release. If the system depends on known-good values, the result may be that frequent releases are encouraged, which would diminish the value of review. I think this is a bad incentive for both developers and users.

    I can think of a number of ways to further harden this kind of proposal-- for example, it could validate (say) 1/1000 of the buried blocks at random, with a negligible performance hit but with the consequence that an attacker who performed a phenomenal amount of computation to attack an isolated, newly syncing peer could still fail. The WED metric could be more aggressive about what hashrate it uses, a somewhat longer interval could be used, or the presence of competing header chains could be considered. I don't know if you'd find any of them persuasive.

    Do you have any proposals on how a known good could avoid becoming outdated, resulting in an excessively slow synchronization and creating bad incentives to upgrade too often for developers and users alike?

  12. mruddy commented at 10:49 PM on November 19, 2016: contributor

    @petertodd @gmaxwell Wow, the review you guys are doing is amazing! Good stuff, thanks!

    I've been thinking through all of it and have about a page of thoughts, but it came to mind that by itself (forgetting the high-water-mark part), the IsBuried code turns the full node into a hybrid SPV-full node when active (with a little extra safety if there is a re-org). So, is the simplest fix to just guard it with a new flag that is off by default, but that could be turned on by a node operator during initial sync and turned off afterwards? That seems like the simplest change that allows the performance optimization during initial sync and then true full-node functionality any time the flag is not explicitly turned on. Thus, node operators would have the choice whether to go full-node-only, or hybrid full-SPV. It would make this patch easier too :)

    edit: Had some ambiguous wording above... By hybrid SPV-full node, I mean it's a hybrid somewhere on the spectrum in between. If a node operator never used the new flag, they'd be running a true fully validating (full) node. If they turned the flag on at any point, it could become a hybrid node depending on when it was turned on: if the flag was on during initial sync (really the only sensible time to turn it on), during IBD for a node that had been offline for 30 days, or during a large re-org, then it would become a hybrid node. And when they turn the flag off again determines how much checking (none or all) they'd do on a re-org large enough to invalidate the IsBuried assumption, or on a long-offline IBD.

  13. mruddy commented at 12:30 AM on November 20, 2016: contributor

    If hybrid node mode is not worth considering, then a full node implementing "known-good, fully validate other" could be implemented with a simple command-line option that takes a known-good, user-obtained block hash. The node operator would just have to find a trusted source for such a hash. What could be trusted would be up to each user's threat model and would not be limited to just the software distributors: it could be the developers signing something, any other trusted website, etc., or even a new function in the node where it starts up, gets a view of header chains from its peers, and shows the user a hash with 30 days of work from a chain with at least a certain amount of total work. What I think is important is that the source not be mandated by the code, so as to reduce the risk of coercion and of homogeneity and dependence on a single group such as miners, developers, or distributors.

  14. mruddy commented at 12:39 AM on November 27, 2016: contributor

    I'm closing this for now. It was worth considering, but after thinking more about it, I don't want to affect the security model in order to get this "catch-up" time improvement. I ran some benchmark tests to verify that syncing is CPU bound; it appears to be. I'm assuming that's mostly due to the ECDSA signature verification portion of script verification. Since it's CPU bound, the situation is likely to improve over time with increased core frequencies and counts, without making this change.

    Some data from benchmark tests that I ran are below.

    System config:

    • Ubuntu 16.04.1 LTS x86_64
    • Intel(R) Core(TM) i7-3632QM CPU @ 2.20GHz (base frequency) up to 3.20GHz (max frequency): 4 CPU cores (with hyper-threading [2 threads per core] the system thinks it has 8 cores).
    • 8GB RAM
    • Samsung 840 PRO SSD 256 GB (ext4 filesystem on top of LUKS full disk encryption)
    • The tests were run with the standard "bitcoin-0.13.1-x86_64-linux-gnu.tar.gz" binary that has its last checkpoint at block 295,000. These tests do not test this PR's changes. These tests were to verify current performance.

    Test 1: Result Summary: This baseline is quick relative to the following tests because script verification, and importantly ECDSA signature verification, is skipped. 00:23:12 to get through block 295,000; 546.32 seconds spent verifying inputs (mostly NOT spent checking scripts: only 546.32 − 537.98 ≈ 8.34 of those seconds were script checks).

    > /opt/bitcoin-0.13.1/bin/bitcoind -daemon -datadir=/test/bitcoin -txindex -reindex-chainstate -debug=bench -dbcache=512 -maxconnections=0 -listenonion=0 -listen=0 -server=0 -checkpoints=1 -par=4
    2016-11-25 15:26:20 Bitcoin version v0.13.1
    2016-11-25 15:49:32 - Connect block: 3.46ms [1354.29s]
    2016-11-25 15:49:32   - Load block from disk: 1.21ms [293.66s]
    2016-11-25 15:49:32     - Sanity checks: 0.36ms [74.42s]
    2016-11-25 15:49:32     - Fork checks: 0.03ms [37.40s]
    2016-11-25 15:49:32       - Connect 64 transactions: 4.64ms (0.072ms/tx, 0.008ms/txin) [537.98s]
    2016-11-25 15:49:32     - Verify 550 txins: 4.70ms (0.009ms/txin) [546.32s]
    2016-11-25 15:49:32     - Index writing: 0.26ms [109.06s]
    2016-11-25 15:49:32     - Callbacks: 0.04ms [6.77s]
    2016-11-25 15:49:32   - Connect total: 5.46ms [792.68s]
    2016-11-25 15:49:32   - Flush: 0.50ms [85.97s]
    2016-11-25 15:49:32   - Writing chainstate: 0.04ms [66.08s]
    2016-11-25 15:49:32 UpdateTip: new best=00000000000000004d9b4ef50f0f9d686fd69db2e03af35a100370c64632a983 height=295000 version=0x00000002 log2_work=77.864991 tx=36544669 date='2014-04-09 21:47:44' progress=0.112525 cache=128.0MiB(80135tx)
    2016-11-25 15:49:32   - Connect postprocess: 0.44ms [115.89s]
    

    Test 2: Same as "Test 1" except with checkpoints turned off, to see how much of a difference they make for the first 295,000 blocks. Result Summary: Almost 41 minutes (2453.05 seconds) more time spent verifying input scripts. 01:03:52 to get through block 295,000; 2999.37 seconds spent verifying inputs.

    > /opt/bitcoin-0.13.1/bin/bitcoind -daemon -datadir=/test/bitcoin -txindex -reindex-chainstate -debug=bench -dbcache=512 -maxconnections=0 -listenonion=0 -listen=0 -server=0 -checkpoints=0 -par=4
    2016-11-25 12:19:21 Bitcoin version v0.13.1
    2016-11-25 13:23:13 - Connect block: 6.92ms [3795.76s]
    2016-11-25 13:23:13   - Load block from disk: 1.33ms [276.54s]
    2016-11-25 13:23:13     - Sanity checks: 0.34ms [73.65s]
    2016-11-25 13:23:13     - Fork checks: 0.03ms [37.18s]
    2016-11-25 13:23:13       - Connect 64 transactions: 5.02ms (0.078ms/tx, 0.009ms/txin) [611.96s]
    2016-11-25 13:23:13     - Verify 550 txins: 43.75ms (0.080ms/txin) [2999.37s]
    2016-11-25 13:23:13     - Index writing: 0.30ms [110.76s]
    2016-11-25 13:23:13     - Callbacks: 0.03ms [6.94s]
    2016-11-25 13:23:13   - Connect total: 44.52ms [3246.08s]
    2016-11-25 13:23:13   - Flush: 0.38ms [87.97s]
    2016-11-25 13:23:13   - Writing chainstate: 0.02ms [64.48s]
    2016-11-25 13:23:13 UpdateTip: new best=00000000000000004d9b4ef50f0f9d686fd69db2e03af35a100370c64632a983 height=295000 version=0x00000002 log2_work=77.864991 tx=36544669 date='2014-04-09 21:47:44' progress=0.112535 cache=128.0MiB(80135tx)
    2016-11-25 13:23:13   - Connect postprocess: 0.42ms [120.74s]
    

    Test 3: Same as "Test 2" except using only half the CPU cores. Result Summary: (4968−550)/(2999−611) = 1.85, so not quite double the time spent verifying input scripts when using half the number of cores. 01:35:49 to get through block 295,000; 4968.80 seconds spent verifying inputs.

    > /opt/bitcoin-0.13.1/bin/bitcoind -daemon -datadir=/test/bitcoin -txindex -reindex-chainstate -debug=bench -dbcache=512 -maxconnections=0 -listenonion=0 -listen=0 -server=0 -checkpoints=0 -par=2
    2016-11-25 13:28:10 Bitcoin version v0.13.1
    2016-11-25 15:03:59 - Connect block: 7.72ms [5715.59s]
    2016-11-25 15:03:59   - Load block from disk: 1.16ms [253.08s]
    2016-11-25 15:03:59     - Sanity checks: 0.30ms [69.63s]
    2016-11-25 15:03:59     - Fork checks: 0.02ms [35.73s]
    2016-11-25 15:03:59       - Connect 64 transactions: 4.03ms (0.063ms/tx, 0.007ms/txin) [550.32s]
    2016-11-25 15:04:00     - Verify 550 txins: 59.76ms (0.109ms/txin) [4968.80s]
    2016-11-25 15:04:00     - Index writing: 0.24ms [105.60s]
    2016-11-25 15:04:00     - Callbacks: 0.03ms [6.46s]
    2016-11-25 15:04:00   - Connect total: 60.40ms [5203.03s]
    2016-11-25 15:04:00   - Flush: 0.23ms [83.46s]
    2016-11-25 15:04:00   - Writing chainstate: 0.02ms [63.77s]
    2016-11-25 15:04:00 UpdateTip: new best=00000000000000004d9b4ef50f0f9d686fd69db2e03af35a100370c64632a983 height=295000 version=0x00000002 log2_work=77.864991 tx=36544669 date='2014-04-09 21:47:44' progress=0.112528 cache=128.0MiB(80135tx)
    2016-11-25 15:04:00   - Connect postprocess: 0.34ms [112.30s]
    

    Test 4: Same as "Test 2" except with a bigger dbcache, to see the leveldb impact. Result Summary: dbcache makes some difference in disk-write time (still very small relative to script checking). Note: be careful not to set dbcache too close to max RAM, to avoid a system freeze when the leveldb flush occurs (it can suddenly allocate >1GB of additional memory and make the system appear to freeze for a while). 01:03:01 to get through block 295,000; 2960.31 seconds spent verifying inputs.

    > /opt/bitcoin-0.13.1/bin/bitcoind -daemon -datadir=/test/bitcoin -txindex -reindex-chainstate -debug=bench -dbcache=4096 -maxconnections=0 -listenonion=0 -listen=0 -server=0 -checkpoints=0 -par=4
    2016-11-25 15:55:50 Bitcoin version v0.13.1
    2016-11-25 16:58:51 - Connect block: 5.02ms [3743.46s]
    2016-11-25 16:58:51   - Load block from disk: 1.35ms [284.90s]
    2016-11-25 16:58:51     - Sanity checks: 0.36ms [76.74s]
    2016-11-25 16:58:51     - Fork checks: 0.03ms [19.33s]
    2016-11-25 16:58:51       - Connect 64 transactions: 1.75ms (0.027ms/tx, 0.003ms/txin) [513.55s]
    2016-11-25 16:58:51     - Verify 550 txins: 35.38ms (0.064ms/txin) [2960.31s]
    2016-11-25 16:58:51     - Index writing: 0.36ms [150.05s]
    2016-11-25 16:58:51     - Callbacks: 0.03ms [7.01s]
    2016-11-25 16:58:51   - Connect total: 36.23ms [3231.84s]
    2016-11-25 16:58:51   - Flush: 0.32ms [99.14s]
    2016-11-25 16:58:51   - Writing chainstate: 0.02ms [6.77s]
    2016-11-25 16:58:51 UpdateTip: new best=00000000000000004d9b4ef50f0f9d686fd69db2e03af35a100370c64632a983 height=295000 version=0x00000002 log2_work=77.864991 tx=36544669 date='2014-04-09 21:47:44' progress=0.112520 cache=1292.7MiB(3310536tx)
    2016-11-25 16:58:51   - Connect postprocess: 0.40ms [120.84s]
    

    Test 5: Same as "Test 4" except with CPU frequency scaling turned off (powersave → performance). Result Summary: slightly worse with the governor set to performance mode; this change did not help as had been hoped. 01:03:12 to get through block 295,000; 3023.91 seconds spent verifying inputs.

    > for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo -n performance > $g; done
    > cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
    performance
    ...
    > /opt/bitcoin-0.13.1/bin/bitcoind -daemon -datadir=/test/bitcoin -txindex -reindex-chainstate -debug=bench -dbcache=4096 -maxconnections=0 -listenonion=0 -listen=0 -server=0 -checkpoints=0 -par=4
    2016-11-26 11:22:26 Bitcoin version v0.13.1
    2016-11-26 12:25:38 - Connect block: 6.15ms [3758.37s]
    2016-11-26 12:25:38   - Load block from disk: 1.37ms [265.73s]
    2016-11-26 12:25:38     - Sanity checks: 0.40ms [67.86s]
    2016-11-26 12:25:38     - Fork checks: 0.03ms [18.49s]
    2016-11-26 12:25:38       - Connect 64 transactions: 1.89ms (0.030ms/tx, 0.003ms/txin) [500.28s]
    2016-11-26 12:25:38     - Verify 550 txins: 38.86ms (0.071ms/txin) [3023.91s]
    2016-11-26 12:25:38     - Index writing: 0.44ms [143.15s]
    2016-11-26 12:25:38     - Callbacks: 0.04ms [6.86s]
    2016-11-26 12:25:38   - Connect total: 39.86ms [3277.59s]
    2016-11-26 12:25:38   - Flush: 0.59ms [90.58s]
    2016-11-26 12:25:38   - Writing chainstate: 0.03ms [6.53s]
    2016-11-26 12:25:38 UpdateTip: new best=00000000000000004d9b4ef50f0f9d686fd69db2e03af35a100370c64632a983 height=295000 version=0x00000002 log2_work=77.864991 tx=36544669 date='2014-04-09 21:47:44' progress=0.112436 cache=1292.7MiB(3310536tx)
    2016-11-26 12:25:38   - Connect postprocess: 0.45ms [117.97s]
    
  15. mruddy closed this on Nov 27, 2016

  16. gmaxwell commented at 2:16 AM on November 27, 2016: contributor

    You give up too quickly. I still think this is interesting. :)

  17. mruddy commented at 11:32 AM on November 27, 2016: contributor

    I think it's an interesting approach, but after thinking through Peter's feedback and appreciating more how this changes the security model of a node running it, it became less interesting to me.

    It really does turn it into a node with a different security model, because even though the node still builds a UTXO set and validates non-buried blocks as best it can with that set, the set is not fully validated, which is the point of full-node software. At least with the current checkpoints, the node operator is saying that it's OK to skip local validation of part of the chain because that part is already validated via the assertion that it is an ancestor of some specific good block. It's a subtle difference between trust+assert and just trust (as long as some due diligence went into the assert part). It's the difference between being effectively fully validated and not.

    Then, verifying that the underlying cause is being CPU bound on script verification (and probably mostly ECDSA signature verification) wrapped it up for me. This PR wouldn't address the root performance bottleneck, and it would change the node's security model to less than full validation.

  18. mruddy deleted the branch on Nov 27, 2016
  19. DrahtBot locked this on Sep 8, 2021

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-04-17 06:15 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me