Discourage NOPs reserved for soft-fork upgrades #5000

pull petertodd wants to merge 1 commit into bitcoin:master from petertodd:blacklist-upgradable-nops changing 8 files +49 −3
  1. petertodd commented at 2:10 am on September 29, 2014: contributor

    NOP1 to NOP10 are reserved for future soft-fork upgrades. In the event of an upgrade such NOPs have *VERIFY behavior, meaning that if their arguments are not correct the script fails. By blacklisting these NOPs we ensure that we’ll never accept transactions, nor mine blocks, with scripts that are now invalid according to the majority of hashing power even if we’re not yet upgraded. Previously this wasn’t an issue as the IsStandard() rules didn’t allow upgradable NOPs anyway, but 7f3b4e95 relaxed the IsStandard() rules for P2SH redemptions allowing any redeemScript to be spent.

    We do allow upgradable NOPs in scripts so long as they are not executed. This is harmless as there is no opportunity for the script to be invalid post-upgrade.

    For background on why a blacklist is needed as opposed to bumping tx version numbers to upgrade opcodes see: http://www.mail-archive.com/bitcoin-development%40lists.sourceforge.net/msg06247.html
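    As a rough, standalone sketch of what the check amounts to (the flag name here follows the “discourage” terminology adopted later in this thread, and the bit value is illustrative):

        #include <cstdint>

        // Reserved NOP opcode values (OP_NOP1 = 0xb0 ... OP_NOP10 = 0xb9).
        static const uint8_t OP_NOP1  = 0xb0;
        static const uint8_t OP_NOP10 = 0xb9;

        // Stand-in for the PR's BLACKLIST/DISCOURAGE_UPGRADABLE_NOPS verify flag;
        // the bit position is illustrative only.
        static const uint32_t SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS = (1U << 7);

        // Returns true if hitting this opcode during execution should make the
        // script fail under the discouragement policy. In the actual patch the
        // equivalent check sits on the opcode-execution path of EvalScript(), so
        // reserved NOPs inside an unexecuted IF/ENDIF branch are unaffected, as
        // described above.
        bool IsDiscouragedNop(uint8_t opcode, uint32_t verifyFlags)
        {
            if (!(verifyFlags & SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS))
                return false;
            return opcode >= OP_NOP1 && opcode <= OP_NOP10;
        }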

  2. luke-jr commented at 2:34 am on September 29, 2014: member
    Concept ACK, although I’d prefer a more abstract implementation (eg, IsSoftforkSafe similar to IsStandard).
  3. petertodd commented at 2:49 am on September 29, 2014: contributor

    @luke-jr My intent here is to target something very specific - execution of opcodes reserved for upgrades - to avoid unnecessarily rejecting transactions using new opcodes in non-executed codepaths. For instance, a CHECKLOCKTIMEVERIFY opcode locking funds until a certain time would often be used in conjunction with another way to spend the output that doesn’t use that new feature, and there’s no reason to ignore transactions if the scriptSig doesn’t execute the NOPx. This is similar to how all the undefined opcodes can exist in an unexecuted IF ENDIF block without making the script invalid.

    Keep in mind that the code - and copies of that code - will also be used for applications outside of Bitcoin Core. For example my python-bitcoinlib will add this blacklist feature verbatim to its EvalScript() logic, which will end up in applications that simply want to check if a scriptSig is valid. Keeping the blacklist specific can delay the need to upgrade such applications.

    The one case where a strict blacklist/whitelist might make more sense is if we want to have an opcode whose mere presence in a script can make the script invalid, even if the opcode isn’t executed. Frankly I just can’t see us ever doing that.

    As for Eligius… that’s a well-managed pool that will be kept up-to-date, so please do go right ahead and take the BLACKLIST flag out of the standard script verify flags to let people test this stuff on mainnet. This isn’t the only case where it’d be reasonable for Eligius to accept/mine a non-softfork-safe transaction.

  4. luke-jr commented at 3:00 am on September 29, 2014: member
    @petertodd My goal isn’t to remove this policy - in fact, I’d like to have it deployed on Eligius sooner. Simply that having a dedicated function to check things like this would be a useful abstraction.
  5. petertodd commented at 3:08 am on September 29, 2014: contributor

    @luke-jr What specifically do you want the abstraction for? I mean, do you want to know if a transaction breaks a soft-fork safe rule specifically? If tx is otherwise valid, IsSoftforkSafe(tx) could simply be to run EvalScript() with the softfork-safe rules enabled, returning true so long as EvalScript() returns true and tx.nVersion is as expected. The issue is that it returning false may also mean the transaction is invalid - but like I said, we can simply start by checking the transaction is otherwise valid.

    Note how my SCRIPT_VERIFY_NULLDUMMY verification flag that rejects CHECKMULTISIG with a non-empty dummy argument is implemented in the exact same way.
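    As a purely illustrative sketch of that wrapper (every name here is hypothetical; the stand-in structs take the place of Bitcoin Core’s CTransaction/CScript, and VerifyInputScript stands in for re-running the interpreter with the stricter flags):

        #include <vector>

        // Minimal stand-ins for the fields this sketch needs; real code would use
        // Bitcoin Core's CTransaction/CScript types.
        struct TxIn { /* scriptSig etc. */ };
        struct Tx   { int nVersion; std::vector<TxIn> vin; };

        // Hypothetical helper: run the script interpreter over input i of tx with
        // the given verify flags (i.e. what EvalScript()/VerifyScript() do).
        bool VerifyInputScript(const Tx& tx, std::size_t i, unsigned int flags);

        // Sketch of the IsSoftforkSafe() abstraction discussed above.
        bool IsSoftforkSafe(const Tx& tx, unsigned int softforkSafeFlags, int nMaxKnownTxVersion)
        {
            // An unknown transaction version is not known to be softfork-safe.
            if (tx.nVersion > nMaxKnownTxVersion)
                return false;

            for (std::size_t i = 0; i < tx.vin.size(); ++i) {
                // Re-run verification with the stricter, softfork-safe flag set.
                // A false result here may also just mean the transaction is
                // invalid outright, as noted above.
                if (!VerifyInputScript(tx, i, softforkSafeFlags))
                    return false;
            }
            return true;
        }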

  6. sipa commented at 3:10 am on September 29, 2014: member
    Concept ACK
  7. luke-jr commented at 3:11 am on September 29, 2014: member
    @petertodd It is potentially helpful, to reduce CPU usage, to skip non-softfork-safe transactions without evaluating them. I don’t feel strongly that this abstraction is needed, but it doesn’t seem like it should be difficult to do?
  8. petertodd commented at 3:14 am on September 29, 2014: contributor

    @luke-jr Sure, but the only time where that’d save a meaningful amount of CPU time is during an attack, and equally the attacker could just use up CPU time with invalid but softfork-safe scripts.

    Again, note how SCRIPT_VERIFY_NULLDUMMY is implemented in the exact same way.

  9. gavinandresen commented at 1:21 pm on September 29, 2014: contributor

    I suggest dropping the term “blacklist” because that is misleading – these are not black-listed, they are just non-standard-if-executed. A miner might still mine these and the block is perfectly valid and will be accepted by the network.

    I also don’t think the tests belong in script_invalid.json, again because they ARE valid, just not standard. I suggest creating a new test case and a script_nonstandard.json file where they can live.

    I’m torn on the added complexity versus the utility here; using a redefined NOP opcode before miners have upgraded and are enforcing it as a new consensus rule will always be risky – do we imagine we’ll be telling people “yes, you should use OP_FOO on the main network even though we haven’t reached mining consensus yet”?

  10. sipa commented at 2:36 pm on September 29, 2014: member
    Blacklist is maybe indeed too strong a word, but IMHO the script engine does not need to know or care whether a particular flag is used by the consensus code or as a policy, and script_valid/script_invalid is the right way to test for those.
  11. gavinandresen commented at 2:46 pm on September 29, 2014: contributor

    Script engine doesn’t need to know. I want to make life a tiny bit easier for other implementations that are using script_invalid as “if you accept any of this stuff then your implementation is broken.”

    I can imagine that having “valid but not standard” stuff in script_invalid.json is likely to cause them painful banging-their-heads-on-the-desk-when-they-figure-out-we-added-Yet-Another-Script-Flag…..

  12. petertodd commented at 3:00 pm on September 29, 2014: contributor

    @gavinandresen We already have lots of “valid but not standard” stuff in (script|tx)_(in)valid.json, e.g. SCRIPT_VERIFY_NULLDUMMY, SCRIPT_VERIFY_STRICTENC, and soon BIP62 flags. I am one of those “other implementation” maintainers, and the bitcoinj maintainers are using up-to-date (script|tx)_(in)valid.json tests. Having all the relevant tests in one place is highly convenient.

    re: “telling people yes, you should use OP_FOO” - This isn’t an informational patch, it’s a security fix.

    @sipa How about SCRIPT_VERIFY_REJECT_UPGRADABLE_NOPS? Or SCRIPT_VERIFY_REJECT_RESERVED_NOPS?

  13. sipa commented at 3:06 pm on September 29, 2014: member

    We can add a comment to script_valid saying: “Flags x/y/z are currently not used in consensus code, so if you don’t care about them, skip tests that mention them”.

    The reason I prefer using flags (and the associated script_valid tests) is that it allows clear testing of the interaction of different flags, and that for most things it results in a smaller and more efficient implementation (no need to iterate through the script multiple times).

    For example STRICTENC would be near impossible to implement outside of the actual execution, and alternate implementations very likely care about this (to guess whether a wallet transaction may confirm, for example) too. I’ve been working on converting test_canonical into cases in script_valid for that reason (@schildbach wondered why they were not in script_valid).

    I don’t think this is inherently different.

  14. petertodd commented at 3:16 pm on September 29, 2014: contributor

    @sipa I’d prefer to avoid adding extra comments like that, the rationale being that if you’re using any of the script/tx test cases, you really should have very good familiarity with the Bitcoin Core C++ code anyway. I do describe thoroughly what the SCRIPT_VERIFY_BLACKLIST_UPGRADABLE_NOPS flag is in a comment where it is defined.

    Agreed re: flag rationale - NULLDUMMY is another example that’s near impossible to implement outside of actual execution. In general, as much as static analysis is sexy, its use in Bitcoin has been an overly complex engineering mistake (just look at how messed up sigop counting is!). A single pass through scripts works fine and avoids extra codepaths.

  15. gavinandresen commented at 3:42 pm on September 29, 2014: contributor
    RE: other stuff in script_valid: Okey dokey, then maybe it is time to rename script_valid / script_invalid to script_accepted / script_rejected (but I can live with current naming).

    @petertodd : what do you mean by this is a security fix? My point is that if you use an OP_FOO transaction before miners are validating it then you are insecure. Period. Relying on “most nodes won’t relay and most miners will not mine” is not secure.
  16. petertodd commented at 4:16 pm on September 29, 2014: contributor
    @gavinandresen re-read the pull-req description; remember it’s written from the perspective of a full-node operator, in particular, think about the case where you are a miner.
  17. BitcoinPullTester commented at 5:50 pm on September 29, 2014: none

    Automatic sanity-testing: FAILED MERGE, see http://jenkins.bluematt.me/pull-tester/p5000_9974941aab08c0f97a116a5947b7d969caece16f/ for test log.

    This pull does not merge cleanly onto current master.

    This test script verifies pulls every time they are updated. It, however, dies sometimes and fails to test properly. If you are waiting on a test, please check timestamps to verify that the test.log is moving at http://jenkins.bluematt.me/pull-tester/current/

    Contact BlueMatt on freenode if something looks broken.

  18. mikehearn commented at 10:02 am on October 1, 2014: contributor

    I think this is attempting to fix one particular instance of the more general problem: soft forks are a bad idea that we should just stop referring to. Indeed we should be building ways to prevent soft forks and turn them into hard forks. One way to do that would be to introduce a new rule that simply rejects blocks with an unknown version. In the event that the majority switches to making blocks of a new version this would trigger Matt’s fork detection logic and potentially shut down the node or at least ensure it no longer serves work, shutting down any attached miners.

    I believe I’ve laid out the argument for this quite a few times in the past and nobody has really disagreed, but still, the notion of a soft fork keeps cropping up. This patch attempts to fix the fact that in the case where we’re no longer following the majority rule set we might accept transactions that the majority would reject, and thus end up mining blocks that (from our perspective) get re-orgd out. Effectively at that point the node would have SPV-like security. But finding ourselves in this situation at all is the problem, not the interpretation of specific opcodes.
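    A minimal sketch of the block-version rule being described (purely illustrative; the constant is hypothetical and no such rule exists in Bitcoin Core):

        // Illustrative sketch of the "reject blocks with an unknown version" rule
        // described above; MAX_KNOWN_BLOCK_VERSION is a hypothetical constant.
        static const int MAX_KNOWN_BLOCK_VERSION = 2;

        bool IsBlockVersionKnown(int nBlockVersion)
        {
            // An unknown version would halt processing and alert the operator
            // rather than silently degrading the node to SPV-level validation.
            return nBlockVersion <= MAX_KNOWN_BLOCK_VERSION;
        }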

  19. petertodd commented at 10:19 am on October 1, 2014: contributor
    @mikehearn “I believe I’ve laid out the argument for this quite a few times in the past and nobody has really disagreed” <- People have disagreed repeatedly. Since you’re not convinced by our counter arguments - and you’re the only person who seems to think soft-forks are bad around here - I strongly suspect that what you’re interpreting as ‘agreement’ is actually just that people have stopped spending their valuable time replying to you on this topic at length. Silence doesn’t necessarily indicate agreement.

    @gavinandresen Suggestions on a better way to explain this issue that makes it more clear that I’m talking about security for miners/SPV-clients rather than for those creating transactions? When I next get some free time, I should write this issue up for the -dev documentation on bitcoin.org, especially now that we’ll be seeing more devs using non-std redeemScripts. (FWIW blockchain.info is now relaying/displaying them: https://twitter.com/petertoddbtc/status/516658110023606272 https://twitter.com/petertoddbtc/status/516662111523835904)
  20. sipa commented at 9:54 pm on October 6, 2014: member
    Needs rebase.
  21. gmaxwell commented at 0:54 am on October 7, 2014: contributor

    This was always my understanding of how NOPs should be handled. I agree that “blacklist” is likely to be misunderstood. “Discourage use of reserved NO OP opcodes for forward compatibility” may be better.

    Mike, I strongly disagree with your remarks on soft forks, and I’m unclear as to why you would have taken this position. Extending functionality by precluding otherwise meaningless transactions is a polite and reduced-risk way to do new things which maximally respects people’s ability to choose not to participate. Making it so that every possible new feature requires universal flag-day deployment for all wallets, miners, full nodes, merchants, etc… would be a great way to deadlock progress and create additional centralization (e.g. where local patches or forks carry additional cost, so people are stuck using the One True Bitcoin of the latest version, like it or not). Some things may best be done that way, in spite of the costs, but for many the soft fork approach is simply superior even if you ignore most of the freedom-of-choice and consensus costs related to a hard fork change.

    So long as the soft fork behavior confines itself to activities which are non-standard in the widely deployed network, the risk of accidentally forking yourself is avoided. Avoiding issues like that is part of the planning cost of any softforking change.

  22. webdeli commented at 1:16 am on October 7, 2014: none
    As I don’t have any more potent way of adding sentiment to the stack than what Gregory Maxwell stated, I’d like to simply add: “Yeah, what he said”. That Bitcoin is still highly experimental is an observation in the standard disclaimer of the LTB network podcast, and that is a GOOD thing. Agility and the affordance to build on Bitcoin are at stake, and soft-forking is a tool with discrete utility to the development of this experiment that should not be dismissed as casually as in your earlier contribution. There are voices with a different stance, and silence is not an indication of positive support for your position. It could simply be a preference for civility, calm and conflict aversion on the part of the silent.
  23. TheBlueMatt commented at 7:34 am on October 7, 2014: member
    Conceptual ACK, but I have to agree that using the term “blacklist” is probably bad.
  24. mikehearn commented at 9:46 am on October 7, 2014: contributor

    I don’t think we should be relying on IsStandard for preserving the security of full nodes - people can and do change that function on the assumption that it doesn’t affect the fully validating nature of their node. Plus, I thought the goal was to relax and eventually remove it over time anyway. Certainly the trend has been to make more things standard. But I agree that if the previous behaviour were non-standard, that makes it harder for people to lose money (it presently isn’t, and anyone who doesn’t upgrade to a version including this will still lose out).

    Regardless, I think I made it fully clear what the reasons behind my position are. It is not secure to downgrade existing nodes to SPV security silently and behind their back. The understanding of everyone who chose to run a full node is “my computer is verifying the rules of the bitcoin system, miners only affect the ordering of transactions”. That’s the original Bitcoin guarantee. A soft fork breaks that.

    Another way to look at it is like this. If someone discovered a sequence of messages that could be sent to a node that would disable the script engine for the next block received, we would consider that a very serious security problem. Anyone could mine a block rewarding themselves with lots of other people’s coins, then use it to buy things by submitting it to a merchant’s node. The merchant would end up holding the bag and losing money when the block was re-orgd away (from their POV).

    If we would not accept an exploit that enabled that, why would we deliberately design new changes to do the same thing?

    Finally, I don’t think this is a choice between forward progress and security, I think that’s a false dilemma. There have been successful changes to the rule set before via both hard and soft forks. If the benefits are clear I’m sure the community could get majority consensus within six months or less - ultimately if the biggest mining pools upgrade, that’s enough to prompt merchants and exchanges to upgrade too.

  25. gavinandresen commented at 3:32 pm on October 7, 2014: contributor

    @petertodd : I’m still confused as to how this improves the upgrade situation.

    To be concrete, we’ve got a funding transaction, TxFund. It will look like:

        version: 1   (because the person sending me BTC has not upgraded yet)
        scriptPubKey: HASH160 <hash> EQUAL
    

    And a redeeming transaction, TxRedeem. It will look like:

        version: 2  (because it is redeemed by upgraded software)
        scriptSig: <signatures> <... NOP1...>
    

    TxRedeem is non-standard for any software that does not understand version=2 transactions.

    So there is no security issue for wallets or miners; they should ignore transactions they do not understand. Bitcoin Core has had code to do that for a long time now; they are not added to the memory pool. However, if they are mined and appear in a block then they ARE added to the wallet (SPV security).

    There is no issue with old wallets sending funds using TxFund.version=1 transactions; the version of TxRedeem is what is important.

    There IS an issue if you’re sending funds, via a TxRedeem.version=2 transaction, to somebody running an old wallet: their wallet will (should) ignore the transaction until it appears in a block.


    Until the network has upgraded (which is indicated by increasing the block.version number), it is unsafe to redeem transactions that expect NOP1 to be anything besides a no-op.

    After miners have upgraded and are producing nothing but Block.version=N blocks, it is perfectly secure to broadcast a TxRedeem.version=2 transaction.

    If I understand this pull request, the proposal is that instead of using transaction.version numbers, any TxRedeem that uses a NOP will be treated as non-standard.

    The only case I can see where that matters is if I created a TxRedeem with version=1 to try to cheat and get an old wallet to display 0-confirmation funds that will be rejected by upgraded miners.

    I think the most secure way to prevent that is to have the evaluation rule for the new opcode be:

        Evaluate as a NOP if transaction.version < tx_version_required OR
        block.version < block_version_required.
    

    Or, in other words: the new opcode is only enabled in upgraded transactions in upgraded blocks.

    That should give a nice belt-and-suspenders for supporting broken miners who might accidentally put a TxRedeem.version=2 into their block.version=old blocks.
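    As a sketch, that gating rule amounts to something like the following (all names and version numbers here are hypothetical):

        // Hypothetical sketch of the version-gating rule proposed above; the
        // constants and function name are illustrative only.
        static const int TX_VERSION_REQUIRED    = 2;
        static const int BLOCK_VERSION_REQUIRED = 3;

        // Returns true if the redefined NOP should get its new *VERIFY semantics,
        // false if it should still be evaluated as a plain no-op.
        bool NewOpcodeEnabled(int nTxVersion, int nBlockVersion)
        {
            if (nTxVersion < TX_VERSION_REQUIRED || nBlockVersion < BLOCK_VERSION_REQUIRED)
                return false;
            return true;
        }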

  26. petertodd force-pushed on Oct 13, 2014
  27. petertodd commented at 1:10 pm on October 13, 2014: contributor

    @gavinandresen First of all, you’re assuming that post-upgrade we reject nVersion < post_upgrade_nVersion blocks. This is problematic as now non-upgraded miners are creating blocks rejected by the rest of the network that to non-upgraded wallets look like real confirmations. Lots of people accept funds based on a single confirmation, so this risks people getting double-spent. With BIP34 - Height in coinbase - we had a good reason to do that: it was a security patch. With new opcodes we don’t have that necessity.

    Secondly if post-upgrade nVersion=1 transactions continue to treat NOPx as NOP - only applying the new behavior in nVersion=2 transactions - you have a security flaw. For instance consider this CHECKLOCKTIMEVERIFY-using scriptPubKey for a micropayment channel:

        IF
            2 <merchant pubkey> <customer pubkey> 2 CHECKMULTISIG
        ELSE
            <timeout> CHECKLOCKTIMEVERIFY DROP
            <customer pubkey> CHECKSIG
        ENDIF
    

    To steal the funds the customer just has to create a nVersion=1 transaction spending them and get it mined, as I explained in my mailing list post.(1) If post-upgrade nVersion=1 transactions treat NOP2 as CHECKLOCKTIMEVERIFY too - fixing this exploit - the transaction version number upgrade has no purpose.

    That should give a nice belt-and-suspenders for supporting broken miners who might accidentally put a TxRedeem.version=2 into their block.version=old blocks.

    This is a case where “belt-and-suspenders” makes the rules more complex and less secure, not more. “Post-upgrade NOP2 becomes CHECKLOCKTIMEVERIFY” is a very simple rule without any ambiguity or ways to bypass it - exactly what you want when not applying that rule correctly results in money getting stolen, as demonstrated above.

    1. http://www.mail-archive.com/bitcoin-development%40lists.sourceforge.net/msg06247.html
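    For comparison, a simplified sketch of that unconditional rule (helper and parameter names are approximate, and the rule as later specified also requires both locktime values to be of the same type, block height vs. timestamp, which is omitted here):

        #include <cstdint>
        #include <limits>

        // Simplified sketch of an unconditional NOP2 -> CHECKLOCKTIMEVERIFY rule:
        // the check applies regardless of the spending transaction's nVersion, so
        // there is no version-based escape hatch of the kind discussed above.
        bool CheckLockTime(int64_t nStackLockTime, uint32_t nTxLockTime, uint32_t nSequence)
        {
            if (nStackLockTime < 0)
                return false;                       // negative locktimes are never valid
            if ((int64_t)nTxLockTime < nStackLockTime)
                return false;                       // the output is still time-locked
            if (nSequence == std::numeric_limits<uint32_t>::max())
                return false;                       // a final input disables nLockTime entirely
            return true;
        }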
  28. petertodd commented at 1:13 pm on October 13, 2014: contributor
    @sipa Rebased.
  29. petertodd commented at 1:13 pm on October 13, 2014: contributor
    Oh, and also changed the terminology from ‘blacklist’ to ‘discouraged’.
  30. petertodd renamed this:
    Blacklist NOPs reserved for soft-fork upgrades
    Discourage NOPs reserved for soft-fork upgrades
    on Oct 13, 2014
  31. TheBlueMatt commented at 6:04 pm on October 13, 2014: member
    utACK.
  32. gavinandresen commented at 6:17 pm on October 28, 2014: contributor
    utACK.
  33. sipa commented at 2:08 pm on November 4, 2014: member
    Needs rebase.
  34. petertodd force-pushed on Nov 4, 2014
  35. petertodd commented at 5:19 pm on November 4, 2014: contributor
    Rebased
  36. btcdrak commented at 6:48 pm on November 4, 2014: contributor
    ACK.
  37. sipa commented at 4:26 pm on November 17, 2014: member
    utACK; I really think we want this merged before there is a release with generalized P2SH IsStandard().
  38. gmaxwell commented at 1:19 am on November 18, 2014: contributor

    Before there is a release*. Alternatively, we could back out the IsStandard changes. :(

    Idea/Approach ACK but this needs a rebase to reflect the set_error() changes.

  39. petertodd force-pushed on Nov 18, 2014
  40. Discourage NOPs reserved for soft-fork upgrades
    NOP1 to NOP10 are reserved for future soft-fork upgrades. In the event
    of an upgrade such NOPs have *VERIFY behavior, meaning that if their
    arguments are not correct the script fails. Discouraging these NOPs by
    rejecting transactions containing them from the mempool ensures that
    we'll never accept transactions, nor mine blocks, with scripts that are
    now invalid according to the majority of hashing power even if we're not
    yet upgraded. Previously this wasn't an issue as the IsStandard() rules
    didn't allow upgradable NOPs anyway, but 7f3b4e95 relaxed the
    IsStandard() rules for P2SH redemptions allowing any redeemScript to be
    spent.
    
    We *do* allow upgradable NOPs in scripts so long as they are not
    executed. This is harmless as there is no opportunity for the script to
    be invalid post-upgrade.
    03914234b3
  41. petertodd commented at 3:22 am on November 18, 2014: contributor
    rebased
  42. gmaxwell commented at 4:33 pm on November 18, 2014: contributor
    ACK
  43. schildbach commented at 4:35 pm on November 18, 2014: contributor
    Oh wow, pull request 5000 (-:
  44. sipa merged this on Nov 20, 2014
  45. sipa closed this on Nov 20, 2014

  46. sipa referenced this in commit 3ba5ebc065 on Nov 20, 2014
  47. DrahtBot locked this on Sep 8, 2021
