Mempool only CHECKLOCKTIMEVERIFY (BIP65) verification #5496

pull petertodd wants to merge 4 commits into bitcoin:master from petertodd:mempool-only-checklocktimeverify changing 11 files +258 −10
  1. petertodd commented at 6:51 AM on December 17, 2014: contributor

    Essentially makes passing CHECKLOCKTIMEVERIFY (BIP65) an IsStandard() rule to make it easy to test applications using it; blocks containing transactions with invalid CHECKLOCKTIMEVERIFY arguments are still accepted.

    This pull-req is not a soft-fork nor does it contain any code to start that process.

    An alternative would be a command-line flag to enable CLTV; however, actually implementing that is somewhat ugly. There is a small risk of people thinking that it's safe to depend on CLTV, although I think that's outweighed by the value of thoroughly testing everything but the soft-fork itself.

    Thanks to @sipa and @btcdrak for the rebase. Credit goes to @gmaxwell for the suggestion of comparing the argument against the transaction nLockTime rather than the current time/blockheight directly.

    Background: BIP65 - https://github.com/bitcoin/bips/blob/master/bip-0065.mediawiki

  2. gmaxwell commented at 6:57 AM on December 17, 2014: contributor

    Okay, I was about to wag my finger at you for making it IsStandard, making the hardfork possible, but then I caught that you said passing. Your text is reasonably clear; I'm leaving this comment just in case someone else makes the same reading error.

    I'm unsure how useful this is. There is some risk that people will mistake IsStandardness enforcement for adequate security. ("I tried to cheat and couldn't!"). OTOH, it sounds like a really interesting way to roll out softforking changes that paces the risk out somewhat better.

  3. petertodd commented at 7:09 AM on December 17, 2014: contributor

    Okay I was about to wag my finger at you about making it IsStandard making the hardfork possible

    To be clear, what exactly were you thinking I'd done?

    I'm unsure how useful this is. There is some risk that people will mistake IsStandardness enforcement for adequate security.

    Yeah. OTOH, many of the applications for CLTV, like micropayment channels, already have some of that risk in that they're vulnerable to tx mutability. Not as seriously, but it's analogous.

    Anyway, people who make that kind of mistake are probably making other mistakes that'll have them lose money anyway...

  4. gmaxwell commented at 7:11 AM on December 17, 2014: contributor

    @petertodd My initial read of the description had me thinking you made CLTV IsStandard with no constraint on validity ("No code to start the process of a soft fork"), creating a risk for miners who might include invalid ones.

  5. petertodd commented at 7:13 AM on December 17, 2014: contributor

    @gmaxwell Ah! Yeah, the NOPx discouragement code still applies, and in fact has a special case to make sure that if CLTV isn't enabled, NOP2 usage is discouraged: https://github.com/bitcoin/bitcoin/pull/5496/files#diff-be2905e2f5218ecdbe4e55637dac75f3R339

    (DISCOURAGE_UPGRADABLE_NOPS is unittested too, so without that the unittests would fail)

  6. btcdrak commented at 8:06 AM on December 17, 2014: contributor

    @petertodd I have to agree with @gmaxwell, something about your PR wording immediately made me jump to the wrong conclusion too.

  7. petertodd commented at 10:36 AM on December 17, 2014: contributor

    Any suggestions on wording this better?

  8. in src/script/interpreter.cpp:None in 70fa6b6615 outdated
    1150 | +    // prevent this condition. Alternatively we could test all
    1151 | +    // inputs, but testing just this input minimizes the data
    1152 | +    // required to prove correct CHECKLOCKTIMEVERIFY execution.
    1153 | +    if (txTo.vin[nIn].IsFinal())
    1154 | +        return false;
    1155 | +
    


    SergioDemianLerner commented at 2:21 PM on December 17, 2014:

    I don't understand why this check returns false and why it's not done before (nLockTime > (int64_t)txTo.nLockTime). The comment says that if IsFinal() = true, then the CHECKLOCKTIMEVERIFY is bypassed (accepted).


    SergioDemianLerner commented at 2:23 PM on December 17, 2014:

    I will read the spec again.


    petertodd commented at 2:42 PM on December 17, 2014:

    You're reading it backwards: we're trying to prevent someone from bypassing the nLockTime of the transaction by setting every nSequence field to MAXINT.

    As for the order of the tests, I designed it to go from least data required to prove to most.



    SergioDemianLerner commented at 2:57 PM on December 17, 2014:

    Right. The Bitcoin protocol first checks the nLockTime and only THEN checks the sequence numbers if the nLockTime check fails. I had to re-read the IsFinalTx() function.
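
    Paraphrasing the checks discussed in this thread, here is a simplified standalone sketch; the struct and function names are illustrative, not the actual patch:

    ```cpp
    #include <cstdint>
    #include <vector>

    // Illustrative stand-ins for the real Bitcoin Core types.
    static const int64_t LOCKTIME_THRESHOLD = 500000000; // below: block height; at/above: unix time
    static const uint32_t SEQUENCE_FINAL = 0xffffffff;

    struct TxIn { uint32_t nSequence; };
    struct Tx   { uint32_t nLockTime; std::vector<TxIn> vin; };

    // Returns true iff the CHECKLOCKTIMEVERIFY argument nLockTime is satisfied,
    // with the checks ordered from least to most data required to prove.
    bool CheckLockTimeSketch(const Tx& tx, unsigned int nIn, int64_t nLockTime)
    {
        // 1) Both operands must be of the same "type" (both block heights or
        //    both timestamps), otherwise the numeric comparison is meaningless.
        if (!((tx.nLockTime <  LOCKTIME_THRESHOLD && nLockTime <  LOCKTIME_THRESHOLD) ||
              (tx.nLockTime >= LOCKTIME_THRESHOLD && nLockTime >= LOCKTIME_THRESHOLD)))
            return false;

        // 2) Apples-to-apples: the tx's nLockTime must be at least the argument.
        if (nLockTime > (int64_t)tx.nLockTime)
            return false;

        // 3) nLockTime is ignored entirely when every input is final, so a
        //    spender could otherwise bypass CLTV by setting nSequence to
        //    maxint; checking just this input is sufficient to prevent that.
        if (tx.vin[nIn].nSequence == SEQUENCE_FINAL)
            return false;

        return true;
    }
    ```

    The third check is the one Sergio was reading backwards: it rejects the spend when the input is final, precisely because finality would disable nLockTime and thereby CLTV.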

  9. in src/script/interpreter.cpp:None in 70fa6b6615 outdated
    1143 | +    // Finally the nLockTime feature can be disabled and thus
    1144 | +    // CHECKLOCKTIMEVERIFY bypassed if every txin has been
    1145 | +    // finalized by setting nSequence to maxint. The
    1146 | +    // transaction would be allowed into the blockchain, making
    1147 | +    // the opcode ineffective.
    1148 | +    //
    


    SergioDemianLerner commented at 3:04 PM on December 17, 2014:

    The comment regarding "bypassing" is a bit confusing. Shouldn't it say "Finally CHECKLOCKTIMEVERIFY WILL INVALIDATE THE REDEEMER TRANSACTION if the corresponding txin has been finalized by setting nSequence to maxint."


    petertodd commented at 3:12 PM on December 17, 2014:

    I'm not sure that's any less confusing.

    The key concept to understand is that CLTV only checks the transaction's nLockTime field, not the actual block height or time. Thus anything that disables nLockTime's effect can bypass CLTV.

    Incidentally, I think part of the confusion is how the SignatureChecker object separates the logic of CLTV into two completely disjoint parts for no clear reason. It's much clearer when all this logic is in one place. A better refactoring would be to encapsulate the execution state and, if needed, cache expensive operations on that state, like hashing the transaction. This is especially clear when you consider future improvements like opcodes to sign something other than the txin prevout.



    sipa commented at 4:13 PM on December 17, 2014:

    I believe I would have found @SergioDemianLerner's suggestion to be helpful when trying to understand the code - and that was starting from a version that had everything in one place.

    Extra documentation on the SignatureChecker class and its virtual methods is probably helpful though. They essentially represent "verify something from the context".


    petertodd commented at 6:17 AM on December 20, 2014:

    @sipa My thinking there is that the comments explain why the code is the way it is; the code itself should be sufficient to explain what it's doing. This is particularly true in consensus-critical code like this patch, where the code itself is extremely simple and the implications and design criteria are the important part. @SergioDemianLerner's suggested change doesn't explain the why; if anything it encourages the reader to read only the comments and potentially miss some subtle aspect of exactly what the code really does.

  10. petertodd force-pushed on Dec 25, 2014
  11. petertodd commented at 6:20 AM on December 25, 2014: contributor

    Rebased on top of #5521

  12. NicolasDorier commented at 10:10 PM on December 25, 2014: contributor

    Just to check that I understand correctly.

    You would want such a mempool-only check of CLTV out for 0.11, and then, if it works without any notable problems and with some people building stuff on top of it, push it into the miners' rules for 0.12?

    I'm asking because I intend to build some stuff on top of it, and I'm not really motivated to run and adapt my code for a viacoin node. So I guess my only option for now is to wait for 0.11?

  13. TheBlueMatt commented at 10:14 PM on December 25, 2014: member

    Hopefully we enforce more than the mempool for 0.11, but enforcing for the mempool is a one-line change after merging the code (which we should move forward with, whether with mempool enforcement or not).

  14. gmaxwell commented at 10:25 PM on December 25, 2014: contributor

    @petertodd Say the relay rule were widely deployed. What would be the plan if we discovered that the exact rule construction wasn't the one we wanted?

  15. NicolasDorier commented at 10:36 PM on December 25, 2014: contributor

    I see. That's great to know!

    I'll be waiting for 0.11; I don't have the time/skill to compile bitcoind from source on Linux.

    Not a deal breaker though. I have my own script evaluator in .NET, so I will port CLTV + tests into it and can test without a node. Not perfect, but I should be able to play with it faithfully enough.

  16. petertodd commented at 10:44 PM on December 25, 2014: contributor

    Since it's just a relay rule, at worst you just reject transactions with NOP2 from the mempool (the default behaviour), pick a new NOP#, and try out the new behaviour.

    This means the implementation does either one version or the other (nice and clean) and neither version will interfere with the other. Of course both NOPs will make it into blocks, but that's not going to do us any harm, as neither side has applied a soft-forking rule.

    Now in many cases you don't need to do all this. For instance, if we were to decide to make CLTV act against the block height/time rather than the tx's nLockTime, nodes with the old rule would just reject txs from the less permissive version. You'd want to be careful re: DoS ban code, but we're finding that crops up over and over again anyway...


  17. laanwj added the label Feature on Jan 8, 2015
  18. laanwj added the label TX scripts on Jan 8, 2015
  19. petertodd force-pushed on Apr 21, 2015
  20. petertodd commented at 4:43 AM on April 21, 2015: contributor

    Rebased on master.

  21. petertodd commented at 4:45 AM on April 21, 2015: contributor
  22. morcos commented at 5:22 PM on May 1, 2015: member

    ACK I reviewed this and tested by creating this small RPC test: https://gist.github.com/morcos/61084ac8a33363278638 (it requires a rebase to master first as it uses some of the new python tools)

  23. jgarzik commented at 5:28 PM on May 1, 2015: contributor

    Concept ACK - I think this falls on the side of being a useful soft fork upgrade half-step.

    Of course, I would prefer to be more aggressive and go the entire way in one step.

  24. petertodd commented at 10:04 PM on May 3, 2015: contributor

    @morcos Actually your RPC test should work on master now; pull-req #5981 has been merged. @jgarzik We don't know if the most recent soft-fork will actually trigger prior to the release of v0.11, especially with @laanwj's proposed schedule. Merging this for v0.11 would at least get 95% of the code in the release, with actual enforcement happening in a v0.11.1 release. Getting off the rebase treadmill would also be nice.

    While there has been discussion about a relative CLTV by @TheBlueMatt and @jtimon I don't think any of it really impacts implementation of an absolute CLTV, which is also needed. My most recent proposal is that absolute CLTV be implemented as-is, and relative CLTV be left for a future scripting system rethinking: http://www.mail-archive.com/bitcoin-development@lists.sourceforge.net/msg07418.html

  25. TheBlueMatt commented at 2:19 AM on May 4, 2015: member

    I like merging this, but doing both CLTV things in one swoop would be really nice. Certainly if we're gonna use one of the precious few OP_NOPs we have, we might as well make it more flexible. The relative-CLTV idea is incredibly useful in systems like the proposed Lightning network, a proposal to build on top of payment channel hub-and-spoke networks (though I've been unsuccessful in getting them to comment on this specific issue here, as they can't seem to get their paper rewritten).

  26. petertodd commented at 5:09 AM on May 4, 2015: contributor
  27. petertodd force-pushed on May 9, 2015
  28. Make CScriptNum() take nMaxNumSize as an argument
    While the existing numeric opcodes are all limited to 4-byte bignum
    arguments, new opcodes will need different limits.
    99088d60d8
  29. Move LOCKTIME_THRESHOLD to src/script/script.h
    Will now be needed by CHECKLOCKTIMEVERIFY code.
    48e9c57cf0
  30. Replace NOP2 with CHECKLOCKTIMEVERIFY (BIP65)
    <nLockTime> 1 CHECKLOCKTIMEVERIFY -> <nLockTime> 1
    
    Fails if tx.nLockTime < nLockTime, allowing the funds in a txout to be
    locked until some block height or block time in the future is reached.
    
    Only the logic and unittests are implemented; this commit does not have
    any actual soft-fork logic in it.
    
    Thanks to Pieter Wuille for rebase.
    
    Credit goes to Gregory Maxwell for the suggestion of comparing the
    argument against the transaction nLockTime rather than the current
    time/blockheight directly.
    dc17027f2c
  31. Enable CHECKLOCKTIMEVERIFY as a standard script verify flag
    Transactions that fail CLTV verification will be rejected from the
    mempool, making it easy to test the feature. However blocks containing
    "invalid" CLTV-using transactions will still be accepted; this is *not*
    the soft-fork required to actually enable CLTV for production use.
    14a63f433c
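
    The split described in the last commit message amounts to putting the new verify flag in the standard (relay) flag set but not the mandatory (consensus) one. A minimal sketch, with made-up flag values for illustration (the real constants live in src/script/interpreter.h and src/main.h):

    ```cpp
    #include <cstdint>

    // Hypothetical flag values for illustration only.
    enum : uint32_t {
        SCRIPT_VERIFY_P2SH                = (1U << 0),
        SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY = (1U << 9),
    };

    // Consensus-level checks: blocks containing "invalid" CLTV spends still pass.
    static const uint32_t MANDATORY_SCRIPT_VERIFY_FLAGS = SCRIPT_VERIFY_P2SH;

    // Relay/mempool (IsStandard-level) checks: CLTV is enforced only here, so
    // failing transactions are rejected from the mempool but not from blocks.
    // Moving the flag into the mandatory set later is the "one-line change"
    // TheBlueMatt mentioned earlier in the thread.
    static const uint32_t STANDARD_SCRIPT_VERIFY_FLAGS =
        MANDATORY_SCRIPT_VERIFY_FLAGS | SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY;
    ```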
  32. petertodd force-pushed on May 9, 2015
  33. petertodd commented at 9:08 AM on May 9, 2015: contributor

    Updated to add a type field for future soft-forks, e.g. a relative CLTV upgrade.

    BIP updated: https://github.com/bitcoin/bips/pull/153

    CLTV demos: https://github.com/petertodd/checklocktimeverify-demos/commit/148e91e74f8b9e31d0bf35ddcf506b5030a05232

    This should satisfy all objections. I propose this is merged for v0.11.0, and the actual soft-fork implemented in v0.11.1 once the ongoing BIP66 soft-fork triggers. (likely with @sipa's new nVersion feature flags BIP)

  34. btcdrak commented at 10:35 PM on May 11, 2015: contributor

    I have tested the code and sent the following tx fc6fa964c510e79b55ccea5bc8f21a505867b8ed286f6da02172bf8c297720dd on testnet using the following redeem script 632102b0efefbe8f8feaabc74ebde89928bfacc6471fe5f4db43b19c2ff4e58fdfc290ad670400ca9a3b51b16d6821032c8296c603d3c5802027d57a8f3ae382d7464abfda6f80e4f7d592ce23d696e7ac which decodes to OP_IF 02b0efefbe8f8feaabc74ebde89928bfacc6471fe5f4db43b19c2ff4e58fdfc290 OP_CHECKSIGVERIFY OP_ELSE 1000000000 1 OP_NOP2 OP_2DROP OP_ENDIF 032c8296c603d3c5802027d57a8f3ae382d7464abfda6f80e4f7d592ce23d696e7 OP_CHECKSIG

    ACK.

  35. NicolasDorier commented at 12:57 PM on May 12, 2015: contributor

    The goal of doing that is to work around NOPx scarcity. Would it not be better to take a currently unused NOP to create an "OP_EXT" instruction?

    Then define:

    0 OP_EXT would be for CHECKLOCKTIMEVERIFY,

    1 OP_EXT would be for the next soft fork,

    2 OP_EXT would be for the third soft fork, etc...

    Your PR seems to only add extensibility for a future relative CLTV, when I think it can be broader without downside.

  36. btcdrak commented at 1:02 PM on May 12, 2015: contributor

    @NicolasDorier I am not sure I understand, the parameter that precedes OP_NOP2 already allows for any number of future CLTV implementations (e.g. relative CLTV and other implementations yet to be imagined), 1 OP_NOP2, 2 OP_NOP2, 3 OP_NOP2, 4 OP_NOP2, 5 OP_NOP2, 6 OP_NOP2, 7 OP_NOP2.... n OP_NOP2

  37. NicolasDorier commented at 1:10 PM on May 12, 2015: contributor

    What I mean is that you just changed CLTV by requiring 1 OP_NOP2 to invoke it. You did it for NOP scarcity reasons, and (not a big deal though) it complicated the previously simpler code.

    The rationale for adding the PUSH 1 was permitting new modes and implementations of CLTV in the future without wasting other NOPs.

    My point is that NOP scarcity is not a problem worth considering.

    Because the day we use all the NOPs except NOP10, we can just say that "NOP10" is an "OP_EXT" operation, and then consider "1 OP_EXT" like a "NOP11" and "2 OP_EXT" like a "NOP12".

    So I see no problem with using NOP3 for RCLTV or other mode we might invent in the future.

  38. NicolasDorier commented at 1:44 PM on May 12, 2015: contributor

    Following what I just said, the latest PR is equivalent to choosing OP_EXT = NOP2, CLTV = NOP11, and RCLTV = NOP12, so why not wait for NOP10 for OP_EXT and just use NOP2 for CLTV and NOP3 for RCLTV?
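
    The OP_EXT idea discussed in these comments could look roughly like the following. This is entirely hypothetical (no such opcode exists, and the selector values are made up). Note that a NOP-redefining soft fork must leave the stack untouched, since unupgraded nodes treat the opcode as a no-op, so the selector is peeked rather than popped and scripts clean up afterwards with OP_DROP/OP_2DROP, as in btcdrak's test script above:

    ```cpp
    #include <cstdint>
    #include <vector>

    // Hypothetical selector values for a future "n OP_EXT" convention.
    enum class ExtOp : uint32_t {
        CHECKLOCKTIMEVERIFY = 0, // "0 OP_EXT"
        NEXT_SOFTFORK       = 1, // "1 OP_EXT", etc.
    };

    // Sketch of the interpreter dispatch for an OP_EXT-style opcode.
    // Returns false on failure; unknown selectors act as a no-op so that
    // future soft forks can assign them meaning.
    bool EvalExtSketch(const std::vector<int64_t>& stack)
    {
        if (stack.empty())
            return false; // a selector must be present
        switch (static_cast<ExtOp>(static_cast<uint32_t>(stack.back()))) {
        case ExtOp::CHECKLOCKTIMEVERIFY:
            // The real check would verify stack[stack.size() - 2] against the
            // tx's nLockTime; here we only require the argument to exist.
            return stack.size() >= 2;
        default:
            return true; // undefined extension: behaves as a NOP for upgradability
        }
    }
    ```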

  39. gavinandresen commented at 1:59 PM on May 12, 2015: contributor

    I agree with @NicolasDorier, no reason to worry about NOP scarcity until we hit OP_NOP10-- and when we do, we can redefine OP_NOP10 as OP_EXT.

    I think there is a good chance before then one of the OP_NOP's will be used to define an entirely different, redesigned scripting system, so we'll never get there.

  40. NicolasDorier commented at 3:41 PM on May 12, 2015: contributor

    Just to be clear, this is not an objection to the parameterized CLTV. I just wanted to point out that the reasons for doing it were flawed, but I don't feel strongly about that.

    I would prefer a parameterized CLTV for the May 15 freeze to delaying CLTV one more time for the reasons I gave.

  41. petertodd commented at 5:10 PM on May 12, 2015: contributor

    I opened a pull-req with the unparameterized version as well, #6124

    I personally prefer the unparameterized version without the type argument; I agree with @gavinandresen that running out of NOP's isn't a major concern. More importantly though, let's pick one and get this merged for v0.11.0

  42. btcdrak commented at 5:13 PM on May 12, 2015: contributor

    @NicolasDorier It's a valid point. I don't particularly care either way, although the PR was adjusted to account for discussion on the mailing list. I'm fine with either, but prefer the original, #6124.

  43. in src/script/interpreter.cpp:None in 14a63f433c
     334 | @@ -335,9 +335,70 @@ bool EvalScript(vector<vector<unsigned char> >& stack, const CScript& script, un
     335 |                  // Control
     336 |                  //
     337 |                  case OP_NOP:
     338 | -                break;
     339 | +                    break;
     340 | +
     341 | +                case OP_CHECKLOCKTIMEVERIFY:
     342 | +                {
    


    jtimon commented at 7:21 PM on May 12, 2015:

    Can you move all this to an inline function outside of EvalScript? It will be build-equivalent but much more readable (at the very least for the indentation, but seeing the parameters of each opcode is interesting too). At some point we should do something like #5153 (having a check-identical-build script would help make that trivial to review and non-risky in a completely obvious way), so we could at least start doing it with the new opcodes.


    btcdrak commented at 7:25 PM on May 12, 2015:

    @jtimon Could codestyle nits be left until after the PR is merged?


    jtimon commented at 7:33 PM on May 12, 2015:

    Well, it would just be delaying that to the later, bigger cleanup of EvalScript; the point is to make the history cleaner by squashing the fixup into https://github.com/petertodd/bitcoin/commit/dc17027f2c37b97ce5b40154a64f18c16a72ee18. But adding a fixup commit that is proven to produce an identical build isn't really a blocker IMO. What's blocking this is the lack of a decision on the parameterized version versus the single-opcode one. In any case, this is just a nit, and if we decide before fixing the nit, I'll just remember to raise this before considering anything else when the next opcode is proposed.


    btcdrak commented at 7:38 PM on May 12, 2015:

    I think the issue of using parameterised version or not is quite clear: there isn't scarcity as pointed out by @gavinandresen and @NicolasDorier because OP_NOP10 can be repurposed to a general extension. Until we reach the limits it surely makes sense to produce concise scripts without parameter hacks?


    jtimon commented at 7:39 PM on May 12, 2015:

    Well, another blocker is the new softfork deployment mechanism @sipa is working on. And remember this doesn't have to be part of 0.11; we could just release 0.11.1 with this a week after releasing 0.11. So, yes, I think we have time for an identical-build fixup commit (it won't imply more testing). Why clean up later when you can avoid it in the first place?


    petertodd commented at 8:56 PM on May 12, 2015:

    @jtimon I'm skeptical of making this an inline function when no other opcodes are, precisely because when (if!) we change other opcodes to be inline functions, the last thing we need is two different ways of doing it. Better to stick to the same well-tested mechanism everything else uses; making it an inline function now risks having to change it again anyway.

    re: the softfork mechanism, I'm well aware of it, and have been reviewing it myself. The idea here is just to get all but the actual softfork into a release to make network-wide testing of the new functionality easy. In the unlikely event that the functionality has to change this gives us one last chance to do so prior to the actual soft-fork.


    jtimon commented at 1:06 AM on May 13, 2015:

    re softfork: ok, that makes sense.

    I really hope it's when and not if. Your argument sounds like "let's just dump the trash on the kitchen floor because the rest of the house is also dirty". I don't understand why having ugly code should preclude us from writing the new stuff in a cleaner way; it will mean less work and smaller diffs in future cleanups.

    same, well-tested, mechanism everything else uses

    This would be equally well tested since the build would be identical. That's the main reason to make the function inlined (one could also argue about performance but I'm pretty sure the compiler is smart enough to "think that by itself").

    I insist this is a nit and shouldn't be a blocker: "I'm too lazy to do that now. You should have asked before, too late, it will be more work for you or someone else later" would be a perfectly valid answer.

    On #5153 in general, it's sad that the interpreters of other implementations (like libbitcoin's) are much more readable than libconsensus'... But it's worse: people not only don't want to implement it but actively oppose that kind of obvious, trivial, and completely safe improvement. This also applies to a subset of #4646 and any other huge nesting we have. Years of experience in software engineering show that deep nesting is a bad practice that leads to less readable and thus less maintainable code. That's why tools like checkstyle exist.

    But, anyway, it's not worth it to discuss this further in here. Do the more-readable-but-equivalent-build-fixup commit or don't. It's completely up to you. Nobody coded the identical-build-check tool that I proposed and you created a bounty for, so actually checking that the build is identical will be more work than making the inline function.


    petertodd commented at 5:57 PM on May 15, 2015:

    I don't understand why having ugly code should preclude us from writing the new stuff in a cleaner way; it will mean less work and smaller diffs in future cleanups.

    Well, like I say, I think having to fix something in the future in two different ways rather than just one is more work. I'd give the analogy of a factory that produces drums of arsenic as a byproduct, which currently get dumped in a big pile out the back. You'd be better off continuing to put it all in the one big pile, rather than deciding "Ah, we're pretty sure we're going to dump it in the river in the future; arsenic doesn't bioaccumulate, so the dilution will fix the problem. So stick all new drums in the river." If it turns out that plan was a bad idea, now you have to clean up both a backyard and a river: twice as much work to plan the cleanup! Better to have a problem with exactly one solution that can be applied repeatedly and mechanically to fix it.

  44. petertodd commented at 5:53 PM on May 15, 2015: contributor

    Seems we have rough consensus that a type parameter isn't needed, so closing in favor of #6124

  45. petertodd closed this on May 15, 2015

  46. maaku commented at 4:46 PM on May 16, 2015: contributor

    Gah. I was not subscribed to this issue and didn't see the last few days' debate.

    Allow me to make an argument in favor of the type parameter. I have never seen opcode scarcity as the prime justification, and that was not the motivation for suggesting it as I did in an earlier PR. The reason I believe it is valuable to include the type parameter is for possible type checking efficiencies and future-proofing if the convention is adopted that OP_NOP(n) has the format [n n-1 ... 3 2 aux OP_NOPn] where the first parameter is the auxiliary opcode. So OP_NOP1 is used for opcodes which take no parameters [aux OP_NOP1]; OP_NOP2 for those like CLTV and RCLTV which take a single parameter [x aux OP_NOP2]; OP_NOP3 for those which take two parameters [y x aux OP_NOP3]; etc.

    The implementation becomes a little bit cleaner, as the OP_NOP(n) case statement can check that a sufficient number of parameters is on the stack for all of its auxiliary opcodes, and future script authoring tools can have some rudimentary, future-proof stack bounds checking.

    I am strongly in favor of including the type parameter.

  47. maaku commented at 5:05 PM on May 16, 2015: contributor

    Also, I should mention that including the type parameter allows OP_CLTV and its relative variant to share nearly all of their implementation inline without a call-out. I hope to have code demonstrating this available in the very near future.

  48. jtimon commented at 5:09 PM on May 16, 2015: contributor

    You can still reuse code in the checker (https://github.com/bitcoin/bitcoin/pull/6124/files#diff-be2905e2f5218ecdbe4e55637dac75f3R1129); having two separate opcodes for CLTV and RCLTV doesn't preclude code reuse.

  49. petertodd commented at 6:55 PM on May 16, 2015: contributor

    @maaku I think that's making a lot of premature assumptions about how the scripting system will be extended in the future. I think we're a lot more likely to implement an OP_MAST/OP_EVAL which just calls a new scheme, rather than continue extending what we have now.

    re: code reuse, I'm not convinced that avoiding code duplication in cases where we're talking about just a few lines of consensus-critical code really makes all that much sense. As @gmaxwell commented a few months ago, if every detail of a codebase needs to be reviewed, having everything in one place can be easier than splitting things up across many different functions.

  50. maaku commented at 7:04 PM on May 16, 2015: contributor

    @petertodd The only non-aesthetic argument I've seen for multiple top-level opcodes instead of an extended opcode is that it saves 1 byte. However, that byte is in the scriptSig, as there truly isn't any reason to use CLTV/RCLTV within a scriptPubKey. On the other hand, there may be future extensions to script for which we want an unallocated NOP in scriptPubKeys; I'd much rather take the hit of an extra byte in the scriptSig for CLTV/RCLTV than in the UTXO set if we find later that we've exhausted the NOP space.

  51. petertodd commented at 7:11 PM on May 16, 2015: contributor

    @maaku Hmm? It's quite likely that we'll see CLTV used in scriptPubKeys for payment channels, for backup reasons, and it's absolutely guaranteed they'll be in redeemScripts. CLTV arguments are certainly always going to be in scriptPubKeys/redeemScripts, as they are rarely, if ever, something that makes sense to provide when spending a txout.

    Anyway, the cost of an extra script version byte when it comes time to eventually do an OP_MAST is <3% of the total storage requirements to store a single UTXO.

  52. jtimon commented at 7:41 PM on May 16, 2015: contributor

    @maaku I believe the only non-aesthetic argument in the other direction is NOP scarcity. The reuse argument is invalid, and your other "primary argument" seems aesthetic to me as well.

  53. jgarzik commented at 8:30 PM on May 16, 2015: contributor

    Non-aesthetic? In a wider context, agree w/ @petertodd RE "I think we're a lot more likely to implement an OP_MAST/OP_EVAL which just calls a new scheme, rather than continue extending what we have now."

    e.g. OP_ETHEREUM would likely just require one opcode.

  54. maaku commented at 9:50 PM on May 16, 2015: contributor

    Even though I'm the first to argue in favor of a script replacement, when discussing changes to the existing script system I think it is prudent to assume that we'll be stuck with script for much longer than we anticipate. I think that conservatism favors a two-byte extension opcode: (1) it leaves more single byte nopcodes available for OP_EVAL-like extensions which benefit more from brevity, and (2) if done in a certain way, it allows future-proof stack bounds checking in script authoring tools or policy checks.

  55. petertodd commented at 9:55 PM on May 16, 2015: contributor

    Without a full script replacement why would you ever need any script authoring tools? Script is just too limited to warrant that kind of complexity right now - there are literally less than a dozen or so basic script fragments that are useful.


  56. jtimon commented at 11:13 PM on May 16, 2015: contributor

    (1) it leaves more single byte nopcodes available for OP_EVAL-like extensions which benefit more from brevity,

    Let's reserve OP_16 for script2.0 and let's assume that will take ages to be deployed. Let's reserve OP_15 for taking the extra byte instead of doing it now; that will give us 255 more instructions. Let's reserve OP_15 255 to take a short (2 bytes) in a similar way; that will give us 65536 more. Let's reserve OP_15 255 65535 to take 4 bytes; that will give us 4294967296 more. Etc, etc... I really think we're safe.
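
    A minimal sketch of how such an escape-opcode scheme could decode (the byte values here are invented for illustration; real Bitcoin script does not encode opcodes this way):

    ```python
    # Hypothetical escape-opcode decoding: one reserved byte consumes the
    # following byte(s) to select from a much larger opcode space.
    OP_EXT1 = 0xFE   # hypothetical: next 1 byte gives 256 extra opcodes
    OP_EXT2 = 0xFF   # hypothetical: next 2 bytes give 65536 extra opcodes

    def decode_opcodes(script: bytes):
        """Yield (opcode_space, opcode_number) pairs from a raw byte stream."""
        i = 0
        while i < len(script):
            b = script[i]
            if b == OP_EXT1:
                yield ("ext1", script[i + 1])
                i += 2
            elif b == OP_EXT2:
                yield ("ext2", int.from_bytes(script[i + 1:i + 3], "big"))
                i += 3
            else:
                yield ("base", b)
                i += 1

    # base opcode, then extended opcode 7, then extended opcode 256
    print(list(decode_opcodes(bytes([0x51, 0xFE, 0x07, 0xFF, 0x01, 0x00]))))
    ```

    Each escape level trades one byte of script size for an exponentially larger opcode space, which is why running out is not a practical concern.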

    (2) if done in a certain way, it allows future-proof stack bounds checking in script authoring tools or policy checks.

    This is the part I'm not understanding. I'm missing why the reuse of "stack bounds checks" is not just bikeshedding and how the alternative is any less "future-proof". Can you elaborate on this?

  57. maaku commented at 11:55 PM on May 16, 2015: contributor

    Make OP_NOP2 fail unless there are at least two items on the stack. It doesn't care what the items are, but there must be at least two. At this point a script tool or relay policy can be written that has no idea what the extra OP_NOP2 functionality is, but it knows that at the point OP_NOP2 is executed, there must be two items on the stack. A script compiler or transaction validator throws an error if it can determine that the stack would contain fewer than two items when the OP_NOP2 is executed. For some scriptPubKeys, the minimum number of pushes in the scriptSig could be inferred from this and other constraints.

    Later, CLTV is implemented but none of this software has been updated to understand the new rules yet. Nevertheless, the compiler still throws an error when the lock time parameter is not provided, relay nodes drop the same transactions, and your script type checker helpfully reminds you that you are probably missing a value for that unrecognised OP_NOP2 extended instruction.

    It's a very limited form of type checking, and not nearly as useful as it could be in a much better designed language. But it is an improvement over the status quo for existing script.
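
    A minimal sketch (not Bitcoin Core code) of the kind of static check described above: stack accounting is simplified to plain push/pop effects, and the only rule enforced is "at least two items whenever OP_NOP2 executes":

    ```python
    # Opcode values match Bitcoin script; everything else is a simplification.
    OP_1, OP_2 = 0x51, 0x52          # push a small integer
    OP_DROP = 0x75                   # pop one item
    OP_NOP2 = 0xB1                   # the nopcode later assigned to CLTV

    # Net stack effect of each opcode this toy checker understands.
    STACK_EFFECT = {OP_1: +1, OP_2: +1, OP_DROP: -1, OP_NOP2: 0}

    def nop2_args_ok(script, initial_depth=0, required=2):
        """Return False if OP_NOP2 would run with fewer than `required` items."""
        depth = initial_depth
        for op in script:
            if op == OP_NOP2 and depth < required:
                return False
            depth += STACK_EFFECT.get(op, 0)
        return True

    print(nop2_args_ok([OP_1, OP_NOP2]))           # False: only one item
    print(nop2_args_ok([OP_1, OP_2, OP_NOP2]))     # True: two items
    ```

    The checker rejects a script that is missing the extended instruction's argument without knowing anything about what the instruction does, which is the limited form of type checking being argued for.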

  58. petertodd commented at 6:00 AM on May 19, 2015: contributor

    @maaku Relay nodes already drop transactions that use unknown opcodes; in fact I should have done that for this type of patch... which shows it's an ill-advised idea IMO - yet another thing to take into account over the simpler "one-opcode one-function" principle.

    Again, I'm just not seeing the value of things like "type checkers" for the scripting language right now, given how utterly simple it is. Equally, in the future if the choice was to make the scripting language more or less complex for the sake of type checking vs. getting consensus, I'd 100% choose the latter, always. People are free to write scripts that are bad ideas; the important thing is that the consensus critical codebase stay simple to ensure we don't make mistakes that affect everyone. (like the one I made above!)

  59. MarcoFalke locked this on Sep 8, 2021
