Opening this here for wider discussion and feedback.
edit: Probably need to add implementation section: https://github.com/instagibbs/bitcoin/tree/2025-07-op_templatehash
Some previous related discussions:
I’m afraid I’m going to write a lot and say very little, but I have a few thoughts maybe worth sharing. TL;DR I like this proposal. As-is I prefer it slightly over CTV, but I think committing to the annex may be undesirable, and it’s worth further discussion (as much as I’d like to avoid excessive bikeshedding).
The improvements on CTV are obvious, pushing the hash onto the stack is better, leveraging only cached intermediate hashes is better.
Dropping the scriptSig commitment seems debatable, but I accept the given rationale for omitting it. Furthermore, CTV’s scriptSig commitment introduces a new variable length chunk of data to hash.
The annex commitment is of course the most controversial addition. On the one hand, it is incredibly useful to eltoo style protocols, and TEMPLATEHASH commits to similar items as the taproot sighash, which includes the annex. On the other hand, the annex is the only witness item committed to, and there are plausible conflicts with future annex usage. I have to plead ignorance on the future annex usage issues, I can only say it seems plausible. Furthermore, it could be desirable to commit to only part of the annex, which is not permitted by TEMPLATEHASH, and developing a structured annex format adds a significant development dependency to TEMPLATEHASH.
I can imagine, for example, that CISA might leverage the annex to set the order of public keys in an aggregate key. In this scenario, committing to the entire annex ahead of time would prevent using CISA, though once again I confess I don’t know what such a scheme would actually look like.
My instinct here is to reduce the scope of TEMPLATEHASH. When in doubt, leave it out (how can it be wrong if it rhymes?) I could envision a future OP_TEMPLATEHASH_ANNEX if annex commitments proved to be that desirable, but at present it seems to me like the practical motivation for committing to the annex is entirely around eltoo, and there’s a non-zero chance it could conflict with other annex usages.
On the other hand, introducing a future OP_TEMPLATEHASH_NO_ANNEX would be similarly simple if the annex commitment proved to be a problem, and OP_TEMPLATEHASH (with annex) would be immediately useful for, and remain useful for, eltoo style constructions.
This is why I’m ok with the proposal as-is, if the annex commitment proves to be a mistake, it can still serve a useful purpose now and into the future, and a second opcode could be introduced to address the annex-less case.
I’d like to very tentatively float another way to dodge the annex issue: a hash selector bitfield. To this, instagibbs said something like “why not go full TXHASH then?” Unlike TXHASH, which has a huge amount of flexibility, a bitfield that permits choosing to commit to all or none of the sequences, all or none of the outputs, and all or none of the annex should not change the stopping point. One could also leverage the other five bits of a 1 byte stack item to potentially commit to the first N<32 outpoints, to support the sibling commitments. This last one in particular changes the performance profile of TEMPLATEHASH by an order of magnitude (though 5 bits is arbitrary, it could be 1, it could be 32) and drastically increases the surface area to consider, but it might be worth considering if it’s useful enough to the BitVM crowd. (The sibling commitment might be well past the point of “we should just go full TXHASH” however)
I can say that for me, committing to all of the outputs but none of the input sequences (and therefore not the input count) would be very useful for one of my projects which currently uses a deleted key ALL|ANYONECANPAY signature.
Today one can use the script PUSHDATA2 0x0802 <520 bytes> DUP SHA256 DROP to hash 520 bytes, requiring a 526 byte script. Repeating DUP SHA256 DROP can cause a verifier to hash 520 more bytes. You can cause a verifier to hash hashed_bytes(n) = 520 * n bytes for a script of script_len(n) = 523 + 3 * n bytes.
With TEMPLATEHASH as proposed and implemented, one can use the script 1 TEMPLATEHASH DROP to hash 109 bytes. Repeating TEMPLATEHASH DROP can cause a verifier to hash 109 more bytes. Therefore, you can cause a verifier to hash hashed_bytes(n) = 109 * n bytes for a script of script_len(n) = 1 + 2 * n bytes.
At exactly n = 80, the SHA256 script will hash 41600 bytes with a 763 byte script. TEMPLATEHASH, on the other hand, will hash only 41529 bytes with a 763 byte script, and the SHA256 script’s worst case hashing will get considerably worse from there.
Less naively, one can use the script PUSHDATA2 0x0802 <520 bytes> 3DUP SHA256 DROP SHA256 DROP SHA256 DROP to hash 1560 bytes, requiring a 530 byte script. Repeating 3DUP SHA256 DROP SHA256 DROP SHA256 DROP will cause a verifier to hash 1560 more bytes. This works out to hashed_bytes(n) = 1560 * n bytes for a script of length script_len(n) = 523 + 7 * n bytes.
Similarly, the TEMPLATEHASH abusing script can be made more efficient with 1 TEMPLATEHASH TEMPLATEHASH 2DROP to hash 218 bytes. Repeating TEMPLATEHASH TEMPLATEHASH 2DROP can cause a verifier to hash 218 more bytes. Therefore you can cause a verifier to hash hashed_bytes(n) = 218 * n bytes for a script of length script_len(n) = 1 + 3 * n bytes.
The less naive SHA256 script will hash 57720 bytes in 782 script bytes, and the less naive TEMPLATEHASH script hashes 56898 bytes in 784 script bytes. Once again, beyond this point, the SHA256 script is worse; therefore at the limit of script size, the SHA256 script will be worse.
I suppose all of that is to say “This is strictly less hashing than is necessary for other existing operations.” seems correct, assuming there aren’t more clever pathological scripts. Furthermore, the result of TEMPLATEHASH would be trivial to cache, while the result of DUP SHA256 DROP would be more difficult.
the TEMPLATEHASH abusing script

Wouldn’t the hash naturally be cached in implementation?
Thanks for the feedback @Ademan.
as much as I’d like to avoid excessive bikeshedding
Some amount of bikeshedding is justified for a proposal as consequential as a consensus change. :)
annex commitment
If I understand correctly, your criticism is that while committing to the annex is very useful for the flagship usecase of this proposal, it may clash with potential future usages of the annex. That on its own I don’t think is a valid reason not to have the annex commitment, if only because you can make the same argument in the other direction: not committing to the annex would be a design mistake, since committing to it is forward-compatible with future usages of the annex and is already provably useful for some of the main applications enabled by this proposal.
To substantiate your point you bring up that the annex would be the only witness element committed to by TEMPLATEHASH and that it might be useful for CISA. But the annex is different from other witness elements: it was a new field expected to be useful precisely because it is committed to by signatures. In this sense, it’s the only witness element committed to by TEMPLATEHASH in the same way it is by CHECKSIG/CHECKSIGADD today. Now, I don’t quite understand your point about CISA. You present a scenario where “committing to the entire annex ahead of time would prevent using CISA”, but in this case it would also prevent any other opcode that commits to the transaction, like the signature opcodes. If those need to be re-designed (new pubkey types?) then it seems fine to do TEMPLATEHASH along with it. And until then, I’d much rather the opcodes that commit to a transaction be consistent with each other.
As more of a general point, I think starting from CTV to analyze TEMPLATEHASH is the wrong way to approach this. We should start from what we want to enable and argue the best way to achieve it. If what we want to enable is committing to a transaction, then since Taproot the annex is just another transaction field that should be committed to.
Wouldn’t the hash naturally be cached in implementation?
In the Taproot signature hash (which this proposal re-uses), various parts of the transaction are pre-hashed such that the hash can be cached across signatures. For each signature, those pre-computed hashes are hashed together along with some fixed-size transaction fields, such that the maximum amount of data hashed per signature is capped: https://github.com/bitcoin/bips/blob/87f3fe164484f73c30cdb122481bed96a1f79af9/bip-0341.mediawiki#L128
This final hash is pointless to cache, because other opcodes let you hash strictly more arbitrary (i.e. uncachable) data (and even have fewer restrictions on them).
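As a rough illustration of that caching scheme, with dummy data and omitting most of the fixed-size fields of the real BIP341 signature message:

```python
import hashlib

def tagged_hash(tag: str, data: bytes) -> bytes:
    """BIP340-style tagged hash: SHA256(SHA256(tag) || SHA256(tag) || data)."""
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + data).digest()

# The variable-length parts of the transaction are pre-hashed once and
# cached across all signature checks (dummy data here):
prevouts = [bytes(36), bytes(36)]                       # 36-byte outpoints
sequences = [(0xFFFFFFFF).to_bytes(4, "little")] * 2    # 4-byte nSequence each
outputs = [bytes(43)]                                   # serialized output

sha_prevouts = hashlib.sha256(b"".join(prevouts)).digest()
sha_sequences = hashlib.sha256(b"".join(sequences)).digest()
sha_outputs = hashlib.sha256(b"".join(outputs)).digest()

# Each signature (or TEMPLATEHASH evaluation) then hashes only the cached
# 32-byte digests plus a few fixed-size fields, so the amount of data
# hashed per evaluation is capped regardless of transaction size.
msg = sha_prevouts + sha_sequences + sha_outputs  # real scheme adds more fixed-size fields
digest = tagged_hash("TapSighash", msg)
assert len(digest) == 32
```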
The Annex Commitment
Clearly, I think, this is the most debatable point precisely because there isn’t widespread use (or demand yet) for such space.
the practical motivation for committing to the annex is entirely around eltoo
Any scheme that wants to shift from sign-time OP_RETURN to witness data may want to use it in the same manner. It’s my fault for making ln-symmetry the go-to during discussion. Granted, this is a fairly minimal improvement, and can be done by many other means with CSFS and friends, or just eating the OP_RETURN cost.
In the end, “Any annex scheme that is compatible with OP_CHECKSIG(ADD) is compatible with OP_TEMPLATEHASH” seems the most straight-forward and least foot-gunny to me.
Crazy Talk
I feel like without a specific proposal, we’re going to miss the real costs in terms of complexity.
I’m also worried that if a proposal becomes “general purpose” without actually being general, it won’t capture the behavior we actually want, with respect to “programmable money”.
(The sibling commitment might be well past the point of “we should just go full TXHASH” however)
No BitVM bridge has launched, and the space is moving very very fast. It’s on my docket to try to learn more about the fundamental requirements of these systems to transition from permissioned systems to permissionless for both safety of funds and unilateral exit (these are different concepts IIUC), but I think this is out of scope, and the answer may be “we need GSR / btclisp / Simplicity” and stop trying to guess what specific systems need? I have plenty of controversial thoughts on this but seems way out of scope here, unless we find the next-tx capability and rebindable signatures insufficient motivation for a softfork.
My take on the annex is that as long as there is no clear potential use for the annex proposed, treating it consistently seems to make the most sense. We treat it exactly like the taproot sighashes do, so that’s consistent. Take note that we’re only committing to the current input’s annex; if the annex were used for something that doesn’t work with TEMPLATEHASH, it just means they can’t be used together, which seems fine. It would also not be able to be used together with any other sighash-(all/default)-using scheme.
I feel that since new use-cases for the annex will almost certainly come with their own soft fork, they can be resolved when the time comes. You speak about CISA, which I think is a great example, but in my opinion by the time we have a solid CISA soft fork proposal, the TXHASH and TXSIGHASH ideas will hopefully also be more mature and they can solve the annex aspect more elegantly.
This proposal is trying to move the needle for simple next-tx commitment and basic rebindable signatures. Two arguably very simple features that are undeniably useful for almost all second-layer protocols currently being developed and used.
If nodes start relaying transactions with a non-empty annex, the result will predictably be new exogenous asset protocols and new, simpler, more efficient ways to inscribe arbitrary data without the need for pre-commitment.
Unleashing the annex is a predictable footgun socially speaking.
Unleashing the annex is a predictable footgun socially speaking.
To be clear, this PR / implementation isn’t about relaxing these rules for relay.
To be clear, this PR / implementation isn’t about relaxing these rules for relay.
Yes, this is only relevant further down the line. I think what @Ademan was also picking at is that this proposal feels predictive, direction-wise.
Thanks for the feedback @Ademan.
Sure! I hope it’s productive…
you can make the same argument in the other direction: not committing to the annex would be a design mistake as it allows to be forward-compatible with future usages of the annex,
From my very limited perspective, that seems considerably less likely than the opposite, but that is kind of the problem, this is ~guessing. Maybe it would be useful to at least attempt to identify concrete or partially specified proposals that use the annex. I suppose the onus is on me since I’m the one suggesting they may conflict, but you, instagibbs, steven and moonsettler are all likely closer to the people who might be working on them.
and is already provably useful for some of the main applications enabled by this proposal.
Absolutely! I just want to ensure that “it’s useful” isn’t overriding compatibility concerns. There are other ways to do data publication; it doesn’t necessarily need to be a part of TEMPLATEHASH.
but in this case it would also prevent any other opcode that commits to the transaction, like the signature opcodes.
To be clear, I’m distinguishing between ahead-of-time commitment (via TEMPLATEHASH or a pre-signed transaction) and signing-time commitment. Obviously with CHECKSIG{,VERIFY,ADD} you can typically modify the annex and re-sign at any point before confirmation. This is a significant difference in flexibility. (OTOH, you can also modify the non-witness portions of the TX and re-sign, but I don’t accept that as an argument against committing to sequences or outputs… after all, that’s the whole point here.)
I think I may have been talked out of my position, but I still think it deserves some litigation (perhaps with a better advocate than I).
To give it one last go, I suspect (admittedly without evidence!) that varying annex data independently of the rest of the TX will be the norm and desirable if/when the annex finds uses besides data publication.
I think starting from CTV to analyze TEMPLATEHASH is the wrong way to approach this. We should start from what we want to enable and argue the best way to achieve it.
Fair enough, I definitely agree with the second part.
92+
93+Unlike `OP_TEMPLATEHASH`, `OP_CHECKTEMPLATEVERIFY` also commits to the scriptSig of all inputs of the spending
94+transaction. `OP_CHECKTEMPLATEVERIFY` gives txid stability when the committed spending transaction has a single input,
95+and when the scriptSig of this single input has been committed by the hash.
96+Taproot scriptSigs must be empty and therefore under the single input case `OP_TEMPLATEHASH` has no requirement
97+to commit to scriptSigs to achieve txid stability.
Every time I read through this, my biggest question is “why not commit to all scriptSigs being empty?” or “what are the potential downsides to having multi-input malleability?”
It would be nice for this to explain why that trade-off is okay.
Given that it doesn’t ensure the “next transaction” commitment we are aiming for, and would require additional hashing on top of the already-existing Taproot hashes, it was not added to the proposal.
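For intuition on why the scriptSig matters for txid stability at all: the txid is the double-SHA256 of the non-witness serialization, which includes every input’s scriptSig. A toy sketch (hand-rolled, simplified serialization with dummy outpoints; not a full implementation):

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def varint(n: int) -> bytes:
    assert n < 0xFD  # single-byte case is enough for this sketch
    return bytes([n])

def txid(version, inputs, outputs, locktime) -> bytes:
    """txid = double-SHA256 of the non-witness serialization.
    inputs: list of (36-byte outpoint, scriptSig, sequence); outputs: pre-serialized."""
    ser = version.to_bytes(4, "little")
    ser += varint(len(inputs))
    for outpoint, script_sig, sequence in inputs:
        ser += outpoint + varint(len(script_sig)) + script_sig
        ser += sequence.to_bytes(4, "little")
    ser += varint(len(outputs))
    for out in outputs:
        ser += out
    ser += locktime.to_bytes(4, "little")
    return sha256d(ser)[::-1]

outpoint = bytes(36)
out = (10_000).to_bytes(8, "little") + varint(22) + bytes(22)  # value || scriptPubKey

# A non-empty scriptSig malleates the txid...
a = txid(2, [(outpoint, b"", 0)], [out], 0)
b = txid(2, [(outpoint, b"\x51", 0)], [out], 0)
assert a != b
# ...but Taproot inputs must have an empty scriptSig, so with a single input
# everything that determines the txid is covered by the template hash.
```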
62+existing operations.
63+
64+The specific fields from the BIP341 signature message that are omitted when computing the template hash are the
65+following:
66+- *hash_type*: this is the sighash type identifier. Only a single hash type is supported by `OP_TEMPLATEHASH`, so there
67+ is no need to commit to such an identifier.
Ah, I see. The phrase “only a single hash type is supported by OP_TEMPLATEHASH” confused me in the context of hash_type being omitted, and I think the context could be made clearer. Perhaps something like:
0- *hash_type*: refers to the sighash type identifier in the context of BIP341 signatures. The input for `OP_TEMPLATEHASH` is fixed, so there
1 is no need for a mechanism to modify the hash composition.
69+ signature message, *ext_flag* is set to 0. Therefore we commit directly to *annex_present*.
70+- *sha_prevouts* / *sha_scriptpubkeys*: committing to these fields as is would introduce a hash cycle when the hash is
71+ committed in the output itself. Committing to all other prevouts or scriptpubkeys would introduce hashing a quantity
72+ of data quadratic in the number of inputs. It would also prevent spending two coins encumbered by a template hash
73+ check in the same transaction. Finally, the flexibility of not committing to the specific coins spent is also
74+ desirable to recover from mistakes[^no-commit-other-coins].
Wouldn’t the commitment to all of the inputs’ sequence fields, sha_sequences, indirectly commit to the count of inputs, and therefore prevent adding another input? (Or maybe I’m misconstruing what sha_sequences exactly is, feel free to correct me in that case.)
commit to the count of inputs, and therefore prevent adding another input?
Yes, it commits to the total number of inputs. Typically this would be 1, but you can certainly have more for batching scenarios.
There’s an included footnote for a (weak) example with respect to value, though it doesn’t apply to prevouts per se. If the other coin gets swept, you can “contribute” a new utxo to make it whole and rescue the locked funds.
It’s not the primary motivation for the design decision compared to the others.
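A small sketch of why sha_sequences pins the input count: per BIP341 it is the SHA256 of the concatenation of all inputs’ 4-byte nSequence fields, so both the values and the number of inputs are committed to.

```python
import hashlib

def sha_sequences(sequences):
    """SHA256 over the concatenated 4-byte little-endian nSequence fields,
    as in the BIP341 signature message."""
    return hashlib.sha256(
        b"".join(s.to_bytes(4, "little") for s in sequences)
    ).digest()

# Adding an input changes the hash even if every nSequence value is identical,
# so the input count cannot be changed without invalidating the commitment.
one_input = sha_sequences([0xFFFFFFFF])
two_inputs = sha_sequences([0xFFFFFFFF, 0xFFFFFFFF])
assert one_input != two_inputs
```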
124+simple test cases exercising the various fields of a transaction committed to when using `OP_TEMPLATEHASH`. The [second
125+one](bip-templatehash/test_vectors/script_assets_test.json) is a more exhaustive suite of tests exercising `OP_TEMPLATEHASH`
126+under a large number of different conditions. It reuses the [Bitcoin Core Taproot test framework][feature_taproot.py]
127+introduced with the implementation of BIP341. Format details and usage demonstration are available
128+[here](bip-templatehash/test_vectors/README.md).
129+
0@@ -0,0 +1,115 @@
1+```
2+ BIP: ?
3+ Layer: Consensus (soft fork)
4+ Title: OP_TEMPLATEHASH + OP_CHECKSIGFROMSTACK + OP_INTERNALKEY
14+```
15+
16+## Abstract
17+
18+This document proposes bundling three new operations for [Tapscript][tapscript-bip]:
19+[`OP_TEMPLATEHASH`][templatehash-bip], [`OP_CHECKSIGFROMSTACK`][csfs-bip], and [`OP_INTERNALKEY`][internalkey-bip].
23+
24+## Motivation
25+
26+The three proposed operations are simple, well-understood, and enable powerful new capabilities while minimizing the
27+risk of surprising behavior or unintended applications. They improve existing, well-studied protocols and make promising
28+new ones possible, while minimizing the risk of surprising behavior or unintended or undesirable applications.
29+
30+`OP_TEMPLATEHASH` enables commitment to the transaction spending an output. `OP_CHECKSIGFROMSTACK` enables
“enables commitment” sounds funny to me:
0`OP_TEMPLATEHASH` enables committing to the transaction spending an output. `OP_CHECKSIGFROMSTACK` enables
46+in second layer protocols, as illustrated by the Ark variant "[Erk][ark-erk]" or the [dramatic simplification][greg-rebindable-ptlcs]
47+they bring to upgrading today's Lightning to [Point Time Locked Contracts][optech-ptlcs].
48+
49+The ability to push the Taproot internal key on the stack is a natural and extremely simple optimisation for rebindable
50+signatures.
51+
75+This document proposes to give meaning to three Tapscript `OP_SUCCESS` operations. The presence of an `OP_SUCCESS` in a
76+Tapscript would previously make it unconditionally succeed. This proposal therefore only tightens the block validation
77+rules: there is no block that is valid under the rules proposed in this BIP but not under the existing Bitcoin consensus
78+rules. As a consequence these changes are backward-compatible with non-upgraded node software. That said, the authors
79+strongly encourage node operators to upgrade in order to fully validate all consensus rules.
80+
@instagibbs how do you feel about adding OP_SHA256TAGGED to this mix? Even with the “French CTV” it would be a pretty viable LNhance alternative I could support without any misgivings.
Seems out of scope here, unless I’m missing some key motivation.
Seems out of scope here, unless I’m missing some key motivation.
You came up with the motivation yourself afaik (for LN-Symmetry). It’s about the whole annex/nulldata/paircommit data availability thing. It is also mildly useful for MATT stuff in a more expressive future script, otherwise it gives us a safe pair-commitment for two stack elements.
Assuming this proposal is activated as-is (as a thought experiment), even if Core were reluctant to, as proven by precedents, it would be a trivial exercise for an LSP to break the annex “filter” using the libre strategy with just a few tens of nodes, forcing Core to “face the reality of the network” and relax policy. Hence our comments on this proposal being predictive of annex use.
I don’t want to waste too much time on this topic. But this is exactly why after much agonizing PC was included in LNhance.