policy: make pathological transactions packed with legacy sigops non-standard #32521
pull darosior wants to merge 3 commits into bitcoin:master from darosior:2503_nonstd_tx_sigops changing 5 files +180 −0
darosior commented at 9:05 pm on May 15, 2025: member
The Consensus Cleanup soft fork proposal includes a limit on the number of legacy signature operations potentially executed when validating a transaction. If this change is to be implemented here and activated by Bitcoin users in the future, we should first make transactions that are not valid according to the new rules non-standard, because it would otherwise be a trivial DoS vector against potentially unupgraded miners after the soft fork activates.
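In outline, the new standardness rule amounts to counting the legacy (non-witness) sigops a transaction could execute and rejecting it from the mempool past a fixed cap. A minimal sketch of the idea, assuming the usual Bitcoin Core types and the constant/helper names discussed in the review below (not the exact patch, and error reporting is omitted):

static constexpr unsigned int MAX_TX_LEGACY_SIGOPS{2'500};

bool CheckSigopsBIP54(const CTransaction& tx, const CCoinsViewCache& inputs)
{
    unsigned int sigops{0};
    for (const CTxIn& txin : tx.vin) {
        const CTxOut& prev = inputs.AccessCoin(txin.prevout).out;
        // Sigops in the scriptSig count, using BIP16 "accurate" accounting for CHECKMULTISIG.
        sigops += txin.scriptSig.GetSigOpCount(/*fAccurate=*/true);
        // Sigops in the spent scriptPubKey count too; for P2SH prevouts this overload
        // counts the redeem script (the last element pushed by the scriptSig) instead.
        sigops += prev.scriptPubKey.GetSigOpCount(txin.scriptSig);
        // Reject as soon as the cap is exceeded rather than finishing the loop.
        if (sigops > MAX_TX_LEGACY_SIGOPS) return false;
    }
    return true;
}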
-
DrahtBot commented at 9:05 pm on May 15, 2025: contributor
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.
Code Coverage & Benchmarks
For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/32521.
Reviews
See the guideline for information on the review process.
If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.
Conflicts
Reviewers, this pull request conflicts with the following ones:
- #31682 ([IBD] specialize CheckBlock’s input & coinbase checks by l0rinc)
If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
-
DrahtBot added the label TX fees and policy on May 15, 2025
-
darosior force-pushed on May 15, 2025
-
DrahtBot added the label CI failed on May 15, 2025
-
DrahtBot commented at 9:09 pm on May 15, 2025: contributor
🚧 At least one of the CI tasks failed. Task lint: https://github.com/bitcoin/bitcoin/runs/42319362568
LLM reason (✨ experimental): The CI failure is caused by a Python linting error in test/functional/mempool_sigoplimit.py where a variable is redefined.
Try to run the tests locally, according to the documentation. However, a CI failure may still happen due to a number of reasons, for example:
- Possibly due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.
- A sanitizer issue, which can only be found by compiling with the sanitizer and running the affected test.
- An intermittent issue.
Leave a comment here, if you need help tracking down a confusing failure.
-
darosior force-pushed on May 15, 2025
-
in test/functional/mempool_sigoplimit.py:209 in f13acba6f0 outdated
204+    txid = int.from_bytes(bytes.fromhex(res["txid"]), byteorder="big")
205+    outpoints.append(COutPoint(txid, res["sent_vout"]))
206+    self.generate(self.nodes[0], 1)
207+
208+    # Spending all these outputs at once accounts for 2505 legacy sigops and is non-standard.
209+    tx = CTransaction()
instagibbs commented at 1:13 pm on May 16, 2025: might be worth getting this one mined to show it's consensus-valid still
darosior commented at 8:11 pm on May 19, 2025: Expanded the test to check that the original, non-standard one can still be mined.

l0rinc commented at 2:16 pm on May 16, 2025: contributor
While sigops aren't necessarily difficult to compute, there are a lot of them - and now even more. Please consider the related sigop optimization PR I just pushed: https://github.com/bitcoin/bitcoin/pull/32532

in src/policy/policy.h:42 in e3eceb8492 outdated
37@@ -38,6 +38,8 @@ static constexpr unsigned int MIN_STANDARD_TX_NONWITNESS_SIZE{65};
38 static constexpr unsigned int MAX_P2SH_SIGOPS{15};
39 /** The maximum number of sigops we're willing to relay/mine in a single tx */
40 static constexpr unsigned int MAX_STANDARD_TX_SIGOPS_COST{MAX_BLOCK_SIGOPS_COST/5};
41+/** The maximum number of potentially executed legacy signature operations in a single standard tx */
42+static constexpr unsigned int MAX_LEGACY_SIGOPS{2'500};
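For intuition about the magnitude of this constant (a side note, not part of the patch): with the existing per-input standardness cap of MAX_P2SH_SIGOPS, a transaction needs on the order of 166 maximally packed P2SH inputs to reach it, which is also what the unit test further down relies on:

// 2'500 / 15 truncates to 166, i.e. roughly 166 P2SH inputs each carrying the maximum 15 sigops.
static_assert(MAX_LEGACY_SIGOPS / MAX_P2SH_SIGOPS == 166);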
Sjors commented at 11:06 am on May 19, 2025: MAX_TX_LEGACY_SIGOPS?

darosior commented at 8:11 pm on May 19, 2025: That's actually what i had initially, but i reduced it to MAX_LEGACY_SIGOPS in an attempt to avoid overly long variable names. With the comment making explicit that it's about transaction inputs, i think it should be fine as-is.
Sjors commented at 7:43 am on May 20, 2025: I don't think the 3-character saving is worth making people jump to the documentation. MAX_TX_ vs MAX_BLOCK_ seems like a good convention to keep.

darosior commented at 8:51 pm on May 20, 2025: Fine, done.

in src/policy/policy.cpp:199 in e3eceb8492 outdated
194
195+ unsigned int sigops{0};
196  for (unsigned int i = 0; i < tx.vin.size(); i++) {
197      const CTxOut& prev = mapInputs.AccessCoin(tx.vin[i].prevout).out;
198
199+     sigops += tx.vin[i].scriptSig.GetSigOpCount(true);
Sjors commented at 11:08 am on May 19, 2025: GetSigOpCount(/*fAccurate=*/true)
And the fAccurate is terribly vague, it's useful to explain:
// Unlike the block wide sigop limit, the BIP54 per
// transaction limit uses "accurate" BIP16 style accounting.
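As a rough illustration of what "accurate" means here (a sketch, not part of the patch): the two modes only differ for CHECKMULTISIG, where accurate accounting honours the pushed key count while legacy accounting pessimistically assumes the maximum of 20 keys:

std::vector<unsigned char> pubkey(33, 0x02); // placeholder 33-byte key
const CScript multisig = CScript() << OP_1 << pubkey << pubkey << pubkey << OP_3 << OP_CHECKMULTISIG;
multisig.GetSigOpCount(/*fAccurate=*/true);  // 3: the OP_3 key count is used
multisig.GetSigOpCount(/*fAccurate=*/false); // 20: MAX_PUBKEYS_PER_MULTISIG is assumed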
darosior commented at 8:11 pm on May 19, 2025: Done.

in src/policy/policy.cpp:229 in e3eceb8492 outdated
221          return false;
222      }
223+     sigops += p2sh_sigops;
224+ }
225+
226+ if (sigops > MAX_LEGACY_SIGOPS) {
Sjors commented at 11:14 am on May 19, 2025: Although there are no break or continue statements in the loop, it would feel a bit more robust to move this check below the loop (although an early break is more efficient).

darosior commented at 8:00 pm on May 19, 2025: I understand the slight additional-complexity concern, but for a transaction submitted to us with no PoW i think we should discard it as soon as we know we won't accept it. I really don't think we should keep doing unnecessary work by iterating, solving scripts and counting.

in src/policy/policy.cpp:187 in e3eceb8492 outdated
182@@ -183,16 +183,22 @@ bool IsStandardTx(const CTransaction& tx, const std::optional<unsigned>& max_dat
183  * as potential new upgrade hooks.
184  *
185  * Note that only the non-witness portion of the transaction is checked here.
186+ *
187+ * We also check the total number of sigops across the whole transaction.
Sjors commented at 11:31 am on May 19, 2025: Could add: "The per transaction sigop limit introduced in BIP54 does not cover the witness."

darosior commented at 8:01 pm on May 19, 2025: Don't you think it would be confusing to point that out here, since this function does not concern itself with the witness? Especially as that is pointed out right before.
Sjors commented at 7:39 am on May 20, 2025: Well, that's what confused me. My first thought was: why does this work if we're not checking the witness?
But maybe the sigops-related comment needs to go above the "only the non-witness" portion.
Or something like:
 * Note that only the non-witness portion of the transaction is checked here.
 * This is fine because the per transaction sigop limit introduced in BIP54
 * does not cover the witness.
(and then drop the "We also check the total")
darosior commented at 8:47 pm on May 20, 2025: Fine, but went for a more compressed version:
 * We also check the total number of legacy sigops across the whole transaction, as per BIP54.
Looks good?
Sjors commented at 7:54 am on May 21, 2025: Sounds fine, forgot to push this?
(maybe say "non-witness" instead of "legacy")

darosior commented at 3:13 pm on May 21, 2025: Changed to "non-witness" and actually pushed it this time. :)

in src/policy/policy.cpp:200 in e3eceb8492 outdated
195+ unsigned int sigops{0};
196  for (unsigned int i = 0; i < tx.vin.size(); i++) {
197      const CTxOut& prev = mapInputs.AccessCoin(tx.vin[i].prevout).out;
198
199+     sigops += tx.vin[i].scriptSig.GetSigOpCount(true);
200+     sigops += prev.scriptPubKey.GetSigOpCount(true);
Sjors commented at 11:40 am on May 19, 2025:
// Unlike the block wide sigop limit, the BIP54 per
// transaction limit includes the prevout scriptPubKey.

darosior commented at 8:11 pm on May 19, 2025: Done, added a sentence to this effect.

darosior force-pushed on May 19, 2025

maflcko commented at 8:42 pm on May 20, 2025: member
for reference, the CI failure is:
[16:36:22.725] /ci_container_base/src/test/transaction_tests.cpp:1089:20: error: use emplace_back instead of push_back [modernize-use-emplace,-warnings-as-errors]
[16:36:22.725]  1089 |     tx_create.vout.push_back(CTxOut(424242, max_sigops_p2sh));
[16:36:22.725]       |                    ^~~~~~~~~~~~~~~~~ ~
[16:36:22.725]       |                    emplace_back(
[16:36:22.725] /ci_container_base/src/test/transaction_tests.cpp:1122:25: error: use emplace_back instead of push_back [modernize-use-emplace,-warnings-as-errors]
[16:36:22.725]  1122 |     tx_create_p2pk.vout.push_back(CTxOut(212121, p2pk_script));
[16:36:22.725]       |                         ^~~~~~~~~~~~~~~~~ ~
[16:36:22.725]       |                         emplace_back(
[16:36:22.725] 1515 warnings generated.
[16:36:22.725]
[16:36:22.725] ^^^ ⚠️ Failure generated from clang-tidy
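The fix for this class of warning is mechanical: construct the element in place instead of passing a temporary. A sketch of the before/after, using the same lines the linter flagged:

tx_create.vout.push_back(CTxOut(424242, max_sigops_p2sh));  // flagged: builds a temporary CTxOut, then moves it
tx_create.vout.emplace_back(424242, max_sigops_p2sh);       // preferred: forwards the arguments to CTxOut's constructor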
darosior force-pushed on May 20, 2025

DrahtBot removed the label CI failed on May 21, 2025

in test/functional/mempool_sigoplimit.py:219 in 29e8fe7144 outdated
216+    std_tx = deepcopy(nonstd_tx)
217+    std_tx.vin.pop()
218+    self.nodes[0].sendrawtransaction(std_tx.serialize().hex())
219+
220+    # Make sure the original, non-standard, transaction can be mined.
221+    self.generateblock(self.nodes[0], output="raw(42)", transactions=[nonstd_tx.serialize().hex()])
Sjors commented at 9:46 am on May 21, 2025: 29e8fe71440a7e27ee89527a49753be615f205c7: note to other reviewers, yes generateblock checks for errors, as you can see if you duplicate nonstd_tx.serialize().hex()
in test/functional/test_framework/script_util.py:26 in 29e8fe7144 outdated
21@@ -22,6 +22,12 @@
22  sha256,
23 )
24
25+# Maximum number of potentially executed legacy signature operations in validating a transaction.
26+MAX_STD_LEGACY_SIGOPS = 2_500
Sjors commented at 9:48 am on May 21, 2025: 29e8fe71440a7e27ee89527a49753be615f205c7: MAX_TX_LEGACY_SIGOPS would be consistent with the C++ code.

darosior commented at 3:13 pm on May 21, 2025: Meh. Then it would not be consistent with MAX_STD_P2SH_SIGOPS, and having in the name that it's a standardness rule is useful. I don't think it's worth changing.

in src/test/transaction_tests.cpp:1075 in a139a538c2 outdated
1068+    // single transaction, and a transaction spending them.
1069+    CMutableTransaction tx_create, tx_max_sigops;
1070+    const auto p2sh_inputs_count{MAX_TX_LEGACY_SIGOPS / MAX_P2SH_SIGOPS};
1071+    tx_create.vout.reserve(p2sh_inputs_count);
1072+    for (unsigned i{0}; i < p2sh_inputs_count; ++i) {
1073+        tx_create.vout.emplace_back(424242 + i, max_sigops_p2sh);
Sjors commented at 9:50 am on May 21, 2025: a139a538c298621674ba622e63971704cbb7ceff: what's this magic 424242 value?

darosior commented at 1:21 pm on May 21, 2025: The value of a transaction output.

Sjors commented at 2:13 pm on May 21, 2025: Ah ok, any reason for the + i?

darosior commented at 3:18 pm on May 21, 2025: Just varying the amounts, because why not.

Sjors commented at 3:42 pm on May 21, 2025: It's confusing, but otherwise harmless.

in src/test/transaction_tests.cpp:1083 in a139a538c2 outdated
1076+    tx_max_sigops.vin.reserve(p2sh_inputs_count);
1077+    for (unsigned i{0}; i < p2sh_inputs_count; ++i) {
1078+        tx_max_sigops.vin.emplace_back(COutPoint(prev_txid, i), max_sigops_redeem_script);
1079+    }
1080+
1081+    // p2sh_inputs_count is truncated to 166 (from 166.6666..)
Sjors commented at 9:51 am on May 21, 2025: a139a538c298621674ba622e63971704cbb7ceff: it seems clearer if you define it as an integer instead of auto.

darosior commented at 3:13 pm on May 21, 2025: Done.

in src/test/transaction_tests.cpp:1119 in a139a538c2 outdated
1115+    AddCoins(coins, CTransaction(tx_create_p2pk), false);
1116+
1117+    // The transaction now contains exactly 2500 sigops, the check should pass.
1118+    BOOST_CHECK(p2sh_inputs_count * MAX_P2SH_SIGOPS + p2pk_inputs_count * 1 == MAX_TX_LEGACY_SIGOPS);
1119+    BOOST_CHECK(::AreInputsStandard(CTransaction(tx_max_sigops), coins));
1120+
Sjors commented at 9:55 am on May 21, 2025: a139a538c298621674ba622e63971704cbb7ceff: could insert a segwit spend to show that it doesn't count.
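(Background for that suggestion, as a sketch: a native segwit prevout contributes nothing to the BIP54 count, because its scriptPubKey contains no CHECKSIG-type opcodes and it is spent with an empty scriptSig. The witness program below is a placeholder:)

std::vector<unsigned char> program(20, 0x00);  // placeholder 20-byte witness program
const CScript p2wpkh = CScript() << OP_0 << program;
p2wpkh.GetSigOpCount(/*fAccurate=*/true);    // 0: no legacy sigops in the scriptPubKey
CScript{}.GetSigOpCount(/*fAccurate=*/true); // 0: the scriptSig of a segwit spend is empty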
darosior commented at 3:13 pm on May 21, 2025: Good call, done.

Sjors approved

Sjors commented at 9:56 am on May 21, 2025: member
ACK 29e8fe71440a7e27ee89527a49753be615f205c7

darosior force-pushed on May 21, 2025

Sjors commented at 3:45 pm on May 21, 2025: member
ACK 80d213677962268a82d65d8915c518f0c02867d9

DrahtBot added the label CI failed on May 21, 2025

glozow referenced this in commit d2c9fc84e1 on May 22, 2025

DrahtBot removed the label CI failed on May 23, 2025

in src/policy/policy.cpp:204 in 80d2136779 outdated
199+     // Unlike the existing block wide sigop limit, BIP54 counts sigops when they are actually executed.
200+     // This means sigops in the spent scriptpubkey actually count toward the limit.
201+     // `fAccurate` means correctly accounting sigops for CHECKMULTISIGs with 16 pubkeys or less. This
202+     // method of accounting was introduced by BIP16, and BIP54 reuses it.
203+     sigops += tx.vin[i].scriptSig.GetSigOpCount(/*fAccurate=*/true);
204+     sigops += prev.scriptPubKey.GetSigOpCount(/*fAccurate=*/true);
mabu44 commented at 10:40 am on June 2, 2025: Moving these lines directly before the check on sigops (e.g. to line 228) can improve performance, because we avoid performing the calculation when a "return false" is executed for another reason in the block between lines 206 and 227.

darosior commented at 5:05 pm on June 2, 2025: Done.

in src/test/transaction_tests.cpp:1083 in 80d2136779 outdated
1078+        tx_max_sigops.vin.emplace_back(COutPoint(prev_txid, i), max_sigops_redeem_script);
1079+    }
1080+
1081+    // p2sh_inputs_count is truncated to 166 (from 166.6666..)
1082+    BOOST_CHECK(p2sh_inputs_count * MAX_P2SH_SIGOPS < MAX_TX_LEGACY_SIGOPS);
1083+    AddCoins(coins, CTransaction(tx_create), false);
mabu44 commented at 12:34 pm on June 2, 2025: The third parameter should be 0, not false. For all the occurrences of the AddCoins function.

darosior commented at 2:12 pm on June 2, 2025: See AddCoins's declaration in src/coins.h:
void AddCoins(CCoinsViewCache& cache, const CTransaction& tx, int nHeight, bool check = false);

mabu44 commented at 4:13 pm on June 2, 2025: nHeight is indeed an int. Am I missing something?
darosior commented at 4:42 pm on June 2, 2025: Wow my bad.
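For the record, what tripped this up (a sketch, not a quote from the test): bool converts implicitly to int, so the three-argument call compiles but binds false to nHeight rather than to check:

AddCoins(coins, CTransaction(tx_create), false);    // compiles: false silently becomes nHeight = 0, check keeps its default
AddCoins(coins, CTransaction(tx_create), 0, false); // intended: height 0, check explicitly false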
darosior commented at 4:49 pm on June 2, 2025: Done.

darosior force-pushed on Jun 2, 2025

darosior force-pushed on Jun 2, 2025

mabu44 commented at 7:35 pm on June 2, 2025: none
ACK abd0749fae7259a292cb652fc9974c8cdcdd4d9d Reviewed the code, compiled it, and ran the new tests. Additionally, made modifications to both the code and the tests to intentionally cause the tests to fail in expected ways.

DrahtBot requested review from Sjors on Jun 2, 2025

Sjors commented at 5:21 am on June 3, 2025: member
re-ACK abd0749fae

petertodd commented at 11:17 am on June 22, 2025: contributor
Concept ACK

maflcko commented at 12:47 pm on June 24, 2025: member
review ACK abd0749fae7259a292cb652fc9974c8cdcdd4d9d 🌘
Signature:
untrusted comment: signature from minisign secret key on empty file; verify via: minisign -Vm "${path_to_any_empty_file}" -P RWTRmVTMeKV5noAMqVlsMugDDCyyTSbA3Re5AkUrhvLVln0tSaFWglOw -x "${path_to_this_whole_four_line_signature_blob}"
RUTRmVTMeKV5npGrKx1nqXCw5zeVHdtdYURB/KlyA/LMFgpNCs+SkW9a8N95d+U4AP1RJMi+krxU1A3Yux4bpwZNLvVBKy0wLgM=
trusted comment: review ACK abd0749fae7259a292cb652fc9974c8cdcdd4d9d 🌘
q0sc0viVRqyP5OWHaiE6g7Q9ZX/OB0XgbdfLMkuwT8D5RvFxF+7q4cmgZTEVG2knPSvhzE0UEp7iGeVpYsZbCw==
fanquake requested review from instagibbs on Jun 24, 2025

fanquake requested review from ajtowns on Jun 24, 2025

instagibbs commented at 1:17 pm on June 24, 2025: member
OP or commit message should probably explicitly link https://github.com/bitcoin/bips/blob/master/bip-0054.md#specification

instagibbs commented at 1:47 pm on June 24, 2025: member
concept and approach ACK
I have a preference for breaking out the new checks into its own function since if we intend it to someday soon(TM) be a consensus rule it would have to be pulled out anyways. https://github.com/instagibbs/bitcoin/commit/5d63372f75c4a2403c4032b63e5c604ee96c5a40
darosior commented at 9:35 pm on June 25, 2025: member
"I have a preference for breaking out the new checks into its own function since if we intend it to someday soon(TM) be a consensus rule it would have to be pulled out anyways." instagibbs@5d63372
If we do this, the separate function could be much simpler than that (no solver etc). I'll have a go at it tomorrow morning, inspired by the consensus implementation i have on a private branch.

darosior force-pushed on Jun 26, 2025

darosior commented at 2:40 pm on June 26, 2025: member
Extracted the BIP54-specific sigop counting into a separate function that can be moved into consensus/tx_verify.cpp with no modification if/when this is implemented in consensus, as suggested by @instagibbs.

Sjors commented at 2:54 pm on June 26, 2025: member
Although it's more readable, the downside of this approach is that we're looping over tx.vin twice and, probably much worse, calling AccessCoin twice.

darosior commented at 3:33 pm on June 26, 2025: member
Coins are already accessed multiple times when checking an unconfirmed transaction (which should cache them anyway), so doing it once more shouldn't introduce any noticeable overhead, and i like the upside of facilitating review of a future consensus-touching PR. But if many feel strongly i'm happy to revert to the previous approach, and i'm sure Greg would be fine too as he Approach ACK'd the previous approach. Let me know!

in src/policy/policy.cpp:188 in 7add4522d2 outdated
183+     // Unlike the existing block wide sigop limit, BIP54 counts sigops when they are actually executed.
184+     // This means sigops in the spent scriptpubkey count toward the limit.
185+     // `fAccurate` means correctly accounting sigops for CHECKMULTISIGs with 16 pubkeys or less. This
186+     // method of accounting was introduced by BIP16, and BIP54 reuses it.
187+     sigops += tx.vin[i].scriptSig.GetSigOpCount(/*fAccurate=*/true);
188+     sigops += coin.out.scriptPubKey.GetSigOpCount(tx.vin[i].scriptSig);
instagibbs commented at 3:54 pm on June 26, 2025: TIL. We could use this one in the other standardness check, right?

darosior commented at 4:15 pm on June 26, 2025: Yeah.

darosior force-pushed on Jun 26, 2025

in src/policy/policy.cpp:187 in 2ead56f0d2 outdated
182+     // Unlike the existing block wide sigop limit, BIP54 counts sigops when they are actually executed.
183+     // This means sigops in the spent scriptpubkey count toward the limit.
184+     // `fAccurate` means correctly accounting sigops for CHECKMULTISIGs with 16 pubkeys or less. This
185+     // method of accounting was introduced by BIP16, and BIP54 reuses it.
186+     sigops += tx.vin[i].scriptSig.GetSigOpCount(/*fAccurate=*/true);
187+     sigops += prev_txo.scriptPubKey.GetSigOpCount(tx.vin[i].scriptSig);
l0rinc commented at 9:50 am on June 27, 2025: Seems wasteful to iterate and parse tx.vin[i].scriptSig twice; we should already know the last item that the scriptSig pushed onto the stack after the first iteration - can we optimize that?

darosior commented at 8:14 am on June 30, 2025: I don't think there is any point in trying to micro-optimize this.

l0rinc commented at 8:34 am on June 30, 2025: The current implementation is ~66% slower, nothing "micro" about that. Very often we can make code simpler (i.e. having fewer moving parts) AND faster at the same time. This repeats work currently, so it's not as simple as it can get - what I'm suggesting is to explore that, since the redundancy has a measurable cost.

darosior commented at 11:56 am on June 30, 2025:
"The current implementation is ~66% slower"
This is suggesting this PR makes transaction processing slower. It does not. The cost of adding an additional iteration through the inputs is trivial compared to the total amount of work we do when processing a transaction. This seems silly to even state it. I don't know why you would fixate on this. Please let's spend our time on more important issues than reducing the number of for loops used by the program.
"This repeats work currently"
I'm aware. This was done for a good reason in response to a justified request from another reviewer #32521 (comment).

l0rinc commented at 12:47 pm on June 30, 2025:
"this PR makes transaction processing slower. It does not"
"This seems silly to even state it."
It's not silly, others have added a dedicated benchmark for this exact method. And I don't appreciate this patronizing tone.

darosior commented at 2:10 pm on June 30, 2025: Apologies if that came across as patronizing, this was not my intention. That said, you should expect to receive pushback if you insistently derail a PR with irrelevant concerns.

darosior commented at 2:10 pm on June 30, 2025: I think the code is fine as-is, i don't plan on doing anything else here so will resolve this thread.

l0rinc commented at 2:46 pm on July 1, 2025:
"Apologies if that came across as patronizing"
"if you insistently derail a PR with irrelevant concerns"
no, they're not irrelevant or silly, I expect better arguments than these

pinheadmz commented at 2:52 pm on July 1, 2025: Can we please ask a different reviewer for an opinion on this one thing to break the tie before anyone's feelings get hurt?
sipa commented at 3:00 pm on July 1, 2025: @l0rinc We should certainly not gratuitously make changes that worsen performance, but I think there have been reasonable arguments given here:
- It results in simpler code changes, specifically ones that impact consensus logic less.
- The slowdown is negligible compared to the overall cost of transaction validation. It's good to be aware that there is a slowdown, but also reasonable to dismiss it on the basis that it's entirely justifiable in the presence of a much more important concern (reducing the risk of consensus bugs).
l0rinc commented at 3:11 pm on July 1, 2025: Thanks for the response, @sipa.
Is it unreasonable to ask for a benchmark showing that the slowdown is indeed negligible in the worst case (given that it's meant to avoid an attack)? Benchmarks were added for this exact scenario; if they're not representative (I can agree with that), let's make them representative first. My hunch is also that the slowdown is minor, but I would prefer more than a "just trust me bro™" here…

sipa commented at 3:22 pm on July 1, 2025:
"Is it unreasonable to ask for a benchmark showing that the slowdown is indeed negligible in the worst case (given that it's meant to avoid an attack)?"
Transactions can easily take milliseconds to verify. A single signature check can take tens of microseconds, and a single transaction input can do hundreds of those. The worst case is likely way worse than that still, but it is not even necessary to spend time coming up with a worst case to conclude that validation time is orders of magnitude removed from the slowdown involved here.
It’s not unreasonable to ask for benchmarks when things aren’t obvious. But come on, it’s reasonable to get a bit of familiarity of the orders of magnitude involved here, before demanding that a benchmark prove it.
Benchmarks were added for this exact scenario
The presence of a benchmark is not an indication that the thing being tested is so performance critical that any slowdown is a problem. It may just have been added for someone to get an idea of the order of magnitude.
l0rinc commented at 3:27 pm on July 1, 2025: Thanks for the explanation, please resolve this comment.

in src/policy/policy.h:42 in 2ead56f0d2 outdated
37@@ -38,6 +38,8 @@ static constexpr unsigned int MIN_STANDARD_TX_NONWITNESS_SIZE{65};
38 static constexpr unsigned int MAX_P2SH_SIGOPS{15};
39 /** The maximum number of sigops we're willing to relay/mine in a single tx */
40 static constexpr unsigned int MAX_STANDARD_TX_SIGOPS_COST{MAX_BLOCK_SIGOPS_COST/5};
41+/** The maximum number of potentially executed legacy signature operations in a single standard tx */
42+static constexpr unsigned int MAX_TX_LEGACY_SIGOPS{2'500};
l0rinc commented at 9:51 am on June 27, 2025: Note: since this magic value references a soft fork that may or may not be activated, we could mention it here for context: https://github.com/bitcoin/bips/blob/master/bip-0054.md#specification
Q: How often do we expect mined blocks to contradict this new policy rule? Asking because of https://b10c.me/observations/11-invalid-blocks-783426-and-784121, claiming:
On April 1st, 2023, F2Pool mined an invalid block at height 783426. Bitcoin Core nodes rejected the block with the reason bad-blk-sigops and the note too many sigops. On April 6th, 2023, F2Pool mined another bad-blk-sigops block at height 784121
and https://bitcoin.stackexchange.com/questions/121355/sigop-count-and-its-influence-on-transaction-selection which claims:
Over the last 10,000 blocks (between heights 815316 and 825316), only 62 were constrained by the sigop limit, despite an abnormally large number of high sigop transactions during that period.
darosior commented at 8:19 am on June 30, 2025:
"Asking because of"
This is unrelated. Most sigops counted here are not counted in the old rule. In any case this change cannot lead to miners creating invalid blocks.
l0rinc commented at 8:31 am on June 30, 2025:
"In any case this change cannot lead to miners creating invalid blocks."
I know, what I was asking is how we expect the non-standardness of blocks to change because of this PR, i.e. how many blocks are usually mined that contradict the new rules and how we expect this PR to change that over time.
TheCharlatan commented at 8:57 am on June 30, 2025:
"non-standardness of blocks to change because of this PR"
I don't get what you are asking here. I don't think there is such a thing as "non-standardness of blocks".
l0rinc commented at 11:26 am on June 30, 2025:
"non-standardness of blocks"
i.e. mined blocks that contain non-standard transactions - what's the accepted term for that?
"I don't get what you are asking here"
Do we have data on how many main-chain transactions historically exceed the new threshold of 2500 legacy sigops? I'd like to confirm that the new policy would block no real-world traffic and is purely a preventive DoS measure ahead of any future BIP-54 soft-fork activation.
TheCharlatan commented at 4:29 pm on June 30, 2025: Right, I wrote a little kernel application to count the sigops per transaction here: https://github.com/TheCharlatan/bitcoin/tree/sigop_stats. Just ran it, but I'm currently away from home and only have access to a pruned node. Still, I did not find any offending transactions in the last two days.
l0rinc commented at 2:42 pm on July 1, 2025: It seems to me the first historical tx that violates this new rule is https://mempool.space/tx/659135664894e50040830edb516a76f704fd2be409ecd8d1ea9916c002ab28a2 with 2585 sigops.
darosior commented at 3:03 pm on July 1, 2025:
"Do we have data on how many main-chain transactions historically exceed the new threshold of 2500 legacy sigops? I'd like to confirm that the new policy would block no real-world traffic and is purely a preventive DoS measure ahead of any future BIP-54 soft-fork activation."
I don't think past usage is necessarily the best predictor; i think in this case it's more convincing to reason through what real-world traffic would necessarily look like. The BIP goes through this reasoning to justify the 2'500 value: "A non-pathological transaction would have a public key per signature operation and at least one signature per input. Per standardness a single P2SH input may not have more than 15 signature operations. Even by using 1-of-15 CHECKMULTISIGs a transaction would bump against the maximum standard transaction size before running into the newly introduced limit. To run against the newly introduced limit but not the transaction size a transaction would need to spend P2SH inputs with a redeem script similar to CHECKSIG DROP CHECKSIG DROP .... This type of redeem script serves no purpose beyond increasing its validation cost, which is exactly what this proposal aims to mitigate."
That said, past usage is still an interesting data point. I had previously looked into it and vaguely remembered it did not invalidate any currently standard usage. It's fair to ask that i have this data handy, so i went ahead and harvested it for you. You can reproduce my finding simply by running through the historical chain with this minimal patch applied on top of this PR:
diff --git a/src/policy/policy.cpp b/src/policy/policy.cpp
index 48dfa14a8f7..6586159e8c1 100644
--- a/src/policy/policy.cpp
+++ b/src/policy/policy.cpp
@@ -171,7 +171,7 @@ bool IsStandardTx(const CTransaction& tx, const std::optional<unsigned>& max_dat
 /**
  * Check the total number of non-witness sigops across the whole transaction, as per BIP54.
  */
-static bool CheckSigopsBIP54(const CTransaction& tx, const CCoinsViewCache& inputs)
+bool CheckSigopsBIP54(const CTransaction& tx, const CCoinsViewCache& inputs)
 {
     Assert(!tx.IsCoinBase());

diff --git a/src/policy/policy.h b/src/policy/policy.h
index dab39aa4f8b..8b7a6e99d5f 100644
--- a/src/policy/policy.h
+++ b/src/policy/policy.h
@@ -153,6 +153,7 @@ static constexpr decltype(CTransaction::version) TX_MAX_STANDARD_VERSION{3};
  * @return True if all outputs (scriptPubKeys) use only standard transaction forms
  */
 bool IsStandardTx(const CTransaction& tx, const std::optional<unsigned>& max_datacarrier_bytes, bool permit_bare_multisig, const CFeeRate& dust_relay_fee, std::string& reason);
+bool CheckSigopsBIP54(const CTransaction& tx, const CCoinsViewCache& inputs);
 /**
  * Check for standard transaction types
  * @param[in] mapInputs Map of previous transactions that have outputs we're spending
diff --git a/src/validation.cpp b/src/validation.cpp
index 13dbe83f806..5853d59cb27 100644
--- a/src/validation.cpp
+++ b/src/validation.cpp
@@ -2671,6 +2671,10 @@ bool Chainstate::ConnectBlock(const CBlock& block, BlockValidationState& state,

     if (!tx.IsCoinBase())
     {
+        if (!CheckSigopsBIP54(tx, view)) {
+            LogInfo("Transaction %s is invalid according to BIP54 sigops rule.", tx.GetHash().ToString());
+        }
+
         std::vector<CScriptCheck> vChecks;
         bool fCacheResults = fJustCheck; /* Don't cache results if we're actually connecting blocks (still consult the cache, though) */
         TxValidationState tx_state;
There are 92 historical transactions that would have bumped into this limit, all pathological and/or DoSy, and all already non-standard by current rules. The last transaction that would have bumped into this limit occurred a decade ago. Here is the list of all historical transactions that would have bumped into this limit, each with a link to Blockstream's block explorer so you can inspect them.
l0rinc commented at 3:25 pm on July 1, 2025: Yes, I'm already running such a script; that's how I knew about the first coinjoin posted earlier.
Thanks for posting the result, I have one last question in this regard: I have of course read the bip-0054 reasoning, but given that this is meant to avoid an attack, I still wanted to understand how long it takes on a typical device to validate the remaining 2'500 signatures in the worst case? I’m asking because based on #32832 I want to know if I would still consider that to be too high. Maybe it would make sense to add another benchmark that measures how long it takes to validate such a tx - which we could run on affected platforms.
darosior commented at 3:51 pm on July 1, 2025: Good to know you are verifying my findings. I hadn't seen your message before leaving my comment, looks like you already confirmed the first one, good.
Regarding how expensive transaction processing can be, yes this is interesting and something i have spent a fair bit of time looking into. However it’s fairly tangential to this PR so let’s discuss it separately. I DM’d you about this elsewhere.
l0rinc commented at 11:51 am on July 2, 2025: I can confirm that I got the exact same result. I can also confirm that we haven't had any such block in the past decade.
Note that the biggest sigop count per tx was 19971.
sigops vs heights:
0Found 2585 sigops in transaction 659135664894e50040830edb516a76f704fd2be409ecd8d1ea9916c002ab28a2 in block 228538 1Found 4020 sigops in transaction bea1c2b87fee95a203c5b5d9f3e5d0f472385c34cb5af02d0560aab973169683 in block 332118 2Found 3480 sigops in transaction 24b16a13c972522241b65fbb83d09d4bc02ceb33487f41d1f2f620b047307179 in block 332122 3Found 3480 sigops in transaction 53666009e036171b1aee099bc9cd3cb551969a53315410d13ad5390b8b4f3bd0 in block 332122 4Found 8040 sigops in transaction ffc178be118bc2f9eaf016d1c942aec18441a6c5ec17c9d92d1da7962f0479f6 in block 332122 5Found 12060 sigops in transaction 2f1654561297114e434c4aea5ca715e4e3f10be0be8c1c9db2b6f68ea76dae09 in block 332126 6Found 7960 sigops in transaction 62fc8d091a7c597783981f00b889d72d24ad5e3e224dbe1c2a317aabef89217e in block 332131 7Found 8040 sigops in transaction d939315b180d3d73b5e316eb57a18f8137a3f5943aef21a811660d25f1080a3f in block 332131 8Found 8040 sigops in transaction 8a6bfaa78828a81147e4848372d491aa4e9048631982a670ad3a61402a4ec327 in block 332136 9Found 8040 sigops in transaction 02cc78789cc070125817189ec378daa750355c8b22bbce982ed96aa549facb1f in block 332136 10Found 4020 sigops in transaction b97a16ae2e8ae2a804ed7965373b42055f811653f4628e4bef999145d4b593bc in block 333529 11Found 19971 sigops in transaction c51ffaf08188859669571f897f119b6d39ea48a9334212f554bf4927401b71f3 in block 340463 12Found 3980 sigops in transaction 33ccdda65abdda8025adb03841410dda5fa8948bd38b7fbaf3fed521daf5c4d3 in block 346402 13Found 5569 sigops in transaction bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08 in block 364292 14Found 3116 sigops in transaction ba588134ecc93fdbfa06f795c9bf7a05b09ca9ca9095659e401325d501a90363 in block 364295 15Found 3418 sigops in transaction ba6c6d2389f765f7873f5a9a7c11bf806afd784d15b0b8dff011fe95d1d5e841 in block 364306 16Found 5000 sigops in transaction dd49dc50b54b4bc1232e4b68cfdd3d349e49d3d7fe817d1041fff6dd583a6eaf in block 364312 17Found 5000 sigops in transaction 3d724f03e8bcc9e2e3ea79ebe4c6cffca86d85e510742cd6d3ac29d420787a34 in block 364320 18Found 5000 sigops in transaction 8bcf8e8d8265922956bda9b651d2a0e993072c9dca306f3a132dcdb95c7cee6e in block 364321 19Found 3600 sigops in transaction ba31c8833b7417fec9a84536f32fcb52d432acb66d99b9be6f3899686a269b2b in block 364329 20Found 3390 sigops in transaction 9cc0a350d50fa252264636e62d586c7121d0a5656bc7e6b27354325684bec007 in block 364340 21Found 3390 sigops in transaction dd5e32971010ef0c9f4cda8977830d727a6828165c69005a3fab67415c755b7d in block 364349 22Found 3390 sigops in transaction 66be4e766df2c23e08f4cf8c3e6cfa202b20967616ace38e1cbc1f20ee78db2e in block 364355 23Found 3390 sigops in transaction e229a83deafec5f49e4990dba914fd815d05809f5eefbd979d55fb64f93827a3 in block 364360 24Found 3390 sigops in transaction 901e3695838925dd020a085e8f078c393e64179dcf0a43134a1547d21acab49a in block 364367 25Found 3390 sigops in transaction 49ab4d05adbc3294fbbd63d3b876fb97a87a3f5090aa6b1d87f31ab1c4564235 in block 364372 26Found 3390 sigops in transaction c4f4357657ba403455167b2897acfcb922c2a0973c34e58733ca352394e6c629 in block 364378 27Found 3390 sigops in transaction 6c3ee29a9b584fbeae60169f0abce573a7b768bf21398d4dfad9011aa7132530 in block 364380 28Found 3390 sigops in transaction 5dc2bdc5ce29c52840f10203fd93ffed82da4cf49eddf93437dd1329baa9ade5 in block 364384 29Found 3390 sigops in transaction f40fd92f5e8cecf956caec4b6abe0b88decafde0ae71d16a72c41cb1a3da0d60 in block 364386 30Found 3390 sigops in transaction 
92b68e4a19ef47c0dd022727a9c4b8ceb9466ce752ad8995ffbc5948bdfabf57 in block 364388 31Found 3600 sigops in transaction 1b604a075075197c82d33555ea48ae27e3d2724bc4c3f31650eff79692971fb7 in block 364407 32Found 5569 sigops in transaction 5d8875ed1707cfee2221741b3144e575aec4e0d6412eeffe1e0fa07335f61311 in block 364422 33Found 5569 sigops in transaction 14dd70e399f1d88efdb1c1ed799da731e3250d318bfdadc18073092aa7fd02c2 in block 364423 34Found 5000 sigops in transaction bb75a8d10cfbe88bb6aba7b28be497ea83f41767f4ee26217e311c615ea0132f in block 364429 35Found 5569 sigops in transaction a684223716324923178a55737db81383c28f055b844d8196c988c70ee7075a9a in block 364449 36Found 3411 sigops in transaction fa9120afe1bb09b7154ba82a022cbdcc29fc5be2699b346ebb7ffdc46807b2f7 in block 364459 37Found 5000 sigops in transaction 5e640a7861695fa660343abde52cfe10b5a97dd8fc6ad3c5e4b2b4bb1c8c3dd9 in block 364566 38Found 2853 sigops in transaction 7e7e69b4de5ef05750d27a863bdcb2d9dbc4a732c2a719f435ae5a9a4690b30f in block 364576 39Found 3651 sigops in transaction d85ce71f583095a76fb17b5bb2a1cbf369e2a2867ca38103aa310cbb2aaf2921 in block 364582 40Found 3651 sigops in transaction 905df97982a2904d6d1b3dfc272435a24d705f4c7e1fc4052798b9904ad5e597 in block 364587 41Found 5160 sigops in transaction 5b0a05f12f33d2dc1507e5c18ceea6bb368afc51f00890965efcc3cb4025997d in block 364610 42Found 5569 sigops in transaction cb550c9a1c63498f7ecb7bafc6f915318f16bb54069ff6257b4e069b97b367c8 in block 364639 43Found 4691 sigops in transaction 66b614e736c884c1a064f7b0d6a9b0abd97e7bb73ac7e4b1b92b493d558a0711 in block 364669 44Found 7350 sigops in transaction 9fdbcf0ef9d8d00f66e47917f67cc5d78aec1ac786e2abb8d2facb4e4790aad6 in block 364773 45Found 7350 sigops in transaction 9c667c64fcbb484b44dcce638f69130bbf1a4dd0fbb4423f58ceff92af4219ec in block 364784 46Found 4386 sigops in transaction 2e7c454cfc348aa220f53b5ba21a55efa3d36353265f085e34053c4efa575fda in block 378197 47Found 2728 sigops in transaction 484c8f9d1adf16a576bce52721a5e4edd5239df346d94fd730f6d7b681ab3652 in block 378214 48Found 3314 sigops in transaction e3d3d1ecec5d6b57792c793c413fc9b19189f5b932db8887d46f06abc101d24e in block 378390 49Found 2810 sigops in transaction b23c99c30339cb93eb04790bc5213f7732365488fe2549802bb472084b6ec508 in block 378402 50Found 3524 sigops in transaction 92f217ec13ab309240adc0798804b3418666344a5cbfff73fb7be8192dad5261 in block 378423 51Found 3513 sigops in transaction 22e861ee83c3d23a4823a3786460119425d8183783068f7ec519646592fac8c2 in block 378509 52Found 3792 sigops in transaction fa5a58f787f569f5b8fab9dadb2447161fac45b36fb6c2c0f548ed0209b60663 in block 378523 53Found 2621 sigops in transaction 767885bc144bdc0414217547f0169352a065083750371cabe5acbd0e0dd60436 in block 378791 54Found 3844 sigops in transaction 461308024d89ea4231911df4ef24e65e60af2a9204c8282a6b67f4214c1714e7 in block 378800 55Found 4294 sigops in transaction 9db4e0838c55ef20c5eff271fc3bf09a404fff68f9cdad7df8eae732500b983d in block 378804 56Found 4141 sigops in transaction 6e278c0ca05bf8e0317f991dae8a9efa141b5a310a4c18838b4e082e356ef649 in block 378893 57Found 3991 sigops in transaction 9c972a02db30f9ee91cc02b30733d70d4e2d759b5d3c73b240e5026a8a2640c4 in block 378907 58Found 4746 sigops in transaction d38417fcc27d3422fe05f76f6e658202d7fa394d0c9f5b419fef97610c3c49f1 in block 378934 59Found 4667 sigops in transaction e32477636e47e1da5fb49090a3a87a3b8ff637d069a70cd5b41595da225e65b4 in block 378969 60Found 3250 sigops in transaction a7e0a86944e9750a3fe372dd318da7b81c4f4c4ab228741ba0cf7fb6d56d6640 in block 
379006 61Found 4199 sigops in transaction f62f2c6a16b5da61eaae36d30d43bb8dd8932cd89b40d83623fa185b671c67f9 in block 379045 62Found 2575 sigops in transaction 856a2117b6d81acbe6add60a114307d9572dff027079151d50ee77a7b0c70404 in block 379112 63Found 4233 sigops in transaction 763e13f873afa5f24cd33fc570a178c65e0a79c05c88c147335834fc9e8f837b in block 379114 64Found 4603 sigops in transaction c3f2c2df5388b79949c01d66e83d8bc3b9ccd4f85dbd91465a16fb8e21bf8e1b in block 379169 65Found 3856 sigops in transaction e245f6c3c6b02dc81ea1b6694735565cc535f603708783be027d0e6a94ac3bd5 in block 379220 66Found 3895 sigops in transaction 02313ac62ca8f03930cdc5d2e437fabc05aea60a31ace18a39678c90b45d32bd in block 379223 67Found 4377 sigops in transaction 01d23d32bccc04b8ca5a934be16da08ae6a760ccaad2f62dc2f337eee7643517 in block 379246 68Found 4680 sigops in transaction d985c42bcd704aac88b9152aede1cca9bbb6baee55c8577f84c42d600cfec8e4 in block 379277 69Found 4909 sigops in transaction 6bb39576292c69016d0e0c1fe7871640aab12dd95874d67c46cf3424822f8dfd in block 379378 70Found 4625 sigops in transaction 79e30d460594694231f163dd79a69808904819e2f39bf3e31b7ddc4baa030a04 in block 379385 71Found 2976 sigops in transaction 26ed04bd772c7d3d621329bda8eefec9f81108d46ef0bd3b690000465f5254b3 in block 379387 72Found 4663 sigops in transaction bf40393fedc45a1b347957124ef9bb8ae6a44feecee10ef2cc78064fabf8125f in block 379402 73Found 4507 sigops in transaction 446c0a1d563c93285e93f085192340a82c9aef7a543d41a86b65e215794845ef in block 379417 74Found 3847 sigops in transaction 1cf52f9ef89fa43bb4f042cbd4f80e9f090061e466cbe14c6b7ba525df0e572e in block 379418 75Found 4925 sigops in transaction 54bf51be42ff45cdf8217b07bb233466e18d23fd66483b12449cd9b99c3a0545 in block 379478 76Found 4027 sigops in transaction b5ca68205e6d55e87bd6163b28467da737227c6cbcc91cb9f6dc7b400163a12b in block 379490 77Found 4348 sigops in transaction 9f8cc4496cff3216608c2f2177ab360bd2d4f58cae6490d5bc23312cf30e72e0 in block 379537 78Found 4617 sigops in transaction 4eba5deb2bbf3abf067f524484763287911e8d68fb54fa09e1287cf6cd6d1276 in block 379538 79Found 4124 sigops in transaction e3de81a5817a3c825cf44fbf8185e15d446393615568966a6e3fc22cba609c7d in block 379546 80Found 4297 sigops in transaction 1e700d8ce85b17d713cad1a8cae932d26740e7c8ab09d2201ddfe9d1acb4706c in block 379576 81Found 4227 sigops in transaction b8ba939da1babf863746175b59cbfb3b967354f04db41bd13cb11da58e43d2a8 in block 379998 82Found 2758 sigops in transaction 22df2138a28c9d339813854a38181ffc5ac6f37d171d5b5bd6b0b79bf8d3c641 in block 380015 83Found 4654 sigops in transaction 1d93bfe18bc05b13169837b6bc868a92da3c87938531d6f3b58eee4b8822ecbf in block 380027 84Found 4423 sigops in transaction e0c5e2dc3a39e733cf1bdb1a55bbcb3c2469f283becf2f99a0de771ec48f6278 in block 380040 85Found 2688 sigops in transaction c1e00054cc3fef88c2fdec2967e96fdb4c645bcf3f08b0580b51e47ecbfab71a in block 381615 86Found 2580 sigops in transaction 9a567fde7c65bf85bbc2b109277933431ebc8a35de321a4b6436601d68aa4521 in block 381805 87Found 2537 sigops in transaction 6a9263e48a076bfc37deb93d01bf354305f4ac929be6b0ca05c078381a562c0c in block 381813 88Found 2541 sigops in transaction 0683fb9cfcd4a211c14ef7cd2d03739160ff9ba1cb5396be227e475e932a8e9b in block 381841 89Found 2508 sigops in transaction 2290263dddf8f06f099fcfb130a85f8f00b6cc1e05565a4c028d2187c8592d15 in block 381941 90Found 2704 sigops in transaction d27ca813c97eef5c6e9ed4bd2fe08a7e34aa8ac0c2af353826c73a4e2711a797 in block 382491 91Found 2609 sigops in transaction 
b28d67131d00bda9dac0da484dd1352c55ec6a869ea04a7e85b71d2ddf2356d9 in block 382505
l0rinc commented at 11:53 am on July 2, 2025:
"bumped into this limit, all either pathological or/and DoSy, all already non-standard by current rules"
Question: is the reason against such massive coinjoins that they currently take too long to validate? Wouldn't forking based on that prohibit such transactions in a few decades, when we could expect them not to be DoSy anymore?
darosior commented at 12:22 pm on July 2, 2025: Yes, the maximum standard size was initially introduced because of quadratic hashing. It is also desirable for other reasons (see for instance here). I don't expect the worst case with large legacy transactions to be reasonable even a decade from now. Also, such large transactions can be done with Segwit v0 or v1 inputs instead, and then those concerns are moot.
l0rinc commented at 1:33 pm on July 3, 2025: Wanted to understand the orders of magnitude we're talking about - especially for low-end Raspberry Pi 4 devices - so I ran a
-reindex-chainstate
with -assumevalid=0
until 400k blocks to see the correlation between script size and validation time - and the order of magnitude of validating tens of thousands of scripts in a single tx (since these are normally skipped during my IBD benchmarks at these historic heights).I simply subtracted the subsequent times to get the real time (including block reading and flushing) which indicates that even in the worst case full validation is done within a minute (top 8 are bloated because of flushing).
It’s likely that the txs that were included have survivorship bias (i.e. were included exactly because they aren’t the worst kind), but it’s useful to know the extent of the attack (since you’ve mentioned quadratic hashing).
0COMMITS="ceeb1ebb2f463919a44b0bf3ba24f9b7f814bb5e"; STOP=400000; DBCACHE=1000; CC=gcc; CXX=g++; BASE_DIR="/mnt/my_storage"; DATA_DIR="$BASE_DIR/BitcoinData"; LOG_DIR="$BASE_DIR/logs"; (echo ""; for c in $COMMITS; do git fetch -q origin $c && git log -1 --pretty='%h %s' $c || exit 1; done; echo "") && hyperfine --sort command --runs 1 --export-json "$BASE_DIR/rdx-$(sed -E 's/(\w{8})\w+ ?/\1-/g;s/-$//'<<<"$COMMITS")-$STOP-$DBCACHE-$CC.json" --parameter-list COMMIT ${COMMITS// /,} --prepare "killall bitcoind; rm -f $DATA_DIR/debug.log; git checkout {COMMIT}; git clean -fxd; git reset --hard && \ 1cmake -B build -G Ninja -DCMAKE_BUILD_TYPE=Release && ninja -C build bitcoind && \ 2./build/bin/bitcoind -datadir=$DATA_DIR -stopatheight=$STOP -dbcache=1000 -printtoconsole=0; sleep 10" --show-output --cleanup "cp $DATA_DIR/debug.log $LOG_DIR/debug-{COMMIT}-$(date +%s).log" "COMPILER=$CC ./build/bin/bitcoind -datadir=$DATA_DIR -stopatheight=$STOP -dbcache=$DBCACHE -reindex-chainstate -blocksonly -connect=0 -printtoconsole=1 -assumevalid=0 -debug=bench"
The x axis is restricted to the blocks that had the above sigop count violations (red vertical lines) - and the validation speed seems to barely affected:
in src/policy/policy.cpp:186 in 2ead56f0d2 outdated
181+
182+     // Unlike the existing block wide sigop limit, BIP54 counts sigops when they are actually executed.
183+     // This means sigops in the spent scriptpubkey count toward the limit.
184+     // `fAccurate` means correctly accounting sigops for CHECKMULTISIGs with 16 pubkeys or less. This
185+     // method of accounting was introduced by BIP16, and BIP54 reuses it.
186+     sigops += tx.vin[i].scriptSig.GetSigOpCount(/*fAccurate=*/true);
l0rinc commented at 9:59 am on June 27, 2025: Similarly to the discussion on #32521 (review), it seems to me we could fail early by exiting if the first one is already too big:
        sigops += tx.vin[i].scriptSig.GetSigOpCount(/*fAccurate=*/true);
        if (sigops > MAX_TX_LEGACY_SIGOPS) {
            return false;
        }
or inversely, if this isn't performance critical, we can do the check at the very end instead, to speed up the happy path by not doing intermediary checks:
        sigops += tx.vin[i].scriptSig.GetSigOpCount(/*fAccurate=*/true);
        sigops += prev_txo.scriptPubKey.GetSigOpCount(tx.vin[i].scriptSig);
    }

    return sigops <= MAX_TX_LEGACY_SIGOPS;
}
Is there any advantage to this in-between solution?
darosior commented at 8:20 am on June 30, 2025: Don't think that matters much, but i'll take your suggestion. We are already exiting early. I don't think there is a need to duplicate the check after each increment to the sigops variable. I'll leave it like that.

l0rinc commented at 8:37 am on June 30, 2025: We're doing needless work in this case - calling the second sigop count calculation when we already know that we could exit early. This was the argument you gave against return sigops <= MAX_TX_LEGACY_SIGOPS. I would be fine with either, but this in-between needs a proper explanation in my opinion.

darosior commented at 11:58 am on June 30, 2025: I'm not planning on addressing this, as i think it would unnecessarily duplicate the check for no clear benefit.

in src/policy/policy.cpp:180 in 27e54beff7 outdated
175+{
176+    Assert(!tx.IsCoinBase());
177+
178+    unsigned int sigops{0};
179+    for (unsigned i{0}; i < tx.vin.size(); ++i) {
180+        const auto& prev_txo{inputs.AccessCoin(tx.vin[i].prevout).out};
l0rinc commented at 10:05 am on June 27, 2025: I have a similar concern to #32521 (comment), the benchmarks indicate a ~66% slower standardness check for me locally (similar slowdown shown by https://corecheck.dev/bitcoin/bitcoin/pulls/32521):
# latest commit, rebased: 811115c75da74ef50aa37cfcb807e4ac1a767604
% build/bin/bench_bitcoin -filter='CCoinsCaching' --min-time=10000 # ran 3 times
Before (without CheckSigopsBIP54):
| ns/op | op/s | err% | total | benchmark
| 141.29 | 7,077,842.70 | 0.4% | 10.94 | CCoinsCaching
| 141.16 | 7,083,924.08 | 0.2% | 10.98 | CCoinsCaching
| 140.31 | 7,127,285.74 | 0.3% | 10.97 | CCoinsCaching
After:
| ns/op | op/s | err% | total | benchmark
| 212.20 | 4,712,429.73 | 0.3% | 11.00 | CCoinsCaching
| 214.75 | 4,656,677.68 | 0.3% | 10.98 | CCoinsCaching
| 211.30 | 4,732,499.91 | 0.1% | 11.00 | CCoinsCaching
instagibbs commented at 1:42 pm on June 27, 2025: For policy, no way does this slowdown matter. That said, since we're already doing the computation in both policy and consensus contexts, maybe we can just reuse it? https://github.com/instagibbs/bitcoin/tree/2025-06-patho-bip54
it compiles
l0rinc commented at 8:56 am on June 29, 2025: Doing a differential perf flame graph on b3bb4031 vs 1ba50d81 (master vs rebased darosior:2503_nonstd_tx_sigops) indicates the slowdown is indeed caused by both AccessCoin and the extra CScript::GetSigOpCount
calls:0COMMITS="b3bb4031ab32a1306c610f8683b25b8459b21dbc 1ba50d81b251c705aae872a2a9b37778ffa50c62"; \ 1CC=gcc; CXX=g++; \ 2BASE=/mnt/my_storage; FG=$BASE/FlameGraph; LOG_DIR="$BASE/logs"; OUT=$BASE/flamegraphs/bench-$(date +%s); mkdir -p "$OUT" "$LOG_DIR"; \ 3FILTER="CCoinsCaching"; MIN_TIME=10000; \ 4export CC CXX; \ 5flame_prepare(){ commit=$1; \ 6 git fetch -q origin "$commit" && git checkout -q "$commit" && \ 7 rm -rf build && \ 8 cmake -B build -G Ninja -DBUILD_BENCH=ON -DCMAKE_BUILD_TYPE=RelWithDebInfo \ 9 -DCMAKE_C_FLAGS="-g3 -ggdb -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer" \ 10 -DCMAKE_CXX_FLAGS="-g3 -ggdb -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer" && \ 11 ninja -C build bench_bitcoin; }; \ 12flame_execute(){ tag=${1:0:8}; \ 13 echo 100000 | sudo tee /proc/sys/kernel/perf_event_max_sample_rate >/dev/null; \ 14 echo 0 | sudo tee /proc/sys/kernel/kptr_restrict >/dev/null; \ 15 echo -1 | sudo tee /proc/sys/kernel/perf_event_paranoid >/dev/null; \ 16 perf record -F 99 -e cycles -g --call-graph fp -o "$OUT/bench_$tag.data" -- \ 17 build/bin/bench_bitcoin -filter="$FILTER" --min-time=$MIN_TIME && \ 18 perf report --stdio -i "$OUT/bench_$tag.data" > "$LOG_DIR/bench-report-$tag-$(date +%s).txt" && \ 19 perf script -i "$OUT/bench_$tag.data" 2>/dev/null | "$FG/stackcollapse-perf.pl" > "$OUT/bench_$tag.folded" && \ 20 rm -f "$OUT/bench_$tag.data" && \ 21 "$FG/flamegraph.pl" --title "Bench $FILTER $tag" "$OUT/bench_$tag.folded" > "$OUT/bench_$tag.svg"; }; \ 22flame_diff(){ first=${1:0:8}; second=${2:0:8}; \ 23 "$FG/difffolded.pl" -n "$OUT/bench_${first}.folded" "$OUT/bench_${second}.folded" > "$OUT/bench_${first}-${second}.folded" && \ 24 "$FG/flamegraph.pl" --title "Bench $FILTER $first-$second" --colors hot --negate \ 25 "$OUT/bench_${first}-${second}.folded" > "$OUT/bench_${first}-${second}.svg"; }; \ 26A=${COMMITS%% *}; B=${COMMITS##* }; \ 27flame_prepare $A && flame_execute $A && \ 28flame_prepare $B && flame_execute $B && \ 29flame_diff $A $B && flame_diff $B $A; \ 30echo "Benchmark flamegraphs at: $OUT"
bench_b3bb4031.svg
bench_1ba50d81.svg
Comparing the current version with the previous one mentioned by @Sjors indicates that it is indeed AccessCoin hashing that's causing the excessive slowdown:
The version @instagibbs posted (but given that transaction_tests/max_standard_legacy_sigops still fails, it might not be reflective of a final solution yet) seems to be different enough to make the comparison a bit harder to interpret:
Note that merging the previous version of this PR with #32532 (optimizing the sigop operations) brings back the exact original performance of AreInputsStandard:
% build/bin/bench_bitcoin -filter='CCoinsCaching'
| ns/op | op/s | err% | total | benchmark
| 140.14 | 7,135,491.69 | 0.3% | 0.01 | CCoinsCaching
| 140.99 | 7,092,774.75 | 0.4% | 11.01 | CCoinsCaching
| 141.05 | 7,089,910.08 | 0.3% | 11.02 | CCoinsCaching
The differential flames also show that the additional script checks don’t really change the outcome:
However, the optimizations in #32532 would also enable completely eliminating the expensive Solver call for all standard templates, making standardness checks >2x faster on average:
| ns/op | op/s | err% | total | benchmark
| 59.77 | 16,731,162.36 | 0.1% | 10.99 | CCoinsCaching
| 59.73 | 16,740,845.93 | 0.1% | 10.99 | CCoinsCaching
| 59.47 | 16,815,024.70 | 0.1% | 11.00 | CCoinsCaching
The differential flames confirm the lack of Solver calls:
darosior commented at 8:27 am on June 30, 2025: In real usage AreInputsStandard is always called with a warmed-up cache, so what you check here is not representative.
l0rinc commented at 8:37 am on June 30, 2025: Can you update the benchmark to make it representative?
TheCharlatan commented at 9:09 am on June 30, 2025: This is still a micro benchmark, because the input standardness check is just a tiny fraction of what actually gets checked when processing a new transaction. This could probably just be deleted, or replaced with a more comprehensive check of ProcessTransaction/ProcessNewPackage.
darosior commented at 12:05 pm on June 30, 2025: Exactly. I think it's good to keep the benchmark, but its results should not be over-interpreted. It is focused on a tiny portion of the total work performed when processing a transaction. I locally have benchmarks of ProcessTransaction which i may share in the future. To give you an idea, they suggest (as can be expected from just reading the code) that the total work in processing a transaction is several orders of magnitude higher than the work done in AreInputsStandard. And that's not even in the worst case.
l0rinc commented at 12:42 pm on June 30, 2025: You can also add a warmup round if you think that's more representative:
    bench.warmup(1).run([&] {
        bool success{AreInputsStandard(tx_1, coins)};
        assert(success);
    });
Though that doesn't change the numbers for me:
Before:
| ns/op | op/s | err% | total | benchmark
| 140.41 | 7,122,118.63 | 0.4% | 1.10 | CCoinsCaching
After:
| ns/op | op/s | err% | total | benchmark
| 210.86 | 4,742,579.56 | 0.3% | 1.10 | CCoinsCaching
Or we could iterate over a known block to make it more representative, something like:
#include <bench/bench.h>
#include <bench/data/block413567.raw.h>
#include <coins.h>
#include <policy/policy.h>
#include <primitives/transaction.h>
#include <script/signingprovider.h>
#include <streams.h>

#include <cassert>
#include <vector>

static void CCoinsCaching(benchmark::Bench& bench)
{
    CCoinsView coins_dummy;
    CCoinsViewCache coins(&coins_dummy);

    DataStream stream{benchmark::data::block413567};
    CBlock block;
    stream >> TX_WITH_WITNESS(block);

    for (const auto& tx : block.vtx) {
        if (tx->IsCoinBase()) continue;
        for (const CTxIn& in : tx->vin) {
            // TODO: Add a matching Coin for in.prevout to `coins`
        }
    }

    bench.warmup(1).run([&] { // we don't even need warmup round anymore
        for (const auto& tx : block.vtx) {
            if (tx->IsCoinBase()) continue;
            assert(AreInputsStandard(*tx, coins));
        }
    });
}

BENCHMARK(CCoinsCaching, benchmark::PriorityLevel::HIGH);
Or maybe we could customize the ProcessTransactionBench I added in a different PR to the current scenario, so that it's more representative. Or yours, if you think that's better, I don't mind either way.
darosior commented at 4:37 pm on July 1, 2025: I think the benchmark is fine as it stands, for the purpose it serves (which is not to measure transaction processing time). I don't think more needs to be done here and will resolve this thread.

in src/policy/policy.cpp:1 in 27e54beff7 outdated
l0rinc commented at 10:08 am on June 27, 2025: nit: multiple typos in commit message 27e54beff7d1c9ac68bee379bb6d971a775b9841

darosior commented at 12:05 pm on June 30, 2025: Will address if i need to retouch.

darosior commented at 3:33 pm on July 1, 2025: As i am retouching i re-read this commit message and i didn't find any such typo. Could you be more precise? (Although i only plan on addressing if i retouch again.)

l0rinc commented at 3:37 pm on July 1, 2025:
"In addition, this limit may only be run into by pathological transactions which pad the Script with sigops but do not use actual signatures when spending, as otherwise they would run into the standard transaction size limit."
l0rinc commented at 1:35 pm on July 3, 2025: (screenshot)
darosior commented at 6:57 pm on July 9, 2025: Done. Thanks for the screenshot.

in src/test/transaction_tests.cpp:1096 in 2c636ba74c outdated
1091+    for (unsigned i{0}; i < p2sh_inputs_count; ++i) {
1092+        tx_max_sigops.vin[i] = CTxIn(COutPoint(prev_txid, i), max_sigops_redeem_script);
1093+    }
1094+    tx_max_sigops.vin.emplace_back(COutPoint(prev_txid, p2sh_inputs_count), max_sigops_redeem_script);
1095+    AddCoins(coins, CTransaction(tx_create), 0, false);
1096+    BOOST_CHECK((p2sh_inputs_count + 1) * MAX_P2SH_SIGOPS > MAX_TX_LEGACY_SIGOPS);
l0rinc commented at 10:09 am on June 27, 2025:nit: if we want better error messages showing both sides on a failure, we could do:
BOOST_CHECK_GT((p2sh_inputs_count + 1) * MAX_P2SH_SIGOPS, MAX_TX_LEGACY_SIGOPS);
darosior commented at 8:43 am on June 30, 2025: Sure. I updated all usage of `BOOST_CHECK` that was checking for anything other than a boolean return value in my unit test to instead use `BOOST_CHECK_GT`, `BOOST_CHECK_LT` and `BOOST_CHECK_EQUAL`.
24@@ -24,16 +25,23 @@
25 OP_IF,
26 OP_RETURN,
27 OP_TRUE,
28+ OP_2DUP,
29+ OP_DROP,
30+ OP_NOT,
l0rinc commented at 10:12 am on June 27, 2025:nit: these were in alphabetic order before:
from test_framework.script import (
    CScript,
    OP_2DUP,
    OP_CHECKMULTISIG,
    OP_CHECKSIG,
    OP_DROP,
    OP_ENDIF,
    OP_FALSE,
    OP_IF,
    OP_NOT,
    OP_RETURN,
    OP_TRUE,
)
To reduce further possible merge conflicts, please consider keeping the import section sorted
darosior commented at 8:43 am on June 30, 2025: Sure, done.
in src/test/transaction_tests.cpp:1110 in 2ead56f0d2 outdated
1105+        tx_create_p2pk.vout.emplace_back(212121 + i, p2pk_script);
1106+    }
1107+    prev_txid = tx_create_p2pk.GetHash();
1108+    tx_max_sigops.vin.resize(p2sh_inputs_count);
1109+    for (unsigned i{0}; i < p2pk_inputs_count; ++i) {
1110+        tx_max_sigops.vin.emplace_back(COutPoint(prev_txid, i));
l0rinc commented at 10:45 am on June 27, 2025: similarly to other `emplace_back` calls here, we don't need the explicit constructor (it also avoids constructing a temporary):
tx_max_sigops.vin.emplace_back(prev_txid, i);
darosior commented at 8:43 am on June 30, 2025: Sure, dropped all unnecessary temporaries in `emplace_back`s from my unit test.
in src/test/transaction_tests.cpp:1146 in 2ead56f0d2 outdated
1149+    for (unsigned i{0}; i < p2pk_inputs_count; ++i) {
1150+        BOOST_REQUIRE(SignSignature(keystore, CTransaction(tx_create_p2pk), tx_max_sigops, p2sh_inputs_count + i, SIGHASH_ALL, dummy_sigdata));
1151+    }
1152+    AddCoins(coins, CTransaction(tx_create_p2pk), 0, false);
1153+    BOOST_CHECK(p2sh_inputs_count * MAX_P2SH_SIGOPS + p2pk_inputs_count * 1 > MAX_TX_LEGACY_SIGOPS);
1154+    BOOST_CHECK(!::AreInputsStandard(CTransaction(tx_max_sigops), coins));
l0rinc commented at 10:47 am on June 27, 2025: 👍 this is where the test fails correctly without the `!CheckSigopsBIP54(tx, mapInputs)` check.
Would it make sense to add a coinbase tx and vouts to this test to make it more realistic so that it passes `CheckTransaction` as well? Otherwise it's harder to tell if it fails for the right reasons…
And to make sure this only affects policy and not consensus: do we already have tests that check that e.g. 3000 sigops could still be mined? If not, can we add a clause here to make sure that we haven't accidentally changed the consensus, i.e. that we could still mine these non-standard blocks?
TxValidationState state;
BOOST_CHECK(CheckTransaction(CTransaction(tx_max_sigops), state));
BOOST_CHECK(state.IsValid());
darosior commented at 8:47 am on June 30, 2025: Otherwise it's harder to tell if it fails for the right reasons…
This is demonstrated through bounds checking. We check that `AreInputsStandard` passes, and check that it then fails by adding one more identical input.
i.e. that we could still mine these non-standard blocks?
We would not mine blocks with such transactions anymore, this is the whole point of this PR.
l0rinc commented at 11:32 am on June 30, 2025:We would not mine blocks with such transactions anymore, this is the whole point of this PR.
I don’t see how this PR is doing that currently. I understand that the point is to starve the miners of those pathological transactions before a future soft-fork makes the limit consensus-critical - but for now we have to make absolutely sure we don’t accidentally do that and that it can still be mined, right? My question was if we can add that to the end of the test (which we can remove after we actually soft-fork).
darosior commented at 12:12 pm on June 30, 2025:I don’t see how this PR is doing that currently.
We only create block templates from the mempool. This PR prevents these transactions from being included in our mempool. As a result, we would never mine a block which includes such transactions.
but for now we have to make absolutely sure we don’t accidentally do that and that it can still be mined, right?
I think what you are trying to say here is "that we would still accept a block containing them". That is, that we haven't accidentally made it invalid by consensus. This PR makes sure this is not the case by isolating these new checks: only a function that is never called during block validation is modified. You can verify this through code review. Further, this PR bundles a functional test which checks that newly non-standard transactions are still accepted in blocks.
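For illustration, the policy/consensus split described here can be written down in the unit-test style used above. The following is only a sketch reusing the names from the diffs in this thread (essentially the snippet suggested earlier), not the exact test code:
// The transaction over the new limit is rejected by the policy check...
BOOST_CHECK(!::AreInputsStandard(CTransaction(tx_max_sigops), coins));
// ...but the stateless consensus checks, which this PR does not touch, still pass.
TxValidationState state;
BOOST_CHECK(CheckTransaction(CTransaction(tx_max_sigops), state));
BOOST_CHECK(state.IsValid());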
l0rinc commented at 12:42 pm on June 30, 2025:I think what you are trying to say here is “that we would still accept a block containing them”.
I wasn’t just “trying”, it’s what I said.
You can verify this through code review
I'd like a unit test to verify that instead; that's why I've been asking if we can extend the unit test to make sure we still accept these non-standard transactions.
functional test which checks newly-non-standard transactions are still accepted in blocks
Yes, that’s what I was asking to add to the unit test as well. https://github.com/bitcoin/bitcoin/pull/32521/files#diff-7a25310a1ae3883aca989cd736212222e6abca6eea25e7d59ea7059b38d76ea0R220
Make sure the original, non-standard, transaction can be mined
I can accept it if you think that's not necessary, but I'm not sure I understand why it's a difficult request…
TheCharlatan commented at 2:02 pm on June 30, 2025:More accurate error/standardness failure reporting can be done later through #29060.
darosior commented at 2:37 pm on June 30, 2025: That this does not touch consensus can be ensured through code review and is also validated through a functional test. I don't think anything else needs to be done here, so i'm going to resolve this thread.
in src/test/transaction_tests.cpp:1064 in 2ead56f0d2 outdated
1057+    CKey key;
1058+    key.MakeNewKey(true);
1059+    BOOST_REQUIRE(keystore.AddKey(key));
1060+
1061+    // Create a pathological P2SH script padded with as many sigops as is standard.
1062+    CScript max_sigops_redeem_script{CScript() << std::vector<unsigned char>{} << key.GetPubKey()};
l0rinc commented at 11:31 am on June 27, 2025: In most docs I saw (e.g. https://learnmeabitcoin.com/technical/script/p2sh/#scriptsig), P2SH starts with an `OP_0` - I understand that this corresponds to an empty vector, but maybe we can simplify it in the tests:
CScript max_sigops_redeem_script{CScript() << OP_0 << key.GetPubKey()};

CScript max_sigops_redeem_script{CScript() << std::vector<unsigned char>{} << key.GetPubKey()};
CScript max_sigops_redeem_script2{CScript() << OP_0 << key.GetPubKey()};
assert(max_sigops_redeem_script == max_sigops_redeem_script2);
assert(ScriptToAsmStr(max_sigops_redeem_script) == ScriptToAsmStr(max_sigops_redeem_script2));
darosior commented at 8:57 am on June 30, 2025: This is unrelated. My code above is building the P2SH redeem script, i.e. the program being executed. You are referring to a specific instance of a scriptSig spending a multisig wrapped into a P2SH. The example you link pushes 2 signatures plus an empty push because of the off-by-one CHECKMULTISIG bug.
Besides, your code suggestion is strictly functionally equivalent as `OP_0` pushes an empty vector on the stack. This diff will also result in the very same program since `OP_0` is `0x00` and pushing an empty vector will write a `0x00` size.
l0rinc commented at 11:34 am on June 30, 2025:your code suggestion is strictly functionally equivalent
Yes, that’s why I have suggested it:
I understand that this corresponds to an empty vector, but maybe we can simplify it in the test
darosior commented at 12:13 pm on June 30, 2025: I won't be making this change.
in src/test/transaction_tests.cpp:1108 in 2ead56f0d2 outdated
1103+    unsigned p2pk_inputs_count{10}; // From 2490 to 2500.
1104+    for (unsigned i{0}; i < p2pk_inputs_count; ++i) {
1105+        tx_create_p2pk.vout.emplace_back(212121 + i, p2pk_script);
1106+    }
1107+    prev_txid = tx_create_p2pk.GetHash();
1108+    tx_max_sigops.vin.resize(p2sh_inputs_count);
l0rinc commented at 12:12 pm on June 27, 2025: It's not immediately obvious that this is reducing the size of the existing inputs (especially since in other cases we did `reserve`), in this case maybe we could:
tx_max_sigops.vin.pop_back();
// assert(tx_max_sigops.vin.size() == p2sh_inputs_count);
or add a comment here
// Drop the extra input
tx_max_sigops.vin.resize(p2sh_inputs_count);
or do the deletion explicitly:
tx_max_sigops.vin.erase(tx_max_sigops.vin.begin() + p2sh_inputs_count, tx_max_sigops.vin.end());
darosior commented at 8:59 am on June 30, 2025: Added a comment, thanks.
l0rinc changes_requested
l0rinc commented at 12:21 pm on June 27, 2025: contributor
Concept ACK
Regardless of whether the soft-fork will be accepted, it’s important to guard against this early.
However, the current implementation introduces a significant slowdown and I think we could make the testing more solid by explicitly validating that we didn’t soft-fork yet. Therefore it’s an Approach NACK from me, please see the detailed explanation below.
Sjors commented at 1:34 pm on June 27, 2025: member
doing once more shouldn't introduce any noticeable overhead
Looking at `CCoinsViewCache::AccessCoin()`, it calls `FetchCoin`. This either returns a previously cached coin, or fetches it from its base (possibly on disk) and then adds it to the cache.
So the worst case overhead for a second `AccessCoin()` call is having to perform another `try_emplace` on this cache.
The `CCoinsCaching` bench @l0rinc points to tests `AreInputsStandard` using a very small dummy coin cache. That's probably representative because `MemPoolAccept` creates a fresh cache specifically for a single transaction or package.
I ran the bench on 27e54beff7d1c9ac68bee379bb6d971a775b9841:
| ns/op | op/s | err% | total | benchmark |
|--------------------:|--------------------:|--------:|----------:|:----------|
| 207.29 | 4,824,125.44 | 0.4% | 110.93 | `CCoinsCaching` |
as well as the previous approach 005d6434ae9ebb1a66b235631489d1b0b9bb6469:
| ns/op | op/s | err% | total | benchmark |
|--------------------:|--------------------:|--------:|----------:|:----------|
| 159.59 | 6,265,865.26 | 0.5% | 111.08 | `CCoinsCaching` |
So there does seem to be a significant slowdown.
For comparison, not doing the new check at all:
| ns/op | op/s | err% | total | benchmark |
|--------------------:|--------------------:|--------:|----------:|:----------|
| 129.58 | 7,717,122.10 | 0.4% | 109.35 | `CCoinsCaching` |
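(For scale, taking these runs at face value: the new check adds roughly 207.29 − 159.59 ≈ 48 ns per call over the previous approach, and 207.29 − 129.58 ≈ 78 ns over not doing the check at all, i.e. `AreInputsStandard` itself becomes about 1.3–1.6× slower in this micro-benchmark.)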
The bigger question is whether this slowdown is a problem. @instagibbs thinks it's not: #32521 (review)
We can distinguish between the typical and the worst case.
For the latter this is obviously an improvement. Consider validating a standard transaction, where the last signature check fails. The `VerifyScriptBench` verifies a single signature and takes 12k ns/op. So clearly the sigops counting part of verification is trivial relative to the signature checks, even for a single-signature transaction. Since the extra check introduced by the PR reduces the max number of signatures we have to validate, it's clearly a big improvement for the worst case.
But what about the typical case?
#28592 proposed to increase the transaction relay rate to 35 tx/second for outbound peers, so that's one every ~30 milliseconds. This is controlled through `INVENTORY_BROADCAST_PER_SECOND`. But we might receive (many) more candidate transactions than we relay.
Or we can look at the other steps involved in receiving a transaction, validating it, storing it in the mempool and relaying it.
Afaik it’s free to send us consensus invalid transactions with too many sigops. This change slightly increases the time it takes us to determine that. But does it matter?
darosior commented at 9:07 am on June 30, 2025: member
@Sjors `AreInputsStandard` is always called with a warm cache. The benchmark is not. Additional `AccessCoin` calls to fetch coins from the cache do not introduce any meaningful cost.
@Sjors @l0rinc please let's avoid rushing to performance conclusions on the basis of a microbenchmark that is not representative of how functions are actually used in reality. This PR does not introduce any meaningful slowdown.
darosior force-pushed on Jun 30, 2025
Sjors commented at 10:35 am on June 30, 2025: member
`AreInputsStandard` is always called with a warm cache. The benchmark is not.
The `bench.run` call does not clear `coins`, so it's warm after the first iteration. However it's not measuring loading coins from disk. If that's only determined by the disk seek time [0], you're looking at 80 to 160 nanoseconds. My understanding is that our UTXO cache is mostly there to prevent writes, which are much slower than reads.
[0] https://en.wikipedia.org/wiki/Hard_disk_drive_performance_characteristics
darosior commented at 2:06 pm on June 30, 2025: member
The `bench.run` call does not clear `coins`, so it's warm after the first iteration.
Actually the benchmark does populate the cache first through `SetupDummyInputs` (otherwise it couldn't work since it is using a dummy backend). But the main point really is that this micro benchmark is meaningless to demonstrate a slowdown in transaction processing. `AreInputsStandard` should represent a fraction of a percent of the total work performed when processing a transaction. Unless we make it 10'000 times slower i don't think this needs to be discussed at all, or only to point out that this approach only results in a 70% slower `AreInputsStandard` (i.e. only a ~100 additional ns spent there) and is therefore worth taking for the future gains of a simpler consensus patch.
in src/test/transaction_tests.cpp:1134 in dfb4bc73be outdated
1129+    for (unsigned i{0}; i < tx_create_segwit.vout.size(); ++i) {
1130+        tx_max_sigops.vin.emplace_back(prev_txid, i);
1131+    }
1132+    keystore.AddCScript(witness_script);
1133+    for (unsigned i{0}; i < tx_create_segwit.vout.size(); ++i) {
1134+        BOOST_REQUIRE(SignSignature(keystore, CTransaction(tx_create_segwit), tx_max_sigops, tx_max_sigops.vin.size() - 1 - i, SIGHASH_ALL, dummy_sigdata));
TheCharlatan commented at 2:38 pm on June 30, 2025:Could we just skip signing them?
darosior commented at 3:41 pm on July 1, 2025: Right, the witnesses are never inspected by `AreInputsStandard`. Dropped the unnecessary signing logic, to get a neat 13 lines reduction in the diff.
Sjors commented at 4:05 pm on June 30, 2025: member
From inline discussion: #32521 (review)
I think it's good to keep the [`CCoinsCaching`] benchmark but its results should not be over-interpreted. It is focused on a tiny portion of the total work performed when processing a transaction. I locally have benchmarks of `ProcessTransaction` which i may share in the future. To give you an idea, they suggest (as can be expected from just reading the code) that total work in processing a transaction is several orders of magnitude higher than the work done in `AreInputsStandard`. And that's not even in the worst case.
Ok, that's what I wanted to know. Would definitely be useful as a PR, but shouldn't block this.
I was thinking because the standardness checks are cascading, i.e. we perform the cheapest checks first, perhaps there would be an attack that targets these weaker tests. However it’s always free to create a huge standard transaction with an invalid signature and hit a node with that. So at least in the adversarial context the performance of these cheap checks isn’t very important.
(I still need to review the changes since my last review, but in general the helper function seems like an improvement).
in src/policy/policy.cpp:179 in 27e54beff7 outdated
174+static bool CheckSigopsBIP54(const CTransaction& tx, const CCoinsViewCache& inputs)
175+{
176+    Assert(!tx.IsCoinBase());
177+
178+    unsigned int sigops{0};
179+    for (unsigned i{0}; i < tx.vin.size(); ++i) {
Sjors commented at 12:46 pm on July 1, 2025:Since it’s a brand new helper function and you’re not moving code, let’s embrace the future:
diff --git a/src/policy/policy.cpp b/src/policy/policy.cpp
index 48dfa14a8f..327aa5a5cf 100644
--- a/src/policy/policy.cpp
+++ b/src/policy/policy.cpp
@@ -176,15 +176,15 @@ static bool CheckSigopsBIP54(const CTransaction& tx, const CCoinsViewCache& inpu
     Assert(!tx.IsCoinBase());

     unsigned int sigops{0};
-    for (unsigned i{0}; i < tx.vin.size(); ++i) {
-        const auto& prev_txo{inputs.AccessCoin(tx.vin[i].prevout).out};
+    for (const CTxIn& input : tx.vin) {
+        const auto& prev_txo{inputs.AccessCoin(input.prevout).out};

         // Unlike the existing block wide sigop limit, BIP54 counts sigops when they are actually executed.
         // This means sigops in the spent scriptpubkey count toward the limit.
         // `fAccurate` means correctly accounting sigops for CHECKMULTISIGs with 16 pubkeys or less. This
         // method of accounting was introduced by BIP16, and BIP54 reuses it.
-        sigops += tx.vin[i].scriptSig.GetSigOpCount(/*fAccurate=*/true);
-        sigops += prev_txo.scriptPubKey.GetSigOpCount(tx.vin[i].scriptSig);
+        sigops += input.scriptSig.GetSigOpCount(/*fAccurate=*/true);
+        sigops += prev_txo.scriptPubKey.GetSigOpCount(input.scriptSig);

         if (sigops > MAX_TX_LEGACY_SIGOPS) {
             return false;
darosior commented at 3:41 pm on July 1, 2025: Sure, makes sense.
Sjors approved
Sjors commented at 12:57 pm on July 1, 2025: member
ACK ceeb1ebb2f463919a44b0bf3ba24f9b7f814bb5e
DrahtBot requested review from mabu44 on Jul 1, 2025
DrahtBot requested review from l0rinc on Jul 1, 2025
DrahtBot requested review from maflcko on Jul 1, 2025
DrahtBot requested review from instagibbs on Jul 1, 2025
glozow commented at 1:14 pm on July 1, 2025: member
Strong concept ACK. However, it seems that "this might be a soft fork in the future" is only a secondary motivation (and a bit of a presumptuous one). The primary motivation should be the same as what's in BIP 54: to avoid long validation times.
Generally, a PR that restricts default policy should have a corresponding post on the mailing list. I’m not that concerned that there is someone who needs to relay pathological transactions, but it would feel weird for Core to merge this without any attempt to notify or solicit discussion in the public.
l0rinc approved
l0rinc commented at 3:28 pm on July 1, 2025: contributor
Have a few remaining concerns, but removing my nack. Thanks for the explanations. Concept ACK
Sjors commented at 3:37 pm on July 1, 2025: member
a PR that restricts default policy should have a corresponding post on the mailing list
Indeed.
The primary motivation should be the same as what’s in BIP 54: to avoid long validation times.
That’s not necessarily true though. Transactions that exceed this limit, but are still below the maximum standardness size, are very expensive to make. It’s not obvious that such an expense is worth it just to slow down a few nodes on the network once. But once the proposed soft fork is active, there’s zero cost to the attacker.
Imo the argument for this change is that it makes the proposed BIP54 soft fork safer for miners, while not harming anyone and while not introducing an economic incentive to bypass the mempool.
It also doesn’t preclude alternative consensus proposals that fix slow block validation, though afaik there haven’t been any.
darosior commented at 3:42 pm on July 1, 2025: member
I agree this needs a ML post. Will take care of this tomorrow.
However the motivation for this PR is not to reduce validation times, as it only marginally reduces the worst case for standard transactions. I don't think it's fair to qualify the motivation given in the OP as "presumptuous", and it's certainly not a secondary motivation: if a soft fork is activated and such transactions are still standard it would be trivial to DoS any unupgraded miner into producing a block invalid according to the new rules. This seems to me to be a pretty strong motivation, which comes at virtually no cost since this only makes non-standard some far-fetched pathological transactions (which is the reason why this limit was picked in the first place).
darosior force-pushed on Jul 1, 2025
Sjors commented at 3:51 pm on July 1, 2025: member
re-ACK 65b7deda12338b17f342c1d37f44e7c6da850771
DrahtBot requested review from l0rinc on Jul 1, 2025
DrahtBot requested review from glozow on Jul 1, 2025
in src/policy/policy.cpp:182 in 65b7deda12 outdated
177+
178+    unsigned int sigops{0};
179+    for (const auto& txin: tx.vin) {
180+        const auto& prev_txo{inputs.AccessCoin(txin.prevout).out};
181+
182+        // Unlike the existing block wide sigop limit, BIP54 counts sigops when they are actually executed.
luke-jr commented at 1:13 pm on July 2, 2025:That’s not what’s in the current BIP, nor what is implemented here…
instagibbs commented at 1:27 pm on July 2, 2025:when the script is executed*?
darosior commented at 1:41 pm on July 2, 2025:Yeah it’s just a typo when/where:
// Unlike the existing block wide sigop limit, BIP54 counts sigops where they are actually executed.
darosior commented at 1:43 pm on July 2, 2025:Done, thanks for spotting the typo.
instagibbs commented at 1:43 pm on July 2, 2025: I think that's still ambiguous as it does read like a taproot sigops budget sense where the opcode itself must be executed to count
luke-jr changes_requested
darosior force-pushed on Jul 2, 2025
in src/policy/policy.cpp:177 in 29b9f7f207 outdated
172+
173+    unsigned int sigops{0};
174+    for (const auto& txin: tx.vin) {
175+        const auto& prev_txo{inputs.AccessCoin(txin.prevout).out};
176+
177+        // Unlike the existing block wide sigop limit, BIP54 counts sigops where they are actually executed.
luke-jr commented at 1:44 pm on July 2, 2025:This is still wrong. If you bury a sigop in (false) OP_IF, they are still counted even though not actually executed
darosior commented at 3:03 pm on July 2, 2025:I’m aware. “Where” refers to the scripts. This is in opposition to the current sigops counting which, as you know, counts sigops in unexecuted scripts.
I think the wording is clear as such, but if you have a better one please suggest it.
luke-jr commented at 3:04 pm on July 2, 2025:
// Unlike the existing block wide sigop limit which counts sigops present in the block itself (including the scriptPubKey which is not executed until spending later), BIP54 counts sigops in the block where they are potentially executed (only).
darosior commented at 3:09 pm on July 2, 2025: Taken, thanks.
darosior force-pushed on Jul 2, 2025
DrahtBot added the label CI failed on Jul 2, 2025
darosior force-pushed on Jul 2, 2025
DrahtBot removed the label CI failed on Jul 2, 2025
darosior force-pushed on Jul 3, 2025
in src/policy/policy.cpp:175 in b60f598b13 outdated
170+{
171+    Assert(!tx.IsCoinBase());
172+
173+    unsigned int sigops{0};
174+    for (const auto& txin: tx.vin) {
175+        const auto& prev_txo{inputs.AccessCoin(txin.prevout).out};
l0rinc commented at 11:33 am on July 3, 2025: Can we move this closer to its usage - to make it obvious that the first check doesn't need this yet.
darosior commented at 6:57 pm on July 9, 2025: I prefer it this way.
in src/policy/policy.cpp:181 in b60f598b13 outdated
176+
177+        // Unlike the existing block wide sigop limit which counts sigops present in the block
178+        // itself (including the scriptPubKey which is not executed until spending later), BIP54
179+        // counts sigops in the block where they are potentially executed (only).
180+        // This means sigops in the spent scriptpubkey count toward the limit.
181+        // `fAccurate` means correctly accounting sigops for CHECKMULTISIGs with 16 pubkeys or less. This
l0rinc commented at 11:36 am on July 3, 2025:
// `fAccurate` means correctly accounting sigops for CHECKMULTISIG(VERIFY)s with 16 pubkeys or fewer. This
darosior commented at 6:57 pm on July 9, 2025: Done.
in src/policy/policy.cpp:180 in b60f598b13 outdated
175+        const auto& prev_txo{inputs.AccessCoin(txin.prevout).out};
176+
177+        // Unlike the existing block wide sigop limit which counts sigops present in the block
178+        // itself (including the scriptPubKey which is not executed until spending later), BIP54
179+        // counts sigops in the block where they are potentially executed (only).
180+        // This means sigops in the spent scriptpubkey count toward the limit.
l0rinc commented at 11:37 am on July 3, 2025: Nit: this seems a bit repetitive now that the above lines were edited - if we keep it, consider unifying how we write `scriptPubKey`:
// This means sigops in the spent scriptPubKey count toward the limit.
darosior commented at 6:57 pm on July 9, 2025: Done.
policy: make pathological transactions packed with legacy sigops non-standard.
The Consensus Cleanup soft fork proposal includes a limit on the number of legacy signature operations potentially executed when validating a transaction. If this change is to be implemented here and activated by Bitcoin users in the future, we should prevent the ability for someone to broadcast a transaction through the p2p network that is not valid according to the new rules. This is because if it was possible it would be a trivial DoS to potentially unupgraded miners after the soft fork activates. We do not know for sure whether users will activate the Consensus Cleanup. However if they do such transactions must have been made non-standard long in advance, due to the time it takes for most nodes on the network to upgrade. In addition this limit may only be run into by pathological transactions which pad the Script with sigops but do not use actual signatures when spending, as otherwise they would run into the standard transaction size limit.
qa: unit test standardness of inputs packed with legacy sigops
Check bounds and different output types.
qa: functional test a transaction running into the legacy sigop limit
It's useful to have an end-to-end test in addition to the unit test to sanity check the RPC error as well as making sure the transaction is otherwise fully standard.
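For reference, pulling the review suggestions above together, the policy helper would end up looking roughly like this. This is a sketch only (assuming the `MAX_TX_LEGACY_SIGOPS` constant and the usual policy/coins declarations), not necessarily the exact code that was merged:
static bool CheckSigopsBIP54(const CTransaction& tx, const CCoinsViewCache& inputs)
{
    Assert(!tx.IsCoinBase());

    unsigned int sigops{0};
    for (const CTxIn& txin : tx.vin) {
        const auto& prev_txo{inputs.AccessCoin(txin.prevout).out};

        // Unlike the existing block wide sigop limit which counts sigops present in the block
        // itself (including the scriptPubKey which is not executed until spending later), BIP54
        // counts sigops in the block where they are potentially executed (only).
        // `fAccurate` means correctly accounting sigops for CHECKMULTISIG(VERIFY)s with 16 pubkeys
        // or fewer. This method of accounting was introduced by BIP16, and BIP54 reuses it.
        sigops += txin.scriptSig.GetSigOpCount(/*fAccurate=*/true);
        sigops += prev_txo.scriptPubKey.GetSigOpCount(txin.scriptSig);

        // Bail out as soon as the per-transaction budget is exceeded.
        if (sigops > MAX_TX_LEGACY_SIGOPS) return false;
    }
    return true;
}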
darosior force-pushed on Jul 9, 2025
ariard commented at 4:31 am on July 12, 2025: none
I agree this needs a ML post. Will take care of this tomorrow.
Answered back to the latest post of the PR author on the ML.
I do think it's introducing a new DoS vector for multi-party construction, as now one of your prospective counterparties in the flow can maliciously contribute an input hitting this policy limit. The lightning flow should be safe as it's sanitizing out non-segwit inputs (here for `tx_add_input`), though I have absolutely no idea for any other multi-party use-cases like coinjoin or swaps.
src/policy/policy.cpp
AreInputStandard()
’s “prevents mempool acceptance of spends of future segwit versions we don’t know how to validate”…
github-metadata-mirror
This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2025-07-12 12:13 UTC
This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me