TestBlockValidity() is also run on the block template at the end of CreateNewBlock(), so any invalid block would be caught there.
Given the low probability of a reorg reverting the segwit soft fork, there is no need to check whether segwit is active here. Also, TestBlockValidity is run on the block template after it has been created.
196- self.log.info("Verify witness txs are skipped for mining before the fork")
197- self.skip_mine(self.nodes[2], wit_ids[NODE_2][P2WPKH][0], True) # block 424
198- self.skip_mine(self.nodes[2], wit_ids[NODE_2][P2WSH][0], True) # block 425
199- self.skip_mine(self.nodes[2], p2sh_ids[NODE_2][P2WPKH][0], True) # block 426
200- self.skip_mine(self.nodes[2], p2sh_ids[NODE_2][P2WSH][0], True) # block 427
201+ self.generate(self.nodes[0], 264) # block 427
I guess this way we have test coverage for “unexpected witness”
That’s a very indirect way of testing a failure condition in ContextualCheckBlock(). If we want a test for that condition, then we should have a functional test where a non-witness block (which is possible to create even after segwit activation) is stuffed with witness data. That’s a condition that could happen on mainnet, and so doesn’t rely on impossible-to-hit-in-production mining code. Even better would be a unit test for ContextualCheckBlock().
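The suggested check could be sketched as a toy unit test (illustrative Python, not Bitcoin Core's C++ or its test framework; `contextual_check_block` is a hypothetical stand-in for ContextualCheckBlock()):

```python
# Toy version of the unexpected-witness rule: a block that does not commit
# to witness data must not contain any. Stuffing witness data into an
# otherwise valid non-witness block should make this check fail.

def contextual_check_block(block):
    has_witness = any("witness" in tx for tx in block["txs"])
    if has_witness and "witness_commitment" not in block:
        return False, "unexpected-witness"
    return True, ""

# A non-witness block stuffed with witness data is rejected:
stuffed = {"txs": [{"txid": "aa", "witness": ["deadbeef"]}]}
ok, reason = contextual_check_block(stuffed)
```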
I think it’s inconsistent to remove production code that is “impossible to hit” and then add a test that hits that exact condition.
I see the point why this practically doesn’t need test coverage. I also think the changes here are safe to make. However, I think we should still think about the case where our assumptions fail or our expectations aren’t met. It is minimally useful for the test to remind reviewers what will happen in the unexpected case.
For example, I disagree about:
failing to mine a block is strictly worse behaviour than mining a block without the segwit transaction.
If this code path were triggerable by an attacker, it would be a strictly better outcome to fail to mine a block than to mine one. First, a block mined far behind the tip will never be accepted into the main chain and thus never financially rewarded, so there is no benefit to mining it. Second, returning an error early may tell the attacked miner sooner that they have been eclipsed. Third, mining a block (and submitting it, then later reorging it if it was accepted) executes more code during the eclipse attack than not mining a block. If there are bugs in the executed code, it gives more tools to the attacker for no benefit. So I think it also serves as a belt-and-suspenders measure.
Another example (unrelated to this pull request):
If an attacker could make the blocksdir unusable (until a manual -reindex) by executing a reorg large enough that we assume it won’t happen, I still think we should fix the bug so that the blocksdir remains usable after any state the software went through.
212 self.log.info("Verify unsigned p2sh witness txs without a redeem script are invalid")
213 self.fail_accept(self.nodes[2], "mandatory-script-verify-flag-failed (Operation not valid with the current stack size)", p2sh_ids[NODE_2][P2WPKH][1], sign=False)
214 self.fail_accept(self.nodes[2], "mandatory-script-verify-flag-failed (Operation not valid with the current stack size)", p2sh_ids[NODE_2][P2WSH][1], sign=False)
215
216- self.generate(self.nodes[2], 4) # blocks 428-431
217+ self.generate(self.nodes[0], 4) # blocks 428-431
No. This is only used to increase the block height, so that the activation height is hit. See also:
    "-testactivationheight=segwit@432",
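As a sketch of the assumed semantics of that option, activation here is purely a function of chain height, so which node mines the height-increasing blocks is irrelevant:

```python
# Assumed behaviour of -testactivationheight=segwit@432 in this test:
# segwit rules apply once the chain reaches height 432, so mining blocks
# from any node works equally well to cross the activation height.
SEGWIT_ACTIVATION_HEIGHT = 432

def segwit_active(height):
    return height >= SEGWIT_ACTIVATION_HEIGHT
```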
LGTM.
I think there is a small theoretical concern that this will make Bitcoin Core internally slightly more inconsistent. See the comment on why I initially held back on proposing this pull.
It is safe to make the changes here, because we assume that the nMinimumChainWork bundled in any release that will include this patch is sufficiently large to make reorg attacks during IBD while eclipsed prohibitively expensive. The same assumption was made in commit https://github.com/bitcoin/bitcoin/commit/0a8b7b4b33c9d78574627fc606267e2d8955cd1c and in https://github.com/bitcoin/bitcoin/pull/23536
I’m sometimes seeing
test_framework.authproxy.JSONRPCException: CreateNewBlock: TestBlockValidity failed: unexpected-witness, ContextualCheckBlock : unexpected witness data found (-1)
from feature_segwit.py, generated by
  File "/home/aj/P/bitcoin/bitcoin/chk/test/functional/./feature_segwit.py", line 210, in run_test
    self.generate(self.nodes[0], 4) # blocks 428-431
that looks like it might be due to this PR? (Took about 80 runs to reproduce)
sync_mempools call, or maybe even just with the noban permission flag for faster relay.
The problem doesn’t appear to get hit prior to this PR (after ~250 runs). Adding self.sync_mempools() prior to the generate for block 428 appears to make it fail reliably with this PR, while the same change prior to this PR doesn’t seem to cause failures. Adding -whitelist=127.0.0.1 makes the failure more frequent but not quite reliable.
Seems to me that random test suite failures make this worth reverting…
Probably happens because the witness txs from node 2 are sent to node 0?
That’s my guess as well. Can probably fix by disconnecting node0/node2 before generate, and then reconnecting after 431 is mined. But definitely feel free to revert, sorry for missing this!
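The suspected race and the proposed disconnect/reconnect fix can be sketched with a toy two-node model (plain Python; the relay and template logic are deliberately simplified stand-ins, not test framework APIs):

```python
# Toy model: while node0 and node2 are connected, node2's witness
# transactions relay into node0's mempool, so a pre-activation block
# template built by node0 fails with unexpected-witness. Disconnecting
# before the witness tx is broadcast avoids the race.

class Node:
    def __init__(self):
        self.mempool = []
        self.peers = []

    def connect(self, other):
        self.peers.append(other)
        other.peers.append(self)

    def disconnect(self, other):
        self.peers.remove(other)
        other.peers.remove(self)

    def send_tx(self, tx):
        # Accept locally and relay to all connected peers.
        self.mempool.append(tx)
        for peer in self.peers:
            if tx not in peer.mempool:
                peer.mempool.append(tx)

def create_template(node, segwit_active):
    # Pre-activation templates must not carry witness transactions.
    if not segwit_active and any("witness" in tx for tx in node.mempool):
        raise RuntimeError("unexpected-witness")
    return list(node.mempool)

# Connected: the witness tx reaches node0, so its template fails.
node0, node2 = Node(), Node()
node0.connect(node2)
node2.send_tx({"txid": "w1", "witness": ["00"]})

# Disconnected before broadcast: node0's mempool stays witness-free.
a, b = Node(), Node()
a.connect(b)
a.disconnect(b)
b.send_tx({"txid": "w2", "witness": ["00"]})
```

In the connected case `create_template(node0, segwit_active=False)` raises, matching the intermittent failure; in the disconnected case the template succeeds, matching the proposed fix.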
I think there are two options:
- Remove the test (https://github.com/bitcoin/bitcoin/pull/24421#discussion_r812615236)
- disconnect/reconnect (https://github.com/bitcoin/bitcoin/pull/24421#issuecomment-1067728013)
My vote would be for removing the invalid test :)