```diff
@@ -1044,7 +1044,7 @@ def big_spend_inputs(ctx):
     # Test that an input stack size of 1000 elements is permitted, but 1001 isn't.
     add_spender(spenders, "tapscript/1000inputs", leaf="t23", **common, inputs=[getter("sign")] + [b'' for _ in range(999)], failure={"leaf": "t24", "inputs": [getter("sign")] + [b'' for _ in range(1000)]}, **ERR_STACK_SIZE)
     # Test that pushing a MAX_SCRIPT_ELEMENT_SIZE byte stack element is valid, but one longer is not.
-    add_spender(spenders, "tapscript/pushmaxlimit", leaf="t25", **common, **SINGLE_SIG, failure={"leaf": "t26"}, **ERR_PUSH_LIMIT)
+    add_spender(spenders, "tapscript/pushmaxlimit", standard=False, leaf="t25", **common, **SINGLE_SIG, failure={"leaf": "t26"}, **ERR_PUSH_LIMIT)
```
I think this change is what's causing CI to fail: nothing in this PR changes what is standard by default, so when this test case turns out not to be rejected from the mempool, the framework reports it as an error. Changing this to `standard=False` would only be correct if you were also adding `-limitdummyscriptdatasize=80` or similar to `extra_args`.
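To illustrate the mismatch described above, here is a minimal standalone sketch (all names are invented for illustration, not the actual test framework API): the test declares via `standard=` whether the spend should enter the mempool, and the framework flags an error whenever that declaration disagrees with what the node's (unchanged) policy actually does.

```python
# Hypothetical model of the framework's standardness check.
# `expected_standard` is what the test declares via the `standard=` flag;
# `node_accepts` is what the node's actual policy does with the spend.
def expectation_matches(expected_standard, node_accepts):
    """Return True when the declared expectation matches node behaviour."""
    return expected_standard == node_accepts

# This PR does not change default policy, so the push-limit spend is
# still accepted into the mempool:
NODE_ACCEPTS = True

# With the original standard=True the expectation holds...
print(expectation_matches(True, NODE_ACCEPTS))   # True

# ...but declaring standard=False without also tightening node policy
# (e.g. via extra_args) makes the framework report an error:
print(expectation_matches(False, NODE_ACCEPTS))  # False
```

This is why flipping the flag alone breaks CI: the declaration and the node's policy have to change together.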
I don't think you should be changing this test case at all; rather, you should be adding a test to mempool_datacarrier.py.
I have undone all changes in feature_taproot.py.
Implementing the tests is beyond what I can do, so it's up for grabs.