Parallelize CheckInputs() in AcceptToMemoryPool() #15169

pull sdaftuar wants to merge 4 commits into bitcoin:master from sdaftuar:2018-12-parallel-mempool-scriptchecks changing 18 files +129 −123
  1. sdaftuar commented at 3:01 am on January 15, 2019: member

    This PR takes advantage of our script check threads (used for block validation) in order to speed up transaction validation when adding to the mempool. (A sketch of the reused dispatch pattern appears at the end of this comment.)

    The motivation is to minimize the amount of time we might spend validating a single transaction. I think there are several approaches we could take to mitigate this concern, but this approach seemed straightforward to implement, and would be easy to revert if we later opted for a different solution (such as parallelizing our message handler or dropping transactions into some sort of queue for background processing).

    This PR does introduce some behavior changes:

    • we would no longer ban peers for transactions with invalid scripts (however, the status quo behavior is spotty and inconsistent in how it bans anyway, since the set of mandatory consensus flags is years old and we stop at the first script execution error)
    • we could relay transactions from a whitelisted peer that would cause us to be banned by an older peer (but -whitelistforcerelay now defaults to off, mitigating this concern)
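
    For context, block validation already dispatches per-input script checks to a pool of worker threads via CCheckQueueControl; this PR reuses that machinery for mempool acceptance. A minimal sketch of the dispatch pattern, using the scriptcheckqueue/CScriptCheck/nScriptCheckThreads names from validation.cpp of this era (surrounding variables like tx, inputs, flags, and txdata are as in the wrapper; error handling elided):

        // Sketch of the worker-queue dispatch pattern that ConnectBlock()
        // already uses and that this PR reuses for mempool acceptance.
        CCheckQueueControl<CScriptCheck> control(nScriptCheckThreads ? &scriptcheckqueue : nullptr);

        std::vector<CScriptCheck> vChecks;
        vChecks.reserve(tx.vin.size());
        for (unsigned int i = 0; i < tx.vin.size(); ++i) {
            const Coin& coin = inputs.AccessCoin(tx.vin[i].prevout);
            // One job per input; worker threads run the script interpreter
            // concurrently once the batch is added.
            vChecks.emplace_back(coin.out, tx, i, flags, cacheSigStore, &txdata);
        }
        control.Add(vChecks);

        // Wait() blocks until every check completes; false means some input's
        // script failed (with this PR: reject the tx, don't ban the peer).
        bool scripts_ok = control.Wait();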
  2. fanquake added the label Validation on Jan 15, 2019
  3. sdaftuar force-pushed on Jan 15, 2019
  4. DrahtBot commented at 7:00 am on January 15, 2019: member

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #16486 ([consensus] skip bip30 checks when assumevalid is set for the block by pstratem)
    • #16400 ([refactor] Rewrite AcceptToMemoryPoolWorker() using smaller parts by sdaftuar)
    • #15713 (refactor: Replace chain relayTransactions/submitMemoryPool by higher method by ariard)
    • #15192 (Add missing cs_main locks in ThreadImport(…)/Shutdown(…)/gettxoutsetinfo(…)/InitScriptExecutionCache(). Add missing locking annotations. by practicalswift)
    • #13868 (Remove unused fScriptChecks parameter from CheckInputs by Empact)
    • #13204 (Faster sigcache nonce by JeremyRubin)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  5. gmaxwell commented at 8:34 pm on January 15, 2019: contributor

    we could relay transactions from a whitelisted peer that would cause us to be banned by an older peer

    This seems bad. The original motivation for making whitelist relay is long long long since dead (*). Can we just eliminate this behaviour, as it makes whitelisting less useful and isn’t useful itself?

    If there is still any need for forced-relay transactions, it could be accomplished by introducing a new P2P message that only whitelisted peers are allowed to use, which carries with it a TX whose relay is forced.

    (*) my recollection is that the behaviour was originally added because people using armory would start armory and their bitcoin node at the same time, and the armory wallet would blast its pending transactions at bitcoind; these would end up in the node’s mempool but not be relayed on the network, because the node had no actual peers up yet. … then later attempts by armory to rebroadcast were ineffectual because the txn were already in the node’s mempool. However, because it causes all your peers to be flooded with txn relayed by your whitelisted neighbours, it makes whitelisting useless outside of that armory use case, unless you also set the second option to turn forced relay off.

  6. gmaxwell commented at 8:35 pm on January 15, 2019: contributor
    On the overall change, sounds interesting but without benchmarks I dunno how exciting it is. I guess I’m kinda surprised that there is a speedup at all considering that IBD doesn’t seem to scale past 15/16 cores… which I’d assumed was due to synchronization overheads in the script validation thread dispatch.
  7. sdaftuar commented at 10:13 pm on January 15, 2019: member

    I would love to get rid of the whitelist relay behavior. Any opinions about whether I should open a new PR for that, or just do it here?

    My initial benchmark is on a very large transaction that has 640 P2PKH inputs, 2 outputs, and is policy-valid. I compared this PR with master, and I tried this on two computers, a Power9 workstation (which doesn’t have the SHA optimizations) and my laptop (a recent macbook pro).

    With master (verification threads are unused in mempool acceptance):

    Computer    | Verification threads | Avg scriptcheck time | Avg total ATMP time
    ------------|----------------------|----------------------|--------------------
    power9      | 16                   | 283.0ms              | 285.6ms
    macbook pro | 8                    | 176.7ms              | 178.5ms

    The “scriptcheck time” is a timing I added from just before the first call to CheckInputs(), to just after the second call. (I compare with the total ATMP time to sanity check that this is a meaningful optimization.)

    With this PR:

    Computer    | Verification threads | Avg scriptcheck time | Avg total ATMP time
    ------------|----------------------|----------------------|--------------------
    power9      | 16                   | 35.7ms               | 38.2ms
    macbook pro | 8                    | 40.7ms               | 42.6ms

    I’m actually somewhat more interested in benchmarking transactions that use maximal CPU resources but don’t get into the mempool, since such transactions are costless for someone to generate (and because they’re slower for us to process, at least in this PR as it stands) – I will try to post those numbers later. Also, if you have any suggestions for transactions that might be more costly to validate than what I’ve done so far, please let me know.
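
    (The “scriptcheck time” above is ad-hoc instrumentation rather than anything in the tree; a minimal sketch of how such a measurement might look, with a hypothetical TimeMillis helper built on std::chrono:)

        #include <chrono>

        // Hypothetical timing helper: run a callable and return elapsed
        // wall-clock milliseconds. Wrapped around the span from just before
        // the first CheckInputs() call to just after the second, this gives
        // the "scriptcheck time" reported above.
        template <typename F>
        double TimeMillis(F&& f)
        {
            const auto t0 = std::chrono::steady_clock::now();
            f();
            const auto t1 = std::chrono::steady_clock::now();
            return std::chrono::duration<double, std::milli>(t1 - t0).count();
        }

        // Usage sketch (inside AcceptToMemoryPoolWorker()):
        //   double scriptcheck_ms = TimeMillis([&] { /* both CheckInputs() calls */ });
        //   LogPrintf("scriptcheck: %.1f ms\n", scriptcheck_ms);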

  8. MarcoFalke commented at 10:29 pm on January 15, 2019: member
    It would be nice to have the benchmark in master before the changes are made here, so that it is easier to see how much each of the two commits changes the performance.
  9. in src/validation.cpp:1689 in 8cd9e2f0a0 outdated
    1689+        }
    1690+    }
    1691+    bool parallel_verify = false;
    1692+    if (legacy_inputs >= 15 && nScriptCheckThreads) {
    1693+        parallel_verify = true;
    1694+    }
    


    JeremyRubin commented at 10:49 pm on January 15, 2019:

    Perhaps marginally less clear, but as follows: we skip the legacy-input count entirely if we don’t have parallel threads, and we abort the scan as soon as the legacy-count condition is triggered.

    bool parallel_verify = nScriptCheckThreads > 0;
    size_t legacy_inputs_threshold = parallel_verify ? 15 : 0;
    for (size_t i = 0; i < tx.vin.size() && legacy_inputs_threshold; i++) {
        legacy_inputs_threshold -= tx.vin[i].scriptWitness.IsNull();
    }
    parallel_verify &= legacy_inputs_threshold == 0;
    

    sdaftuar commented at 10:19 pm on May 13, 2019:
    I think you’re right that it’s not worth the complexity of gating the logic on the number of legacy inputs. I did some microbenchmarking on a couple of my computers to see how long it takes to validate small transactions at various thresholds, and the effect seems very small.
  10. JeremyRubin commented at 11:16 pm on January 15, 2019: contributor

    concept ack.

    I think I’d rather see the parallelization determination abstracted into the CheckQueue itself, although it’s difficult to envision how that could be done neatly without incurring the overhead of constructing the check jobs. Maybe what would be best is to make the overhead of using the queue sufficiently low that we don’t need to worry about only using it for a certain number of inputs. I.e., if the queue is someday reworked so that this overhead goes away, it would be weird for code gating the use of the queue on job parallelization to live outside the queue.

    That said, can you provide a benchmark detailing how bad it is to just always use the queue as-is?

  11. gmaxwell commented at 1:38 am on January 16, 2019: contributor
    @sdaftuar on forced relay, I think a PR that defaulted it to off could just be opened separately and make it in faster than this work… and this work could continue assuming that issue is resolved elsewhere.
  12. DrahtBot added the label Needs rebase on Jan 24, 2019
  13. sdaftuar force-pushed on May 7, 2019
  14. DrahtBot removed the label Needs rebase on May 7, 2019
  15. sdaftuar renamed this:
    WIP: Parallelize CheckInputs() in AcceptToMemoryPool()
    Parallelize CheckInputs() in AcceptToMemoryPool()
    on May 7, 2019
  16. sdaftuar force-pushed on May 14, 2019
  17. sdaftuar commented at 1:14 pm on May 14, 2019: member

    Updated to always use parallel validation for mempool transactions if scriptcheck threads are available, and also updated the OP to describe the current behavior.

    One remaining open question in my mind is around the new behavior where a peer sending a transaction that is invalid according to consensus rules would no longer be disconnected. The motivation for this is to mitigate adversarial behavior: in order to determine whether a transaction is invalid according to consensus rules (rather than policy), we end up validating a transaction’s scripts twice, so an adversary attempting a CPU DoS could construct transactions that only fail policy checks but otherwise tie up our CPU for twice as long.

    However, in a non-adversarial case, where our peer is merely broken (or intended to operate on a different network) but non-malicious, this means that we will stay connected to the peer longer than we should. This is also not ideal. I have two ideas for addressing this, but I welcome other suggestions:

    (1) probabilistically evaluate failing transactions with consensus flags, and use the result to decide whether to disconnect a peer – say we had a 1% chance of validating a failed transaction a second time, so that if a peer is giving us garbage we can figure it out relatively quickly while still limiting CPU usage for a bad transaction in the general case (a sketch follows at the end of this comment).

    (2) @TheBlueMatt suggested I refactor the checkqueue logic to pass back the reason for a script failure (say, the first script that fails when doing parallel validation), and then we use that to determine whether to disconnect a peer based on a single validation of scripts. Assuming that there are no conceptual issues with this, then this should be optimal behavior for us. This also seems a little invasive and IMO the gain here is not obviously worth the effort, but Matt suggested we might want this the next time a consensus change is proposed anyway.

    Any thoughts on whether the behavior change proposed here is ok, or if I should pursue an alternate approach, such as one of the two ideas above?
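
    A minimal sketch of idea (1), assuming the CheckInputs() signature used elsewhere in this PR, Bitcoin Core’s GetRand() helper, and illustrative choices of consensus flags and sampling rate:

        // Idea (1): after a failure against policy flags, re-validate with
        // consensus-only flags ~1% of the time; a consensus failure then
        // justifies disconnecting the peer. Expected CPU cost for an attacker
        // feeding us policy-invalid transactions stays near 1x.
        if (!CheckInputs(tx, state, view, true, scriptVerifyFlags, true, false, txdata)) {
            if (GetRand(100) == 0) { // illustrative 1% sampling rate
                CValidationState consensus_state;
                const unsigned int consensus_flags = SCRIPT_VERIFY_P2SH | SCRIPT_VERIFY_WITNESS; // illustrative
                if (!CheckInputs(tx, consensus_state, view, true, consensus_flags, true, false, txdata)) {
                    // Invalid under consensus rules too: safe to treat the
                    // relaying peer as misbehaving/broken and disconnect.
                    return state.Invalid(ValidationInvalidReason::CONSENSUS, false, REJECT_INVALID,
                                         "mandatory-script-verify-flag-failed");
                }
            }
            return false; // policy-only failure: reject the tx, keep the peer
        }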

  18. JeremyRubin commented at 4:27 pm on May 14, 2019: contributor

    The check queue approach is IMO a bad idea because we may fail for different reasons on different executions.

    If you want a reason, I would re-run validation in single core and reproduce the first failure.

    Otherwise, maybe you can produce all failure tuples (index, failure) and then take the first one in sorted order?

  19. JeremyRubin commented at 4:32 pm on May 14, 2019: contributor
    You can even track the min txn that any worker has attempted and then just restart from there – that might save you some work against blocks which have their last txn fail.
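
    A sketch of the “collect all failure tuples, take the first in sorted order” idea, with stand-in types (the real code would use ScriptError from script/script_error.h):

        #include <algorithm>
        #include <cstddef>
        #include <utility>
        #include <vector>

        enum class ScriptErr { OK, SIG_DER, CLTV_NEGATIVE }; // stand-in for ScriptError

        using Failure = std::pair<std::size_t, ScriptErr>; // (input index, error)

        // Workers append failures in nondeterministic order; taking the
        // minimum input index afterwards restores a deterministic "first"
        // failure. Precondition: failures is non-empty.
        Failure FirstFailure(const std::vector<Failure>& failures)
        {
            return *std::min_element(failures.begin(), failures.end(),
                                     [](const Failure& a, const Failure& b) { return a.first < b.first; });
        }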
  20. sdaftuar commented at 1:28 pm on May 15, 2019: member
    @JeremyRubin For a transaction containing multiple failing script inputs, I don’t think it should matter whether the failure we associate with it is deterministic, since trying to find the reason for validation failure is only something we’re trying to do for a non-adversarial case anyway. Do you have a scenario in mind where it would be an issue?
  21. sdaftuar force-pushed on Jun 6, 2019
  22. sdaftuar commented at 8:02 am on June 7, 2019: member
    I added a commit (to be squashed) which uses single-threaded validation for transactions submitted over RPC, since we’re not worried about DoS there, and users get more detailed error information for invalid transactions when scripts are evaluated serially. @jnewbery – thanks for the suggestion.
  23. in src/validation.cpp:790 in dd236e45b0 outdated
    894@@ -895,6 +895,20 @@ static bool AcceptToMemoryPoolWorker(const CChainParams& chainparams, CTxMemPool
    895         // This is done last to help prevent CPU exhaustion denial-of-service attacks.
    896         PrecomputedTransactionData txdata(tx);
    897         if (!CheckInputs(tx, state, view, true, scriptVerifyFlags, true, false, txdata)) {
    898+            // Assume that this is due to a policy violation, rather than
    


    TheBlueMatt commented at 7:01 pm on July 10, 2019:
    Hmm, one thing this potentially protects us from is inbound peers on fork chains where the chain doesn’t (yet) have any blocks to relay us (blocks that would otherwise get them banned for invalid headers). We may want to broaden the ConsiderEviction logic to apply some similar (but relaxed - we don’t want to disconnect peers which are in IBD and making progress) rules to inbound peers. No need to do it in this PR, though - I think our inbound peer eviction logic could use some tuning independently, and there’s no huge risk even in the case of more fork chains, given the robustness of the outbound peer eviction logic.
  24. in src/validation.cpp:799 in dd236e45b0 outdated
    901+            // support old peers who may not be aware of recent consensus
    902+            // changes; returning NOT_STANDARD here is maximally permissive.
    903+            // Anyway re-checking the scripts with a different set of flags
    904+            // wastes CPU as well (particularly if an attacker is constructing
    905+            // transactions designed to waste our CPU).
    906+            state.Invalid(ValidationInvalidReason::TX_NOT_STANDARD, false, REJECT_NONSTANDARD, state.GetRejectReason(), state.GetDebugMessage());
    


    TheBlueMatt commented at 7:04 pm on July 10, 2019:
    nit: maybe include in the comment why we have to call this function anyway (that CheckInputs always returns CONSENSUS so we have to overwrite it).

    sdaftuar commented at 6:43 pm on July 22, 2019:
    Done
  25. TheBlueMatt approved
  26. TheBlueMatt commented at 7:19 pm on July 10, 2019: member
    utACK b689d4de467cf4031a2473afc08d2a0fe1b6c71b. Some comments but none all that important, really think this is a more important change than it gets credit for.
  27. sdaftuar commented at 3:40 pm on July 12, 2019: member
    Thanks for the review @TheBlueMatt. I’ll hold off on the comment nit unless I end up updating this PR for something else.
  28. in test/functional/feature_dersig.py:105 in b689d4de46 outdated
    100@@ -101,8 +101,8 @@ def run_test(self):
    101         # First we show that this tx is valid except for DERSIG by getting it
    102         # rejected from the mempool for exactly that reason.
    103         assert_equal(
    104-            [{'txid': spendtx.hash, 'allowed': False, 'reject-reason': '64: non-mandatory-script-verify-flag (Non-canonical DER signature)'}],
    105-            self.nodes[0].testmempoolaccept(rawtxs=[spendtx.serialize().hex()], maxfeerate=0)
    106+            [{'txid': spendtx.hash, 'allowed': False, 'reject-reason': '64: script-verify-flag-failed (Non-canonical DER signature)'}],
    107+            self.nodes[0].testmempoolaccept(rawtxs=[spendtx.serialize().hex()], allowhighfees=0)
    


    jnewbery commented at 9:43 pm on July 15, 2019:
    maxfeerate -> allowhighfees seems like a bad rebase. The new argument is maxfeerate, but allowhighfees is retained for backwards compatibility and may be removed in a future version.

    sdaftuar commented at 2:24 pm on July 16, 2019:
    Fixed
  29. in test/functional/feature_cltv.py:116 in b689d4de46 outdated
    111@@ -112,16 +112,16 @@ def run_test(self):
    112         # First we show that this tx is valid except for CLTV by getting it
    113         # rejected from the mempool for exactly that reason.
    114         assert_equal(
    115-            [{'txid': spendtx.hash, 'allowed': False, 'reject-reason': '64: non-mandatory-script-verify-flag (Negative locktime)'}],
    116-            self.nodes[0].testmempoolaccept(rawtxs=[spendtx.serialize().hex()], maxfeerate=0)
    117+            [{'txid': spendtx.hash, 'allowed': False, 'reject-reason': '64: script-verify-flag-failed (Negative locktime)'}],
    118+            self.nodes[0].testmempoolaccept(rawtxs=[spendtx.serialize().hex()], allowhighfees=0)
    


    jnewbery commented at 9:43 pm on July 15, 2019:
    maxfeerate -> allowhighfees seems like a bad rebase. The new argument is maxfeerate, but allowhighfees is retained for backwards compatibility and may be removed in a future version.

    sdaftuar commented at 2:24 pm on July 16, 2019:
    Fixed
  30. in test/functional/p2p_segwit.py:187 in b689d4de46 outdated
    183@@ -184,7 +184,7 @@ def set_test_params(self):
    184         self.setup_clean_chain = True
    185         self.num_nodes = 3
    186         # This test tests SegWit both pre and post-activation, so use the normal BIP9 activation.
    187-        self.extra_args = [["-whitelist=127.0.0.1", "-vbparams=segwit:0:999999999999"], ["-whitelist=127.0.0.1", "-acceptnonstdtxn=0", "-vbparams=segwit:0:999999999999"], ["-whitelist=127.0.0.1", "-vbparams=segwit:0:0"]]
    188+        self.extra_args = [["-whitelist=127.0.0.1", "-vbparams=segwit:0:999999999999", "-whitelistforcerelay=0"], ["-whitelist=127.0.0.1", "-acceptnonstdtxn=0", "-vbparams=segwit:0:999999999999", "-whitelistforcerelay=0"], ["-whitelist=127.0.0.1", "-vbparams=segwit:0:0", "-whitelistforcerelay=0"]]
    


    jnewbery commented at 9:46 pm on July 15, 2019:
    This change is not necessary now that #15193 has been merged.

    sdaftuar commented at 2:24 pm on July 16, 2019:
    Removed
  31. in test/functional/feature_segwit.py:160 in b689d4de46 outdated
    153@@ -154,8 +154,8 @@ def run_test(self):
    154         self.skip_mine(self.nodes[2], p2sh_ids[NODE_2][WIT_V1][0], True)  # block 427
    155 
    156         self.log.info("Verify unsigned p2sh witness txs without a redeem script are invalid")
    157-        self.fail_accept(self.nodes[2], "mandatory-script-verify-flag", p2sh_ids[NODE_2][WIT_V0][1], False)
    158-        self.fail_accept(self.nodes[2], "mandatory-script-verify-flag", p2sh_ids[NODE_2][WIT_V1][1], False)
    159+        self.fail_accept(self.nodes[2], "script-verify-flag", p2sh_ids[NODE_2][WIT_V0][1], False)
    


    jnewbery commented at 11:12 pm on July 15, 2019:
    The failure string can be “script-verify-flag-failed” here. Even better, with the specific error string “script-verify-flag-failed (Operation not valid with the current stack size)”

    sdaftuar commented at 2:25 pm on July 16, 2019:
    Leaving this one alone – someone interested in improving this test can take it on in a separate PR.
  32. jnewbery commented at 11:18 pm on July 15, 2019: member

    A few small comments on the test changes.

    A few comments on the git commits:

    • I recommend removing the word ‘Refactor’ from the commit log for Refactor CheckInputs() to only validate scripts once (since it’s not a refactor - it’s a change in behaviour)
    • Are you able to update the commit log for [refactor] Eliminate default argument to ATMP to explain why you’re making this change?
    • Can you squash the squashme commit?
  33. sdaftuar force-pushed on Jul 16, 2019
  34. sdaftuar commented at 2:26 pm on July 16, 2019: member

    Addressed all but one of @jnewbery’s comments, and included the comment change that @TheBlueMatt previously suggested.

    Prior version of this PR can be compared here: 15169.1

  35. in test/functional/p2p_segwit.py:187 in cb0c42cfda outdated
    183@@ -184,7 +184,7 @@ def set_test_params(self):
    184         self.setup_clean_chain = True
    185         self.num_nodes = 3
    186         # This test tests SegWit both pre and post-activation, so use the normal BIP9 activation.
    187-        self.extra_args = [["-whitelist=127.0.0.1", "-vbparams=segwit:0:999999999999"], ["-whitelist=127.0.0.1", "-acceptnonstdtxn=0", "-vbparams=segwit:0:999999999999"], ["-whitelist=127.0.0.1", "-vbparams=segwit:0:0"]]
    188+        self.extra_args = [["-whitelist=127.0.0.1", "-vbparams=segwit:0:999999999999"], ["-whitelist=127.0.0.1", "-acceptnonstdtxn=0", "-vbparams=segwit:0:999999999999" ], ["-whitelist=127.0.0.1", "-vbparams=segwit:0:0" ]]
    


    jnewbery commented at 4:06 pm on July 16, 2019:
    Unnecessary whitespace change
  36. in src/validation.cpp:562 in cb0c42cfda outdated
    558@@ -558,7 +559,7 @@ static bool CheckInputsFromMempoolAndCache(const CTransaction& tx, CValidationSt
    559         }
    560     }
    561 
    562-    return CheckInputs(tx, state, view, true, flags, cacheSigStore, true, txdata);
    563+    return RunCheckInputsMaybeParallel(tx, state, view, flags, cacheSigStore, true, txdata);
    


    jnewbery commented at 5:42 pm on July 16, 2019:

    Note for reviewers that there’s a change in behaviour here where previously state would be set to NOT_STANDARD and now it’s set to CONSENSUS. I think that’s correct behaviour, since this check is against consensus rules.

    Also note that this should never fail. If CheckInputsFromMempoolAndCache() fails script checking it’s because a policy change has been deployed which is a hard fork against consensus rules. That’s a bug.


    jnewbery commented at 5:58 pm on July 16, 2019:
    I believe this breaks script caching - CheckInputs() will only cache script success if running non-parallel.
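
    For context on the bug: when checks are handed to the parallel queue, CheckInputs() returns before the results are known, so its cache-insertion path never runs. A sketch of the shape of a fix in the wrapper, where ComputeScriptCacheKey() is a hypothetical stand-in for the key computation (witness hash plus flags) done inside CheckInputs():

        // After the parallel checks complete, record full-script success in
        // the cache, since CheckInputs() could not do so itself.
        control.Add(vChecks);
        if (!control.Wait()) {
            return state.Invalid(ValidationInvalidReason::CONSENSUS, false, REJECT_INVALID,
                                 "script-verify-flag-failed");
        }
        if (nScriptCheckThreads && !single_threaded_validation && cacheFullScriptStore) {
            scriptExecutionCache.insert(ComputeScriptCacheKey(tx, flags)); // hypothetical helper
        }
        return true;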
  37. in src/validation.cpp:1335 in cb0c42cfda outdated
    1460-                    // could be due to non-upgraded nodes which we may want to
    1461-                    // support, to avoid splitting the network (but this
    1462-                    // depends on the details of how net_processing handles
    1463-                    // such errors).
    1464-                    return state.Invalid(ValidationInvalidReason::CONSENSUS, false, REJECT_INVALID, strprintf("mandatory-script-verify-flag-failed (%s)", ScriptErrorString(check.GetScriptError())));
    1465+                    // NOTE: We return CONSENSUS as the reason here, even
    


    jnewbery commented at 5:59 pm on July 16, 2019:

    It seems slightly weird (and scary) to return that this was a consensus failure, when this is a dumb utility function which is just checking whichever rules it gets passed. Perhaps a future PR should remove the CValidationState argument and force the caller to set the validation failure reason:

    • The initial call to CheckInputs() in ATMP should always set the state to TX_NOT_STANDARD on failure
    • The subsequent call to CheckInputs() in ATMP should always set the state to TX_WITNESS_MUTATED on failure
    • ConnectBlock should update this anyway (eg by setting a BlockValidationState after #15921)
    • The call in CheckInputsFromMempoolAndCache() should set this to CONSENSUS (since it’s checking against consensus flags)
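
    A hypothetical shape for the first bullet above, with CheckInputs() reporting only the script error and the caller choosing the reason (this signature is illustrative, not the current one):

        // Hypothetical: CheckInputs() no longer takes a CValidationState; it
        // just reports pass/fail plus the script error, and each call site
        // maps that to the appropriate ValidationInvalidReason.
        ScriptError serror = SCRIPT_ERR_OK;
        if (!CheckInputs(tx, view, scriptVerifyFlags, txdata, &serror)) {
            // Initial policy-flags call in ATMP: always non-standard.
            return state.Invalid(ValidationInvalidReason::TX_NOT_STANDARD, false, REJECT_NONSTANDARD,
                                 strprintf("script-verify-flag-failed (%s)", ScriptErrorString(serror)));
        }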
  38. jnewbery commented at 6:00 pm on July 16, 2019: member

    A few comments inline. One more:

    • can you remove MANDATORY_SCRIPT_VERIFY_FLAGS and its comment from policy.h (and replace it in the definition of STANDARD_SCRIPT_VERIFY_FLAGS with SCRIPT_VERIFY_P2SH) now that it no longer has any meaning?
  39. sdaftuar commented at 6:59 pm on July 16, 2019: member
    @jnewbery Good catch on the caching bug. I’ve pushed up a quick fix for now, but probably I should do something better here when I have a chance to revisit.
  40. in src/rpc/rawtransaction.cpp:896 in cb0c42cfda outdated
    924@@ -925,7 +925,7 @@ static UniValue testmempoolaccept(const JSONRPCRequest& request)
    925     {
    926         LOCK(cs_main);
    927         test_accept_res = AcceptToMemoryPool(mempool, state, std::move(tx), &missing_inputs,
    928-            nullptr /* plTxnReplaced */, false /* bypass_limits */, max_raw_tx_fee, /* test_accept */ true);
    929+            nullptr /* plTxnReplaced */, false /* bypass_limits */, max_raw_tx_fee, /* test_accept */ true, /* single_threaded_validation */ true);
    


    fjahr commented at 4:02 pm on July 17, 2019:
    Style-Nit: Would be a bit nicer to have consistency in the ordering of argument and comment.

    MarcoFalke commented at 5:29 pm on July 17, 2019:
    The arguments might be wrapped into a struct before they are passed at some point in the future, so I guess this style inconsistency will be solved at that point
  41. in src/validation.cpp:1562 in 775b71d2c4 outdated
    1674@@ -1672,6 +1675,23 @@ void ThreadScriptCheck(int worker_num) {
    1675     scriptcheckqueue.Thread();
    1676 }
    1677 
    1678+bool RunCheckInputsMaybeParallel(const CTransaction& tx, CValidationState &state, const CCoinsViewCache &inputs, unsigned int flags, bool cacheSigStore, bool cacheFullScriptStore, PrecomputedTransactionData& txdata, bool single_threaded_validation) EXCLUSIVE_LOCKS_REQUIRED(cs_main)
    


    fjahr commented at 8:07 pm on July 17, 2019:
    Question on code structure: I would like to understand the reasoning for wrapping CheckInputs() in RunCheckInputsMaybeParallel(). Instead, I would probably have put that logic into CheckInputs() itself and had it delegate to a CheckInputsParallel() or so. Are there other factors at play that I am not seeing, like keeping the changes in CheckInputs() as minimal as possible?

    sdaftuar commented at 2:25 pm on July 22, 2019:
    CheckInputs() is also invoked directly from ConnectBlock(), which handles the parallel scriptcheck threads itself.
  42. in src/validation.cpp:1574 in 775b71d2c4 outdated
    1685+    }
    1686+    control.Add(vChecks);
    1687+    if (!control.Wait()) {
    1688+        return state.Invalid(ValidationInvalidReason::CONSENSUS, false, REJECT_INVALID, "script-verify-flag-failed");
    1689+    }
    1690+    if (nScriptCheckThreads && !single_threaded_validation && cacheFullScriptStore) {
    


    hugohn commented at 2:14 pm on July 18, 2019:
    nit: since (nScriptCheckThreads && !single_threaded_validation) is used several times throughout this function, maybe declare it at the beginning of the function? (e.g. bool use_multiple_threads = nScriptCheckThreads && !single_threaded_validation).

    sdaftuar commented at 6:43 pm on July 22, 2019:
    Done
  43. Change CheckInputs() to only validate scripts once
    Previously CheckInputs() could invoke a script twice to determine if a script
    failure was due to standardness rules. However, it only performed this in
    single-threaded validation mode, and it directly accessed policy variables to
    make this determination (even though the caller was passing in the desired
    script flags to use).
    
    The policy vs consensus script flags comparison seems out of touch with current
    network deployments; the only MANDATORY script flag is P2SH, even though many
    other consensus changes have been deployed since. However, adding further flags
    to the MANDATORY set risks making the p2p behavior overly restrictive by
    disconnecting unupgraded peers unnecessarily.
    
    Moreover, AcceptToMemoryPoolWorker() already invokes CheckInputs multiple
    times, with different script flags, to detect certain kinds of segwit
    validation errors. Given that the caller is responsible for passing in the
    correct validation flags, it feels like a layer violation for CheckInputs() to
    be re-running checks with different script flags by itself.
    
    This patch moves all responsibility for determining which script flags are
    mandatory to the caller, so CheckInputs() will now always return CONSENSUS for
    any script failures. The caller can rewrite the error code itself, eg if
    policy-flags were included.
    
    This patch also changes the ban behavior for peers sending transactions with
    invalid scripts; we will no longer disconnect such peers. It was already
    trivial for an adversary to mutate a transaction so that it violated a policy
    rule, relaying it while avoiding a ban, so we were never really relying on the
    ban behavior.
    
    As this results in changes to the reject reasons and codes we can return, many
    of the tests have been updated as well.
    1a607e7a8d
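
    To illustrate the caller-side pattern this commit message describes, a simplified sketch of ATMP’s existing two-pass flow (names like relaxed_flags and the reject strings are illustrative; the real code differs in details):

        // The caller, not CheckInputs(), decides what a failure means by
        // choosing the flags and rewriting the reason afterwards.
        if (!CheckInputs(tx, state, view, true, scriptVerifyFlags, true, false, txdata)) {
            // Retry without witness/cleanstack rules: success here suggests a
            // witness-stripped or witness-malleated transaction rather than a
            // genuine script failure.
            const unsigned int relaxed_flags = scriptVerifyFlags & ~(SCRIPT_VERIFY_WITNESS | SCRIPT_VERIFY_CLEANSTACK);
            CValidationState state_dummy;
            if (!tx.HasWitness() && CheckInputs(tx, state_dummy, view, true, relaxed_flags, true, false, txdata)) {
                return state.Invalid(ValidationInvalidReason::TX_WITNESS_MUTATED, false,
                                     REJECT_NONSTANDARD, "tx-witness-stripped"); // illustrative string
            }
            // Otherwise rewrite the CONSENSUS reason CheckInputs() reported
            // into a policy failure (see the diff earlier in this thread).
            return state.Invalid(ValidationInvalidReason::TX_NOT_STANDARD, false,
                                 REJECT_NONSTANDARD, state.GetRejectReason(), state.GetDebugMessage());
        }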
  44. [refactor] Eliminate default argument to ATMP
    Default arguments make review easier when adding a single new argument to a
    widely called function, but they make it harder to keep track of callers (and
    to ensure their correctness) when further arguments are added in the future.
    37652b0b62
  45. sdaftuar force-pushed on Jul 22, 2019
  46. sdaftuar commented at 5:29 pm on July 22, 2019: member
    There was a silent merge conflict in the prior version of this PR (15169.2) so I’ve rebased – still intend to revisit the fix for the scriptcache and outstanding nits.
  47. sdaftuar force-pushed on Jul 22, 2019
  48. sdaftuar commented at 6:42 pm on July 22, 2019: member
    Updated with an improved fix to the script caching issue; @jnewbery @TheBlueMatt care to take a look?
  49. Parallelize script validation in AcceptToMemoryPool()
    When receiving transactions over the network, use script check threads (if
    available) to reduce worst-case message handling time.
    
    Continue to use single-threaded validation for RPC invocations (which results
    in more descriptive error messages on failure), where DoS is not a significant
    concern.
    bae3f0f8ea
  50. Remove unused MANDATORY_SCRIPT_VERIFY_FLAGS
    This variable is no longer used, so remove it.
    
    Previously, this was used by validation logic to determine whether a
    transaction with invalid scripts was violating a policy rule or a consensus
    rule -- the idea being that we might ban a peer that relays an
    invalid-according-to-consensus transaction, but not an
    invalid-according-to-policy one.
    
    However, consensus rule changes are designed so that nodes need not all upgrade
    at once. Consequently, banning a peer for relaying a transaction that is newly
    invalid as a result of a recent soft-fork would risk partitioning old nodes
    from the network needlessly. Because of this type of concern, the only consensus
    flag ever added to the MANDATORY set was P2SH.
    8b8148dbb3
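
    The earlier review suggestion (replace MANDATORY_SCRIPT_VERIFY_FLAGS in the definition of STANDARD_SCRIPT_VERIFY_FLAGS with SCRIPT_VERIFY_P2SH) would look roughly like this in policy.h (abridged, illustrative flag list):

        static const unsigned int STANDARD_SCRIPT_VERIFY_FLAGS =
            SCRIPT_VERIFY_P2SH |        // formerly the sole MANDATORY flag
            SCRIPT_VERIFY_DERSIG |
            SCRIPT_VERIFY_STRICTENC |
            SCRIPT_VERIFY_MINIMALDATA |
            SCRIPT_VERIFY_WITNESS |
            SCRIPT_VERIFY_CLEANSTACK;   // (abridged; the real set has more flags)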
  51. sdaftuar force-pushed on Aug 1, 2019
  52. DrahtBot added the label Needs rebase on Aug 2, 2019
  53. DrahtBot commented at 1:21 pm on August 2, 2019: member
  54. sdaftuar closed this on Sep 3, 2019

  55. laanwj removed the label Needs rebase on Oct 24, 2019
  56. DrahtBot locked this on Dec 16, 2021
