p2p: never check tx rejections by txid #33066

glozow wants to merge 1 commit into bitcoin:master from glozow:2025-07-wtxid-only-rej, changing 7 files +46 −222
  1. glozow commented at 8:47 pm on July 25, 2025: member

    This PR removes the rejection cache filtering part of all txid-based tx requests. In practice, this seems to only be used for orphan resolution because everybody supports wtxidrelay. And so in essence, this PR removes the logic that stops us from attempting orphan resolution when parents are found in the rejection filter.

    Background: We have 2 bloom filters for remembering transactions that we’ve rejected so we don’t redownload them, RecentRejectsFilter and RecentRejectsReconsiderableFilter. This lets us save a little bit of bandwidth on rejected transactions, particularly if we have policy differences. It’s not designed to stop attackers from wasting our download bandwidth, as they can create as many policy-invalid transactions with different witnesses as they want. We generally only put wtxids, not txids, in them (see #18044), except in these cases:

    • (A) When we specifically know that the rejection reason is not due to the witness (however, this is currently only done for !AreInputsStandard, even though there are other cases where it could apply)
    • (B) When the transaction has no witness, so its txid == wtxid
    • (C) (With #32379) When the transaction’s witness has been stripped, so txid == wtxid

    We check the filters to decide whether to send a getdata for an inv, whether to validate a tx we just downloaded, and whether to keep an orphan: we look the missing parents up by (prevout) txid and throw the orphan away if its parents were already rejected. That means there are very few cases where we save bandwidth by finding a txid in the filter.
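
    To make the removed check concrete, the orphan-handling path did roughly the following (paraphrased from the code this PR deletes, not verbatim):

      // Paraphrase of the deleted check: when a tx is missing inputs we only
      // know its parents' txids, so pre-PR we dropped the orphan if any parent
      // txid was found in a rejection filter.
      bool fRejectedParents{false};
      for (const Txid& parent_txid : unique_parents) {
          if (RecentRejectsFilter().contains(parent_txid.ToUint256())) {
              fRejectedParents = true;
              break;
          }
      }
      if (fRejectedParents) {
          // Don't bother requesting the parents or keeping the orphan.
      }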

    Rationale: mainly that the additional complexity is not worth the bandwidth savings. TLDR: this case never seems to be hit in practice. Even if that observation is circumstantial, the number of transactions this filter could apply to is extremely small.

    I collected some stats over a couple of weeks, setting mempoolexpiry to 1 hour to try to artificially increase orphan rates (though I don’t know whether the effect is significant):

    • I’m seeing 100% of peers support wtxidrelay. Bitcoin Core has supported it for ~5 years. I’m sure there are some nodes that don’t do it, but it’ll probably be rare that we are surrounded by txidrelay peers and that we also have a policy difference causing us to reject the transaction that they all send us.
    • Orphan parent fetching was 3.64% of my transaction requests.
    • 93.77% of all transactions I receive have witnesses (includes accepted and rejected ones). 96.20% of orphan parents I fetched had witnesses.
    • I found no cases of already-rejected parent txids in the past week. This could be because all my peers have the same policy as me (which might not always be true), but the number of nonsegwit orphan parents in general is just so tiny - a few dozen requests per day.
  2. DrahtBot added the label P2P on Jul 25, 2025
  3. DrahtBot commented at 8:47 pm on July 25, 2025: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage & Benchmarks

    For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33066.

    Reviews

    See the guideline for information on the review process.

    Type Reviewers
    Concept ACK darosior, sipa
    User requested bot ignore cedwies

    If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #33116 (refactor: Convert uint256 to Txid by marcofleon)
    • #29060 (Policy: Report reason inputs are non standard by ismaelsadeeq)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  4. glozow commented at 8:47 pm on July 25, 2025: member
  5. darosior commented at 9:27 pm on July 25, 2025: member
    Concept ACK
  6. cedwies commented at 10:40 pm on July 25, 2025: none

    Built PR 33066 on macOS 14 (Apple Silicon) with CMake + Ninja, Debug.

    • unit tests: 141/141 pass, 9 min wall time

    What I checked:
    • AlreadyHaveTx(): txid path now skips both reject filters; wtxid path unchanged.
    • MempoolRejectedTx(): parent-txid rejection bail-out removed → orphans kept even if parents sit in a reject cache.
    • Updated tests (expected_behaviors table + orphan-handling cases) run green.

    Question (still wrapping my head around this): MempoolRejectedTx still inserts a txid into RecentRejectsFilter for TX_INPUTS_NOT_STANDARD, but after this patch AlreadyHaveTx no longer consults the filter by txid. Is that entry now redundant, or do other call-sites still depend on it?

    (Second review, so no (N)ACK yet. Just confirming I understand the PR flow.)

  7. glozow commented at 7:31 pm on July 28, 2025: member

    MempoolRejectedTx still inserts a txid into RecentRejectsFilter for TX_INPUTS_NOT_STANDARD, but after this patch AlreadyHaveTx no longer consults the filter by txid. Is that entry now redundant, or do other call-sites still depend on it?

    Good question. I think we can remove that actually, since we are no longer interested in whether a transaction’s txid is in the cache 🤔
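
    For context, the insertion being discussed has roughly this shape (a simplified paraphrase; RememberRejection is a made-up name, and the reconsiderable-filter case is glossed over):

      // Simplified paraphrase of the rejection caching discussed above: the wtxid
      // is always cached; the txid is additionally cached only for
      // TX_INPUTS_NOT_STANDARD, since that failure does not depend on the witness.
      // (Simplification: TX_RECONSIDERABLE failures actually go into the separate
      // RecentRejectsReconsiderableFilter.)
      void RememberRejection(const CTransactionRef& ptx, TxValidationResult result) // hypothetical helper
      {
          RecentRejectsFilter().insert(ptx->GetWitnessHash().ToUint256());
          if (result == TxValidationResult::TX_INPUTS_NOT_STANDARD) {
              RecentRejectsFilter().insert(ptx->GetHash().ToUint256());
          }
      }

    If nothing looks up entries by txid after this patch, the second insert is the part that could be dropped.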

  8. ajtowns commented at 8:03 am on July 30, 2025: contributor

    TLDR, this PR removes the rejection cache filtering part of all txid-based tx requests.

    There are three cases here, I think:

    • orphan resolution from wtxidrelay peers (WTXIDRELAY)
    • tx relay from segwit-supporting, non-wtxidrelay peers (NODE_WITNESS, no WTXIDRELAY, 0.13.1 to 0.20.x, Jan 2021)
    • tx relay from non-segwit peers

    The last case is trivial – non-segwit peers can be treated as doing wtxid relay for caching because all the txs they relay will have the same value for txid and wtxid.

    For orphan resolution, caching bad txids seems mostly undesirable: we can cache INPUTS_NOT_STANDARD since that will fail repeatedly, but can’t cache much else, as a different witness could make the transaction acceptable. I think it should be possible to detect INPUTS_NOT_STANDARD versus WITNESS_STRIPPED or other failures, so I think that behaviour could actually be retained? Something like:

      static bool RequiresWitnessData(TxoutType txo)
      {
          switch (txo) {
          case TxoutType::WITNESS_V0_SCRIPTHASH:
          case TxoutType::WITNESS_V0_KEYHASH:
          case TxoutType::WITNESS_V1_TAPROOT:
              return true;
          default: // should list all cases for completeness and compiler checks
              break;
          }
          return false;
      }

      bool AreInputsStandard(const CTransaction& tx, const CCoinsViewCache& mapInputs, bool& is_stripped)
      {
          is_stripped = false;
          ...
          const bool has_witness_data = tx.HasWitness();
          for (unsigned int i = 0; i < tx.vin.size(); i++) {
              const CTxOut& prev = mapInputs.AccessCoin(tx.vin[i].prevout).out;

              std::vector<std::vector<unsigned char> > vSolutions;
              TxoutType whichType = Solver(prev.scriptPubKey, vSolutions);
              if (!has_witness_data && !is_stripped && RequiresWitnessData(whichType)) {
                  is_stripped = true;
              }
              if (whichType == TxoutType::NONSTANDARD || whichType == TxoutType::WITNESS_UNKNOWN) {
              ...
          }
          if (is_stripped) return false;
          return true;
      }

          bool is_stripped;
          if (m_pool.m_opts.require_standard && !AreInputsStandard(tx, m_view, is_stripped)) {
              if (is_stripped) {
                  return state.Invalid(TxValidationResult::TX_WITNESS_STRIPPED, "bad-txns-missing-witness");
              } else {
                  return state.Invalid(TxValidationResult::TX_INPUTS_NOT_STANDARD, "bad-txns-nonstandard-inputs");
              }
          }

    (just passing state through would probably be better than the bool ref)

    WITNESS_STRIPPED would indicate you should retry requesting by txid for orphan resolution; INPUTS_NOT_STANDARD means there’s no point retrying. Other errors also indicate you should retry – a tx with the same txid but different witness data may pass validation.

    In that case, I believe adding the wtxid to the reject filter, and INPUTS_NOT_STANDARD txids to the reject filter, and checking the reject filter when requesting both by wtxid and txid would give good/correct behaviour for wtxidrelay and non-segwit peers, and I think best-possible behaviour for non-wtxidrelay segwit peers?
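
    A minimal sketch of that decision rule (the helper name is made up, and it assumes the WITNESS_STRIPPED detection above):

      // Hypothetical helper: given a validation failure, decide whether the txid
      // (in addition to the wtxid) can safely go into the reject filter. Only
      // witness-independent failures qualify; anything else might succeed with a
      // different witness, so we should still be willing to re-request by txid.
      static bool ShouldAlsoRejectTxid(TxValidationResult result)
      {
          switch (result) {
          case TxValidationResult::TX_INPUTS_NOT_STANDARD:
              return true;  // failure does not depend on witness data: no point retrying
          case TxValidationResult::TX_WITNESS_STRIPPED:
              return false; // re-requesting by txid may fetch the same tx with its witness intact
          default:
              return false; // a different witness might make a tx with this txid valid
          }
      }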

  9. darosior commented at 6:38 pm on July 30, 2025: member

    Yes, we could do that. I suppose the minimal way to do so today on master would be to simply check whether we are spending any Segwit input while the transaction has no witness after CheckInputScripts failed:

      diff --git a/src/validation.cpp b/src/validation.cpp
      index 09e04ff0ddb..8f86b630ef5 100644
      --- a/src/validation.cpp
      +++ b/src/validation.cpp
      @@ -1254,13 +1254,17 @@ bool MemPoolAccept::PolicyScriptChecks(const ATMPArgs& args, Workspace& ws)
           // Check input scripts and signatures.
           // This is done last to help prevent CPU exhaustion denial-of-service attacks.
           if (!CheckInputScripts(tx, state, m_view, scriptVerifyFlags, true, false, ws.m_precomputed_txdata, GetValidationCache())) {
      -        // SCRIPT_VERIFY_CLEANSTACK requires SCRIPT_VERIFY_WITNESS, so we
      -        // need to turn both off, and compare against just turning off CLEANSTACK
      -        // to see if the failure is specifically due to witness validation.
      -        TxValidationState state_dummy; // Want reported failures to be from first CheckInputScripts
      -        if (!tx.HasWitness() && CheckInputScripts(tx, state_dummy, m_view, scriptVerifyFlags & ~(SCRIPT_VERIFY_WITNESS | SCRIPT_VERIFY_CLEANSTACK), true, false, ws.m_precomputed_txdata, GetValidationCache()) &&
      -                !CheckInputScripts(tx, state_dummy, m_view, scriptVerifyFlags & ~SCRIPT_VERIFY_CLEANSTACK, true, false, ws.m_precomputed_txdata, GetValidationCache())) {
      -            // Only the witness is missing, so the transaction itself may be fine.
      +        // CheckInputScripts filled the spent outputs. Detect whether this transaction's witness was stripped by checking
      +        // whether this transaction spends a Segwit output but does not have a witness.
      +        Assert(ws.m_precomputed_txdata.m_spent_outputs_ready);
      +        const auto& spent_txos{ws.m_precomputed_txdata.m_spent_outputs};
      +        Assert(spent_txos.size() == tx.vin.size());
      +        int ver;
      +        std::vector<uint8_t> prog;
      +        const bool spends_segwit{std::any_of(spent_txos.begin(), spent_txos.end(), [&ver, &prog](const CTxOut& txo) {
      +            return txo.scriptPubKey.IsWitnessProgram(ver, prog);
      +        })};
      +        if (!tx.HasWitness() && spends_segwit) {
                   state.Invalid(TxValidationResult::TX_WITNESS_STRIPPED,
                           state.GetRejectReason(), state.GetDebugMessage());
               }

    However it seemed cleaner to me to get rid of the WITNESS_STRIPPED edge case detection in the first place, which was always meant to go?

  10. glozow commented at 6:43 pm on July 30, 2025: member

    However it seemed cleaner to me to get rid of the WITNESS_STRIPPED edge case detection in the first place, which was always meant to go?

    I didn’t realize it was always meant to go - how do you know that?

    Fwiw, I do think that being able to detect WITNESS_STRIPPED without triple validation is the main thing we want, so would prioritize that kind of solution. At the same time, my main point in the OP is that being able to cache INPUTS_NOT_STANDARD isn’t very helpful to us in practice, so I would still be weakly in favor of this PR.

  11. darosior commented at 6:55 pm on July 30, 2025: member

    I didn’t realize it was always meant to go - how do you know that?

    It was introduced in #18044 as a hack until the network upgraded to wtxid relay, with an explicit mention that it can be (according to the author, should be) removed afterward: https://github.com/bitcoin/bitcoin/blob/8a94cf8efebc3177effcfc1160560735b8caf34b/src/node/txdownloadman_impl.cpp#L452-L454

    Of course, that it was meant to go then does not imply it needs to go now. But i think this still holds. It is a weird special case that is not necessary post wtxid relay (and post this PR).

  12. darosior commented at 8:19 pm on July 30, 2025: member
    I opened #33105, which ended up implementing a variant of @ajtowns’ suggestion, because, as pointed out, the resource consumption gains can be achieved independently of this work (which i still think is desirable) in a much smaller patch, which i think would be nice to get in before the feature freeze.
  13. ajtowns commented at 1:07 am on July 31, 2025: contributor

    However it seemed cleaner to me to get rid of the WITNESS_STRIPPED edge case detection in the first place, which was always meant to go?

    I don’t think it makes sense to get rid of WITNESS_STRIPPED while we still care about resolving orphans via their missing parents’ txids. If/when we have protocol support for receiver-initiated package relay by wtxid, then I think it could make sense to treat every request as being by wtxid – so a witness-stripped response from a non-wtxid-relay peer would mean not requesting that tx from any other non-wtxid-relay peer, but wouldn’t prevent requesting the same tx from wtxid-relay peers.

  14. [p2p] never check rejections by txid in AlreadyHaveTx
    While this may result in some wasted bandwidth for orphans that have
    invalid non-witness parents, we expect this case to be rare. Allows us
    to cache witness-stripped transactions in recent rejections without
    worrying about blocking 1p1cs.
    0f919d01db
  15. in src/node/txdownloadman_impl.cpp:379 in 86a03b167f outdated
    382-            std::optional<Txid> rejected_parent_reconsiderable;
    383-            for (const Txid& parent_txid : unique_parents) {
    384-                if (RecentRejectsFilter().contains(parent_txid.ToUint256())) {
    385-                    fRejectedParents = true;
    386-                    break;
    387-                } else if (RecentRejectsReconsiderableFilter().contains(parent_txid.ToUint256()) &&
    


    Crypt-iQ commented at 3:20 pm on August 4, 2025:
    I get that if this is an orphan, we only have the parent txids. In most cases, it seems like this reconsiderable filter will be storing wtxids? The exceptions are a non-segwit tx and a tx that is TX_RECONSIDERABLE but is actually witness-stripped (I’m not sure if there are others?). I may be missing something here; are there more cases where this reconsiderable filter check is/was used with a txid?

    glozow commented at 3:41 pm on August 4, 2025:
    Yep, we don’t ever put txids in the reconsiderable filter unless the txid is the same as the wtxid. We don’t put witness-stripped in there either (even if we do #32379, as that will put it in the normal RecentRejects filter)

    Crypt-iQ commented at 3:55 pm on August 4, 2025:
    I modified p2p_opportunistic_1p1c.py to remove the witness on a low-fee parent tx and it failed with TX_RECONSIDERABLE (as those checks come before the 3x CheckInputScripts that returns WITNESS_STRIPPED). I think my comment was a little confusing; also, I’m not sure this matters in practice?

    glozow commented at 4:02 pm on August 4, 2025:
    Ohhhh I see. Yeah, I suppose a witness-stripped can end up there if it’s also low feerate. And adding a witness can only make the feerate decrease, so any tx with the same txid would have the same problem 😅
  16. glozow force-pushed on Aug 4, 2025
  17. sipa commented at 6:31 pm on August 4, 2025: member
    Do we have numbers on how often the rejection filter catches things by txid? I suspect that in practice that will be just due to fetches of orphan parents which are (policy) invalid, due to non-witness reasons?
  18. glozow commented at 6:34 pm on August 4, 2025: member

    Do we have numbers on how often the rejection filter catches things by txid?

    I didn’t see any in a 2-week period 😅

  19. darosior commented at 3:08 pm on August 5, 2025: member
    I still think this PR is preferable to #33105. The latter will inevitably introduce more false-positive witness stripped errors, whereby we won’t add some transactions with consensus/standardness errors to the reject filter. Since we already don’t add all transactions with consensus/standardness errors to the reject filter, if we also don’t care about these additional false positives, we might as well just do the cleaner thing and get rid of this filter entirely, as is done here.
  20. ajtowns commented at 10:41 am on August 6, 2025: contributor

    I still think this PR is preferable to #33105. The latter will inevitably introduce more false-positive witness stripped errors,

    I don’t think false-positive witness stripped errors are harmful, apart from the CPU required to detect them in the first place.

    Only “misbehaving” peers will send us witness stripped txs in the first place, so the only benefit of caching an error (which is what we’d do if we replaced the “false” witness stripped result with something else) is to prevent us from requesting the invalid tx from some other peer; but the only way any other peer would give us the tx in that case is if they were advertising and relaying invalid txs themselves. If we do have such peers, requesting the tx again by txid doesn’t cost much if we get a witness-stripped version and detect that cheaply/quickly; and if we don’t get a witness-stripped version we at least make some progress by being able to cache rejection by wtxid. If we have many peers relaying invalid txs to us, that will still be costly, but spending a lot of cpu/bandwidth doing validation is exactly what we’d expect in that scenario anyway.

    When we have almost entirely honest peers, we’ll never see witness stripped errors in the first place; and aside from that we would expect our rejection caches to only see reconsiderable rejections (due to double spends and fee rate differences), and policy rejections (when we have different policy configuration to our peers). I think the only policy rejection you’d hit here that has any significant use in the network is if you were running with p2a validation disabled. So I don’t think stats will be informative here – this code aims to lower the impact of buggy clients, so you need buggy peers for it to have any impact.

  21. sipa commented at 9:11 pm on August 6, 2025: member

    Concept ACK.

    First of all, I’m convinced by the rationale for #33050 and #33105, and if we do (at least) the latter, then this PR isn’t necessary anymore to get rid of the triple-validation costs, which would make it low-urgency. I think it’s still a nice cleanup, as I don’t think the code does much right now (see below).

    Aside, I don’t think we need to worry about the impact it may have on pre-segwit or pre-wtxidrelay peers; given how widespread wtxidrelay peers are, I think we can reasonably add/increase a fetching delay of a few seconds for non-wtxidrelay peers to minimize their impact, and/or consider preferentially peering with wtxidrelay peers (they have no service flag, but we could kick+cycle random outbound non-wtxidrelay peers if we have under some threshold).

    @ajtowns It’s fair to say we shouldn’t judge the quality of protection against buggy clients based on statistics, because all they might tell us is that there are currently no relevant buggy clients. But @glozow is not seeing any hits on the txid rejection cache at all - not just no witness-stripped cases. And the txid rejection cache is there (I think) primarily to protect against bandwidth waste due to repeated downloading of rejected transactions from a (multitude of) policy-diverging peers, not buggy peers. The lack of such hits on the txid filter may of course also mean there are currently no such peers, but I think there are additional reasons why it doesn’t do much.

    If we exclude pre-wtxidrelay peers, the use of the txid rejection filter is limited to cases where all of these hold:

    • For consensus-valid transactions, as honest peers never create consensus-invalid transactions (because it is protection against redownloading from honest peers, not attackers, which can waste our bandwidth in much easier ways).
    • For policy-invalid transactions, where we have many peers which consider it policy-valid regardless.
    • For parents of orphans, because that’s the only context in which we just know a transaction’s txid.
    • For a specific, small subset of policy-invalidity reasons that do not involve the witness (namely: nonstandard input types, because that’s currently the only thing where we assign failure to the txid, not the wtxid) or are non-segwit.

    For it to matter, we’d need to be in a regime where there are lots of policy-diverging peers with a more liberal relay policy (… may happen), with a deployed-at-scale use case that causes such transactions (… might happen), that involves dependent transactions (… less likely), and where the invalidity is due to non-standard script input types or the transactions are non-segwit (… pretty unlikely).

    As an alternative, it is possible to expand the number of validation failures that get attributed to txids, as there are more than just non-standard script input types in theory. Others include base size > 100k, non-standard output types, min feerate not met (when counting just base size), too many sigops, and probably a few more. I think those are still unlikely candidates for being things people build at scale, in dependent transactions, but if we think that’s possible, I think expanding those to be txid-rejectable is a reasonable alternative to this PR.
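
    A rough sketch of what that expansion could look like (illustrative only: the helper and the choice of reason strings are mine; the current code attributes only TX_INPUTS_NOT_STANDARD failures to the txid):

      // Illustrative only: classify rejections that cannot be fixed by supplying a
      // different witness, so the txid (not just the wtxid) could go into the
      // reject filter.
      static bool WitnessIndependentRejection(TxValidationResult result, const std::string& reject_reason)
      {
          if (result == TxValidationResult::TX_INPUTS_NOT_STANDARD) return true;
          if (result == TxValidationResult::TX_NOT_STANDARD) {
              // Output-only checks are independent of witness data. Other candidates
              // (base size > 100k, base-size feerate, sigop count) would need extra
              // care, since the corresponding checks look at weight or witness sigops.
              return reject_reason == "scriptpubkey" ||  // non-standard output type
                     reject_reason == "dust";
          }
          return false;
      }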

    Absent that alternative, I feel the hypothetical scenario where this matters is just too obscure to have a whole piece of net_processing state dedicated to it. It uses memory too; without inserting txids in the rejection filter, we could shrink the filters, or improve their reliability.

  22. glozow commented at 12:48 pm on August 7, 2025: member

    I’ve removed #32379 and mention of witness-stripping from the PR description; I think we can regard this as irrelevant for removing the triple validation (though it doesn’t hurt).

    The primary motivation now is to clean up logic that doesn’t really get used.

    Others include base size > 100k, non-standard output types, min feerate not met (when counting just base size), too many sigops, and probably a few more.

    Also: large scriptSig, nonstandard output type, dust, OP_RETURN size and count (assuming this node’s policy is more restrictive than its peers’). I would guess that if we added these, they would similarly be very rare and not really worth the added complexity.

    We can wait until we’re using something like BIP331 for orphan resolution (when txid requests are even rarer), but I think there’s sufficient evidence this is already vestigial today.

  23. ajtowns commented at 5:55 pm on August 8, 2025: contributor

    sipa wrote:

    • For a specific, small subset of policy-invalidity reasons that do not involve the witness (namely: nonstandard input types, because that’s currently the only thing where we assign failure to the txid, not the wtxid) or are non-segwit.

    I believe you’d see that for new segwit spend types when you’re not running an upgraded node (eg, taproot, p2a, perhaps p2qrh), which might help reduce your wasted bandwidth if you’re running behind network consensus/policy changes as adoption increases.

    As an alternative, it is possible to expand the number of validation failures that get attributed to txids, as there are more than just non-standard script input types in theory. Others include base size > 100k, non-standard output types, min feerate not met (when counting just base size), too many sigops, and probably a few more. I think those are still unlikely candidates of being things people build at scale, in dependent transactions, but if we think that’s possible, I think expanding those to be txid-rejectable is a reasonable alternative to this PR.

    Bumping tx version (TRUC), creating dust outputs (ephemeral dust), and increased OP_RETURN output count/size are all pretty recent examples of where we’ve loosened standardness rules in ways that could be assigned to the txid and that presumably we’re hoping might get built on at scale. I don’t see why we’d necessarily stop doing that any time soon.

    Of course, all those things will only show up in normal usage if your mempool rules are stricter than what’s being commonly seen on the network, and stricter than your peers, which is a situation we generally try to avoid. So even then, I don’t think you’ll see much effect in normal usage – it’s only defensive against future changes to policy that you don’t adopt in sync with your peers, or if there’s something of a schism in the p2p network and we care about the people adopting more restrictive policies.

    It might be interesting to try getting similar stats on a Knots node with its more restrictive mempool policies – potentially both just seeing how many times rejected txs get redundantly downloaded due to orphan resolution attempts, and tweaking it so that (some of) the errors that can be assigned to the txid actually are (which could catch the lower datacarrier limits, though not the inscription ones)… Alternatively maybe you could get similar stats from a post-wtxidrelay but pre-taproot node, which might see child spends of taproot parents.

    That doesn’t help for non-standardness issues we can’t assign to the txid – so script upgrades, forbidding inscriptions, and the like aren’t helped by this mechanism, and can’t be helped without some form of explicit package relay afaics.

    The bad case there is if we get a child from many peers whose parent we’ve already probably rejected (by wtxid). If we can also reject the txid, then we save ourselves some bandwidth and perhaps validation costs; if we can’t reject the txid, we likely have to redundantly request it from each peer in case many of our peers are dishonest, but one is honest and has a version of the tx that we’ll accept. So rather than downloading it once, we’re downloading it up to 8 or 120 times if all our peers have looser standardness policies than we do? We already check the wtxid against RecentRejects via AlreadyHaveTx via ReceivedTx, so we won’t revalidate afaics.

    As far as the adversarial case goes – the worst case would be that in an environment where your rules are stricter than many of your peers, an adversary could force you to redownload/revalidate a large tx from many peers with you rejecting it each time. If your peers announced the tx by wtxid, you’d only do that once, and things would be fine. To trigger it happening by txid, they would need to use orphan resolution and actually have each of your peers announce the child (in order for you to believe it’s worth getting the parent from them). But that’s just normal behaviour – you’re sending a real child tx through the network?

    So I think the only potential benefit here is saving ourselves from redownloading a transaction ~10x or ~100x times more than we should, and agree that this is only in the cases where we’re enforcing “special” standardness rules, and those rules can be assigned independent of the witness data, and only in cases where we have a child of the non-standard tx.

    Absent that alternative, I feel the hypothetical scenario where this matters is just too obscure to have a whole piece of net_processing state dedicated to it.

    I don’t really think this change is much of a win – the original motivation of avoiding the triple-script-check stuff is, but that can be avoided just by rejecting witness-stripped txs asap, independent of this change.

    It uses memory too; without inserting txids in the rejection filter, we could shrink the filters, or improve their reliability.

    I don’t think that’s quite true in practice? The logic in txdownloadman_impl.h for the filter size is based on 1000 txs per second for 2 minutes creating 120000 entries; but that should presumably be doubled if we’re assuming an attacker might add minimal witness data to invalid non-standard-input txs as well, in order to overflow the filter as quickly as possible and trick us into re-requesting txs from other peers? If we’re approximately never seeing spends of non-standard inputs, then we also already have the reliability benefits of never adding them. Either way, 1.3MB, 2.6MB or 650kB of filters doesn’t seem like a big deal to me. If we really cared about minimising the filter size we could probably rate-limit peers to a target of 10 or 20 rejected txs per second or similar.
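
    (For reference, a back-of-envelope of where a ~1.3MB per-filter figure comes from, mirroring CRollingBloomFilter’s sizing as I read it; treat the numbers as approximate:)

      // Approximate memory of a CRollingBloomFilter(120'000, 0.000'001), following
      // its sizing logic (a sketch; numbers are rough).
      #include <algorithm>
      #include <cmath>
      #include <cstdint>
      #include <cstdio>

      int main()
      {
          const unsigned elements = 120'000;             // ~1000 txs/s for 2 minutes
          const double log_fp = std::log(0.000'001);     // target false-positive rate
          const int hash_funcs = std::clamp((int)std::round(log_fp / std::log(0.5)), 1, 50); // ~20
          const unsigned max_entries = ((elements + 1) / 2) * 3; // 3 generations of n/2 entries
          const double filter_bits = std::ceil(-1.0 * hash_funcs * max_entries /
                                               std::log(1.0 - std::exp(log_fp / hash_funcs)));
          // 2 bits are kept per filter position (generation tracking), packed into uint64_t words.
          const double bytes = std::ceil((filter_bits + 63) / 64) * 2 * sizeof(uint64_t);
          std::printf("~%.2f MB per filter\n", bytes / 1'000'000); // prints roughly 1.3
      }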

    glozow wrote:

    We can wait until we’re using something like BIP331 for orphan resolution (when txid requests are even rarer), but I think there’s sufficient evidence this is already vestigal today.

    I think the main reason it seems vestigial is that it depends on running a node with policy rules at least as loose as (almost) all your peers, and on assuming that everyone who isn’t doing the same now will do so in short order.

  24. ajtowns commented at 6:01 pm on August 8, 2025: contributor

    and/or consider preferentially peering with wtxidrelay peers

    It would probably be good to be able to preferentially peer with package-relay peers when they exist, so even if peering with wtxidrelay peers is easy now, having the code in a way that’s reusable for package-relay peers in future might be worthwhile.

  25. glozow referenced this in commit f679bad605 on Aug 8, 2025
  26. DrahtBot added the label Needs rebase on Aug 13, 2025
  27. DrahtBot commented at 8:30 pm on August 13, 2025: contributor
    🐙 This pull request conflicts with the target branch and needs rebase.
