net: Provide block templates to peers on request #33191

pull ajtowns wants to merge 7 commits into bitcoin:master from ajtowns:202508-sendtemplate1 changing 15 files +377 −10
  1. ajtowns commented at 10:33 pm on August 14, 2025: contributor
    Implements the sending side of the SENDTEMPLATE / GETTEMPLATE / TEMPLATE message scheme for sharing block templates with peers via compact block encoding/reconstruction.
  2. DrahtBot added the label P2P on Aug 14, 2025
  3. DrahtBot commented at 10:33 pm on August 14, 2025: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage & Benchmarks

    For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33191.

    Reviews

    See the guideline for information on the review process. A summary of reviews will appear here.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #33591 (Cluster mempool followups by sdaftuar)
    • #33300 (fuzz: compact block harness by Crypt-iQ)
    • #30595 (kernel: Introduce initial C header API by TheCharlatan)
    • #30442 ([IBD] precalculate SipHash constant salt XORs by l0rinc)
    • #30277 ([DO NOT MERGE] Erlay: bandwidth-efficient transaction relay protocol (Full implementation) by sr-gi)
    • #29641 (scripted-diff: Use LogInfo over LogPrintf [WIP, NOMERGE, DRAFT] by maflcko)
    • #29415 (Broadcast own transactions only via short-lived Tor or I2P connections by vasild)
    • #28676 (Cluster mempool implementation by sdaftuar)
    • #26812 (test: add end-to-end tests for CConnman and PeerManager by vasild)
    • #17783 (common: Disallow calling IsArgSet() on ALLOW_LIST options by ryanofsky)
    • #17581 (refactor: Remove settings merge reverse precedence code by ryanofsky)
    • #17580 (refactor: Add ALLOW_LIST flags and enforce usage in CheckArgFlags by ryanofsky)
    • #17493 (util: Forbid ambiguous multiple assignments in config file by ryanofsky)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  4. ajtowns force-pushed on Aug 15, 2025
  5. DrahtBot added the label CI failed on Aug 15, 2025
  6. DrahtBot commented at 0:47 am on August 15, 2025: contributor

    🚧 At least one of the CI tasks failed. Task lint: https://github.com/bitcoin/bitcoin/runs/48132397637 LLM reason (✨ experimental): The CI failure is caused by a circular dependency detected during lint checks.

    Try to run the tests locally, according to the documentation. However, a CI failure may still happen due to a number of reasons, for example:

    • Possibly due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.

    • A sanitizer issue, which can only be found by compiling with the sanitizer and running the affected test.

    • An intermittent issue.

    Leave a comment here, if you need help tracking down a confusing failure.

  7. DrahtBot removed the label CI failed on Aug 15, 2025
  8. gmaxwell commented at 0:11 am on August 18, 2025: contributor

    I like this, but is there a way to avoid reintroducing the mempool message transaction spying vulnerability? Construct the block skipping txn that have not yet been announced?

    What are your thoughts on the size of the template? I don’t think it makes sense to just limit them to being one block in size; perhaps it would be reasonable for them to be 2 blocks in size so it can always run one block ahead?

  9. sipa commented at 0:25 am on August 18, 2025: member

    It may be hard to do that perfectly; for example, if a recent CPFP occurred that bumped a low-fee old parent up high enough to make it into the template, but the child has not been announced to the peer yet. The child can be skipped when sending, but the presence of the parent would stand out and reveal the sender knew about the child.

    If latency is not very important, there may be a simpler solution: compute the template, and schedule it for sending, but only send it as soon as all transactions in it have been announced.

  10. ajtowns commented at 2:31 am on August 18, 2025: contributor

    I like this, but is there a way to avoid reintroducing the mempool message transaction spying vulnerability? Construct the block skipping txn that have not yet been announced?

    Each template is annotated with the mempool’s GetSequence() and a template won’t be presented to a peer if its m_last_inv_sequence isn’t at least that value. Since we announce the highest fee txs first when updating m_last_inv_sequence, that should mean they’ll have already seen any txs in the template via INV (or implicitly via INV’ing a child/descendant).

    Currently it will just immediately give an old template if the current template is too new; it would also be possible to delay providing the current template until m_last_inv_sequence is bumped when the next INV is sent, but that would have been a little more complicated and I wanted to keep things as simple as possible.
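    The gating described above can be sketched as follows. This is an illustrative model, not the PR’s code: the struct fields and the function name `SelectTemplateForPeer` are hypothetical stand-ins for the mechanism of recording the mempool sequence at template creation and only offering templates the peer’s INVs have caught up with.

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <optional>
    #include <vector>

    struct Template {
        uint64_t inv_sequence;  // mempool GetSequence() when the template was built
        int id;                 // stand-in for the compact-block payload
    };

    // Offer the newest template whose txs the peer has already been INV'd;
    // fall back to an older template rather than leak unannounced txs.
    std::optional<Template> SelectTemplateForPeer(const std::vector<Template>& templates,
                                                  uint64_t peer_last_inv_sequence)
    {
        // templates assumed newest-first
        for (const auto& tmpl : templates) {
            if (tmpl.inv_sequence <= peer_last_inv_sequence) return tmpl;
        }
        return std::nullopt;  // nothing safe to send to this peer yet
    }

    int main()
    {
        std::vector<Template> tmpls{{90, 2}, {50, 1}};
        assert(SelectTemplateForPeer(tmpls, 100)->id == 2);  // peer fully caught up
        assert(SelectTemplateForPeer(tmpls, 60)->id == 1);   // newest withheld, older sent
        assert(!SelectTemplateForPeer(tmpls, 10));           // peer too far behind
    }
    ```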

    It may be hard to do that perfectly; for example, if a recent CPFP occurred that bumped a low-fee old parent up high enough to make it into the template, but the child has not been announced to the peer yet.

    I think the above should also take care of this case already.

    I guess that reveals a little more ordering info: if you received two high fee txs (tx1 and tx2) in a single INV cycle, but generate a new template after receiving tx1 but before receiving tx2, then you’ll reveal the ordering of tx1 and tx2 to your peers. So perhaps aligning template generation with INV messages to inbound peers would be a worthwhile improvement. That would perhaps be a slight leak to outbounds, but nothing they couldn’t discover just by making an inbound connection to you.

    What are your thoughts on the size of the template? I don’t think it makes sense to just limit them to being one block in size; perhaps it would be reasonable for them to be 2 blocks in size so it can always run one block ahead?

    That seems pretty appealing; I didn’t do anything along those lines here mostly to keep things simple, and because waiting for cluster mempool before building even bigger collections of packages seemed plausible. If miners were to enforce their own soft blocksize limits (perhaps to reduce blockchain growth and marginally improve IBD, or as a cartel-style way of limiting supply to push prices up), then a 4M weight template might already account for 2 or more blocks worth of transactions. (signet blocks are mostly limited to 1M weight, eg)

    A double-size block wouldn’t give you an ideal predictor for the next block: it would miss timelocked transactions and transactions that exceed ancestor/descendant limits for a single block, though equally a sufficiently clever peer could just include those in a template anyway, especially if consensus/locktime checks weren’t being performed. At least they could if they ever saw them in the first place, which is presumably doubtful.

    I’m not sure what level of template overlap is likely in the real world – if it’s just “everyone thinks these txs are valid, but some nodes don’t think these are, and others don’t think these other ones are”, maybe that just adds up to ~6MB of memory total across all your peers (even if there are simultaneous surges across both unusual types of transactions) plus maybe 15% to account for churn. On the other hand, at times there might conceivably be a lot of double spends in the top of various mempools (perhaps posted to the network by chainanalysis-type nodes in order to try to discover the p2p network’s structure, to help figure out where txs came from or which nodes are most likely to be building templates for mining). If that’s the case, it might not be sensible to try to keep all your peers’ conflicting template txs in memory, and I think there’s probably a variety of different approaches that might be worth trying there.

  11. gmaxwell commented at 9:52 pm on August 18, 2025: contributor

    The timelocks could be relaxed analogous to increasing the size however. With MTP we essentially know the lock criteria for one block out.

    I don’t really think the memory usage is a concern, because there is just not a need to constantly do this with all peers in the face of memory pressure.

    You could add an additional message that lets peers say “I have X fees in Y weight in my template” – if the max size they were targeting was standardized then this would be a pretty good indicator when someone you haven’t been syncing against has something you might want to see. Perhaps multiple breakpoints, like a little fee/weight curve communicated in a dozen bytes.
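    The fee/weight curve idea could be summarized in a handful of cumulative breakpoints, roughly like the sketch below. The struct names, the choice of weight marks, and the function `FeeWeightCurve` are all hypothetical illustrations of the suggestion, not part of the PR.

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <vector>

    struct TxEntry { int64_t weight; int64_t fee; };      // one template tx
    struct Breakpoint { int64_t weight; int64_t fee; };   // cumulative (weight, fee)

    // Summarize a template (assumed already in template order, roughly
    // descending feerate) as cumulative fee at a few standard weight marks,
    // so a peer can cheaply judge whether fetching the template is worthwhile.
    std::vector<Breakpoint> FeeWeightCurve(const std::vector<TxEntry>& txs,
                                           const std::vector<int64_t>& weight_marks)
    {
        std::vector<Breakpoint> curve;
        int64_t w = 0, f = 0;
        size_t i = 0;
        for (const auto& tx : txs) {
            w += tx.weight;
            f += tx.fee;
            while (i < weight_marks.size() && w >= weight_marks[i]) {
                curve.push_back({w, f});
                ++i;
            }
        }
        return curve;
    }

    int main()
    {
        std::vector<TxEntry> txs{{1000, 500}, {1000, 300}, {2000, 400}};
        auto curve = FeeWeightCurve(txs, {1000, 2000, 4000});
        assert(curve.size() == 3);
        assert(curve[0].fee == 500);   // fees collected in the first ~1000 weight
        assert(curve[2].fee == 1200);  // total fees at ~4000 weight
    }
    ```

    A dozen bytes is plausible if the weight marks are standardized: only the fee values (and perhaps a count) need to be sent.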

  12. ismaelsadeeq commented at 10:24 am on August 19, 2025: member

    Nice idea.

    A couple of questions:

    1. How do nodes build the block template they share with peers? If they use only their own mempool to build those templates, I don’t think transactions with divergent mempool policy will relay through the network and improve the compact block reconstruction process via this mechanism. Only peers whose outbound connections are miners will benefit.

    2. We don’t accept transactions in the mempool that have divergent policy because we deem them non-standard. How does this proposal prevent a scenario where an adversary stuffs transactions that are harmful to peers, forcing them to validate and save them? (I think this is where weak blocks are superior to this proposal, because they show that work has been done previously.)

    3. Is there a limit to the memory used to store the transactions that violate a node’s mempool policy but are received from peers? How do you decide which to keep and which to evict when the limit is reached? One approach could be linearization based on incentive compatibility and then evicting low-fee transactions, although this has the drawback of making us not save transactions paid for out-of-band.

  13. polespinasa commented at 9:17 pm on August 19, 2025: contributor

    If latency is not very important, there may be a simpler solution: compute the template, and schedule it for sending, but only send it as soon as all transactions in it have been announced.

    Maybe a stupid question and I’m not understanding something, but what is the point of this if we have to wait for all transactions to be announced? Isn’t the whole idea to make sure our peers know about what we think will be the next block?

  14. instagibbs commented at 6:03 pm on August 20, 2025: member

    @ismaelsadeeq

    How does this proposal prevent a scenario where an adversary stuff transactions that are harmful to peers, forcing them to validate and save them

    There’s no need to fully validate these transactions you are given. If they violate your own policy you just won’t include them in your own mempool (or include them in a block template).

  15. ajtowns commented at 4:51 am on August 21, 2025: contributor
    1. How do nodes build the block template they share with peers?

    The same way they build a block template for the getblocktemplate RPC (well, except ignoring config options that might otherwise reduce the block size). See here.

    When they use only their mempool to build those templates, I don’t think that the transactions with diverging mempool policy will relay through the network and improve the compact block reconstruction process via this mechanism.

    This doesn’t aim to improve tx relay in most cases. The scenario this helps with is when a subset of the network has a relaxed policy compared to your node (eg, the librerelay nodes on the network vs you; or the core nodes on the network vs a default knots node; or nodes that have adopted newer policy/replacement rules (eg TRUC, lower min fee, pay to anchor, etc) vs an older/non-enabled node). In that case, if you happen to peer with one of those nodes that have a relaxed policy, and you request templates from that node, you’ll be able to reconstruct blocks mined with that relaxed policy without needing a round trip.

    (The main case where it might improve relay is when a tx was evicted from many but not all mempools due to size limits, but eventually mempools clear and it’s eligible again, yet has not already been widely rebroadcast for other reasons. In some circumstances it also might help with replacement cycling attacks, allowing the victim transaction to automatically be rebroadcast once the conflicting tx has been replaced.)

    2. We don’t accept transactions in the mempool that have divergent policy because we deem them non-standard. How does this proposal prevent a scenario where an adversary stuffs transactions that are harmful to peers, forcing them to validate and save them?

    Note that this PR does not include code for requesting templates, only providing them; so there’s no change here from this PR alone.

    The proof of concept code linked from https://delvingbitcoin.org/t/sharing-block-templates/1906/ that does request templates grabs templates from outbound/manual peers, and validates any txs in templates that aren’t in the mempool according to the usual standardness rules, adding them to the mempool if they pass. That’s no more harmful (CPU-wise) than receiving the same txs via INV/GETDATA, and the memory usage is limited by only keeping one template per selected peer, and rejecting templates that are more than 1MvB.

    If you were to validate txs in templates against consensus rules only (without the additional standardness rules), then that could be more costly (you’d be missing the tx size limit and BIP 54 rules so might be hitting significant amounts of hashing), though. The proof of concept code doesn’t do that.

    3. Is there a limit to the memory used to store the transactions that violate a node’s mempool policy but are received from peers? How do you decide which to keep and which to evict when the limit is reached? One approach could be linearization based on incentive compatibility and then evicting low-fee transactions, although this has the drawback of making us not save transactions paid for out-of-band.

    If transactions in templates are ordered by (modified) fee rate, then you could keep the first transactions in a template, which might help you preserve transactions that were paid for out-of-band if you have a peer that’s aware of the out-of-band payments. That could also work for things like mempool.space’s accelerator product, where there’s an API that will tell you about the prioritised transactions, provided a sufficient number of nodes use the API and prioritise transactions accordingly.
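    The “keep the first transactions” eviction idea could look roughly like this minimal sketch. Assuming template txs arrive ordered by (modified) feerate, trimming to a prefix under a memory budget keeps the txs the sender valued most, including any it prioritised due to out-of-band payment. `TrimTemplate` and the struct fields are illustrative names, not the PR’s code.

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <vector>

    struct TemplateTx { int64_t memory_usage; int id; };

    // Keep the longest prefix of a feerate-ordered template that fits the budget;
    // the low-feerate tail is the natural eviction candidate.
    std::vector<TemplateTx> TrimTemplate(const std::vector<TemplateTx>& txs, int64_t budget)
    {
        std::vector<TemplateTx> kept;
        int64_t used = 0;
        for (const auto& tx : txs) {
            if (used + tx.memory_usage > budget) break;  // evict this tx and everything after
            used += tx.memory_usage;
            kept.push_back(tx);
        }
        return kept;
    }

    int main()
    {
        std::vector<TemplateTx> txs{{100, 1}, {200, 2}, {300, 3}};
        auto kept = TrimTemplate(txs, 350);
        assert(kept.size() == 2);       // third tx would exceed the budget
        assert(kept.back().id == 2);    // highest-feerate prefix preserved
    }
    ```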

  16. ajtowns commented at 4:57 am on August 21, 2025: contributor

    Maybe a stupid question and I’m not understanding something, but what is the point of this if we have to wait for all transactions to be announced? Isn’t the whole idea to make sure our peers know about what we think will be the next block?

    If you include a transaction in a template you send to a peer, that implicitly announces the tx to that peer.

    The reason we delay announcing txs to a peer is to add some uncertainty about when precisely we received a transaction, and about the order in which we received similarly recent transactions. Knowing with precision when many nodes in the network first heard about a transaction allows you to make a very good guess about how the network is structured and who heard about the transaction first. That makes it easier to work out who created the transaction, which is bad for privacy, and possibly to identify weak points in the p2p network to better plan DoS attacks.

    So preserving the same timing uncertainty when adding a potentially new way to announce txs seems worthwhile.

  17. ajtowns force-pushed on Aug 21, 2025
  18. ajtowns commented at 5:52 am on August 21, 2025: contributor
    Bumped the size of templates up to 2MvB, and added latest_template_weight and latest_template_tx to gettemplateinfo to report on the size of the current template.
  19. DrahtBot added the label CI failed on Aug 21, 2025
  20. DrahtBot commented at 6:54 am on August 21, 2025: contributor

    🚧 At least one of the CI tasks failed. Task ARM, unit tests, no functional tests: https://github.com/bitcoin/bitcoin/runs/48553661676 LLM reason (✨ experimental): The failure is caused by a missing initializer for the constexpr static member ‘ALLOW_OVERSIZED_BLOCKS’ in miner.h.

    Try to run the tests locally, according to the documentation. However, a CI failure may still happen due to a number of reasons, for example:

    • Possibly due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.

    • A sanitizer issue, which can only be found by compiling with the sanitizer and running the affected test.

    • An intermittent issue.

    Leave a comment here, if you need help tracking down a confusing failure.

  21. ajtowns force-pushed on Aug 22, 2025
  22. DrahtBot removed the label CI failed on Aug 22, 2025
  23. DrahtBot added the label Needs rebase on Aug 28, 2025
  24. ajtowns force-pushed on Aug 29, 2025
  25. ajtowns commented at 1:05 pm on August 29, 2025: contributor
    Rebased over #33253 but not optimised to iterate over template txs efficiently.
  26. DrahtBot removed the label Needs rebase on Aug 29, 2025
  27. DrahtBot added the label CI failed on Aug 29, 2025
  28. DrahtBot commented at 2:37 pm on August 30, 2025: contributor
    from CI: [09:20:21.596] SUMMARY: MemorySanitizer: use-of-uninitialized-value /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/crypto/./crypto/siphash.cpp:132:5 in SipHashUint256(unsigned long, unsigned long, uint256 const&)
  29. ajtowns force-pushed on Sep 1, 2025
  30. DrahtBot removed the label CI failed on Sep 1, 2025
  31. ajtowns force-pushed on Sep 16, 2025
  32. ajtowns force-pushed on Sep 16, 2025
  33. ajtowns commented at 7:59 am on September 16, 2025: contributor
    Dropped the code to use our own prior templates for compact block reconstruction to keep this patchset simple.
  34. in src/rpc/net.cpp:1214 in 109a3982da outdated
    1209+                {RPCResult::Type::NUM, "latest_template_tx", "number of transactions in latest template"},
    1210+                {RPCResult::Type::NUM, "next_update", "the next update expressed in UNIX epoch time"},
    1211+            }
    1212+        },
    1213+        RPCExamples{
    1214+            HelpExampleCli("getnetworkinfo", "")
    


    waketraindev commented at 11:34 am on September 17, 2025:
    HelpExampleCli("gettemplateinfo", "")
    

    Fix for rpc examples string

  35. in src/rpc/net.cpp:1223 in 109a3982da outdated
    1218+            LOCK(cs_main);
    1219+            UniValue obj(UniValue::VOBJ);
    1220+            NodeContext& node = EnsureAnyNodeContext(request.context);
    1221+            TemplateStats stats;
    1222+            if (node.peerman) {
    1223+                stats = node.peerman->GetTemplateStats();
    


    waketraindev commented at 11:38 am on September 17, 2025:
    Prefer throwing an RPC exception here instead of returning default TemplateStats
  36. ajtowns force-pushed on Sep 18, 2025
  37. ajtowns force-pushed on Sep 19, 2025
  38. ajtowns commented at 3:52 pm on September 19, 2025: contributor
    Patchset to also request templates from outbound peers is at https://github.com/ajtowns/bitcoin/commits/202508-sendtemplate3/
  39. in src/net_processing.cpp:5180 in d269fdc664 outdated
    5137@@ -4960,6 +5138,8 @@ bool PeerManagerImpl::ProcessMessages(CNode* pfrom, std::atomic<bool>& interrupt
    5138     PeerRef peer = GetPeerRef(pfrom->GetId());
    5139     if (peer == nullptr) return false;
    5140 
    5141+    MaybeGenerateNewTemplate();
    


    mzumsande commented at 7:08 pm on September 19, 2025:
    Any reason to do this here instead of using the scheduler like, for example, ReattemptInitialBroadcast does?

    ajtowns commented at 6:32 am on September 21, 2025:
    Not really. When only generating the templates and not requesting them it simplifies the locking slightly, and it takes a cs_main lock to generate the block, so most other processing can’t continue anyway.
  40. mzumsande commented at 7:27 pm on September 19, 2025: contributor

    Would be helpful to link the BIP PR/Delving Post in the OP.

    I looked through the code, but will leave most low-level comments for later.

    On a conceptual level, this seems like a strong fingerprinting vector (tor/clearnet response would be identical if requests happen roughly at the same time and the same template is used). I’m not sure if we have given up on that, considering that there seems to be no lack of other existing fingerprinting methods.

  41. in src/net_processing.cpp:4423 in 27f783f339 outdated
    4407+        if (tx_relay == nullptr || !tx_relay->m_support_sendtemplate) return;
    4408+
    4409+        for (const auto& mytmp : m_templateman.my_templates) {
    4410+            if (mytmp.inv_sequence <= tx_relay->m_last_inv_sequence) {
    4411+                MakeAndPushMessage(pfrom, NetMsgType::TEMPLATE, mytmp.compact);
    4412+                break;
    


    waketraindev commented at 7:21 pm on September 20, 2025:

    NIT: Replace break; with return; for clarity and to simplify control flow.

    0                return;
    
  42. ajtowns commented at 6:20 am on September 21, 2025: contributor

    On a conceptual level, this seems like a strong fingerprinting vector (tor/clearnet response would be identical if requests happen roughly at the same time and the same template is used). I’m not sure if we have given up on that, considering that there seems to be no lack of other existing fingerprinting methods.

    This might be a little better than it seems at first glance, because the top 2MvB of txs is likely to be fairly common across nodes running similar mempool acceptance policies, and you don’t benefit from being able to send bait transactions at bottom-of-mempool feerates.

    It would be possible to generate separate templates at separate times for each private network you’re listening on; though the similarity between templates might still imply your two public interfaces are very nearby on the network. Just observing INV messages via a node that never announces INVs itself is probably equally effective at fingerprinting a node’s different interfaces.

    It might also be that there’s a fundamental tension between trying to relay transactions and blocks as quickly as possible and trying to make your node hard to identify – a node that’s only available over tor probably has good privacy, but bad latency for example; likewise a node that only relays blocks but not transactions.

    Anyway, open to suggestions on how to improve things here.

  43. ajtowns force-pushed on Oct 9, 2025
  44. node/miner: drop node/context include f70e3c2fb4
  45. node/miner: Log via MINER debug category
    This removes "CreateNewBlock" messages from logs by default. Also changes
    "-printpriority" argument to automatically enable -debug=miner logging.
    0394980ae9
  46. node/miner: allow generation of oversized templates
    Constructing a BlockAssembler with ALLOW_OVERSIZED_BLOCKS allows oversized blocks.
    4d81aae7ff
  47. blockencodings: check for coinbase before prefilling
    When the first tx in a block is a coinbase, always prefill it. But
    also support skipping prefill when the first tx is not a coinbase,
    for template sharing.
    013b340341
  48. protocol.h: Define template sharing protocol messages
    Also covers functional test setup.
    2302e599fa
  49. net_processing: add support for template sharing
    This adds support for SENDTEMPLATE, GETTEMPLATE, TEMPLATE p2p messages to generate
    and send block templates to our peers. It does not support requesting templates from
    our peers.
    674b3f377a
  50. rpc/net: provide experimental gettemplateinfo 18cc92b74b
  51. ajtowns force-pushed on Oct 10, 2025

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2025-10-10 15:13 UTC
