net: Provide block templates to peers on request #33191

ajtowns wants to merge 10 commits into bitcoin:master from ajtowns:202508-sendtemplate1, changing 19 files (+499 −42)
  1. ajtowns commented at 10:33 pm on August 14, 2025: contributor
    Implements the sending side of the SENDTEMPLATE, GETTEMPLATE, TEMPLATE message scheme for sharing block templates with peers via compact block encoding/reconstruction.
  2. DrahtBot added the label P2P on Aug 14, 2025
  3. DrahtBot commented at 10:33 pm on August 14, 2025: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage & Benchmarks

    For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33191.

    Reviews

    See the guideline for information on the review process. A summary of reviews will appear here.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #33300 (fuzz: compact block harness by Crypt-iQ)
    • #33029 (net: make m_mempool optional in PeerManager by naiyoma)
    • #30595 (kernel: Introduce initial C header API by TheCharlatan)
    • #30277 ([DO NOT MERGE] Erlay: bandwidth-efficient transaction relay protocol (Full implementation) by sr-gi)
    • #29641 (scripted-diff: Use LogInfo over LogPrintf [WIP, NOMERGE, DRAFT] by maflcko)
    • #29415 (Broadcast own transactions only via short-lived Tor or I2P connections by vasild)
    • #28676 (Cluster mempool implementation by sdaftuar)
    • #26812 (test: add end-to-end tests for CConnman and PeerManager by vasild)
    • #17783 (common: Disallow calling IsArgSet() on ALLOW_LIST options by ryanofsky)
    • #17581 (refactor: Remove settings merge reverse precedence code by ryanofsky)
    • #17580 (refactor: Add ALLOW_LIST flags and enforce usage in CheckArgFlags by ryanofsky)
    • #17493 (util: Forbid ambiguous multiple assignments in config file by ryanofsky)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  4. ajtowns force-pushed on Aug 15, 2025
  5. DrahtBot added the label CI failed on Aug 15, 2025
  6. DrahtBot commented at 0:47 am on August 15, 2025: contributor

    🚧 At least one of the CI tasks failed.
    Task lint: https://github.com/bitcoin/bitcoin/runs/48132397637
    LLM reason (✨ experimental): The CI failure is caused by a circular dependency detected during lint checks.

    Try to run the tests locally, according to the documentation. However, a CI failure may still happen due to a number of reasons, for example:

    • Possibly due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.

    • A sanitizer issue, which can only be found by compiling with the sanitizer and running the affected test.

    • An intermittent issue.

    Leave a comment here, if you need help tracking down a confusing failure.

  7. DrahtBot removed the label CI failed on Aug 15, 2025
  8. gmaxwell commented at 0:11 am on August 18, 2025: contributor

    I like this, but is there a way to avoid reintroducing the mempool message transaction spying vulnerability? Construct the block skipping txns that have not yet been announced?

    What are your thoughts on the size of the template? I don’t think it makes sense to just limit them to being one block in size; perhaps it would be reasonable for them to be 2 blocks in size so it can always run one block ahead?

  9. sipa commented at 0:25 am on August 18, 2025: member

    It may be hard to do that perfectly; for example, if a recent CPFP occurred that bumped a low-fee old parent up high enough to make it into the template, but the child has not been announced to the peer yet. The child can be skipped when sending, but the presence of the parent would stand out and reveal the sender knew about the child.

    If latency is not very important, there may be a simpler solution: compute the template, and schedule it for sending, but only send it as soon as all transactions in it have been announced.

  10. ajtowns commented at 2:31 am on August 18, 2025: contributor

    I like this, but is there a way to avoid reintroducing the mempool message transaction spying vulnerability? Construct the block skipping txns that have not yet been announced?

    Each template is annotated with the mempool’s GetSequence() and a template won’t be presented to a peer if its m_last_inv_sequence isn’t at least that value. Since we announce the highest-fee txs first when updating m_last_inv_sequence, that should mean they’ll have already seen any txs in the template via INV (or implicitly via INV’ing a child/descendant).

    Currently it will just immediately give an old template if the current template is too new; it would also be possible to delay providing the current template until m_last_inv_sequence is bumped when the next INV is sent, but that would have been a little more complicated and I wanted to keep things as simple as possible.
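
    As a minimal sketch of that gating, assuming a cached template annotated with the sequence number it was built at (only GetSequence() and m_last_inv_sequence are names taken from this discussion; the struct and function below are illustrative):

    ```cpp
    #include <cstdint>
    #include <memory>
    #include <primitives/block.h>

    // A cached template remembers the mempool sequence number at which it was built.
    struct CachedTemplate {
        uint64_t built_at_sequence;          // CTxMemPool::GetSequence() at build time
        std::shared_ptr<const CBlock> block; // the assembled template
    };

    // Only offer the template once the peer's INV progress has caught up with the
    // mempool state the template was built from: every tx in it will then already
    // have been announced, directly or via a child/descendant.
    bool TemplateSafeToSend(const CachedTemplate& tmpl, uint64_t peer_last_inv_sequence)
    {
        return peer_last_inv_sequence >= tmpl.built_at_sequence;
    }
    ```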

    It may be hard to do that perfectly; for example, if a recent CPFP occurred that bumped a low-fee old parent up high enough to make it into the template, but the child has not been announced to the peer yet.

    I think the above should also take care of this case already.

    I guess that reveals a little more ordering info: if you received two high fee txs (tx1 and tx2) in a single INV cycle, but generate a new template after receiving tx1 but before receiving tx2, then you’ll reveal the ordering of tx1 and tx2 to your peers. So perhaps aligning template generation with INV messages to inbound peers would be a worthwhile improvement. That would perhaps be a slight leak to outbounds, but nothing they couldn’t discover just by making an inbound connection to you.

    What are your thoughts on the size of the template? I don’t think it makes sense to just limit them to being one block in size; perhaps it would be reasonable for them to be 2 blocks in size so it can always run one block ahead?

    That seems pretty appealing; I didn’t do anything along those lines here mostly to keep things simple, and because waiting for cluster mempool before building even bigger collections of packages seemed plausible. If miners were to enforce their own soft blocksize limits (perhaps to reduce blockchain growth and marginally improve IBD, or as a cartel-style way of limiting supply to push prices up), then a 4M weight template might already account for 2 or more blocks’ worth of transactions (signet blocks are mostly limited to 1M weight, for example).

    A double-size block wouldn’t give you an ideal predictor for the next block: it would miss timelocked transactions and transactions that exceed ancestor/descendant limits for a single block, though equally a sufficiently clever peer could just include those in a template anyway, especially if consensus/locktime checks weren’t being performed. At least they could if they ever saw them in the first place, which is presumably doubtful.

    I’m not sure what level of template overlap is likely in the real world: if it’s just “everyone thinks these txs are valid, but some nodes don’t think these are, and others don’t think these other ones are”, maybe that just adds up to ~6MB of memory total across all your peers (even if there are simultaneous surges across both unusual types of transactions), plus maybe 15% to account for churn. On the other hand, at times there might conceivably be a lot of double spends in the top of various mempools (perhaps posted to the network by chainalysis-type nodes in order to try to discover the p2p network’s structure, to help figure out where txs came from or which nodes are most likely to be building templates for mining). If that’s the case, it might not be sensible to try to keep all your peers’ conflicting template txs in memory, and I think there’s probably a variety of different approaches that might be worth trying there.

  11. gmaxwell commented at 9:52 pm on August 18, 2025: contributor

    The timelocks could be relaxed analogously to increasing the size, however. With MTP we essentially know the lock criteria for one block out.

    I don’t really think the memory usage is a concern, because there is just not a need to constantly do this with all peers in the face of memory pressure.

    You could add an additional message that lets peers say “I have X fees in Y weight in my template”. If the max size they were targeting was standardized, then this would be a pretty good indicator that someone you haven’t been syncing against has something you might want to see. Perhaps multiple breakpoints, like a little fee/weight curve communicated in a dozen bytes.
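
    A purely hypothetical sketch of what such a summary message could carry (none of these names appear in the PR; only the breakpoints idea comes from the comment above):

    ```cpp
    #include <cstdint>
    #include <utility>
    #include <vector>

    // "I have X fees in Y weight": a few cumulative (weight, fee) breakpoints let a
    // peer advertise roughly how valuable its current template is in a dozen or so bytes.
    struct TemplateFeeSummary {
        // first: cumulative weight units covered (e.g. 1M, 2M, 3M, 4M WU)
        // second: cumulative fees in satoshis up to that weight
        std::vector<std::pair<uint32_t, uint64_t>> breakpoints;
    };
    ```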

  12. ismaelsadeeq commented at 10:24 am on August 19, 2025: member

    Nice idea.

    A couple of questions:

    1. How do nodes build the block template they share with peers? When they use only their mempool to build those templates, I don’t think transactions with divergent mempool policy will relay through the network and improve the compact block reconstruction process via this mechanism. Only peers whose outbound connections are miners will benefit.

    2. We don’t accept transactions in the mempool that have divergent policy because we deem them non-standard. How does this proposal prevent a scenario where an adversary stuffs transactions that are harmful to peers, forcing them to validate and save them? (I think this is where weak blocks are superior to this proposal, because they show that work has been done previously.)

    3. Is there a limit to the memory used to store the transactions that violate a node’s mempool policy but are received from peers? How do you decide which to keep and which to evict when the limit is reached? One approach could be linearization based on incentive compatibility and then evicting low-fee transactions, although this has the drawback of then not saving transactions that were paid for out-of-band.

  13. polespinasa commented at 9:17 pm on August 19, 2025: contributor

    If latency is not very important, there may be a simpler solution: compute the template, and schedule it for sending, but only send it as soon as all transactions in it have been announced.

    Maybe a stupid question and I’m not understanding something, but what is the point of this if we have to wait for all transactions to be announced? Isn’t the whole idea to make sure our peers know about what we think will be the next block?

  14. instagibbs commented at 6:03 pm on August 20, 2025: member

    @ismaelsadeeq

    How does this proposal prevent a scenario where an adversary stuffs transactions that are harmful to peers, forcing them to validate and save them

    There’s no need to fully validate these transactions you are given. If they violate your own policy you just won’t include them in your own mempool (or include them in a block template).

  15. ajtowns commented at 4:51 am on August 21, 2025: contributor
    1. How do nodes build the block template they share with peers?

    The same way they build a block template for the getblocktemplate RPC (well, except ignoring config options that might otherwise reduce the block size). See here.
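
    Roughly, the build step looks like the sketch below. BlockAssembler and the ALLOW_OVERSIZED_BLOCKS flag are named in this PR’s commits, but the exact constructor and CreateNewBlock() signatures here are assumptions and may differ from the actual code:

    ```cpp
    #include <node/miner.h>

    #include <memory>

    // Build a template for sharing with peers, via the same BlockAssembler path as
    // the getblocktemplate RPC, but with default (non-reduced) size limits.
    // (The PR's ALLOW_OVERSIZED_BLOCKS flag, which lets the template exceed a local
    // -blockmaxweight setting, is not shown here since its exact plumbing may differ.)
    std::unique_ptr<node::CBlockTemplate> BuildTemplateForPeers(Chainstate& chainstate,
                                                                const CTxMemPool& mempool)
    {
        node::BlockAssembler::Options options; // defaults; ignore size-reducing config
        node::BlockAssembler assembler{chainstate, &mempool, options};
        return assembler.CreateNewBlock();     // the dummy coinbase is dropped before relay
    }
    ```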

    When they use only their mempool to build those templates, I don’t think transactions with divergent mempool policy will relay through the network and improve the compact block reconstruction process via this mechanism.

    This doesn’t aim to improve tx relay in most cases. The scenario this helps with is when a subset of the network has a relaxed policy compared to your node (eg, the librerelay nodes on the network vs you; or the core nodes on the network vs a default knots node; or nodes that have adopted newer policy/replacement rules (eg TRUC, lower min fee, pay to anchor, etc) vs an older/non-enabled node). In that case, if you happen to peer with one of those nodes that have a relaxed policy, and you request templates from that node, you’ll be able to reconstruct blocks mined with that relaxed policy without needing a round trip.

    (The main case where it might improve relay is when a tx was evicted from many but not all mempools due to size limits, but eventually mempools clear and it’s eligible again, and it has not already been widely rebroadcast for other reasons. In some circumstances it also might help with replacement cycling attacks, allowing the victim transaction to automatically be rebroadcast once the conflicting tx has been replaced.)

    2. We don’t accept transactions in the mempool that have divergent policy because we deem them non-standard. How does this proposal prevent a scenario where an adversary stuffs transactions that are harmful to peers, forcing them to validate and save them?

    Note that this PR does not include code for requesting templates, only providing them; so there’s no change here from this PR alone.

    The proof of concept code linked from https://delvingbitcoin.org/t/sharing-block-templates/1906/ that does request templates grabs templates from outbound/manual peers, and validates any txs in templates that aren’t in the mempool according to the usual standardness rules, adding them to the mempool if they pass. That’s no more harmful (CPU-wise) than receiving the same txs via INV/GETDATA, and the memory usage is limited by only keeping one template per selected peer, and rejecting templates that are more than 1MvB.

    If you were to validate txs in templates against consensus rules only (without the additional standardness rules), then that could be more costly (you’d be missing the tx size limit and BIP 54 rules so might be hitting significant amounts of hashing), though. The proof of concept code doesn’t do that.
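
    The proof-of-concept behaviour described above amounts to something like this sketch (the helper and its structure are assumptions, not code from the PoC; the point is that unknown template txs only go through the normal mempool policy checks):

    ```cpp
    #include <txmempool.h>
    #include <util/time.h>
    #include <validation.h>

    // Feed one transaction from a received template through the usual policy checks.
    void ProcessTemplateTransaction(Chainstate& active_chainstate, CTxMemPool& mempool,
                                    const CTransactionRef& tx)
    {
        LOCK(cs_main);
        if (mempool.exists(GenTxid::Txid(tx->GetHash()))) return; // already known

        // AcceptToMemoryPool applies the usual standardness rules, so a non-standard
        // tx from a template is rejected with no more work than if it had arrived
        // via INV/GETDATA; if it passes, it simply joins the mempool.
        AcceptToMemoryPool(active_chainstate, tx, /*accept_time=*/GetTime(),
                           /*bypass_limits=*/false, /*test_accept=*/false);
    }
    ```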

    3. Is there a limit to the memory used to store the transactions that violate a node’s mempool policy but are received from peers? How do you decide which to keep and which to evict when the limit is reached? One approach could be linearization based on incentive compatibility and then evicting low-fee transactions, although this has the drawback of then not saving transactions that were paid for out-of-band.

    If transactions in templates are ordered by (modified) fee rate, then you could keep the first transactions in a template, which might help you preserve transactions that were paid for out-of-band if you have a peer that’s aware of the out-of-band payments. That could also work for things like mempool.space’s accelerator product, where there’s an API that will tell you about the prioritised transactions, provided a sufficient number of nodes use the API and prioritise transactions accordingly.
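
    As a sketch of that idea (all names below are illustrative, not from the PR): if template txs arrive ordered by descending modified feerate, keeping a fixed-weight prefix retains the txs the sender valued most, including any it prioritised because of an out-of-band payment.

    ```cpp
    #include <consensus/validation.h>
    #include <primitives/transaction.h>

    #include <vector>

    // Keep only the highest-valued prefix of a feerate-ordered template.
    std::vector<CTransactionRef> KeepTemplatePrefix(const std::vector<CTransactionRef>& template_txs,
                                                    int64_t max_weight)
    {
        std::vector<CTransactionRef> kept;
        int64_t used_weight{0};
        for (const auto& tx : template_txs) {
            const int64_t tx_weight{GetTransactionWeight(*tx)};
            if (used_weight + tx_weight > max_weight) break; // drop the low-feerate tail
            used_weight += tx_weight;
            kept.push_back(tx);
        }
        return kept;
    }
    ```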

  16. ajtowns commented at 4:57 am on August 21, 2025: contributor

    Maybe a stupid question and I’m not understanding something, but what is the point of this if we have to wait for all transactions to be announced? Isn’t the whole idea to make sure our peers know about what we think will be the next block?

    If you include a transaction in a template you send to a peer, that implicitly announces the tx to that peer.

    The reason we delay announcing txs to a peer is to add some uncertainty about precisely when we received a transaction, and about the order in which we received similarly recent transactions: knowing with precision when many nodes in the network first heard about a transaction allows you to make a very good guess about how the network is structured and who heard about it first, which makes it easier to work out who created the transaction (bad for privacy), and possibly to identify weak points in the p2p network to better plan DoS attacks.

    So preserving the same timing uncertainty when adding a potentially new way to announce txs seems worthwhile.

  17. ajtowns force-pushed on Aug 21, 2025
  18. ajtowns commented at 5:52 am on August 21, 2025: contributor
    Bumped the size of templates up to 2MvB, and added latest_template_weight and latest_template_tx to gettemplateinfo to report on the size of the current template.
  19. DrahtBot added the label CI failed on Aug 21, 2025
  20. DrahtBot commented at 6:54 am on August 21, 2025: contributor

    🚧 At least one of the CI tasks failed.
    Task ARM, unit tests, no functional tests: https://github.com/bitcoin/bitcoin/runs/48553661676
    LLM reason (✨ experimental): The failure is caused by a missing initializer for the constexpr static member ‘ALLOW_OVERSIZED_BLOCKS’ in miner.h.

    Try to run the tests locally, according to the documentation. However, a CI failure may still happen due to a number of reasons, for example:

    • Possibly due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.

    • A sanitizer issue, which can only be found by compiling with the sanitizer and running the affected test.

    • An intermittent issue.

    Leave a comment here, if you need help tracking down a confusing failure.

  21. ajtowns force-pushed on Aug 22, 2025
  22. DrahtBot removed the label CI failed on Aug 22, 2025
  23. DrahtBot added the label Needs rebase on Aug 28, 2025
  24. node/miner: drop node/context include af6092dd3c
  25. node/miner: Log via MINER debug category
    This removes "CreateNewBlock" messages from logs by default. Also changes
    "-printpriority" argument to automatically enable -debug=miner logging.
    ab4dc8adac
  26. node/miner: allow generation of oversized templates
    Constructing a BlockAssembler with ALLOW_OVERSIZED_BLOCKS allows oversized blocks.
    12b56acd10
  27. blockencoding: Move mempool to InitData argument
    This allows default initialisation of a PartiallyDownloadedBlock.
    59c80938f7
  28. protocol.h: Define template sharing protocol messages
    Also covers functional test setup.
    1e94cd7313
  29. net_processing: add support for template sharing
    This adds support for SENDTEMPLATE, GETTEMPLATE, TEMPLATE p2p messages to generate
    and send block templates to our peers. It does not support requesting templates from
    our peers.
    a9cc3b9108
  30. ajtowns force-pushed on Aug 29, 2025
  31. ajtowns commented at 1:05 pm on August 29, 2025: contributor
    Rebased over #33253, but not yet optimised to iterate over template txs efficiently.
  32. DrahtBot removed the label Needs rebase on Aug 29, 2025
  33. DrahtBot added the label CI failed on Aug 29, 2025
  34. DrahtBot commented at 2:37 pm on August 30, 2025: contributor
    from CI: [09:20:21.596] SUMMARY: MemorySanitizer: use-of-uninitialized-value /ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/crypto/./crypto/siphash.cpp:132:5 in SipHashUint256(unsigned long, unsigned long, uint256 const&)
  35. blockencodings: Generalise vExtraTxns
    Allows extra transactions to be provided from structures other than a
    vector of CTransactionRefs.
    de0e954b12
  36. net_processing: use our own old templates to help reconstruct blocks 463dffad55
  37. rpc/net: provide experimental gettemplateinfo a4285ac68e
  38. net_processing: drop dummy coinbase from templates
    Shared templates can never predict a future block's coinbase, so don't
    waste bandwidth including it; likewise don't include the first transaction
    in the compact block prefill when it's not a coinbase.
    1044dc8d37
  39. ajtowns force-pushed on Sep 1, 2025
  40. DrahtBot removed the label CI failed on Sep 1, 2025
