RFC: Compact Block Reconstruction Macro Benchmark Suite #32131

issue — l0rinc opened this issue on March 24, 2025
  1. l0rinc commented at 11:27 am on March 24, 2025: contributor

    Context and Motivation

    Compact blocks significantly improve block propagation efficiency by reducing bandwidth usage and latency during transmission. Precise benchmarking of compact block reconstruction performance is crucial for detecting regressions or improvements across releases, especially when modifying related code paths affecting mempool behavior or block relay performance.

    Recent analysis by B10C highlights significant variance in compact block reconstruction efficiency depending on mempool conditions. Specifically, during periods of high mempool congestion, reconstruction success rates frequently dropped below 50%, requiring additional transaction requests and significantly increasing reconstruction times. Additionally, enabling mempoolfullrbf was found to notably improve reconstruction efficiency, underscoring the importance of consistent mempool policies.

    Proposal: Compact Block Reconstruction Macro Benchmark Suite

    The goal is to create a robust macro benchmark suite measuring the performance of compact block reconstruction across Bitcoin Core releases and different node configurations. This suite would provide consistent and actionable data when reviewing changes to relevant code paths.

    As part of our macro benchmarking efforts - and as Gregory Sanders also outlined in recent Benchmarking meeting notes - we propose:

    • Setting up a node by syncing up to a known block height (e.g., block 840,000, ideally via quick AssumeUTXO seeding).
    • Fetching the next few blocks from the network (lazy-init from network, caching the blocks locally) and adding their transactions to the local mempool.
    • Replaying compact block announcements and measuring reconstruction performance (multiple times for consistent and statistically meaningful results, given variability compared to stable micro-benchmarks).
    • Testing under different mempool scenarios and configurations (e.g., varying mempoolfullrbf settings):
      • Fully populated mempool (asserting the mempool contains every transaction).
      • Single transaction missing (note that fetching the missing transaction requires real, and therefore unreliable, network traffic unless we also add a local node that serves it).
      • Empty mempool.
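    The replay-and-measure step above could be sketched roughly as follows. This is a minimal illustration, not the proposed implementation: the `reconstruct` callable stands in for whatever hypothetical harness code replays one cached compact block announcement against a bitcoind instance and waits for the block to connect.

```python
# Hedged sketch of the "replay multiple times and report summary
# statistics" loop from the proposal. The workload here is a dummy
# stand-in; a real harness would drive a node over P2P/RPC.
import statistics
import time

def measure_reconstruction(reconstruct, runs=10):
    """Time `reconstruct` (a zero-arg callable replaying one compact
    block announcement) `runs` times; return stats in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        reconstruct()
        samples.append(time.perf_counter() - start)
    return {
        "min": min(samples),
        "median": statistics.median(samples),
        "max": max(samples),
    }

# Demo with a stand-in workload instead of a real node round trip.
stats = measure_reconstruction(lambda: sum(range(10_000)), runs=5)
print(sorted(stats))  # ['max', 'median', 'min']
```

    Reporting min/median/max (rather than a single run) is what makes the results statistically meaningful given the run-to-run variability noted above.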

    RFC / Questions:

    • Should this benchmark run periodically (weekly, pre-release), or should it be triggered automatically via GitHub labels on relevant PRs?
    • Are there additional configurations or mempool states worth considering (such as the mempoolfullrbf discovery above)?
    • Should we measure performance for arbitrary block heights and varying subsequent block counts?
  2. maflcko added the label Brainstorming on Mar 24, 2025
  3. maflcko added the label Tests on Mar 24, 2025
  4. maflcko commented at 11:50 am on March 24, 2025: member
    • Setting up a node by syncing up to a known block height (e.g., block 840,000, ideally via quick AssumeUTXO seeding).

    • Fetching the next few blocks from the network (lazy-init from network, caching the blocks locally) and adding their transactions to the local mempool.

    • Replaying compact block announcements and measuring reconstruction performance

    Maybe I am missing something obvious, but I don’t think it is possible to detect the effects of mempoolfullrbf this way (one motivation for this benchmark). If you linearly replay blocks from the chain into the mempool, there won’t be any conflicts, so there can’t be any replacements, so any rbf policy settings won’t have any effect. I don’t think it is possible to benchmark this other than on the live network (the way it was done in https://delvingbitcoin.org/t/stats-on-compact-block-reconstructions/1052). If you really wanted to do it offline, you’d have to take a “real” mempool snapshot, at the height you are interested in, from an online node with the policy settings you are interested in.
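    The snapshot workaround could look roughly like the sketch below. It relies on two real Bitcoin Core features — the `savemempool` RPC, which dumps the current mempool to `mempool.dat`, and `-persistmempool` (on by default), which loads that file at startup — but the datadir paths and the helper function are purely illustrative.

```python
# Hedged sketch: capture a "real" mempool on a live node and seed the
# benchmark node with it. `savemempool` and -persistmempool exist in
# Bitcoin Core; the paths and this helper are hypothetical.
import shlex

def snapshot_cmds(live_datadir, bench_datadir):
    """Build the shell commands; actually running them requires a
    synced live node with the desired policy settings."""
    return [
        f"bitcoin-cli -datadir={shlex.quote(live_datadir)} savemempool",
        f"cp {shlex.quote(live_datadir)}/mempool.dat {shlex.quote(bench_datadir)}/",
        f"bitcoind -datadir={shlex.quote(bench_datadir)} -persistmempool=1",
    ]

for cmd in snapshot_cmds("/data/live", "/data/bench"):
    print(cmd)
```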

  5. 0xB10C commented at 11:53 am on March 24, 2025: contributor

    Replaying compact block announcements and measuring reconstruction performance (multiple times for consistent and statistically meaningful results, given variability compared to stable micro-benchmarks)

    What is the exact metric you are trying to measure? It’s not 100% clear to me if you are trying to measure performance as in “speed” or performance as in “reconstructions without a round trip”.

    Fetching the next few blocks from the network (lazy-init from network, caching the blocks locally) and adding their transactions to the local mempool.

    Adding multiple future blocks to the mempool is probably not as easy as one might think. Currently about 10% of transactions (https://transactionfee.info/charts/transactions-height-based-locktime/) have a height-based locktime set. This means you won’t be able to add these transactions and their children to your mempool. This will automatically cause transactions to be missing during compact block reconstruction.
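    The height branch of Bitcoin Core’s finality rule (`IsFinalTx`) illustrates why: a transaction whose nLockTime names a future block height is non-final and cannot enter the mempool yet. The sketch below models only that branch, ignoring the time-based branch and the all-sequences-final escape hatch.

```python
# Simplified model of the height branch of IsFinalTx: a tx is final
# only if its nLockTime is below the height of the block being built.
# The time-based branch and sequence-number escape hatch are omitted.
LOCKTIME_THRESHOLD = 500_000_000  # below: block height; at/above: unix time

def is_final_at_height(n_lock_time, next_block_height):
    if n_lock_time == 0:
        return True
    if n_lock_time >= LOCKTIME_THRESHOLD:
        return True  # time-based locktime, not modeled here
    return n_lock_time < next_block_height

# A tx locked to height 840_002 can't be accepted while building 840_001:
print(is_final_at_height(840_002, 840_001))  # False
print(is_final_at_height(840_000, 840_001))  # True
```

    So a node at height 840,000 cannot pre-load transactions from block 840,002 that are locked to that height, and they (plus their descendants) will be missing at reconstruction time.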

  6. sipa commented at 12:18 pm on March 24, 2025: member
    The discussion at coredev that (I believe) led to this was focused on measuring the runtime of end-to-end block acceptance for 100% reconstructible blocks.

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2025-03-28 15:12 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me