The current `ConnectBlock` benchmarks in `bench/connectblock.cpp` do not reflect realistic mainnet workloads due to three key issues:
1. Unrealistic block composition
Every benchmarked block is constructed with a highly artificial transaction pattern:
```cpp
/*
 * - Each transaction has the same number of inputs and outputs
 * - All Taproot inputs use simple key path spends (no script path spends)
 * - All signatures use SIGHASH_ALL (default sighash)
 * - Each transaction spends all outputs from the previous transaction
 */
```
This setup avoids realistic UTXO set fragmentation and script diversity. The benchmark effectively measures validation of a synthetic “ladder” of transactions rather than a block resembling mainnet traffic.
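For contrast, here is a minimal sketch of what per-transaction variation could look like. `TxShape` and `RandomTxShapes` are hypothetical, illustration-only helpers (not part of the existing benchmark), and the distributions are placeholders rather than mainnet-derived numbers:

```cpp
#include <cstdint>
#include <iterator>
#include <random>
#include <vector>

// Hypothetical shape of a single transaction. The current benchmark pins all
// of these fields to one value for every transaction in the block.
struct TxShape {
    int num_inputs;
    int num_outputs;
    bool taproot_script_path; // script path spend instead of a key path spend
    uint8_t sighash;          // sighash byte attached to the signatures
};

// Draw a mixed set of shapes so a generated block is not a uniform "ladder".
std::vector<TxShape> RandomTxShapes(size_t count, uint64_t seed)
{
    std::mt19937_64 rng{seed};
    std::uniform_int_distribution<int> ins{1, 8};
    std::uniform_int_distribution<int> outs{1, 8};
    std::bernoulli_distribution script_path{0.2};
    constexpr uint8_t sighashes[]{0x01 /*ALL*/, 0x03 /*SINGLE*/, 0x81 /*ALL|ANYONECANPAY*/};
    std::uniform_int_distribution<size_t> pick{0, std::size(sighashes) - 1};

    std::vector<TxShape> shapes;
    shapes.reserve(count);
    for (size_t i = 0; i < count; ++i) {
        shapes.push_back({ins(rng), outs(rng), script_path(rng), sighashes[pick(rng)]});
    }
    return shapes;
}
```

Sampling these distributions from historical block data, rather than picking them arbitrarily, would make the generated mix representative of actual mainnet traffic.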
2. Unrealistic UTXO cache state
Before benchmarking, the code creates a block that produces the outputs, then immediately spends them all in the benchmark block. This keeps every coin the benchmark touches hot in the in-memory cache (`CoinsTip()`); see the sketch after the list below.
In reality:
- Many UTXO lookups hit LevelDB and require disk access.
- Cache misses and eviction policies significantly impact block validation cost.
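One way to make the cache state less favorable is sketched below, under the assumption that the benchmark fixture exposes the active chainstate the way other Bitcoin Core test setups do; the `m_node.chainman` accessor path is an assumption about this benchmark, not quoted from it:

```cpp
// Sketch only: the exact accessor path in bench/connectblock.cpp may differ.
Chainstate& chainstate{test_setup.m_node.chainman->ActiveChainstate()};

// After creating the block that funds the benchmark block's inputs, write the
// coins cache out and empty it, so the measured ConnectBlock has to fetch the
// spent coins through the backing on-disk view instead of finding them all in
// CoinsTip(). (The OS page cache may still absorb part of the cost.)
chainstate.ForceFlushStateToDisk();
```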
3. Unrealistic repetition
Each benchmark repeatedly validates the same synthetic block:
```cpp
const auto& test_block{CreateTestBlock(test_setup, keys, outputs)};
bench.unit("block").run([&] {
    /* ... */
});
```
There is no variability in transaction graph, script mix, or UTXO evolution across iterations. As a result, the benchmark never exercises cache churn, block-to-block dependency patterns, or realistic workload diversity.
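A minimal sketch of one mitigation, assuming `CreateTestBlock` returns a `CBlock` and that the existing key/output helpers can be re-run or reshuffled per block (both assumptions about code not shown here):

```cpp
// Sketch only: pre-build several structurally different blocks and rotate
// through them so consecutive iterations do not revalidate an identical block.
std::vector<CBlock> test_blocks;
for (int i = 0; i < 8; ++i) {
    // Assumption: keys/outputs are regenerated or reshuffled here so each
    // block gets a different transaction graph, script mix, and sighash mix.
    test_blocks.push_back(CreateTestBlock(test_setup, keys, outputs));
}

size_t iteration{0};
bench.unit("block").run([&] {
    const CBlock& block{test_blocks[iteration++ % test_blocks.size()]};
    // ... validate `block` exactly as the current benchmark body does ...
});
```

This still does not model block-to-block UTXO evolution, but it at least keeps the validator from seeing one fixed block on every iteration.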
Why this matters
These issues mean the benchmark results do not reflect real-world `ConnectBlock` performance. Instead, they measure a best-case, memory-only workload on a synthetic block structure.