src/bench/bench_bitcoin -printer=plot -filter=CCoinsView.* > plot.html
Tested scenarios (plus ideas that could be added later):
- access a cached coin
- add 200,000 coins to the cache (plot shows average time per coin)
- flush cache (plot shows average time per coin)
- load cache from disk
- access coins that are not cached
- access coins that are cached but dirty
- flush dirty coins
The flush test creates a temporary directory on disk.
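For reference, a minimal sketch of the cached-access scenario, assuming the framework's `benchmark::State` loop and two-argument `BENCHMARK` macro; the helper and bench names here are illustrative, not necessarily the exact ones in this PR:

```cpp
#include <bench/bench.h>
#include <amount.h>
#include <coins.h>
#include <script/script.h>
#include <uint256.h>

#include <cassert>

// Sketch only: put one spendable coin into the cache at a known outpoint.
static COutPoint AddTestCoin(CCoinsViewCache& cache, uint32_t n)
{
    Coin coin(CTxOut(1 * COIN, CScript() << OP_TRUE), /* height */ 1, /* coinbase */ false);
    COutPoint outpoint(uint256S("01"), n);
    cache.AddCoin(outpoint, std::move(coin), /* possible_overwrite */ false);
    return outpoint;
}

static void CCoinsViewCacheAccessCoin(benchmark::State& state)
{
    CCoinsView backend; // dummy backing view, same trick as bench/ccoins_caching.cpp
    CCoinsViewCache cache(&backend);
    const COutPoint outpoint{AddTestCoin(cache, 0)};

    while (state.KeepRunning()) {
        // Pure cache hit: the coin was added above and is never evicted.
        assert(!cache.AccessCoin(outpoint).IsSpent());
    }
}

// The second argument is the framework's iterations-per-second estimate,
// which is the knob for normalizing run time against the other benches.
BENCHMARK(CCoinsViewCacheAccessCoin, 1000 * 1000);
```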
I normalized the cache-access benchmark so that it takes about as long as the existing benches do on my machine.
I disabled scaling iterations for the cache-addition and flush benches. It doesn’t make sense to run more than one iteration per eval: the time per coin as part of a large flush is a more relevant metric than how often you can flush a single coin. At the same time, I didn’t want the benches to use too much RAM if someone sets -scaling to a high number. A sketch of the one-iteration-per-eval approach follows.
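This sketches the flush scenario under the same assumptions as above. A dummy in-memory backend stands in for the on-disk view in the temporary directory mentioned earlier (so only the cache-side cost is exercised here), and the per-coin figure comes from dividing the reported time by the batch size:

```cpp
#include <bench/bench.h>
#include <amount.h>
#include <coins.h>
#include <script/script.h>
#include <uint256.h>

static void CCoinsViewCacheFlush(benchmark::State& state)
{
    constexpr size_t N_COINS{200000}; // times N_CACHE_SCALE in the real bench
    CCoinsView backend; // dummy; the real bench flushes into a temp-dir database view
    CCoinsViewCache cache(&backend);
    for (uint32_t n = 0; n < N_COINS; ++n) {
        Coin coin(CTxOut(1 * COIN, CScript() << OP_TRUE), 1, false);
        cache.AddCoin(COutPoint(uint256S("01"), n), std::move(coin), false);
    }
    // One flush per eval: after the first iteration the cache is empty, so
    // any further iterations would measure a no-op. The plotted per-coin
    // time is the measured time divided by N_COINS.
    while (state.KeepRunning()) {
        cache.Flush();
    }
}

// With scaling disabled, a single iteration runs per eval.
BENCHMARK(CCoinsViewCacheFlush, 1);
```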
To test a bigger cache, increase `N_CACHE_SCALE` (default 1). The cache is about 40 MB by default; try `N_CACHE_SCALE=100` for more realistic scenarios, but note that the test then needs ~3x that amount of RAM.
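To make the sizing concrete, here is a back-of-the-envelope derived only from the numbers above (it assumes the whole ~40 MB is coin data, which is a simplification):

```cpp
#include <cstddef>

// 40 MB / 200,000 coins gives roughly 200 bytes per cached coin.
constexpr size_t N_COINS_BASE{200000};
constexpr size_t BYTES_PER_COIN{40000000 / N_COINS_BASE}; // ~200
constexpr size_t CacheBytes(size_t scale) { return N_COINS_BASE * scale * BYTES_PER_COIN; }

// N_CACHE_SCALE=100 -> ~4 GB of cache; with the ~3x overhead noted
// above, plan for roughly 12 GB of RAM during the run.
static_assert(CacheBytes(100) == 4000000000ULL, "scale 100 is ~4 GB");
```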
I’m doing a few things the bench framework doesn’t seem designed for, and I’d like some feedback before I refactor it in the wrong direction:
- share code with the test framework (should it support `--disable-test`?)
- clean up state between evals; e.g. I need to reset the coin cache for `CCoinsViewCacheAddCoinFresh` and `CCoinsViewCacheFlush`
- disable scaling iterations
- add a memory-scaling argument
- allow pausing the clock between iterations, e.g. to generate test coins on the fly rather than in bulk before the run (a hypothetical sketch follows this list)
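For that last point, something like this could work. To be clear, it is entirely hypothetical: `PauseTimer()`/`ResumeTimer()` do not exist in the current framework (they are modeled on Google Benchmark’s `PauseTiming()`/`ResumeTiming()`), and the bench body is just an illustration:

```cpp
#include <bench/bench.h>
#include <amount.h>
#include <coins.h>
#include <script/script.h>
#include <uint256.h>

static void CCoinsViewCacheAddCoinFresh(benchmark::State& state)
{
    CCoinsView backend;
    CCoinsViewCache cache(&backend);
    uint32_t n{0};
    while (state.KeepRunning()) {
        state.PauseTimer();  // HYPOTHETICAL API: stop the clock while a
                             // fresh test coin is generated on the fly...
        Coin coin(CTxOut(1 * COIN, CScript() << OP_TRUE), 1, false);
        COutPoint outpoint(uint256S("01"), n++);
        state.ResumeTimer(); // ...and restart it for the measured call.
        cache.AddCoin(outpoint, std::move(coin), false);
    }
}

BENCHMARK(CCoinsViewCacheAddCoinFresh, 100 * 1000);
```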