Running a benchmark 10 times, twice (before and after), for #22974 and then hand-editing the output to remove the warnings and recommendations brought home the point that it would be nice to do that automatically. This PR adds a -quiet arg to silence warnings and recommendations in the benchmark output, and an -iters=<n> arg to run each benchmark for the given number of iterations.
$ src/bench/bench_bitcoin -?
Options:

  -?
       Print this help message and exit

  -asymptote=<n1,n2,n3,...>
       Test asymptotic growth of the runtime of an algorithm, if supported
       by the benchmark

  -filter=<regex>
       Regular expression filter to select benchmark by name (default: .*)

  -iters=<n>
       Iterations of each benchmark to run (default: 1)

  -list
       List benchmarks without executing them

  -output_csv=<output.csv>
       Generate CSV file with the most important benchmark results

  -output_json=<output.json>
       Generate JSON file with all benchmark results

  -quiet
       Silence warnings and recommendations in benchmark results
Examples:
$ ./src/bench/bench_bitcoin -filter=AddrManGood -iters=5 -quiet
| ns/op | op/s | err% | total | benchmark
|--------------------:|--------------------:|--------:|----------:|:----------
| 2,538,968,665.00 | 0.39 | 15.9% | 12.12 | `AddrManGood`
| 2,536,901,200.00 | 0.39 | 13.0% | 13.73 | `AddrManGood`
| 2,337,840,590.00 | 0.43 | 3.9% | 12.07 | `AddrManGood`
| 1,997,515,936.00 | 0.50 | 2.6% | 10.09 | `AddrManGood`
| 2,217,950,210.00 | 0.45 | 1.3% | 11.30 | `AddrManGood`
$ ./src/bench/bench_bitcoin -filter=PrevectorDes*.* -iters=2 -quiet=1
| ns/op | op/s | err% | total | benchmark
|--------------------:|--------------------:|--------:|----------:|:----------
| 8,062.56 | 124,030.15 | 5.7% | 0.09 | `PrevectorDeserializeNontrivial`
| 7,784.81 | 128,455.29 | 1.5% | 0.09 | `PrevectorDeserializeNontrivial`
| 356.44 | 2,805,497.65 | 1.5% | 0.00 | `PrevectorDeserializeTrivial`
| 354.52 | 2,820,715.33 | 0.9% | 0.00 | `PrevectorDeserializeTrivial`
| 241.27 | 4,144,791.38 | 0.9% | 0.00 | `PrevectorDestructorNontrivial`
| 241.45 | 4,141,658.77 | 0.9% | 0.00 | `PrevectorDestructorNontrivial`
| 146.64 | 6,819,400.81 | 0.9% | 0.00 | `PrevectorDestructorTrivial`
| 147.98 | 6,757,806.43 | 0.6% | 0.00 | `PrevectorDestructorTrivial`
$ ./src/bench/bench_bitcoin -filter=PrevectorDes*.* -iters=-1 -quiet=0
$ ./src/bench/bench_bitcoin -filter=PrevectorDes*.* -iters=0 -quiet=0
$ ./src/bench/bench_bitcoin -filter=PrevectorDes*.* -iters=1 -quiet=0
Warning, results might be unstable:
* DEBUG defined
* CPU frequency scaling enabled: CPU 0 between 400.0 and 3,100.0 MHz
* Turbo is enabled, CPU frequency will fluctuate
Recommendations
* Make sure you compile for Release
* Use 'pyperf system tune' before benchmarking. See https://github.com/psf/pyperf
| ns/op | op/s | err% | total | benchmark
|--------------------:|--------------------:|--------:|----------:|:----------
| 6,204.87 | 161,163.71 | 15.2% | 0.07 | :wavy_dash: `PrevectorDeserializeNontrivial` (Unstable with ~1.0 iters. Increase `minEpochIterations` to e.g. 10)
| 214.33 | 4,665,680.65 | 0.1% | 0.00 | `PrevectorDeserializeTrivial`
| 257.23 | 3,887,584.03 | 8.6% | 0.00 | :wavy_dash: `PrevectorDestructorNontrivial` (Unstable with ~43.5 iters. Increase `minEpochIterations` to e.g. 435)
| 151.34 | 6,607,846.82 | 1.9% | 0.00 | `PrevectorDestructorTrivial`