Maybe provide some kind of user feedback that the option is running (I did this to sanity-check that it was):
--- a/src/bench/bench.cpp
+++ b/src/bench/bench.cpp
@@ -58,6 +58,9 @@ void benchmark::BenchRunner::RunAll(const Args& args)
std::smatch baseMatch;
std::vector<ankerl::nanobench::Result> benchmarkResults;
+ if (args.one_iteration) {
+ std::cout << "Running with --one-iteration option, i.e. epochs(1).epochIterations(1)" << std::endl;
+ }
for (const auto& p : benchmarks()) {
if (!std::regex_match(p.first, baseMatch, reFilter)) {
Output:
$ time ./src/bench/bench_bitcoin -filter=WalletBalance*.* --one-iteration
Running with --one-iteration option, i.e. epochs(1).epochIterations(1)
Warning, results might be unstable:
* DEBUG defined
* CPU frequency scaling enabled: CPU 0 between 2,485.0 and 3,100.0 MHz
* Turbo is enabled, CPU frequency will fluctuate
Recommendations
* Make sure you compile for Release
* Use 'pyperf system tune' before benchmarking. See https://github.com/psf/pyperf
| ns/op | op/s | err% | total | benchmark
|--------------------:|--------------------:|--------:|----------:|:----------
(Unrelated, and not sure there is a use case, but --one-iteration could later be changed to an integer --iterations arg.)