It is unclear why not all benchmarks are run, given that:
- they only run as a sanity check (fastest version; see the example after this list)
- no one otherwise runs them, not even CI
- issues have been missed due to this
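For context, a minimal sketch of what the sanity-check mode looks like on the command line (the binary path assumes the default CMake build directory and may differ locally; the filter pattern is only an illustration):

```sh
# Run each benchmark for a single iteration only, as a quick sanity check
build/bin/bench_bitcoin -sanity-check

# A regular (timed) run of a subset of benchmarks, for comparison
build/bin/bench_bitcoin -filter='SipHash.*'
```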
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.
For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/32310.
See the guideline for information on the review process.
| Type | Reviewers |
| --- | --- |
| ACK | l0rinc, BrandonOdiwuor |
| Concept ACK | i-am-yuvi |
| Stale ACK | hebasto |
If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.
Concept ACK
Why is the CI failing?
> Why is the CI failing?
Thx, excluded msvc for now, with the same approach as the temporary win-cross exclusion.
> Thx, excluded msvc for now, with the same approach as the temporary win-cross exclusion.
This can now be reverted after rebase.
In `DEBUG=1` configurations, the bench part seems to take 3x the time. Though, this should be fine, given the motivation in the pull description, and also the fact that there are other similarly slow tests.
Locally the sanity bench (when running `ctest`) is taking ~19s, roughly the same time as the second worst secp256k1 test. Under the `DEBUG` build type it takes ~140s, the next longest running test is ~80s; so I imagine some devs will wonder why running ctest is now taking > 2-3 minutes to complete. I checked that under Valgrind the runtime isn’t so long that it’d cause the job to fail due to timeout (although there might be a different issue causing that).
I think ideally we could just delete these benchmarks that take so long to run when changing the related code (#32149), especially given that nobody is running them anyway.
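For anyone wanting to reproduce these timings, a rough sketch (the `-R bench` pattern is an assumption; the actual registered test name may differ):

```sh
# Configure and build with the Debug build type
cmake -B build -DCMAKE_BUILD_TYPE=Debug
cmake --build build -j "$(nproc)"

# Run only the tests matching "bench"; ctest prints per-test wall times in its summary
ctest --test-dir build -R bench
```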
> Under the `DEBUG` build type it takes ~140s, the next longest running test is ~80s; so I imagine some devs will wonder why running ctest is now taking > 2-3 minutes to complete.
I wonder if this is relevant, given how long the functional tests take. And if someone doesn’t want to run all tests, they can trivially exclude all benchmarks via `ctest --exclude-regex ...`.
An alternative may be to allow running the individual benchmarks in parallel, similar to the unit tests.
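As a rough illustration of both options (again assuming the benchmark test is registered under a name matching `bench`; adjust the regex as needed):

```sh
# List the registered tests without running them, to find the benchmark test's name
ctest --test-dir build -N

# Skip the benchmark test(s) while running the rest of the suite in parallel
ctest --test-dir build -j "$(nproc)" --exclude-regex bench
```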
> I wonder if this is relevant, given how long the functional tests take.
My assumption would be that developers are running the unit tests regularly, far more often than they are running the entire functional test suite.