Issue #29098 recommends against using the "best (smallest) set of k signatures". This change therefore falls back to the original algorithm, which simply uses the first k available signatures to satisfy a k-of-n multisig. Without this, the unit tests time out on the 999-of-999 case.
Profiling on a Mac confirmed that the most time-consuming function is internal::InputResult ProduceInput.
The following tests exercise the affected code:
- ctest --test-dir build
or, to be specific:
build/src/test/test_bitcoin --run_test=descriptor_tests
build/src/test/test_bitcoin --run_test=miniscript_tests
- build/test/functional/test_runner.py --extended
or, to be specific: build/test/functional/test_runner.py test/functional/wallet_taproot.py --tmpdir /tmp
- env -i HOME="$HOME" PATH="$PATH" USER="$USER" bash -c 'MAKEJOBS="-j8" FILE_ENV="./ci/test/00_setup_env_native_fuzz.sh" ./ci/test_run_all.sh'
Timing comparison: for the original wallet_taproot.py test, the runtime drops from 10 seconds to 7 seconds on an Apple M1 chipset (Sequoia 15.1.1).
Also, after cherry-picking PR #28212, which identified this performance issue, the test went from timing out to completing in 36 seconds.