Scalar 4x64 performance improvements #453

pull peterdettman wants to merge 1 commit into bitcoin-core:master from peterdettman:scalar_opt, changing 1 file (+117 −61)
  1. peterdettman commented at 7:06 am on April 18, 2017: contributor

    (Not for immediate merge)

    In #452 I noted that sqr and mul take about the same time in my configuration (OSX, 64-bit, no-asm, -O3 -march=native), so this is a quick attempt to speed up _scalar_sqr. This initial commit rewrites _scalar_sqr_512 for an ~8% improvement in _scalar_sqr. Second opinions/measurements would be appreciated.

    It seems from the measurements that _scalar_reduce_512 is the real heavyweight here, so I’ll be trying to re-implement that next.

    I can rewrite in terms of macros (the current local code style) prior to any merge.

  2. dcousens commented at 7:07 am on April 18, 2017: contributor
    @peterdettman other than inlining, why is this faster?
  3. peterdettman commented at 7:38 am on April 18, 2017: contributor

    @dcousens The existing code is using macros, not functions, so I doubt there is any benefit from writing inline code per se. I also don’t think pre-loading a->d[0] etc. makes much difference (presumably the compiler does that anyway). Did you mean something else?

    My assumption would be that the improvement is coming from using several uint128_t accumulators instead of the existing “160-bit accumulator”; muladd2 is quite awkward as a result, and that is presumably where the extra time is going.

  4. peterdettman commented at 6:03 am on April 19, 2017: contributor

    A new commit rewrites secp256k1_scalar_reduce_512. The cumulative speed improvement for _scalar_inverse is now measured at >30%, and bench_sign measurements improved by ~7-8%.

    The “trick” used with p4 (see lines 588 and 611, and the accompanying comments) warrants careful review.

  5. sipa commented at 10:51 pm on April 25, 2017: contributor

    Benchmarked bench_verify with GMP and asm disabled on an i7-6820HQ CPU, pegged to 2.6GHz:

    • Master: 101μs
    • This PR: 98μs
  6. ofek commented at 11:33 pm on May 22, 2017: none
    Is this ready to be merged?
  7. peterdettman commented at 5:52 am on May 23, 2017: contributor
    No. It needs thorough review of the carry handling and modular reduction, which can have very subtle bugs that random tests won’t catch. I’d also like to get around to rewriting _scalar_mul in the same spirit, although I expect less improvement from that, and to putting the code back into a cleaner macro style like the original code.
  8. real-or-random cross-referenced this on Aug 7, 2020 from issue Use double-wide types for additions in scalar code by sipa
  9. peterdettman force-pushed on Dec 21, 2021
  10. Rewrite _scalar_reduce_512 2bc31d3141
  11. peterdettman force-pushed on Dec 23, 2021
  12. sipa commented at 3:55 pm on May 8, 2023: contributor
    @peterdettman Are you still interested in this? It seems like a reasonable improvement, and I’m willing to look into verifying that the double-correction trick always yields the right answer. I do think we’ll want it in macro-style though.
  13. real-or-random commented at 3:58 pm on May 8, 2023: contributor

    I do think we’ll want it in macro-style though.

    Or instead of macros, actual (inlinable) C functions with type checking and all that modern stuff. ;)

  14. real-or-random added the label performance on May 9, 2023

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin-core/secp256k1. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-12-23 01:15 UTC
