Separate magnitude/normalization/… checking/propagation from implementations #1060

issue opened by sipa on January 3, 2022
  1. sipa commented at 7:10 pm on January 3, 2022: contributor

    Right now, it is not exactly clear where the definition of magnitude/normalization rules belongs:

    • On the one hand, both implementations individually track these values separately (and have their own rules)
    • On the other hand, callers of the field code are expected to simultaneously satisfy all of them.

    With #967 things may become even more complicated, as it adds a “precomputed” dimension (= reduced to 4 64-bit limbs instead of 5), though one where magnitudes are not a concern.

    So I think that we should make these rules part of the generic field interface, rather than part of the implementations. To avoid duplication, there could be a field_debug.h that implements these by “wrapping” the underlying field. It could be done by giving all field types/functions a suffix, and if debug is in use, have the real field functions forward to those, and if not, just #define fn fn_suffix for all.
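
    A minimal sketch of that forwarding pattern (the `_inner` suffix, `field_debug.h`, and the specific rule checked are illustrative, not existing names in the repo):

    ```c
    /* field.h (sketch): the generic name either forwards through a
     * checking wrapper or maps directly onto the implementation. */
    #ifdef VERIFY
    /* field_debug.h: the generic name is a wrapper that enforces the
     * generic magnitude rules, then forwards to the implementation. */
    static void secp256k1_fe_add(secp256k1_fe *r, const secp256k1_fe *a) {
        /* example generic rule: stay within the documented magnitude cap */
        VERIFY_CHECK(r->magnitude + a->magnitude <= 32);
        secp256k1_fe_add_inner(r, a);
        r->magnitude += a->magnitude;
        r->normalized = 0;
    }
    #else
    /* No debug layer: the generic name is the implementation. */
    #define secp256k1_fe_add secp256k1_fe_add_inner
    #endif
    ```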

    One reason for not doing this would be if we want to make group logic dependent on what the used field is (e.g. a field with laxer magnitude requirements could result in the group code emitting fewer normalization calls).

  2. real-or-random commented at 10:17 am on January 4, 2022: contributor

    Making the rules part of the generic interface seems a reasonable simplification given that we currently use only magnitudes below ~16 (https://github.com/bitcoin-core/secp256k1/pull/1028#discussion_r762599882), but you could argue that we would then be optimizing for 32-bit code. Most users are (probably) on 64-bit platforms, and the magnitude limit there is higher…

    > So I think that we should make these rules part of the generic field interface, rather than part of the implementations. To avoid duplication, there could be a field_debug.h that implements these by “wrapping” the underlying field. It could be done by giving all field types/functions a suffix, and if debug is in use, have the real field functions forward to those, and if not, just #define fn fn_suffix for all.

    You mean the wrappers check the rules, i.e., they do what is currently done in _fe_verify?
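
    For context, the per-implementation check in question looks roughly like this today (a lightly simplified rendering of the 5x52 VERIFY logic, not the verbatim code; the 10x26 version is analogous):

    ```c
    #ifdef VERIFY
    static void secp256k1_fe_verify(const secp256k1_fe *a) {
        const uint64_t *d = a->n;
        int m = a->normalized ? 1 : 2 * a->magnitude;
        int r = 1;
        /* Per-limb bounds scale with the magnitude. */
        r &= (d[0] <= 0xFFFFFFFFFFFFFULL * m);
        r &= (d[1] <= 0xFFFFFFFFFFFFFULL * m);
        r &= (d[2] <= 0xFFFFFFFFFFFFFULL * m);
        r &= (d[3] <= 0xFFFFFFFFFFFFFULL * m);
        r &= (d[4] <= 0x0FFFFFFFFFFFFULL * m);
        r &= (a->magnitude >= 0 && a->magnitude <= 32);
        /* (the real function additionally checks full reduction mod p
         *  when a->normalized is set) */
        VERIFY_CHECK(r == 1);
    }
    #endif
    ```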

    P.S.: Dreaming of dependent types for magnitude..

  3. real-or-random commented at 10:55 am on January 4, 2022: contributor

    > With #967 things may become even more complicated, as it adds a “precomputed” dimension (= reduced to 4 64-bit limbs instead of 5), though one where magnitudes are not a concern.

    Currently the only rules are magnitude rules. Can you elaborate on what kind of rules are added by the “precomputed” dimension?

    > P.S.: Dreaming of dependent types for magnitude..

    In fact, if we have magnitude implied statically (#1001) and full test coverage, this is equivalent to enforcing the magnitude in the type system. I think we should do that. Then it will be easy to drop normalize calls (we can just run the tests to obtain a proof that everything is alright). In that situation, moving the magnitude rules to the generic interface won’t gain us much.

    By the way, has anyone ever tried a variant with explicit magnitude tracking? What I mean is that there is a magnitude member in the struct (not only in VERIFY) and the implementation decides dynamically when to normalize and thus can be as lazy as possible.
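
    A rough sketch of the idea (hypothetical names and limits; the weak reduction mirrors the usual 5x52 fold-the-top-carry step, using 2^256 = 0x1000003D1 mod p):

    ```c
    #include <stdint.h>

    /* Hypothetical 5x52 element with a runtime (non-VERIFY) magnitude. */
    typedef struct {
        uint64_t n[5];
        int magnitude;
    } fe_tracked;

    #define FE_TRACKED_MAX_MAG 8  /* e.g. the mul/sqr input limit */

    /* Weak reduction: fold the bits above 2^48 of the top limb back into
     * the bottom limb, propagate carries, leaving magnitude 1. */
    static void fe_tracked_normalize_weak(fe_tracked *r) {
        uint64_t t0 = r->n[0], t1 = r->n[1], t2 = r->n[2],
                 t3 = r->n[3], t4 = r->n[4];
        uint64_t x = t4 >> 48;
        t4 &= 0x0FFFFFFFFFFFFULL;
        t0 += x * 0x1000003D1ULL;
        t1 += t0 >> 52; t0 &= 0xFFFFFFFFFFFFFULL;
        t2 += t1 >> 52; t1 &= 0xFFFFFFFFFFFFFULL;
        t3 += t2 >> 52; t2 &= 0xFFFFFFFFFFFFFULL;
        t4 += t3 >> 52; t3 &= 0xFFFFFFFFFFFFFULL;
        r->n[0] = t0; r->n[1] = t1; r->n[2] = t2; r->n[3] = t3; r->n[4] = t4;
        r->magnitude = 1;
    }

    /* Addition that is "as lazy as possible": only normalize when the
     * result would exceed the limit (assumes a is itself within limits). */
    static void fe_tracked_add(fe_tracked *r, const fe_tracked *a) {
        int i;
        if (r->magnitude + a->magnitude > FE_TRACKED_MAX_MAG) {
            fe_tracked_normalize_weak(r);
        }
        for (i = 0; i < 5; i++) r->n[i] += a->n[i];
        r->magnitude += a->magnitude;
    }
    ```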

  4. peterdettman commented at 6:51 am on January 11, 2022: contributor

    I’m in favour of enforcing consistent limits across the field implementations. Although I don’t particularly like adding the extra “layer”, perhaps it does make more sense to put those checks in a single place.

    I would prefer to see effort directed to #1001 first.

    > One reason for not doing this would be if we want to make group logic dependent on what the used field is (e.g. a field with laxer magnitude requirements could result in the group code emitting fewer normalization calls).

    This is becoming less and less likely IMO. For one thing, our group methods are improving with respect to the output magnitudes, and #1032 would eventually let us drop several normalize calls at entry anyway. Actually I think these will soon be normalization-free (ignoring the _normalizes_to_zero checks that aren’t about magnitude reduction).

    Secondly, the main constraint for normalization is the input magnitude for _fe_mul/_fe_sqr. We can actually raise that limit to 16 (from the current 8) for the 5x52 and 10x26 implementations (a sketch of the corresponding bound check follows the list) by:

    1. _fe_mul/_fe_sqr leave their final carry in the most-significant limb (MSL).
    2. limbs other than the MSL have their bound halved (i.e. the bound becomes M*(2^52-1) for the 5x52 field).
    3. one or two other field methods probably need adjusting accordingly; last time I tried this it was only _fe_negate.
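
    A sketch of what a VERIFY-style bound check could look like under points 1–2 (hypothetical helper; the exact headroom kept for the MSL is an assumption of this sketch):

    ```c
    #include <stdint.h>

    /* Check the proposed halved bounds for a 5x52 element of magnitude m:
     * lower limbs bounded by m*(2^52-1), i.e. half of today's
     * 2*m*(2^52-1); the MSL keeps extra room because _fe_mul/_fe_sqr
     * would leave their final carry there. */
    static int fe_verify_halved_bounds(const uint64_t n[5], int m) {
        int i, ok = 1;
        for (i = 0; i < 4; i++) {
            ok &= (n[i] <= (uint64_t)m * 0xFFFFFFFFFFFFFULL);
        }
        /* MSL bound left at the current-style 2*m*(2^48-1) here. */
        ok &= (n[4] <= 2 * (uint64_t)m * 0xFFFFFFFFFFFFULL);
        return ok;
    }
    ```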

    The 5x52 C mul/sqr are already consistent with this. One of my other PRs included that behaviour for the 10x26 C mul/sqr, but had a (slight) performance impact on some platforms (probably ignorable). The blocker for this is currently updating the asm implementation(s).

    Even though I think magnitude-16 inputs to mul/sqr are a nice thing to have, the only performance impact I could easily see was eliminating (IIRC) 2 normalizations from _gej_add_ge, and those may already be gone! Therefore I don’t expect any gains in saved normalizations from field-implementation-specific code. Perhaps one sunny day someone will e.g. want to sum a long list of field elements, and the most sensible thing to do then would, I think, be to give them a dedicated field method.

  5. sipa cross-referenced this on Jan 29, 2022 from issue Abstract out and merge all the magnitude/normalized logic by sipa
  6. real-or-random cross-referenced this on Feb 8, 2022 from issue Magnitude in field implementation by jhoenicke
  7. sipa commented at 5:01 pm on November 17, 2022: contributor
    One point to add to the statically-defined-magnitudes view discussed above is that we’re unable to have all magnitude reasoning fixed at compile time, because some of our algorithms have data-dependent branches (only the constant-time ones are guaranteed not to have any, and only for their secret arguments). For example, in ecmult (the variable-time one), the exact code path (and the corresponding operations on field elements) depends on the input scalar. So relying solely on enforcing the magnitude rules under test coverage may still miss edge cases that are triggerable only by specific scalars.
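
    A toy fragment of the concern (illustrative only; names and magnitudes are made up, not library code):

    ```c
    /* The magnitude reaching the final operation depends on a
     * data-dependent branch, so running the test suite only certifies
     * the branch combinations the tests happened to take. */
    static int magnitude_at_join(int scalar_bit, int m_t, int m_u) {
        int m = m_t;
        if (scalar_bit) {  /* branch taken only for some input scalars */
            m = m_t + m_u; /* e.g. an fe_add on this path grows the bound */
        }
        return m;          /* callers must be safe for the worst case */
    }
    ```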
  8. peterdettman commented at 2:01 am on November 18, 2022: contributor

    When control flows join, the effective magnitude (which is a bound) is the worst case across all incoming paths, except in complicated scenarios where past and future branches aren’t independent of each other.

    It might be worth providing a method to make that an explicit operation (asserting a magnitude limit and actually setting the magnitude to that limit). This parallels the original logic of secp256k1_fe_cmov (which, in passing, hasn’t been restored, though I believe there was an issue tracking it).
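
    A sketch of such an explicit operation (hypothetical name; relies on the VERIFY-only magnitude member of secp256k1_fe):

    ```c
    /* Assert that a's tracked magnitude is at most m, then pin the
     * tracked magnitude to exactly m, so all paths into a join meet at
     * the same declared bound. */
    static void secp256k1_fe_assert_magnitude(secp256k1_fe *a, int m) {
    #ifdef VERIFY
        VERIFY_CHECK(a->magnitude <= m);
        a->magnitude = m;
    #else
        (void)a;
        (void)m;
    #endif
    }
    ```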

    Although any given branch is likely to get code coverage, it’s more difficult to cover all combinations of branches, so I think this would be valuable. I guess one could contrive a scenario where performance would be slightly affected (implying an extra _normalize_weak_var), but it doesn’t strike me as significant.

  9. sipa closed this on May 11, 2023

  10. peterdettman cross-referenced this on May 23, 2023 from issue fiat-crypto + CryptOpt tracking issue by real-or-random
