New context API #780

real-or-random opened this issue on July 29, 2020
  1. real-or-random commented at 10:00 am on July 29, 2020: contributor

    edit: This is now more a meta issue to discuss an improved context API, see the discussion below.

    I’m arguably late to the party but I believe we should either talk about this now (before we do a release) or never:

    Taking a step back, I believe that even given the problems with C, upstream’s API is not optimal here. A better API would be to ask the user to provide entropy already in _context_create, or pass NULL explicitly to opt out.

    Originally posted by @real-or-random in https://github.com/rust-bitcoin/rust-secp256k1/issues/225#issuecomment-655998672

    Note that this does not need to be a breaking change, we could for example add a new function and deprecate secp256k1_context_create.

  2. real-or-random added this to the milestone initial release (1.0.0-rc.1) on Jul 29, 2020
  3. elichai commented at 10:17 am on July 30, 2020: contributor
    Do we want something like secp256k1_context_create_randomized which can accept either a seed or NULL, or should it enforce NULL? (The cost, I think, is that users who don’t have access to an RNG will pass either an array of zeros or a constant random seed; I don’t think either will decrease security, and a constant seed is probably better than nothing.)
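
    A minimal sketch of what such an API could look like. The names my_context and my_context_create_randomized are hypothetical placeholders, not libsecp256k1 API; the point is only the seed-or-NULL contract described above.

```c
#include <stdlib.h>
#include <string.h>

#define SEED_LEN 32

/* Hypothetical context type holding the blinding seed. */
typedef struct {
    unsigned char blinding_seed[SEED_LEN];
} my_context;

/* Create a context, expecting entropy up front; NULL is an explicit
 * opt-out that falls back to an all-zero seed (no worse than an
 * unrandomized context). */
my_context *my_context_create_randomized(const unsigned char *seed32) {
    my_context *ctx = malloc(sizeof(*ctx));
    if (ctx == NULL) return NULL;
    if (seed32 != NULL) {
        memcpy(ctx->blinding_seed, seed32, SEED_LEN);
    } else {
        memset(ctx->blinding_seed, 0, SEED_LEN); /* caller has no RNG */
    }
    return ctx;
}
```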
  4. gmaxwell commented at 6:29 pm on August 1, 2020: contributor

    @real-or-random Randomization should ideally be updated for each signature, not just once for the context, which is why it doesn’t make sense to just have it as a context creation argument.

    Ideally every signature would advance blinding to a new state, but doing that would write to the context, and the existing interface allows you to use the context from multiple threads.

    This made more sense when the signing context had large and expensive to create precomputed tables in it rather than the tables being static. Now the signing context is essentially just the randomness. It would make a lot more sense in my view to now make signing contexts non-shared, and then they could advance their random state on each use, and it would also be reasonable to generate randomness at creation time.

    The problem I see with that is that the verify contexts still have huge expensive to create precomputed tables, and one probably doesn’t want to have to generate a new one all the time… so it makes sense to share them. This wouldn’t be a big deal, except sign & verify contexts exist.

  5. real-or-random commented at 7:18 am on August 3, 2020: contributor

    @real-or-random Randomization should ideally be updated for each signature, not just once for the context, which is why it doesn’t make sense to just have it as a context creation argument.

    Well, agreed but I didn’t think about removing _randomize. We should keep this for sure, but it may still make sense to expect the randomness at context creation time.

    Ideally every signature would advance blinding to a new state, but doing that would write to the context, and the existing interface allows you to use the context from multiple threads.

    This made more sense when the signing context had large and expensive to create precomputed tables in it rather than the tables being static. Now the signing context is essentially just the randomness. It would make a lot more sense in my view to now make signing contexts non-shared, and then they could advance their random state on each use, and it would also be reasonable to generate randomness at creation time.

    Indeed. Last time we discussed this, we didn’t have a good idea how to do this because of the expensive multiplication. Maybe we could have something like what I mentioned here: http://gnusha.org/secp256k1/2020-07-08.log (I think @sipa mentioned this idea earlier somewhere).

    The problem I see with that is that the verify contexts still have huge expensive to create precomputed tables, and one probably doesn’t want to have to generate a new one all the time… so it makes sense to share them. This wouldn’t be a big deal, except sign & verify contexts exist.

    I don’t think that’s a big deal either. You could make combined contexts non-shared too. If the user wants more concurrency, they can just create separate contexts.

  6. real-or-random cross-referenced this on Aug 3, 2020 from issue Add an optional global, static context by sgeisler
  7. gmaxwell commented at 6:32 pm on August 3, 2020: contributor
    For the scalar update even just having two random 256 bit (scalar,point) pairs stored in the context and randomly choosing one to add to the running scalar blinding with each signature would be pretty good (they should be 256 bit so that they tend to change all the words). I think it would be hard to justify anything more elaborate without measurement.
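
    The two-increment idea can be sketched with a toy model: 64-bit arithmetic modulo a small Mersenne prime stands in for 256-bit scalars modulo the group order, and a branch stands in for the single CMOV. All names here are illustrative, not libsecp256k1 code.

```c
#include <stdint.h>

#define ORDER ((1ULL << 61) - 1) /* toy "group order" (a Mersenne prime) */

typedef struct {
    uint64_t blind;  /* running blinding scalar */
    uint64_t inc[2]; /* two fixed random increments, set at context creation */
} toy_ctx;

/* a + b mod ORDER; no overflow since a, b < 2^61. */
static uint64_t addmod(uint64_t a, uint64_t b) {
    uint64_t r = a + b;
    if (r >= ORDER) r -= ORDER;
    return r;
}

/* Advance the blinder by one of the two increments, chosen by `bit`.
 * Constant-time code would select with a CMOV rather than indexing. */
void toy_blind_update(toy_ctx *ctx, int bit) {
    ctx->blind = addmod(ctx->blind, ctx->inc[bit & 1]);
}
```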
  8. real-or-random commented at 2:03 pm on August 4, 2020: contributor

    For the scalar update even just having two random 256 bit (scalar,point) pairs stored in the context and randomly choosing one to add to the running scalar blinding with each signature would be pretty good (they should be 256 bit so that they tend to change all the words). I think it would be hard to justify anything more elaborate without measurement.

    That sounds interesting but somewhat ad-hoc. Just to get the idea: You propose to choose one of two (instead of a constant value) to make sure that the scalar is essentially randomized after enough signatures, which is good enough because a side-channel attacker anyway needs a lot of samples and is then forced to generate enough signatures?

  9. gmaxwell commented at 4:25 pm on August 4, 2020: contributor

    Right. Even a single attacker-unknown random increment should break most attacks, but a choice of two can be done for the runtime cost of a single additional CMOV and will leave the whole scalar totally randomized after a number of uses.

    Any of these power/EMI side-channel defences are inherently ad hoc: nothing the library can do will make it strong against a sufficiently powerful side-channel attacker.

    If instead we assume the attacker can only manage to extract a few bits, then all the attacks that I’m aware of are extremely fragile and are easily broken. E.g. the quaint old bitcoin ’low-s’ rule completely obliterates them, because they depend on you utterly reliably identifying the most significant bit of the scalar. They can’t brute force the guess because they need hundreds of signatures with the correct guess.

    The fact that these attacks are so fragile opens the door to disrupting them with fairly minimal means…

    The main thing that I know to defend against is an attacker who can learn the power signature of multiplying with specific constants (e.g. which entries of the precomputed table are accessed). The random isomorphism makes it so the first multiplication isn’t entirely constant at least, but at least two arguments to it are.

    So the first defence is randomly blinding the scalar so that learning which table entries were accessed isn’t that useful. But random blinding is expensive if you need an extra ecmult to update the blinding, and that ecmult leaks too… (though that’s why the update should increment the blinder, not replace it).

    I think if you really can tolerate a blinding that doubled the computational cost, you’d be better off just running two ecmults with the first or the second randomly the real one, and the other a dummy random one (or blinding update), and use cmovs in and out to get the real one output. But that is just extremely expensive and I think hard to justify without measurements that say it helps and that something stupid isn’t leaking everything.

    The next obvious thing to do is to increase the size of the ladder some, so that you compute a (say) 320-bit x * G and set x randomly so that x%N is the target scalar; at least that isn’t an arbitrary doubling.
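
    A toy sketch of that widened-ladder idea, scaled down to 64-bit numbers: lift the target scalar to a random representative x = target + r*N, so x mod N still recovers the scalar while the bit pattern fed to the ladder varies. Names and the tiny modulus are illustrative only.

```c
#include <stdint.h>

#define N ((1ULL << 61) - 1) /* toy "group order" */

/* Return a randomized representative of target mod N, using the
 * head-room above N (r supplies the randomness; r % 7 keeps the
 * result within uint64_t range in this toy version). */
uint64_t widen_scalar(uint64_t target, uint64_t r) {
    return target + (r % 7) * N;
}
```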

    Dettman had a patch to use random isomorphisms for the table, but to really make use of it the table needs to be writeable, and just about any embedded device where power/EMI side channels are an interesting attack will be pushed by memory pressure and init time to keep the table in flash instead of RAM. Though it would be reasonable to apply it at build time at least: better than nothing, and the attacker might not have access to the build (e.g. gen_context picking the random isomorphism).

    Going back to the theme of limited measures being effective against the known attacks, which are less powerful than just reading everything (which we’re hopeless against): choice of negation and endomorphism gives 2.58 bits of entropy, e.g. randomly scale the scalar by {1, beta, beta^2, -1, -beta, -beta^2} for the low low cost of two scalar multiplies, three scalar cmovs, one scalar conditional negate, two field multiplies, three field cmovs, and one field conditional negate.
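
    A toy illustration of why that set yields log2(6) ≈ 2.58 bits: whenever beta is a nontrivial cube root of unity, the six factors {1, beta, beta^2, -1, -beta, -beta^2} are all distinct. The tiny prime 7 stands in for secp256k1’s actual modulus (where the scalar-side constant is usually called lambda); the constants here are illustrative only.

```c
#include <stdint.h>

#define P 7u                    /* toy prime standing in for the real modulus */
static const uint32_t BETA = 2; /* 2^3 = 8 = 1 (mod 7): a nontrivial cube root of unity */

/* Fill out[0..5] with the six scaling factors mod P. */
void scaling_factors(uint32_t out[6]) {
    uint32_t b2 = (BETA * BETA) % P;
    out[0] = 1;     out[1] = BETA;     out[2] = b2;
    out[3] = P - 1; out[4] = P - BETA; out[5] = P - b2;
}
```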

    I wish the nonce function it uses naturally gave a somewhat larger output. :-/ Even just a couple bits to drive a random scaling factor and/or pick one of two random updates to the blinder.

  10. real-or-random commented at 10:16 am on August 7, 2020: contributor

    Ok, so I still think that having something like secp256k1_context_create_randomized would be a small improvement, but in the end the current API is pretty reasonable and I don’t think we should change it now.

    We should really think about automatic rerandomization, but as discussed here this will imply that signing contexts can’t be used concurrently. This will be a major change, and so it should not be silent; we would anyway need to break the API and rename functions etc. This deserves some more thought:

    • Do we want to split signing/verification contexts entirely?
    • Is signing/verification the right terminology now that we have other operations (ECDH, tweaking) that require contexts?
    • Are the context creation flags the most ergonomic API?

    I don’t think that’s something we want to do for the initial release, so I’m renaming the issue. If there’s interest, we could still introduce “cheap” randomization with the existing API, e.g., by adding a context_randomize_cheap or similar.

    Does this make sense?

  11. real-or-random removed this from the milestone initial release (1.0.0-rc.1) on Aug 7, 2020
  12. real-or-random renamed this:
    API for context creation
    New API for context creation
    on Aug 7, 2020
  13. real-or-random renamed this:
    New API for context creation
    New context API
    on Aug 7, 2020
  14. sipa commented at 2:01 pm on August 7, 2020: contributor

    Whatever approach is used to get automatic rerandomization upon signing will require an API with mutable objects, and the _sign function(s) currently take an immutable context object, for which it would be pretty surprising to see modifications (and the corresponding need for synchronization).

    One possibility is separating the signing context entirely as suggested above, and then make (all?) interactions with such a signing context rerandomize it.

    A bit less invasive may be to just have equivalent _sign_and_rerandomize() functions that take a non-const context object, and do the equivalent of signing + separate randomize calls (but with a possibly much cheaper/merged operation). As new functions would be required to deal with a separate context type anyway, this may be strictly less work.
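
    The shape of such a function can be sketched as below. Everything here is a placeholder (toy_ctx, the xor-based "signature", the shift-based blinder update); the only point is the const vs. non-const context split and folding a cheap rerandomization into the signing call.

```c
#include <stdint.h>

typedef struct { uint64_t blind; } toy_ctx;

/* Stand-in for the existing const-context signing call. */
static uint64_t toy_sign(const toy_ctx *ctx, uint64_t msg) {
    return msg ^ ctx->blind; /* placeholder "signature" */
}

/* Non-const variant: sign, then advance the blinder using one bit of
 * entropy harvested from the signing operation itself (cf. the
 * discarded R tie-breaker bit discussed later in this thread). */
uint64_t toy_sign_and_rerandomize(toy_ctx *ctx, uint64_t msg) {
    uint64_t sig = toy_sign(ctx, msg);
    uint64_t bit = sig & 1;               /* harvested entropy bit */
    ctx->blind = (ctx->blind << 1) | bit; /* placeholder blinder update */
    return sig;
}
```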

  15. real-or-random commented at 2:51 pm on August 7, 2020: contributor

    A bit less invasive may be to just have equivalent _sign_and_rerandomize() functions that take a non-const context object, and do the equivalent of signing + separate randomize calls (but with a possibly much cheaper/merged operation). As new functions would be required to deal with a separate context type anyway, this may be strictly less work.

    That sounds pretty reasonable.

  16. gmaxwell commented at 5:44 pm on August 7, 2020: contributor

    Do we want to split signing/verification contexts entirely?

    One problem for that is functions which need both kinds of context (I don’t think there are any in secp256k1, but IIRC there are in secp256k1-zkp).

    Long term, I think I’d like to see the big tables just become static everywhere, even if it means making the shared library 1.5MB. And then the contexts could become mutable and non-shared.

    (Doing that would even save memory at runtime, even though it’ll make the shared object bigger on disk.)

    Randomization and scratch space really have made a case for mutable contexts that didn’t exist when these decisions were originally made.

  17. real-or-random cross-referenced this on Aug 7, 2020 from issue Add schnorrsig module which implements BIP-340 compliant signatures by jonasnick
  18. real-or-random commented at 0:12 am on August 12, 2020: contributor

    I wish the nonce function it uses naturally gave a somewhat larger output. :-/ Even just a couple bits to drive a random scaling factor and/or pick one of two random updates to the blinder.

    We can use the squareness of the y coordinate of R, we throw that bit away anyway. And @sipa pointed out that any of the tie-breakers (square, even, high) works as a bit of entropy, no matter what we actually use as a tie-breaker in the scheme.

  19. gmaxwell commented at 1:52 am on August 12, 2020: contributor
    oh that is super nice! one bit is what I really wanted, and one bit is what you provided.
  20. sipa commented at 2:01 am on August 12, 2020: contributor
    It also provably does not impact the security of the signature scheme itself: there is no security risk from leaking the bit to an attacker, as it is at worst equivalent to a variant of the scheme that includes the R sign explicitly.
  21. real-or-random commented at 10:15 am on August 12, 2020: contributor

    oh that is super nice! one bit is what I really wanted, and one bit is what you provided.

    I feel a bit relieved that we don’t need to change the nonce function.

  22. gmaxwell commented at 7:46 am on August 14, 2020: contributor
    I realized that the specifics in the suggestion I gave above are a little broken: it shouldn’t just add one constant or the other, it should add one constant or the other and then double. Otherwise different permutations of the same number of each choice would end up in the same state.
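
    The collision is easy to demonstrate with plain integers (wrap-around uint64_t arithmetic standing in for scalar arithmetic): with addition alone, the final state depends only on how many times each constant was chosen, so the sequences (c0, c1) and (c1, c0) coincide, whereas doubling after each add makes the state order-dependent.

```c
#include <stdint.h>

/* Broken variant: commutative, so choice order is lost. */
uint64_t add_only(uint64_t s, const uint64_t c[2], const int *bits, int n) {
    for (int i = 0; i < n; i++) s += c[bits[i]];
    return s;
}

/* Fixed variant: add the chosen constant, then double, so every
 * ordering of choices leads to a different state. */
uint64_t add_then_double(uint64_t s, const uint64_t c[2], const int *bits, int n) {
    for (int i = 0; i < n; i++) s = 2 * (s + c[bits[i]]);
    return s;
}
```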
  23. real-or-random cross-referenced this on Aug 18, 2020 from issue Add fallback LE/BE for architectures with known endianness + SHA256 selftest by sipa
  24. real-or-random commented at 9:32 am on August 18, 2020: contributor

    Another issue with the current context API is that we can’t have a user-provided error callback when the context is created. Currently we hack around this by calling the default callback instead (https://github.com/bitcoin-core/secp256k1/blob/master/src/secp256k1.c#L149) and we give the user the ability to override the default callback at compile time.

    All of this is not really nice. The runtime callback setting is not very useful because it does not work everywhere and the compile-time callback setting can be cumbersome.

  25. bitcoin-core deleted a comment on May 27, 2021
  26. real-or-random cross-referenced this on Jul 6, 2021 from issue Replace ecmult_context with a generated static array. by roconnor-blockstream
  27. real-or-random cross-referenced this on Oct 8, 2021 from issue Make signing table fully static by real-or-random
  28. elichai commented at 3:48 pm on November 3, 2021: contributor

    Summarizing a long discussion on this from IRC (The discussion; I edited out anything that wasn’t related, and I hope I didn’t accidentally modify the discussion in any way). After #988 there will be 2 kinds of context uses left, and we might add another one with #967:

    1. illegal arguments and error callbacks
    2. Randomizing the signing operations for better side-channel resistance
    3. Determining CPU features in runtime for #967.

    Some ideas raised on how to handle these so we can remove the context argument from most of the public API:

    1. These can probably be removed since we have #595, but this might complicate the tests that check these callbacks.
    2. A. Should we make this context mutable (or add a sign_with_randomize) so that we can add randomization to it with every sign operation (harvesting a single bit of entropy each time)?
       B. Should this context be thread_local (C11, and can complicate embedded) or implement something using atomics (C11)? (We could, for example, make a pool of contexts, mark them as currently used/unused, and spinlock if they’re all used.)
       C. Add a flag that says whether it can assume thread safety: if yes, re-randomize with every operation; if not, don’t.
    3. Note that this needs to be done only once, and dynamic dispatch might not prove performant at the field-operation level (as it will prohibit inlining), so we might need to do it at the EC-operation level or even the API level.
       A. We could use call_once (C11).
       B. We could use compiler extensions like __attribute__((constructor)) (link).
       C. We could use binary-specific extensions like .init_array in ELF, .CRT$XIB in EXE, and __DATA,__mod_init_func in Mach-O.
       D. We could use architecture-specific operations; for example, on x86 moves to/from regular registers are always atomic, so we can call cpuid from inline assembly and write the result into a static variable from the assembly, without violating C data-race rules.
       E. We could compile the library multiple times with different flags and somehow get the linker to use the right one (@gmaxwell said that’s how libspeex works on Debian/Ubuntu).

    My own opinions: on 2, we should probably do A and not B, because users of this library can implement B more easily themselves, as they can assume more about their environment (single-threaded, an OS, etc.); on 3, we should really try to avoid requiring the user to pass a context everywhere just for the cpuid results.
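
    Option 3A above can be sketched as follows, assuming a C11 toolchain that ships <threads.h>: call_once guarantees the detection runs exactly once even with concurrent callers, so the result can live in a plain static without violating the data-race rules. The probe body is a placeholder (real code would run cpuid or read hwcaps).

```c
#include <threads.h>

static once_flag g_once = ONCE_FLAG_INIT;
static int g_have_fast_path;
static int g_init_calls; /* counter for illustration only */

static void detect_cpu_features(void) {
    g_init_calls++;
    g_have_fast_path = 1; /* placeholder for a real cpuid/hwcap probe */
}

/* Thread-safe accessor: first caller triggers detection, later
 * callers just read the cached result. */
int have_fast_path(void) {
    call_once(&g_once, detect_cpu_features);
    return g_have_fast_path;
}
```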

  29. real-or-random commented at 3:40 pm on November 9, 2021: contributor
    Another purpose of the global context is to run the self-tests (which are currently not run for the static no_precomp context…)
  30. real-or-random cross-referenced this on Nov 26, 2021 from issue Reducing context arguments in taproot-related functions by dr-orlovsky
  31. real-or-random cross-referenced this on Dec 11, 2021 from issue Replace MuSig(1) module with MuSig2 by jonasnick
  32. real-or-random cross-referenced this on Feb 1, 2022 from issue Context randomization tracking issue by apoelstra
  33. real-or-random cross-referenced this on Feb 2, 2022 from issue Further changes after making tables static by real-or-random
  34. real-or-random cross-referenced this on Jul 11, 2022 from issue Wasm, Contexts, and Entropy by JeremyRubin
  35. real-or-random commented at 7:02 am on July 26, 2022: contributor
    Shower thought that I don’t want to lose: If we’ll have a more mutable context, then it might make sense to store a callback to a PRG in the context (instead of asking the user for randomness every time).
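
    The PRG-in-context idea could look roughly like this. Everything here is hypothetical (toy_ctx, prg_fn, toy_ctx_get_randomness): the library pulls randomness through a caller-installed callback instead of taking a seed argument on each call, and NULL remains an explicit opt-out.

```c
#include <stddef.h>

/* Caller-provided PRG: fill out[0..len) and return nonzero on success. */
typedef int (*prg_fn)(unsigned char *out, size_t len, void *user);

typedef struct {
    prg_fn prg;     /* may be NULL: caller opted out of randomness */
    void *prg_user; /* opaque pointer passed back to the callback */
} toy_ctx;

/* Example callback: fills bytes from a counter (for illustration only,
 * not a real PRG). */
static int counting_prg(unsigned char *out, size_t len, void *user) {
    unsigned char *ctr = user;
    for (size_t i = 0; i < len; i++) out[i] = (*ctr)++;
    return 1;
}

/* Fetch len random bytes from the installed PRG; returns 0 if the
 * caller opted out. */
int toy_ctx_get_randomness(toy_ctx *ctx, unsigned char *out, size_t len) {
    if (ctx->prg == NULL) return 0;
    return ctx->prg(out, len, ctx->prg_user);
}
```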
  36. real-or-random cross-referenced this on Sep 14, 2022 from issue Protecting ecmult_gen against side-channel attacks: the big picture by real-or-random
  37. real-or-random cross-referenced this on Jan 18, 2023 from issue contexts: Forbid destroying, cloning and randomizing the static context by real-or-random

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin-core/secp256k1. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-12-22 10:15 UTC
