I think that right now, the split approach which we used for secp256k1_fe is overkill for secp256k1_scalar. It was meaningful there, because secp256k1_fe has meaningful properties (normalization, magnitude) which are independent of the chosen implementation, and thus are worth checking independently of the chosen implementation. That's not the case for secp256k1_scalar, and I don't think it's likely to change:
- Scalars are inherently less performance-critical than field elements, so the amount of complexity we'd be willing to tolerate for performance optimizations on them is lower.
- The properties that are independent of the implementation are, I believe, necessarily ones that depend only on the operations performed on scalars, and not on their values. Apart from e.g. a "may be zero" property, I don't see what properties there could be that even apply to scalars, unless denormalized representations are introduced for those too (and I don't think the performance benefit is worth the complexity there).
It is true that the abstraction introduced here has an unrelated advantage, namely keeping the entry/exit secp256k1_scalar_verify calls out of the individual implementations, but I don't think the amount of code introduced here justifies that. I guess another advantage is uniformity with the field logic.
Still, I’m leaning towards not taking this approach, but I could be convinced otherwise.