Use secp256k1_fe_equal_var in secp256k1_fe_sqrt_var. #177
pull gmaxwell wants to merge 2 commits into bitcoin-core:master from gmaxwell:faster_sqrt changing 3 files +3 −4
gmaxwell commented at 1:59 PM on December 31, 2014: contributor
-
70ae0d2851
Use secp256k1_fe_equal_var in secp256k1_fe_sqrt_var.
In theory this should be faster, since secp256k1_fe_equal_var is able to shortcut the normalization. On x86_64 the improvement appears to be in the noise for me. At least it makes the code cleaner.
-
peterdettman commented at 11:44 AM on January 1, 2015: contributor
Actually it looks like 'a' needs to be weakly normalized; the existing code was wrong.
-
gmaxwell commented at 12:00 PM on January 1, 2015: contributor
Can we construct an example where secp256k1_ge_set_xo_var would be incorrect?
Doesn't the explicit magnitude tracking in VERIFY check that the sequence of operations executed in the tests could not have produced an internal overflow for any values? I thought it was conservative. What case are we missing?
-
sipa commented at 12:02 PM on January 1, 2015: contributor
Presumably every actual code path leads to a weakly normalized a in that place.
-
peterdettman commented at 12:14 PM on January 1, 2015: contributor
As things stand, the _equal_var contract is not enforced for the second argument.
-
sipa commented at 3:51 PM on January 1, 2015: contributor
So, the current implied contract is that field elements have at most magnitude 8, which is verified in all field element operations.
The fe_sqrt routine however accepts a field element a (which, according to the contract, can have magnitude up to 8) and adds it to t1 (which has magnitude 2), so the addition could result in a field element with magnitude 10, which is not allowed. In practice, all existing code paths are guaranteed (because the verification of magnitudes is static and conservative) never to result in a having magnitude 8.
So the 'bug' could be fixed by just declaring that fe_sqrt takes an argument with magnitude at most 6 (which is already the case in practice).
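The magnitude bookkeeping sipa describes can be sketched as a toy model (illustrative Python with hypothetical names, not the library's VERIFY code):

```python
# Toy model of the implied magnitude contract discussed above: addition sums
# the magnitudes of its inputs, and the total must stay within the limit.
MAX_MAGNITUDE = 8  # the implied contract for field element operations

def add_magnitude(m_a, m_b):
    m = m_a + m_b
    assert m <= MAX_MAGNITUDE, f"magnitude {m} violates the contract"
    return m

# t1 inside fe_sqrt has magnitude 2, so a magnitude-8 input breaks the
# contract, while the proposed magnitude-6 bound sits exactly at the limit:
assert add_magnitude(6, 2) == 8
try:
    add_magnitude(8, 2)
except AssertionError as e:
    print(e)  # magnitude 10 violates the contract
```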
-
sipa commented at 3:53 PM on January 1, 2015: contributor
In any case, ACK.
-
sipa commented at 3:57 PM on January 1, 2015: contributor
Actually, no. fe_equal_var requires its arguments to be weakly normalized, but fe_sqrt does not. I don't understand why the tests don't fail on this.
-
gmaxwell commented at 4:44 PM on January 1, 2015: contributor
I am really not following this.
The input to the sqrt, a, is not normalized in our code (it's the result of an addition with 7).
The result of the sqrt is squared and as the output of a squaring it is normalized.
secp256k1_fe_equal_var negates the squared number, which is fine. Then adds a. Which is fine.
Then secp256k1_fe_normalizes_to_zero does the minimum amount of normalization needed to check the result. What's violated?
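The sequence described above can be sketched over plain integers mod p (a hedged illustration: the names mirror, but are not, the library functions, which operate on the limbed representation):

```python
# The secp256k1 field prime; p ≡ 3 (mod 4), so square roots can be computed
# by exponentiation with (p + 1) / 4.
P = 2**256 - 2**32 - 977

def fe_equal_var(x, y):
    # Mirrors the description above: negate the first argument, add the
    # second, then check whether the sum "normalizes to zero" (reduces mod p).
    return (-x + y) % P == 0

# fe_sqrt_var's self-check: square the candidate root, compare with the input.
a = 9
r = pow(a, (P + 1) // 4, P)
print(fe_equal_var(r * r % P, a))  # True: 9 is a quadratic residue
```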
-
sipa commented at 5:05 PM on January 1, 2015: contributor
Nothing is semantically violated.
The only thing that is missing is a contract that stipulates that the inputs to sqrt are at most magnitude 1 and magnitude 6.
All current code obeys that rule, and given sufficient code coverage there is an absolute guarantee that no current code path will violate that requirement.
Either sqrt does a normalization on the inputs, or it gains a comment saying that the inputs have stronger than usual normalization requirements. That's all that's missing.
-
gmaxwell commented at 5:08 PM on January 1, 2015: contributor
ah! okay. Thanks, I was at a loss in part because it isn't clear to me what the input requirements are in various functions (other than to the extent that they're checked), and I thought you were saying that the change would actually yield a wrong result.
I assume by "inputs to sqrt" you mean inputs to "fe_equal_var"?
-
ABISprotocol commented at 4:19 AM on January 2, 2015: none
Hello, I wanted to leave this brief note here. I'm not sure if it will be helpful or not, but perhaps it may provide something of note or context in this discussion. https://www.w3.org/Bugs/Public/show_bug.cgi?id=25618#c60 this could probably be updated... http://ianix.com/pub/curve25519-deployment.html ok, I'm going away now thx
-
Add magnitude limits to secp256k1_fe_verify to ensure that its own tests function correctly. 7688e341c5
-
gmaxwell commented at 4:11 PM on January 2, 2015: contributor
@peterdettman @sipa After reviewing all the code I do not see any error in the magnitude requirements, or a failure to enforce them, except one: I did notice that fe_verify imposed no upper limit on the magnitude at all, which is incorrect, so I added a commit with a limit.
I concluded that the input to the square root can be up to magnitude 8 and give correct results (it's only ever an input to sqr(), to add() with another input of magnitude 1, and to the equal). secp256k1_fe_equal_var()'s first input must be magnitude 1, which is checked by the negate inside it, which always runs; the second input can be any magnitude up to 31, which is where the add would overflow for 10x26, and (with my second commit) that is tested by the fe_verify inside the add, which is always executed.
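The "up to 31" figure can be sanity-checked with a little arithmetic, assuming the 10x26 representation's per-limb bound of 2*m*(2^26 - 1) for a magnitude-m element (that bound is an assumption taken from the field implementation's comments, not stated in this thread):

```python
# Per-limb bound for a magnitude-m element in the 10x26 representation
# (assumed here; see the caveat above).
def limb_bound(m):
    return 2 * m * (2**26 - 1)

# fe_equal_var adds a magnitude-1 negation to its second input, so a second
# input of magnitude 31 yields a sum of magnitude 32, whose limbs still fit
# in a uint32; one magnitude higher and a limb could overflow.
print(limb_bound(32) < 2**32)  # True
print(limb_bound(33) < 2**32)  # False
```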
The above is based on an understanding that fe_normalizes_to_zero_var works correctly for any magnitude that fits in the fe over-complete representation. If that isn't true, someone should rescue me from a grave misunderstanding about how all this works, and we should add a VERIFY test to the function that checks the actual requirement.
As far as enforcing contracts goes, what I think we generally want to do is perform the magnitude tests in verifies at the lowest possible level: nowhere but inside fe_verify, or in special operations like the field-specific fe_mul and fe_sqr that have their own internal magnitude requirements. Keeping the tests at the lowest levels makes them more likely to be correct regardless of which field implementation we're running on, and they're also where the magnitudes must be updated. HOWEVER, if an execution path of higher level code is non-deterministic, we may want to bubble up the worst-case normalization path requirement and enforce it unconditionally. We have some code paths that execute with probability less than 2^-128 (and where constructing a test case is cryptographically hard), so we would never know if those paths overflow unless we manually bubble up the requirement. Fortunately the coverage tests I've been running will help catch if we make a mistake in failing to bubble up a requirement for code we don't know how to hit.
Bubbling up presents a problem that the low level requirements differ based on the field in use, which means layering violations (e.g. explicit magnitude tests inside group logic), which it would be preferable to avoid.
-
sipa commented at 10:24 PM on January 2, 2015: contributor
I'm not sure about the "only check normalization constraints at the lowest level except in cases of non-determinism" policy; it makes it harder to reason about different implementations, and about where leeway exists. Especially when the non-determinism depends on lower-level implementations, it becomes less clear.
I think we're going towards exposing more normalization constraints upwards anyway, so I would propose making this explicit: have the field code expose macros to verify constraints on arguments, use those in higher levels that have non-trivial constraints, and, more importantly, also enforce worst-case checking. So, for example, if fe_sqrt lists a constraint that its a argument has at most magnitude 13, its magnitude is at runtime also actually changed to 13, resulting in tests that effectively check whether the higher-level constraints implicitly satisfy the lower-level ones.
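The "pessimize to the declared worst case" idea could look something like this sketch (illustrative Python with hypothetical names; the real checks would live in the C VERIFY machinery):

```python
class Fe:
    """Toy field element carrying only the tracked magnitude."""
    def __init__(self, magnitude):
        self.magnitude = magnitude

def verify_magnitude(fe, bound):
    # Check the declared constraint, then bump the tracked magnitude up to
    # the declared worst case so downstream checks exercise that bound.
    assert fe.magnitude <= bound
    fe.magnitude = bound

def fe_add(r, a):
    # Addition sums magnitudes; the representation's limit is the hard bound.
    r.magnitude += a.magnitude
    assert r.magnitude <= 32

# fe_sqrt declaring "argument a has magnitude at most 13" (sipa's example):
a = Fe(6)                # the caller actually passes magnitude 6...
verify_magnitude(a, 13)  # ...but the check treats it as the worst-case 13
t1 = Fe(2)
fe_add(t1, a)
print(t1.magnitude)  # 15: the test now proves magnitude-13 inputs are safe here
```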
-
sipa commented at 10:24 PM on January 2, 2015: contributor
ACK in any case, this is an improvement.
-
sipa commented at 11:05 PM on January 2, 2015: contributor
I don't see any measurable performance improvement.
-
gmaxwell commented at 11:05 PM on January 2, 2015: contributor
Read the commit message. I didn't either, but thought it was worth taking in any case as it wasn't slower and made the code cleaner. :) Ah okay.
-
sipa commented at 11:06 PM on January 2, 2015: contributor
I saw it. I'm just confirming that I also don't see a performance improvement :)
- sipa merged this on Jan 4, 2015
- sipa closed this on Jan 4, 2015
- sipa referenced this in commit 10c81ffb5d on Jan 4, 2015