This makes a basic effort and has not been audited. Doesn't appear to have a measurable performance impact on bench.
It also adds a secp256k1_num_free to secp256k1_ecdsa_pubkey_create.
If we're going to go through the trouble to do this, I should probably at least audit it with the valgrind instrumentation trick.
I'm curious what your opinion is of the OPENSSL_cleanse strategy of attempting to overwrite memory with data in a way that can't be optimized out by the compiler.
No cleanse function, no matter how it's constructed, can guarantee protection against optimization: the compiler is perfectly permitted to spill your secret data into memory used for something else and then never bother to clean that up.
Any attempt is best-effort at best and has to be weighed against the costs. I suppose it would be reasonable to use an assembly cleanse loop anywhere we already have platform assembly; otherwise I'd say it wouldn't be worth the tooling costs, and just making sure the function is in another compilation unit would be adequate until the next round of compiler intelligence boosts. :)
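As a sketch of that idea (this helper is illustrative, not part of libsecp256k1), a zeroing routine can call memset through a volatile function pointer, so the compiler can't prove the call is dead and elide it:

```c
#include <string.h>
#include <stddef.h>

/* Illustrative cleanse helper (not libsecp256k1 API): routing the call
 * through a volatile function pointer keeps the compiler from proving
 * the stores are dead and optimizing them out. */
static void *(*const volatile memset_ptr)(void *, int, size_t) = memset;

static void cleanse(void *p, size_t len) {
    memset_ptr(p, 0, len);
}
```

OPENSSL_cleanse aims at the same dead-store-elimination problem, just with a hand-rolled overwrite loop instead of a volatile indirection.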
I did spot-check that these changes weren't being optimized out for me with whatever compiler settings I was using. I think some day we'll be able to get a CI setup working that can detect leaks.
I suppose I should have clarified the attack model I was going for here: I'm mostly attempting to mitigate an external data leak, e.g. external code dumping the stack on a call made immediately after a private-key operation. Right now the private key shows up verbatim in the uninitialized memory of the first array in the next call after a pubkey create or a sign.
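The leak pattern can be seen with a toy program (everything here is compiler- and ABI-dependent and reads indeterminate memory, so nothing is guaranteed by C; the function names are made up):

```c
#include <stddef.h>

/* Toy illustration of stack residue: a "secret" written to one frame's
 * locals may reappear in the uninitialized locals of the next call at
 * the same stack depth. volatile keeps the stores/loads from vanishing. */
static void handle_secret(void) {
    volatile unsigned char secret[32];           /* stand-in for a key */
    for (size_t i = 0; i < sizeof(secret); i++)
        secret[i] = 0x5E;
}

static int count_residue(void) {
    volatile unsigned char buf[32];              /* left uninitialized */
    int hits = 0;
    for (size_t i = 0; i < sizeof(buf); i++)
        hits += (buf[i] == 0x5E);                /* old secret bytes? */
    return hits;
}
```

On a typical unoptimized build, calling count_residue() right after handle_secret() reports many matching bytes; zeroing the secret before returning usually drops that to zero.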
So the value you're setting cleared group elements to is not a valid element. That's useful for debugging purposes (it will presumably mean faster failure if such a cleared group element is used), but maybe you want to set it to the point at infinity instead, to make it harder to distinguish from used values?
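A sketch of the infinity-clearing alternative, assuming a group-element struct with an infinity flag (the real secp256k1_ge_t layout may differ, and this type is invented for illustration):

```c
#include <string.h>

/* Hypothetical group element; the real secp256k1_ge_t layout may differ. */
typedef struct {
    unsigned char x[32];
    unsigned char y[32];
    int infinity;
} toy_ge_t;

/* Clear to the point at infinity: a valid "empty" element, so a cleared
 * slot is harder to distinguish from legitimately used memory than an
 * invalid sentinel pattern would be. */
static void toy_ge_clear(toy_ge_t *a) {
    memset(a->x, 0, sizeof(a->x));
    memset(a->y, 0, sizeof(a->y));
    a->infinity = 1;
}
```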
Is there any use case in which the message would be considered secret data?
Yeah, I wasn't sure there. The message is always a hash too (or you're using DSA wrong and are subject to the usual doom that comes with using DSA wrong :) ), which makes leaks related to the message even less interesting.
```diff
+        bits = secp256k1_num_shift(&n, 4);
         for (int k=0; k<sizeof(secp256k1_ge_t); k++)
             ((unsigned char*)(&add))[k] = c->prec[j][k][bits];
         secp256k1_gej_add_ge(r, r, &add);
     }
+    bits = 0;
```
Is bits secret data...?
At the end of the loop it's effectively the least significant few bits of the scalar we're multiplying by. Your call; considering the probability that this assignment just gets optimized out, maybe it's not worth the slight uglification of the loop. (Though we could later change these to call a cleanse function on &bits, to at least avoid the optimization.)
An alternative approach to these changes would be to figure out our peak stack usage and have the entrypoints that handle secret data call the interior function, then call a function that zeros that much stack.
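That alternative might look roughly like this (the size and names are invented for illustration; the burn size would have to come from actually measuring peak stack usage):

```c
#include <string.h>
#include <stddef.h>

/* Hypothetical stack-wiping helper (names and size are illustrative).
 * After the secret-handling interior call returns, zero a region at
 * least as large as its measured peak stack usage. */
#define STACK_BURN_BYTES 8192

static int burn_stack(void) {
    unsigned char burn[STACK_BURN_BYTES];
    /* volatile function pointer defeats dead-store elimination */
    void *(*const volatile memset_ptr)(void *, int, size_t) = memset;
    memset_ptr(burn, 0, sizeof(burn));
    return burn[0];  /* 0 after the wipe; also keeps the buffer live */
}
```

An entrypoint would then call the interior function and follow it with burn_stack(): one wipe at the boundary instead of per-variable clears scattered through the interior code.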