Using some insights learned from #1058, this replaces the fixed-wnaf ecmult_const algorithm with a signed-digit based one. Conceptually both algorithms are very similar, in that they boil down to summing precomputed odd multiples of the input points. Practically, however, the new algorithm is simpler because it uses only scalar operations, rather than relying on wnaf machinery with skew terms to guarantee odd multipliers.

The idea is that we can compute $q \cdot A$ as follows:

- Let $s = f(q)$, for some function $f()$.
- Compute $(s_1, s_2)$ such that $s = s_1 + \lambda s_2$, using `secp256k1_scalar_split_lambda`.
- Let $v_1 = s_1 + 2^{128}$ and $v_2 = s_2 + 2^{128}$ (such that the $v_i$ are positive and $n$ bits long).
- Compute the result as $$\sum_{i=0}^{n-1} (2v_1[i]-1) 2^i A + \sum_{i=0}^{n-1} (2v_2[i]-1) 2^i \lambda A,$$ where $x[i]$ stands for the *i*'th bit of $x$: a sum of positive and negative powers of two times $A$ and $\lambda A$, selected by the bits of $v_1$ and $v_2$.
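The identity behind the last step can be sketched in plain Python (an illustrative model, not library code): reinterpreting each bit $b$ of an $n$-bit $v$ as the signed digit $2b-1$ gives an expansion whose digits are all odd (in fact $\pm 1$), and whose value is $2v - (2^n - 1)$.

```python
# Signed-digit expansion: each bit b of v becomes the digit 2*b - 1
# (so 0 -> -1 and 1 -> +1); every digit is odd, and the expansion
# evaluates to 2*v - (2^n - 1).
def signed_digit_sum(v, n):
    assert 0 <= v < (1 << n)
    return sum((2 * ((v >> i) & 1) - 1) * (1 << i) for i in range(n))

# The identity holds for every n-bit v:
for n in (4, 8, 129):
    for v in (0, 1, (1 << n) - 1, (1 << n) // 3):
        assert signed_digit_sum(v, n) == 2 * v - ((1 << n) - 1)
```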

The comments in `ecmult_const_impl.h` show that if $f(q) = (q + (1+\lambda)(2^n - 2^{129} - 1))/2 \bmod order$, the result will equal $q \cdot A$.
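This can be checked numerically by modeling the group as integers modulo an odd $N$ (the point $A$ becomes $1$ and $\lambda A$ becomes $\lambda$, which is valid since scalar multiplication is a homomorphism). All constants below are made-up illustrative values, not real secp256k1 parameters:

```python
# Toy check: model the group as integers mod an odd N, so "q*A" is just
# q mod N. N, lam, s1, s2 are arbitrary stand-ins, not secp256k1 values.
N = (1 << 130) + 169          # stand-in for the (odd) group order
lam = 0x1d35962f826ff57a1     # stand-in for the endomorphism scalar
n = 129                       # bit length of the v_i below

# Pretend lambda-split outputs: both halves fit in 128 bits.
s1, s2 = 0x0fedcba987654321, 0x123456789abcdef
s = (s1 + lam * s2) % N
# Invert f to recover the q this s corresponds to:
#   f(q) = (q + (1+lam)*(2^n - 2^129 - 1)) / 2  mod N
q = (2 * s - (1 + lam) * ((1 << n) - (1 << 129) - 1)) % N

# Evaluate the signed-digit double sum and confirm it reproduces q.
v1, v2 = s1 + (1 << 128), s2 + (1 << 128)
acc = 0
for i in range(n):
    acc += (2 * ((v1 >> i) & 1) - 1) * (1 << i)        # contribution of +-2^i * A
    acc += (2 * ((v2 >> i) & 1) - 1) * (1 << i) * lam  # contribution of +-2^i * lam*A
assert acc % N == q
```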

This last step can be performed in groups of multiple bits at once, by looking up entries in a precomputed table of odd multiples of $A$ and $\lambda A$, and then multiplying by a power of two before proceeding to the next group.
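The grouping can be sketched in the same integer model (function names and the group width `g` here are illustrative, not the library's): each window of `g` raw bits `w` contributes the odd digit $2w - (2^g - 1)$, which indexes a table of odd multiples $\{\pm 1, \pm 3, \ldots, \pm(2^g-1)\}$, and the accumulator is multiplied by $2^g$ (i.e. doubled $g$ times on points) between groups.

```python
# Split the signed bits of an n-bit v into windows of g bits. A window
# with raw value w contributes the odd digit 2*w - (2^g - 1), one entry
# of a table of odd multiples {+-1, +-3, ..., +-(2^g - 1)}.
# Names and g are illustrative, not from the library.
def signed_digit_groups(v, n, g):
    assert n % g == 0 and 0 <= v < (1 << n)
    mask = (1 << g) - 1
    return [2 * ((v >> j) & mask) - mask for j in range(0, n, g)]

def evaluate(digits, g):
    # Horner evaluation, most significant group first: shift the
    # accumulator by g bits (g doublings on points), then add the
    # looked-up odd multiple.
    acc = 0
    for d in reversed(digits):
        acc = (acc << g) + d
    return acc

n, g = 12, 4
for v in (0, 1, (1 << n) - 1, 0xABC):
    digits = signed_digit_groups(v, n, g)
    assert all(d % 2 != 0 and abs(d) <= (1 << g) - 1 for d in digits)
    assert evaluate(digits, g) == 2 * v - ((1 << n) - 1)
```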

The result is slightly faster (I measure ~2% speedup), but significantly simpler, as it only uses scalar arithmetic to determine the table lookup values. The speedup comes from no longer needing skew corrections at the end, and from lower overhead in determining table indices. The precomputed table sizes are also made independent from the `ecmult` ones, after observing that the optimal table size is larger here (which also gives a small speedup).