I had a recent chat with @sipa about an idea of mine: reimplementing the CheckMultiSig operations to use public key recovery without altering their semantics.
The proposed implementation would proceed as follows.
- For each signature in a $k$-of-$n$ CheckMultiSig operation, the message hash is computed (calling find-and-delete if required).
- For each of the $k$ signature and message hash pairs, public key recovery is run to produce a list of sets of points in Jacobian coordinates. This requires at least one modular square root operation (and very rarely two), similar in cost to a pubkey decompression, per signature. The resulting list has $k$ sets, and each set has 2 points (or very rarely 4 points).
- All of the recovered points are batch converted from Jacobian coordinates to affine coordinates at the cost of one modular inverse operation for the entire batch (see the batch-inversion sketch below).
- The list of sets of recovered points is compared, in order, to the list of serialized pubkeys passed to CheckMultiSig, using a new “compare affine point to serialized pubkey” function. No pubkey decompression need be done. (A rough sketch of this matching loop follows the list.)
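To make the shape of the matching loop concrete, here is a rough sketch, not the proposed implementation: it goes through libsecp256k1's public recovery module by looping over every recovery id and then serializes each candidate for a byte comparison, whereas the proposal would recover the candidate points directly, keep them in Jacobian form, and use a dedicated point-versus-serialization comparison. The function names (`RecoverCandidates`, `CheckMultiSigViaRecovery`), the 64-byte `r || s` signature encoding, and the compressed-only pubkey handling are all simplifications of mine.

```cpp
// Rough sketch only. Assumes libsecp256k1 built with the recovery module,
// signatures already converted to 64-byte r||s form, and compressed pubkeys.
#include <secp256k1.h>
#include <secp256k1_recovery.h>

#include <array>
#include <cstring>
#include <vector>

// Recover the candidate public keys for one (signature, message hash) pair by
// trying every recovery id. The actual proposal would instead recover the
// candidate points once, keep them in Jacobian coordinates, and defer the
// affine conversion to the batch step.
static std::vector<secp256k1_pubkey> RecoverCandidates(
    const secp256k1_context* ctx,
    const unsigned char sig64[64],
    const unsigned char msghash32[32])
{
    std::vector<secp256k1_pubkey> candidates;
    for (int recid = 0; recid < 4; ++recid) {  // recids 2 and 3 are very rare
        secp256k1_ecdsa_recoverable_signature rsig;
        secp256k1_pubkey pubkey;
        if (secp256k1_ecdsa_recoverable_signature_parse_compact(ctx, &rsig, sig64, recid) &&
            secp256k1_ecdsa_recover(ctx, &pubkey, &rsig, msghash32)) {
            candidates.push_back(pubkey);
        }
    }
    return candidates;
}

// Walk the k signatures and n serialized pubkeys in order, accepting a pubkey
// if it matches one of the recovered candidates for the current signature.
bool CheckMultiSigViaRecovery(
    const secp256k1_context* ctx,
    const std::vector<std::array<unsigned char, 64>>& sigs,    // k signatures
    const std::vector<std::array<unsigned char, 32>>& hashes,  // k message hashes
    const std::vector<std::vector<unsigned char>>& pubkeys)    // n serialized pubkeys
{
    size_t ikey = 0;
    for (size_t isig = 0; isig < sigs.size(); ++isig) {
        const auto candidates = RecoverCandidates(ctx, sigs[isig].data(), hashes[isig].data());
        bool matched = false;
        // Stop early once fewer pubkeys remain than signatures left to satisfy.
        while (!matched && pubkeys.size() - ikey >= sigs.size() - isig) {
            for (const auto& cand : candidates) {
                // Stand-in for the proposed "compare affine point to serialized
                // pubkey" primitive: serialize the candidate and memcmp.
                unsigned char ser[33];
                size_t serlen = sizeof(ser);
                secp256k1_ec_pubkey_serialize(ctx, ser, &serlen, &cand, SECP256K1_EC_COMPRESSED);
                if (pubkeys[ikey].size() == serlen &&
                    std::memcmp(ser, pubkeys[ikey].data(), serlen) == 0) {
                    matched = true;
                    break;
                }
            }
            ++ikey;
        }
        if (!matched) return false;
    }
    return true;
}
```

The context can be an ordinary one from `secp256k1_context_create` (if I remember correctly, older library versions want the verify flag for recovery), and of course the real thing would live inside the script interpreter rather than take vectors like this.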
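The “one modular inverse for the entire batch” step is Montgomery's batch-inversion trick: multiply the values together, invert the product once, and peel the individual inverses back off with multiplications. As far as I know, libsecp256k1 already uses this trick internally when converting batches of Jacobian points to affine, so this part shouldn't need anything new. The toy sketch below is mine, over a small 61-bit prime rather than secp256k1's field (the library's field types aren't public); it only shows the trick itself. In the real conversion one would batch-invert each point's $Z$ and then compute $x = X \cdot Z^{-2}$, $y = Y \cdot Z^{-3}$.

```cpp
// Montgomery's batch-inversion trick over a toy prime field (2^61 - 1).
// Needs a compiler with __uint128_t (GCC/Clang). All inputs must be nonzero,
// which holds for the Z coordinates of non-infinity Jacobian points.
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <vector>

static const uint64_t P = 2305843009213693951ULL;  // 2^61 - 1, a Mersenne prime

static uint64_t mulmod(uint64_t a, uint64_t b) {
    return static_cast<uint64_t>((__uint128_t)a * b % P);
}

static uint64_t powmod(uint64_t a, uint64_t e) {
    uint64_t r = 1;
    for (; e; e >>= 1, a = mulmod(a, a))
        if (e & 1) r = mulmod(r, a);
    return r;
}

// Replace every element of xs with its inverse, using one field inversion:
// prefix products, one inversion of the total product, then unwind.
static void batch_invert(std::vector<uint64_t>& xs) {
    std::vector<uint64_t> prefix(xs.size());
    uint64_t acc = 1;
    for (size_t i = 0; i < xs.size(); ++i) {
        prefix[i] = acc;           // product of xs[0..i-1]
        acc = mulmod(acc, xs[i]);
    }
    uint64_t inv = powmod(acc, P - 2);   // the single expensive inversion (Fermat)
    for (size_t i = xs.size(); i-- > 0;) {
        uint64_t xi = xs[i];
        xs[i] = mulmod(inv, prefix[i]);  // = xs[i]^-1
        inv = mulmod(inv, xi);           // = (xs[0]*...*xs[i-1])^-1
    }
}

int main() {
    std::vector<uint64_t> zs = {3, 5, 12345678901234567ULL};  // e.g. Jacobian Z values
    const auto orig = zs;
    batch_invert(zs);
    for (size_t i = 0; i < zs.size(); ++i)
        assert(mulmod(orig[i], zs[i]) == 1);
    std::puts("batch inversion ok");
}
```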
The advantage of this proposed method over the current “try-and-fail” method is that the number of elliptic curve operations is now roughly proportional to $k$, the number of signatures, rather than $n$, the number of pubkeys.
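As a rough back-of-the-envelope of my own (one recovery, like one verification, is dominated by a double-scalar multiplication; write $C_{\mathrm{mul}}$ for that cost, $C_{\mathrm{sqrt}}$ for a modular square root, and $C_{\mathrm{inv}}$ for a modular inverse):

$$
\text{current (worst case)} \approx n \cdot C_{\mathrm{mul}},
\qquad
\text{proposed} \approx k \cdot (C_{\mathrm{mul}} + C_{\mathrm{sqrt}}) + C_{\mathrm{inv}} + O(n)\ \text{cheap comparisons}.
$$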
For non-$n$-of-$n$ CheckMultiSig operations, I expect this method should be faster than the current implementation. For $n$-of-$n$ CheckMultiSig operations, the proposed method is not likely to be faster than local ($n$-signature) batch verification and certainly not faster than global batch verification (for OP_CHECKMULTISIGVERIFY).
Unfortunately, I don’t have time to pursue a PR to implement this proposal at the moment, and maybe someone else would like to pick up this issue.