Hi, the idea was proposed by @roconnor-blockstream in https://github.com/bitcoin/bitcoin/issues/15621.
Basically, the idea is to change the cost of verifying a k-of-n multisig from n verifications (the number of pubkeys) to k recoveries (the number of signatures).
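To make the idea concrete, here is a minimal sketch (not the PR's implementation) of driving a k-of-n check by recovery through the public API of the recovery module. Since DER signatures carry no recovery id, the sketch simply tries both y-parity candidates (a real implementation would share most of that work between the two), and it ignores CHECKMULTISIG's ordering rule and the rare r-overflow recids for brevity. check_multisig_by_recovery, sigs64 (compact 64-byte signatures) and keys33 (compressed keys) are hypothetical names.

#include <string.h>
#include <secp256k1.h>
#include <secp256k1_recovery.h>

/* Sketch: accept iff every one of the k signatures recovers to one of the n keys. */
static int check_multisig_by_recovery(const secp256k1_context *ctx,
                                      const unsigned char (*sigs64)[64], size_t k,
                                      const unsigned char (*keys33)[33], size_t n,
                                      const unsigned char *msghash32) {
    size_t i, j;
    for (i = 0; i < k; i++) {
        int recid, found = 0;
        /* No recovery id is available, so try both y-parity candidates.
         * (The overflow candidates, recid 2/3, are ignored here.) */
        for (recid = 0; recid < 2 && !found; recid++) {
            secp256k1_ecdsa_recoverable_signature rsig;
            secp256k1_pubkey pub;
            unsigned char ser[33];
            size_t serlen = sizeof(ser);
            if (!secp256k1_ecdsa_recoverable_signature_parse_compact(ctx, &rsig, sigs64[i], recid)) {
                continue;
            }
            if (!secp256k1_ecdsa_recover(ctx, &pub, &rsig, msghash32)) {
                continue;
            }
            secp256k1_ec_pubkey_serialize(ctx, ser, &serlen, &pub, SECP256K1_EC_COMPRESSED);
            /* Match the recovered key against the n provided keys. */
            for (j = 0; j < n && !found; j++) {
                found = (memcmp(ser, keys33[j], 33) == 0);
            }
        }
        if (!found) {
            return 0; /* this signature matches none of the provided keys */
        }
    }
    return 1; /* all k signatures matched a provided key */
}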
Currently the performance isn't great (no GMP, no endomorphism; Bitcoin's default):
$ ./bench_recover
ecdsa_recover: min 66.2us / avg 68.4us / max 74.5us
ecdsa_recover_batch: 10: min 84.5us / avg 85.1us / max 85.8us
$ ./bench_verify
ecdsa_verify: min 58.0us / avg 59.2us / max 60.2us
ecdsa_verify_openssl: min 334us / avg 340us / max 346us
It is already improved by batching the gej->ge conversions and by batching the R.x inversions.
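For reference, "batching the R.x inversions" means doing one modular inversion for the whole batch via Montgomery's trick (the library's secp256k1_fe_inv_all_var does essentially this). The sketch below is my illustration, not the PR's code: it inverts n field elements with a single secp256k1_fe_inv_var plus 3*(n-1) multiplications.

/* Batch inversion sketch: r[i] = a[i]^-1 for all i, using one inversion.
 * Assumes all a[i] are nonzero and r does not alias a. */
static void fe_inv_all_var_sketch(secp256k1_fe *r, const secp256k1_fe *a, size_t n) {
    secp256k1_fe acc;
    size_t i;
    if (n == 0) {
        return;
    }
    /* Prefix products: r[i] = a[0] * a[1] * ... * a[i]. */
    r[0] = a[0];
    for (i = 1; i < n; i++) {
        secp256k1_fe_mul(&r[i], &r[i - 1], &a[i]);
    }
    /* One inversion of the full product. */
    secp256k1_fe_inv_var(&acc, &r[n - 1]);
    /* Walk backwards, peeling one inverse off per element. */
    for (i = n - 1; i > 0; i--) {
        secp256k1_fe_mul(&r[i], &r[i - 1], &acc); /* = a[i]^-1 */
        secp256k1_fe_mul(&acc, &acc, &a[i]);      /* drop a[i] from acc */
    }
    r[0] = acc;
}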
This can be further optimized using a suggestion from @sipa: pass in the public keys and match the recovered points against them using something like:
static int ge_equals_gej_var(const secp256k1_ge *a, const secp256k1_gej *b) {
    secp256k1_fe z2s;
    secp256k1_fe u1, s1;
    if (a->infinity && b->infinity) {
        return 1;
    } else if (a->infinity || b->infinity) {
        return 0;
    }
#ifdef VERIFY
    VERIFY_CHECK(b->x.magnitude == 1);
    VERIFY_CHECK(b->y.magnitude == 1);
    secp256k1_fe_verify(&b->x);
    secp256k1_fe_verify(&b->y);
#endif
    /* Check a.x * b.z^2 == b.x && a.y * b.z^3 == b.y, to avoid inverses. */
    secp256k1_fe_sqr(&z2s, &b->z);
    secp256k1_fe_mul(&u1, &a->x, &z2s);
    secp256k1_fe_mul(&s1, &a->y, &z2s);
    secp256k1_fe_mul(&s1, &s1, &b->z);
    return secp256k1_fe_equal_var(&u1, &b->x) && secp256k1_fe_equal_var(&s1, &b->y);
}
And then matching through the list (I need to think about how to do it most efficiently, because each comparison is still 2 muls and a sqr).
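As an illustration only (not the PR's code), matching one recovered Jacobian point against the whole list can amortize the z^2/z^3 work across all candidates, leaving 2 field multiplications and 2 cheap equality checks per candidate key:

/* Sketch: return the index of the affine key equal to the Jacobian point b, or -1. */
static int match_gej_against_keys(const secp256k1_gej *b, const secp256k1_ge *keys, size_t n) {
    secp256k1_fe z2, z3, u1, s1;
    size_t i;
    if (b->infinity) {
        return -1;
    }
    secp256k1_fe_sqr(&z2, &b->z);        /* z^2: computed once per recovered point */
    secp256k1_fe_mul(&z3, &z2, &b->z);   /* z^3: computed once per recovered point */
    for (i = 0; i < n; i++) {
        if (keys[i].infinity) {
            continue;
        }
        secp256k1_fe_mul(&u1, &keys[i].x, &z2);  /* a.x * z^2 */
        secp256k1_fe_mul(&s1, &keys[i].y, &z3);  /* a.y * z^3 */
        if (secp256k1_fe_equal_var(&u1, &b->x) && secp256k1_fe_equal_var(&s1, &b->y)) {
            return (int)i;
        }
    }
    return -1;
}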
Another idea might be a way to batch secp256k1_gej_add_var(&xj2, &xj2, &u1j, NULL), which is basically computing A = R+X, B = R-X.
I don't have a concrete suggestion yet for how to batch that, but it needs some thought.
This is still a work in progress, as we would want to see good performance before even considering this. That said, it already provides a performance boost in cases where k < 7/10*n (2-of-3 and up), but I'm not sure differentiating these cases is a great idea (it adds complexity to consensus code).
Edit: For reference, @0xb10c kindly provided me with some multisig stats (thanks, @0xb10c!). Currently 9.2% of all historic inputs are multisig, with the following distribution (ordered from least used to most used):
4-of-8: 1
1-of-14: 1
11-of-11: 1
13-of-13: 1
6-of-15: 1
4-of-11: 1
6-of-10: 2
1-of-8: 2
1-of-10: 2
6-of-8: 2
8-of-10: 2
7-of-12: 2
3-of-10: 2
2-of-12: 3
5-of-15: 3
5-of-10: 4
8-of-9: 5
12-of-12: 5
1-of-9: 7
12-of-15: 8
6-of-11: 8
6-of-7: 9
10-of-15: 11
3-of-9: 11
5-of-9: 12
6-of-9: 13
2-of-10: 14
3-of-15: 21
1-of-15: 27
5-of-6: 28
6-of-6: 41
9-of-9: 51
2-of-14: 66
8-of-15: 70
2-of-15: 74
7-of-7: 98
15-of-15: 102
1-of-12: 117
5-of-8: 120
10-of-10: 144
8-of-8: 150
3-of-8: 169
6-of-13: 250
11-of-15: 280
5-of-7: 306
2-of-7: 322
4-of-9: 431
2-of-8: 600
5-of-5: 711
2-of-9: 1024
1-of-7: 1801
4-of-7: 2174
4-of-4: 2870
1-of-5: 3216
1-of-4: 3702
1-of-6: 6260
4-of-5: 10719
3-of-7: 16731
2-of-5: 20763
4-of-6: 21304
3-of-6: 24457
3-of-3: 29275
1-of-3: 39495
2-of-4: 310673
1-of-1: 478276
2-of-6: 511406
3-of-5: 624550
1-of-2: 651491
3-of-4: 3849322
2-of-2: 25711364
2-of-3: 73384150
Total inputs: 1143408605
Total multisig inputs: 105709334
Multisig inputs / total inputs: 0.0924510568992963 (~9.2%)