ecmult_multi: reduce strauss memory usage by 30% #1761
pull jonasnick wants to merge 1 commit into bitcoin-core:master from jonasnick:strauss-mem changing 1 file +19 −4
-
jonasnick commented at 2:28 pm on October 17, 2025: contributor
This is a draft because I’m not sure about the cleanest way to implement it.
-
ecmult_multi: reduce strauss memory usage by 30% 26166c4f5f
-
real-or-random added the label performance on Oct 27, 2025
-
real-or-random commented at 8:02 am on October 27, 2025: contributor
This is a draft because I’m not sure about the cleanest way to implement it.
The current approach looks clean. What other approaches do you have in mind?
-
real-or-random added the label tweak/refactor on Oct 27, 2025
-
in src/ecmult_impl.h:239 in 26166c4f5f
236+}
237+
238  struct secp256k1_strauss_point_state {
239-    int wnaf_na_1[129];
240-    int wnaf_na_lam[129];
241+    int8_t wnaf_na_1[129];
real-or-random commented at 8:10 am on October 27, 2025:
    int_least8_t wnaf_na_1[129];
This is technically better to retain support for platforms that don’t offer int8_t. I don’t think there are any relevant platforms of that kind, but why not… 🤷
edit: Now that I think about it again, a platform that does not offer int8_t will most likely be one for which CHAR_BIT != 8. And then it’s neither relevant nor likely to offer int32_t or int64_t, which we need anyway. So I guess there’s no difference between int8_t and int_least8_t.
hebasto commented at 4:58 pm on October 27, 2025:
This is technically better to retain support for platforms that don’t offer int8_t.
int8_t is already used in src/assumptions.h.
hebasto commented at 4:47 pm on October 27, 2025: member
Concept ACK.
hebasto commented at 6:00 pm on October 27, 2025: member
On my x86_64 system, this PR reduces the memory allocated on the scratch space from 2224 bytes to 1452 bytes per point.
Please ping me once it’s undrafted.
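To put those numbers in context, a rough size calculation for the two 129-entry wNAF arrays in secp256k1_strauss_point_state (assuming a typical platform with a 4-byte int; exact figures depend on struct padding):

    /* Back-of-the-envelope check of the savings:
     *   before: 2 * 129 * sizeof(int)    = 2 * 129 * 4 = 1032 bytes
     *   after:  2 * 129 * sizeof(int8_t) = 2 * 129 * 1 =  258 bytes
     * i.e. roughly a 774-byte reduction per point, in line with the measured
     * 2224 - 1452 = 772 bytes (the small gap is alignment/padding). */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        size_t before = 2u * 129u * sizeof(int);
        size_t after  = 2u * 129u * sizeof(int8_t);
        printf("wnaf arrays: %zu -> %zu bytes (saving %zu)\n", before, after, before - after);
        return 0;
    }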
jonasnick commented at 6:52 pm on October 27, 2025: contributor
The current approach requires a temporary array int wnaf_tmp[256]; to provide to secp256k1_ecmult_wnaf, which looks unclean. The alternatives are
- copy almost all of secp256k1_ecmult_wnaf into secp256k1_ecmult_wnaf_small, or
- remove secp256k1_ecmult_wnaf_small and write a secp256k1_ecmult macro.
Both options seem to be worse.
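For reference, a minimal sketch of the tmp-array approach described above (illustrative only, based on this description and the diff quoted below, not necessarily the exact committed code):

    /* Compute the wNAF digits into a plain int array, then narrow each digit
     * into the caller's int8_t array. Requires w <= 8 so that every digit
     * lies in [-127, 127]. */
    static int secp256k1_ecmult_wnaf_small(int8_t *wnaf, int len, const secp256k1_scalar *a, int w) {
        int wnaf_tmp[256];
        int ret, i;

        VERIFY_CHECK(2 <= w && w <= 8);
        VERIFY_CHECK(len <= 256); /* illustrative bound check on the temporary array */

        ret = secp256k1_ecmult_wnaf(wnaf_tmp, len, a, w);
        for (i = 0; i < len; i++) {
            wnaf[i] = (int8_t)wnaf_tmp[i];
        }
        return ret;
    }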
hebasto commented at 10:00 pm on October 27, 2025: member
The current approach requires a temporary array int wnaf_tmp[256]; to provide to secp256k1_ecmult_wnaf, which looks unclean. The alternatives are
- copy almost all of secp256k1_ecmult_wnaf into secp256k1_ecmult_wnaf_small, or
- remove secp256k1_ecmult_wnaf_small and write a secp256k1_ecmult macro.
Both options seem to be worse.
I might suggest a third option: https://github.com/hebasto/secp256k1/commit/5c0d6eeeb2ec9dc8de92f59cf7647a30e6826dcb.
in src/ecmult_impl.h:228 in 26166c4f5f
219 @@ -220,9 +220,24 @@ static int secp256k1_ecmult_wnaf(int *wnaf, int len, const secp256k1_scalar *a,
220      return last_set_bit + 1;
221  }
222
223+/* Same as secp256k1_ecmult_wnaf, but stores to int8_t array. Requires w <= 8. */
224+static int secp256k1_ecmult_wnaf_small(int8_t *wnaf, int len, const secp256k1_scalar *a, int w) {
225+    int wnaf_tmp[256];
226+    int ret, i;
227+
228+    VERIFY_CHECK(2 <= w && w <= 8);
real-or-random commented at 9:37 am on November 7, 2025:
    VERIFY_CHECK(2 <= w && w <= 7);
jonasnick commented at 2:56 pm on November 12, 2025:
I don’t see why w = 8 wouldn’t work. The documentation of wnaf says
    * - each wnaf[i] is either 0, or an odd integer between -(1<<(w-1) - 1) and (1<<(w-1) - 1)
So for w = 8, wnaf[i] is in [-127, 127], which fits in an int8_t.
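To make the cutoff explicit, a quick standalone check (not part of the PR): the maximum absolute digit for window size w is (1 << (w - 1)) - 1, and w = 8 is the largest window for which that bound (127) still fits in an int8_t.

    #include <stdio.h>

    /* Print the documented wNAF digit bound for each window size. */
    int main(void) {
        int w;
        for (w = 2; w <= 9; w++) {
            printf("w = %d -> max |wnaf[i]| = %d\n", w, (1 << (w - 1)) - 1);
        }
        return 0; /* w = 8 gives 127; w = 9 gives 255, which no longer fits in int8_t. */
    }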
real-or-random commented at 8:18 pm on November 12, 2025:
Sorry, yes, you’re right. I was getting confused.
secp256k1_ecmult_wnaf itself needs w <= 31 (and not 32), if only because it performs a carry << w shift (for int carry), which is certainly UB if int is 32 bits. (In fact, if carry == 1, then even 1 << 31 is UB. This is another edge case that we should fix! Let me add this to the other issue.)
But since your function only copies the results, everything is fine.
jonasnick commented at 8:22 pm on November 12, 2025:
In fact, if carry == 1, then even 1 << 31 is UB. This is another edge case that we should fix! Let me add this to the other issue
Oh, great catch!
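To spell the edge case out (a standalone illustration, not code from the repository): with a 32-bit int, INT_MAX is 2^31 - 1, so the value produced by 1 << 31 is not representable and the signed shift is undefined behaviour, whereas the same shift in a 32-bit unsigned type is well-defined.

    #include <stdint.h>

    /* The signed expression 1 << 31 is UB when int is 32 bits (2^31 > INT_MAX);
     * shifting in uint32_t yields the intended value with defined behaviour. */
    uint32_t intended_value(void) {
        return (uint32_t)1 << 31; /* == 2147483648 */
    }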
real-or-random commented at 9:56 am on November 7, 2025: contributor
I think the current approach in the PR is good. It may not be elegant to have a tmp array, but it’s simple and correct. We’d need to benchmark if the tmp array makes a difference in the end. But I think this PR needs a benchmark in general to make sure that using int8_t does not increase the running time (much).
If we want to avoid it, here’s yet another variant: https://github.com/real-or-random/secp256k1/commit/f83731bce8880787351533c41d14ac334b38680c It uses a macro to define different variants of secp256k1_ecmult_wnaf parametrized in the output type. The macro is not elegant either, but this variant is better for type safety than just turning secp256k1_ecmult_wnaf into a macro.
In fact, the current secp256k1_ecmult_wnaf needs the unstated and unchecked assumption that int has at least 32 value bits when it VERIFY_CHECKs that w <= 31. In practice, we call it only with WINDOW_A == 5 and WINDOW_G == ECMULT_WINDOW_SIZE, where the latter is configurable in the range 2..24.
A consequence of this "bug" is that the code fails on a 16-bit platform if you set ECMULT_WINDOW_SIZE > 16. I don’t think we need to support this, but code with unchecked assumptions is bad. So I suggest that we rewrite the function to use int32_t instead of int even if we don’t use my macro approach. Alternatively, we could add the assumption that INT_MAX >= INT32_MAX, but this forbids 16-bit platforms, and the code seems to work on them in principle; see #792 (comment).
real-or-random commented at 10:17 am on November 7, 2025: contributor
I might suggest a third option: hebasto@5c0d6ee.
Sorry, I forgot to comment on that option. That’s also clean, but it introduces a lot of code complexity.
The way I see it:
- If the current approach is fine performance-wise, let’s take it.
- If not, use either @hebasto’s approach or mine. If you ask me, I prefer mine slightly because it’s less code and more “direct” even though it uses a macro.
- If none of this is satisfactory, we can still duplicate the code.
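As a side note on the macro variant mentioned above, here is a toy illustration of the general technique of generating the same routine for different digit types (the DEFINE_FILL_DIGITS and fill_digits_* names are hypothetical and not from the linked commit):

    #include <stdint.h>

    /* Expand the same function body once per digit type; each expansion is a
     * real, separately type-checked function. */
    #define DEFINE_FILL_DIGITS(name, digit_t)        \
        static void name(digit_t *out, int len) {    \
            int i;                                   \
            for (i = 0; i < len; i++) {              \
                out[i] = (digit_t)(len - i);         \
            }                                        \
        }

    DEFINE_FILL_DIGITS(fill_digits_int, int)
    DEFINE_FILL_DIGITS(fill_digits_int8, int8_t)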
jonasnick commented at 7:26 pm on November 12, 2025: contributor
Thanks for demonstrating creative alternative solutions :) If it weren’t for the layers of indirection, I’d consider @hebasto’s approach to be the most elegant. The PR’s current approach is just so much simpler. And I ran benchmarks with bench_ecmult, which showed at most a 0.1us slowdown (for some number of points) on my Intel i7 machine.
jonasnick marked this as ready for review on Nov 12, 2025
real-or-random approved
real-or-random commented at 8:23 pm on November 12, 2025: contributor
utACK 26166c4f5fd5688c31e488449ee2325eb8f2fb36
siv2r commented at 2:47 pm on November 17, 2025: contributor
tACK 26166c4
Points    Master Avg (µs)    PR Avg (µs)    Improvement
8,191     5.22               5.10           2.3%
16,383    6.13               5.44           12.6%
20,479    6.88               5.78           19.0%
32,767    8.26               7.50           10.0%

bench_ecmult on master:

Using strauss_wnaf:
Benchmark , Min(us) , Avg(us) , Max(us)

ecmult_gen , 5.30 , 5.48 , 5.71
ecmult_const , 12.1 , 12.3 , 12.4
ecmult_const_xonly , 13.6 , 13.8 , 13.9
ecmult_1p , 9.82 , 10.0 , 10.1
ecmult_0p_g , 6.82 , 6.92 , 7.00
ecmult_1p_g , 5.82 , 5.89 , 5.95
ecmult_multi_0p_g , 6.84 , 6.93 , 6.99
ecmult_multi_1p_g , 5.79 , 5.90 , 6.00
ecmult_multi_2p_g , 5.50 , 5.57 , 5.65
ecmult_multi_3p_g , 5.39 , 5.45 , 5.58
ecmult_multi_4p_g , 5.21 , 5.30 , 5.37
ecmult_multi_5p_g , 5.14 , 5.25 , 5.30
ecmult_multi_6p_g , 5.10 , 5.20 , 5.27
ecmult_multi_7p_g , 5.07 , 5.15 , 5.20
ecmult_multi_8p_g , 5.03 , 5.12 , 5.17
ecmult_multi_9p_g , 4.92 , 5.09 , 5.16
ecmult_multi_10p_g , 5.01 , 5.09 , 5.14
ecmult_multi_11p_g , 5.00 , 5.08 , 5.13
ecmult_multi_12p_g , 4.98 , 5.06 , 5.12
ecmult_multi_13p_g , 4.97 , 5.05 , 5.10
ecmult_multi_14p_g , 4.93 , 5.03 , 5.13
ecmult_multi_15p_g , 4.94 , 5.03 , 5.08
ecmult_multi_17p_g , 4.94 , 5.00 , 5.06
ecmult_multi_19p_g , 4.92 , 4.99 , 5.07
ecmult_multi_21p_g , 4.89 , 4.96 , 5.05
ecmult_multi_23p_g , 4.90 , 4.98 , 5.03
ecmult_multi_25p_g , 4.88 , 4.96 , 5.03
ecmult_multi_27p_g , 4.85 , 4.95 , 5.00
ecmult_multi_29p_g , 4.90 , 4.97 , 5.07
ecmult_multi_31p_g , 4.76 , 4.94 , 5.04
ecmult_multi_35p_g , 4.84 , 4.94 , 5.09
ecmult_multi_39p_g , 4.85 , 4.92 , 4.99
ecmult_multi_43p_g , 4.79 , 5.07 , 6.24
ecmult_multi_47p_g , 4.71 , 4.88 , 4.98
ecmult_multi_51p_g , 4.68 , 4.87 , 5.00
ecmult_multi_55p_g , 4.81 , 4.90 , 4.97
ecmult_multi_59p_g , 4.80 , 4.93 , 5.00
ecmult_multi_63p_g , 4.81 , 4.92 , 4.98
ecmult_multi_71p_g , 4.80 , 4.91 , 4.99
ecmult_multi_79p_g , 4.84 , 4.93 , 5.02
ecmult_multi_87p_g , 4.76 , 4.93 , 5.02
ecmult_multi_95p_g , 4.86 , 4.96 , 5.04
ecmult_multi_103p_g , 4.88 , 4.94 , 5.02
ecmult_multi_111p_g , 4.90 , 4.96 , 5.02
ecmult_multi_119p_g , 4.88 , 4.96 , 5.01
ecmult_multi_127p_g , 4.90 , 4.98 , 5.02
ecmult_multi_143p_g , 4.94 , 5.00 , 5.05
ecmult_multi_159p_g , 4.88 , 4.99 , 5.05
ecmult_multi_175p_g , 4.88 , 4.98 , 5.05
ecmult_multi_191p_g , 4.92 , 5.01 , 5.06
ecmult_multi_207p_g , 4.94 , 5.03 , 5.09
ecmult_multi_223p_g , 4.92 , 4.99 , 5.08
ecmult_multi_239p_g , 4.98 , 5.04 , 5.10
ecmult_multi_255p_g , 4.84 , 5.01 , 5.10
ecmult_multi_287p_g , 4.98 , 5.05 , 5.11
ecmult_multi_319p_g , 4.96 , 5.06 , 5.14
ecmult_multi_351p_g , 5.00 , 5.06 , 5.15
ecmult_multi_383p_g , 4.99 , 5.06 , 5.13
ecmult_multi_415p_g , 4.95 , 5.06 , 5.14
ecmult_multi_447p_g , 4.96 , 5.07 , 5.16
ecmult_multi_479p_g , 5.01 , 5.09 , 5.15
ecmult_multi_511p_g , 5.00 , 5.07 , 5.13
ecmult_multi_575p_g , 4.94 , 5.05 , 5.13
ecmult_multi_639p_g , 4.99 , 5.07 , 5.13
ecmult_multi_703p_g , 4.99 , 5.07 , 5.16
ecmult_multi_767p_g , 4.96 , 5.07 , 5.15
ecmult_multi_831p_g , 5.00 , 5.07 , 5.14
ecmult_multi_895p_g , 5.00 , 5.08 , 5.15
ecmult_multi_959p_g , 4.98 , 5.08 , 5.14
ecmult_multi_1023p_g , 5.00 , 5.09 , 5.16
ecmult_multi_1151p_g , 5.01 , 5.09 , 5.17
ecmult_multi_1279p_g , 5.04 , 5.10 , 5.16
ecmult_multi_1407p_g , 4.92 , 5.09 , 5.15
ecmult_multi_1535p_g , 5.00 , 5.11 , 5.16
ecmult_multi_1663p_g , 5.03 , 5.11 , 5.19
ecmult_multi_1791p_g , 5.04 , 5.13 , 5.18
ecmult_multi_1919p_g , 5.02 , 5.11 , 5.20
ecmult_multi_2047p_g , 4.99 , 5.10 , 5.18
ecmult_multi_2303p_g , 4.97 , 5.11 , 5.20
ecmult_multi_2559p_g , 5.02 , 5.11 , 5.19
ecmult_multi_2815p_g , 5.04 , 5.12 , 5.18
ecmult_multi_3071p_g , 5.01 , 5.12 , 5.24
ecmult_multi_3327p_g , 5.05 , 5.12 , 5.16
ecmult_multi_3583p_g , 5.03 , 5.10 , 5.18
ecmult_multi_3839p_g , 5.07 , 5.13 , 5.23
ecmult_multi_4095p_g , 5.02 , 5.09 , 5.19
ecmult_multi_4607p_g , 5.04 , 5.10 , 5.15
ecmult_multi_5119p_g , 5.01 , 5.10 , 5.18
ecmult_multi_5631p_g , 5.02 , 5.15 , 5.29
ecmult_multi_6143p_g , 5.06 , 5.13 , 5.19
ecmult_multi_6655p_g , 5.06 , 5.16 , 5.22
ecmult_multi_7167p_g , 5.10 , 5.21 , 5.40
ecmult_multi_7679p_g , 5.15 , 5.19 , 5.25
ecmult_multi_8191p_g , 5.16 , 5.22 , 5.29
ecmult_multi_9215p_g , 5.30 , 5.34 , 5.41
ecmult_multi_10239p_g , 5.34 , 5.44 , 5.54
ecmult_multi_11263p_g , 5.46 , 5.55 , 5.66
ecmult_multi_12287p_g , 5.45 , 5.64 , 6.08
ecmult_multi_13311p_g , 5.66 , 5.81 , 6.44
ecmult_multi_14335p_g , 5.77 , 5.90 , 6.32
ecmult_multi_15359p_g , 5.89 , 5.95 , 6.08
ecmult_multi_16383p_g , 6.06 , 6.13 , 6.25
ecmult_multi_18431p_g , 6.44 , 6.52 , 6.82
ecmult_multi_20479p_g , 6.82 , 6.88 , 7.07
ecmult_multi_22527p_g , 7.09 , 7.13 , 7.18
ecmult_multi_24575p_g , 7.48 , 7.57 , 7.70
ecmult_multi_26623p_g , 7.74 , 7.82 , 8.02
ecmult_multi_28671p_g , 7.90 , 7.95 , 8.09
ecmult_multi_30719p_g , 8.05 , 8.12 , 8.27
ecmult_multi_32767p_g , 8.21 , 8.26 , 8.41

bench_ecmult with this PR:

Using strauss_wnaf:
Benchmark , Min(us) , Avg(us) , Max(us)

ecmult_gen , 5.24 , 5.45 , 5.82
ecmult_const , 12.0 , 12.1 , 12.3
ecmult_const_xonly , 13.5 , 13.7 , 13.9
ecmult_1p , 9.81 , 9.98 , 10.1
ecmult_0p_g , 6.71 , 6.84 , 6.98
ecmult_1p_g , 5.80 , 5.86 , 5.92
ecmult_multi_0p_g , 6.81 , 6.87 , 6.92
ecmult_multi_1p_g , 5.80 , 5.88 , 5.97
ecmult_multi_2p_g , 5.39 , 5.53 , 5.59
ecmult_multi_3p_g , 5.30 , 5.39 , 5.44
ecmult_multi_4p_g , 5.21 , 5.29 , 5.37
ecmult_multi_5p_g , 5.13 , 5.23 , 5.28
ecmult_multi_6p_g , 5.07 , 5.17 , 5.23
ecmult_multi_7p_g , 5.07 , 5.15 , 5.20
ecmult_multi_8p_g , 5.04 , 5.13 , 5.19
ecmult_multi_9p_g , 4.99 , 5.11 , 5.18
ecmult_multi_10p_g , 4.99 , 5.07 , 5.12
ecmult_multi_11p_g , 4.97 , 5.06 , 5.18
ecmult_multi_12p_g , 4.94 , 5.05 , 5.10
ecmult_multi_13p_g , 4.96 , 5.05 , 5.13
ecmult_multi_14p_g , 4.98 , 5.03 , 5.11
ecmult_multi_15p_g , 4.94 , 5.02 , 5.10
ecmult_multi_17p_g , 4.91 , 5.01 , 5.08
ecmult_multi_19p_g , 4.89 , 5.00 , 5.07
ecmult_multi_21p_g , 4.90 , 4.98 , 5.04
ecmult_multi_23p_g , 4.89 , 4.97 , 5.05
ecmult_multi_25p_g , 4.84 , 4.96 , 5.06
ecmult_multi_27p_g , 4.87 , 4.94 , 5.03
ecmult_multi_29p_g , 4.83 , 4.92 , 4.98
ecmult_multi_31p_g , 4.82 , 4.92 , 4.99
ecmult_multi_35p_g , 4.82 , 4.93 , 5.01
ecmult_multi_39p_g , 4.86 , 4.92 , 4.97
ecmult_multi_43p_g , 4.82 , 4.91 , 4.99
ecmult_multi_47p_g , 4.81 , 4.91 , 4.96
ecmult_multi_51p_g , 4.84 , 4.91 , 4.99
ecmult_multi_55p_g , 4.79 , 4.90 , 4.97
ecmult_multi_59p_g , 4.85 , 4.91 , 4.96
ecmult_multi_63p_g , 4.81 , 4.89 , 4.96
ecmult_multi_71p_g , 4.79 , 4.88 , 4.97
ecmult_multi_79p_g , 4.79 , 4.90 , 4.97
ecmult_multi_87p_g , 4.81 , 4.90 , 4.97
ecmult_multi_95p_g , 4.82 , 4.90 , 4.99
ecmult_multi_103p_g , 4.86 , 4.92 , 4.99
ecmult_multi_111p_g , 4.82 , 4.92 , 4.99
ecmult_multi_119p_g , 4.84 , 4.94 , 5.00
ecmult_multi_127p_g , 4.81 , 4.92 , 5.01
ecmult_multi_143p_g , 4.86 , 4.95 , 5.01
ecmult_multi_159p_g , 4.92 , 4.97 , 5.06
ecmult_multi_175p_g , 4.84 , 4.95 , 5.01
ecmult_multi_191p_g , 4.88 , 4.96 , 5.04
ecmult_multi_207p_g , 4.91 , 4.99 , 5.04
ecmult_multi_223p_g , 4.93 , 4.99 , 5.05
ecmult_multi_239p_g , 4.88 , 4.96 , 5.02
ecmult_multi_255p_g , 4.88 , 4.97 , 5.03
ecmult_multi_287p_g , 4.91 , 4.97 , 5.04
ecmult_multi_319p_g , 4.92 , 4.97 , 5.04
ecmult_multi_351p_g , 4.92 , 4.98 , 5.04
ecmult_multi_383p_g , 4.88 , 4.98 , 5.04
ecmult_multi_415p_g , 4.93 , 4.99 , 5.07
ecmult_multi_447p_g , 4.81 , 4.98 , 5.05
ecmult_multi_479p_g , 4.90 , 4.98 , 5.05
ecmult_multi_511p_g , 4.92 , 5.02 , 5.07
ecmult_multi_575p_g , 4.95 , 5.01 , 5.06
ecmult_multi_639p_g , 4.94 , 5.02 , 5.08
ecmult_multi_703p_g , 4.96 , 5.04 , 5.10
ecmult_multi_767p_g , 4.92 , 5.03 , 5.09
ecmult_multi_831p_g , 4.93 , 5.06 , 5.11
ecmult_multi_895p_g , 4.90 , 5.02 , 5.11
ecmult_multi_959p_g , 4.94 , 5.02 , 5.11
ecmult_multi_1023p_g , 4.95 , 5.04 , 5.12
ecmult_multi_1151p_g , 4.96 , 5.05 , 5.11
ecmult_multi_1279p_g , 4.87 , 5.06 , 5.33
ecmult_multi_1407p_g , 4.93 , 5.04 , 5.10
ecmult_multi_1535p_g , 4.99 , 5.04 , 5.11
ecmult_multi_1663p_g , 4.98 , 5.04 , 5.14
ecmult_multi_1791p_g , 4.99 , 5.05 , 5.11
ecmult_multi_1919p_g , 4.95 , 5.05 , 5.09
ecmult_multi_2047p_g , 5.01 , 5.03 , 5.13
ecmult_multi_2303p_g , 4.97 , 5.05 , 5.10
ecmult_multi_2559p_g , 4.92 , 5.04 , 5.11
ecmult_multi_2815p_g , 5.00 , 5.05 , 5.11
ecmult_multi_3071p_g , 4.86 , 5.00 , 5.08
ecmult_multi_3327p_g , 4.89 , 5.02 , 5.12
ecmult_multi_3583p_g , 5.00 , 5.06 , 5.12
ecmult_multi_3839p_g , 4.96 , 5.04 , 5.11
ecmult_multi_4095p_g , 4.97 , 5.04 , 5.11
ecmult_multi_4607p_g , 4.99 , 5.07 , 5.19
ecmult_multi_5119p_g , 4.91 , 5.06 , 5.13
ecmult_multi_5631p_g , 4.98 , 5.07 , 5.15
ecmult_multi_6143p_g , 4.99 , 5.07 , 5.13
ecmult_multi_6655p_g , 5.01 , 5.12 , 5.34
ecmult_multi_7167p_g , 5.00 , 5.06 , 5.14
ecmult_multi_7679p_g , 5.03 , 5.07 , 5.13
ecmult_multi_8191p_g , 5.00 , 5.10 , 5.30
ecmult_multi_9215p_g , 5.05 , 5.09 , 5.12
ecmult_multi_10239p_g , 5.08 , 5.19 , 5.38
ecmult_multi_11263p_g , 5.06 , 5.15 , 5.21
ecmult_multi_12287p_g , 5.08 , 5.16 , 5.22
ecmult_multi_13311p_g , 5.15 , 5.21 , 5.28
ecmult_multi_14335p_g , 5.21 , 5.30 , 5.61
ecmult_multi_15359p_g , 5.22 , 5.30 , 5.36
ecmult_multi_16383p_g , 5.33 , 5.44 , 5.83
ecmult_multi_18431p_g , 5.48 , 5.51 , 5.59
ecmult_multi_20479p_g , 5.65 , 5.78 , 6.26
ecmult_multi_22527p_g , 5.87 , 5.92 , 5.96
ecmult_multi_24575p_g , 6.14 , 6.24 , 6.64
ecmult_multi_26623p_g , 6.43 , 6.52 , 6.79
ecmult_multi_28671p_g , 6.66 , 6.74 , 6.86
ecmult_multi_30719p_g , 6.92 , 6.98 , 7.22
ecmult_multi_32767p_g , 7.14 , 7.50 , 8.38

hebasto approved
hebasto commented at 8:23 pm on November 17, 2025: member
ACK 26166c4f5fd5688c31e488449ee2325eb8f2fb36, I have reviewed the code and it looks OK.
On my machine (AMD Ryzen AI 7 350), my benchmarking results for this PR differ from others:
- a tiny (<0.5%) but consistent slowdown for a small number of points
- for a large number of points, the speedup is less than 2%
real-or-random approved
real-or-random commented at 8:35 am on November 18, 2025: contributor
ACK 26166c4f5fd5688c31e488449ee2325eb8f2fb36; benchmarks show no significant difference (only tried low point counts)
real-or-random merged this on Nov 18, 2025
real-or-random closed this on Nov 18, 2025
This is a metadata mirror of the GitHub repository bitcoin-core/secp256k1. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.