ecmult_multi: Replace scratch space with malloc, use abcd cost model #1789

fjahr wants to merge 2 commits into bitcoin-core:master from fjahr:2025-11-mem-multi-var, changing 7 files +749 −841
  1. fjahr commented at 11:47 pm on December 15, 2025: contributor

    This implements the changes discussed at an in-person meeting in October. It removes the use of scratch space in batch validation, replacing it with internal malloc usage, and adds an ABCD cost model for algorithm selection.

    The API and internals follow the drafted spec from the meeting very closely: https://gist.github.com/fjahr/c2a009487dffe7a1fbf17ca1821976ca There are a few minor changes that should not affect the intended behavior. Test coverage is currently somewhat lower than it was previously; I am guessing an adapted form of the test_ecmult_multi test should be added back, and there are some TODOs left in the test code which I plan to address after a first round of conceptual feedback.

    The second commit demonstrates the calibration tooling I have been using, though it is not exactly what produced the results in the PR. I have been struggling with the calibration code and never quite get a result that works without manual tweaking, so the calibration itself, as well as the code added for it, is still a work in progress. I assume some version of the calibration code should be added to the repo, but I have not thought much about the best place for it. Putting it into the benchmark and combining it with the Python script was just a convenient way to experiment. I am very open to suggestions on how to change this.

  2. fjahr force-pushed on Dec 16, 2025
  3. in src/modules/musig/keyagg_impl.h:152 in 8760b3c148 outdated
    163-#endif
    164-    secp256k1_musig_keyaggcoef_internal(sc, ctx->pks_hash, pt, &ctx->second_pk);
    165-    return 1;
    166-}
    167-
    168 int secp256k1_musig_pubkey_agg(const secp256k1_context* ctx, secp256k1_xonly_pubkey *agg_pk, secp256k1_musig_keyagg_cache *keyagg_cache, const secp256k1_pubkey * const* pubkeys, size_t n_pubkeys) {
    


    siv2r commented at 1:33 pm on December 26, 2025:
    It might be better to pass mem_limit as an input argument rather than defining it internally

    fjahr commented at 4:26 pm on January 20, 2026:
    Hm, I see the point that this would align with the batch validation API that has been discussed, but I am not sure that justifies changing the existing musig API. It would probably be better to do this in a separate/follow-up PR, but I will wait a bit to see what other reviewers think about this.
  4. in src/modules/musig/keyagg_impl.h:206 in 8760b3c148
    207+        secp256k1_musig_keyaggcoef_internal(&scalars[i], ecmult_data.pks_hash, &points[i], &ecmult_data.second_pk);
    208+    }
    209+
    210+    /* TODO: Previously this just used TRIVIAL algorithm by not providing
    211+     * scratch space. To be discussed if this should be changed. */
    212+    mem_limit = n_pubkeys * (sizeof(secp256k1_ge) + sizeof(secp256k1_scalar)) * 4;
    


    siv2r commented at 1:51 pm on December 26, 2025:
    nit: Why the constant 4? I assume it covers internal overhead. I’m wondering whether we can define a ratio k such that mem_limit = n_points * (sizeof(secp256k1_ge) + sizeof(secp256k1_scalar)) * k yields a batch size of n_points when given to _ecmult_multi_batch_size()

    siv2r commented at 1:57 pm on December 26, 2025:
    Hmm, we could probably do mem_limit = max(STRAUSS_POINT_SIZE * n, PIPPENGER_POINT_SIZE * n + PIPPENGER_FIXED_SIZE). Just thinking out loud, not suggesting a change here.

    fjahr commented at 4:26 pm on January 20, 2026:
    This was just a rough heuristic that I did not put much thought into because of the TODO above, which I am still unsure about. Your suggestions are interesting, and I think we could actually just use mem_limit = SECP256K1_STRAUSS_POINT_SIZE * n_pubkeys; because in 99.9% of musig cases Strauss should be the optimal algorithm anyway, and it makes the reasoning in the code easier as well. Let me know what you think; I have updated the code with this for now.
  5. in src/tests_exhaustive.c:187 in 8760b3c148
    185@@ -186,17 +186,13 @@ typedef struct {
    186     secp256k1_ge pt[2];
    187 } ecmult_multi_data;
    


    siv2r commented at 2:02 pm on December 26, 2025:
    We can probably remove this data structure. In the old code, I only see it being used as the cbdata (initialized from the local scalars and points arrays) passed to ecmult_multi_var.

    fjahr commented at 4:26 pm on January 20, 2026:
    Oh, true, good catch!
  6. in src/bench_ecmult.c:339 in 8760b3c148
    332@@ -306,11 +333,11 @@ static void run_ecmult_multi_bench(bench_data* data, size_t count, int includes_
    333 int main(int argc, char **argv) {
    334     bench_data data;
    335     int i, p;
    336-    size_t scratch_size;
    337 
    338     int iters = get_iters(10000);
    339 
    340-    data.ecmult_multi = secp256k1_ecmult_multi_var;
    341+    data.forced_algo = -1;
    


    siv2r commented at 2:09 pm on December 26, 2025:
    nit: We could define #define BENCH_ALGO_AUTO (-1) to make this more readable. Then, the check on line 252 would be: if (data->forced_algo != BENCH_ALGO_AUTO)

    fjahr commented at 4:26 pm on January 20, 2026:
    Sounds good, taken.
  7. in src/ecmult_impl.h:586 in 8760b3c148 outdated
    636+
    637+/* Pippenger fixed overhead: buckets + state + 2 extra entries */
    638+#define SECP256K1_PIPPENGER_FIXED_SIZE(w) \
    639+    ((sizeof(secp256k1_gej) << (w)) \
    640+     + sizeof(struct secp256k1_pippenger_state) \
    641+     + 2 * SECP256K1_PIPPENGER_ENTRY_SIZE(w))
    


    siv2r commented at 2:15 pm on December 26, 2025:
    #define SECP256K1_PIPPENGER_FIXED_SIZE(w) \
        ((sizeof(secp256k1_gej) << (w)) \
         + 2 * SECP256K1_PIPPENGER_ENTRY_SIZE(w))
    

    We can remove the pippenger_state since it’s a stack variable. IIUC, the old code allocated this variable on the heap, hence it included it in the size calculation.


    fjahr commented at 4:26 pm on January 20, 2026:
    That looks correct to me, taken!
  8. in src/ecmult_impl.h:732 in 8760b3c148
    858+            n * ECMULT_TABLE_SIZE(WINDOW_A) * sizeof(secp256k1_ge));
    859+        state.ps = (struct secp256k1_strauss_point_state *)checked_malloc(error_callback,
    860+            n * sizeof(struct secp256k1_strauss_point_state));
    861+        points_gej = (secp256k1_gej *)checked_malloc(error_callback,
    862+            n * sizeof(secp256k1_gej));
    863+        scalars_copy = (secp256k1_scalar *)checked_malloc(error_callback,
    


    siv2r commented at 2:22 pm on December 26, 2025:
    Is there a specific reason for copying the scalars here? Since they are managed by the upper layer (like batch verification or MuSig), could we just pass the pointer to _strauss_wnaf directly?

    fjahr commented at 4:27 pm on January 20, 2026:
    I think you are correct, I have removed it.
  9. in src/ecmult_impl.h:750 in 8760b3c148
    876+        secp256k1_ecmult_strauss_wnaf(&state, r, n, points_gej, scalars_copy, scalar_g);
    877+    } else {
    878+        secp256k1_gej tmpj;
    879+        secp256k1_gej_set_infinity(&tmpj);
    880+        secp256k1_ecmult(r, &tmpj, &secp256k1_scalar_zero, scalar_g);
    881+    }
    


    siv2r commented at 2:25 pm on December 26, 2025:
    We can probably remove this else block. The n=0 case is already handled by the call to ecmult_strauss_wnaf in the preceding if block. Plus, secp256k1_ecmult calls strauss_wnaf internally, so this seems redundant.

    fjahr commented at 4:27 pm on January 20, 2026:
    You are correct! Removing it.
  10. in src/ecmult_impl.h:598 in 8760b3c148 outdated
    648+ * Index 2-13 = PIPPENGER with window sizes 1-12
    649+ */
    650+static const struct secp256k1_ecmult_multi_abcd secp256k1_ecmult_multi_abcds[SECP256K1_ECMULT_MULTI_NUM_ALGOS] = {
    651+/*  A (per-point bytes)                     B (fixed bytes)                        C        D           */
    652+    {0,                                     0,                                     1000,    0      },
    653+    {SECP256K1_STRAUSS_POINT_SIZE,          0,                                     100,     0      },
    


    siv2r commented at 9:00 am on December 27, 2025:
    D shouldn’t be 0 here, right?

    fjahr commented at 5:23 pm on January 25, 2026:
    Yeah, I saw some wild jumps in the D value and just set it to zero manually because it didn’t seem to matter much. I am pretty sure I wrote a comment about it at the time, but it probably got lost when I updated the values again later.
  11. in src/bench_ecmult.c:76 in 8b62524b0f outdated
    71+ * Measures the performance of each algorithm at various batch sizes and
    72+ * outputs. Use tools/ecmult_multi_calib.py to calculate optimal C and D
    73+ * values from the output.
    74+ */
    75+static void run_ecmult_multi_calib(bench_data* data) {
    76+    static const size_t batch_sizes[] = {10, 20, 30, 50, 75, 100, 150, 200, 300, 500, 750, 1000, 1500, 2000, 3000, 5000, 7500, 10000, 15000, 20000, 30000};
    


    siv2r commented at 9:17 am on December 27, 2025:

    I think it would be nice to have more batch sizes in the crossover region of Strauss and Pippenger (~80-150 points). So, something like this:

    static const size_t batch_sizes[] = {
        /* Small (Strauss region) */
        5, 10, 15, 20, 30, 50, 70,
        /* Crossover region */
        85, 88, 90, 100, 120, 150, 175,
        /* Medium (Pippenger small windows, w=6..8) */
        200, 300, 500, 750, 1000, 1200,
        /* Large (Pippenger large windows, w=9..12) */
        1500, 2000, 3000, 5000, 7500,
        10000, 15000, 20000, 30000
    };
    

    siv2r commented at 9:26 am on December 27, 2025:

    We should probably cap the batch size for Strauss calibration (7,500 or less?). At higher sizes, the benchmark results curve and become outliers, which will skew the linear regression model. Since ecmult_multi_select won’t choose Strauss for large batches anyway, we can avoid these sizes. I’ve attached a visualization below.


    siv2r commented at 6:42 pm on March 26, 2026:

    The Strauss fit improved. More info: #1789 (comment)

  12. siv2r commented at 9:38 am on December 27, 2025: contributor

    I ran the calibration tool, and I’m getting the following results. They are very close to the existing results, up until PIPPENGER_8, after which the C value increases for me (this shouldn’t happen, right?). I’ll run this a few more times.

    static const struct secp256k1_ecmult_multi_abcd secp256k1_ecmult_multi_abcds[SECP256K1_ECMULT_MULTI_NUM_ALGOS] = {
        {0,                                     0,                                     1000,  0     },
        {SECP256K1_STRAUSS_POINT_SIZE,          0,                                     113,   143  },
        {SECP256K1_PIPPENGER_POINT_SIZE(1),  SECP256K1_PIPPENGER_FIXED_SIZE(1),  199,   303  },
        {SECP256K1_PIPPENGER_POINT_SIZE(2),  SECP256K1_PIPPENGER_FIXED_SIZE(2),  152,   460  },
        {SECP256K1_PIPPENGER_POINT_SIZE(3),  SECP256K1_PIPPENGER_FIXED_SIZE(3),  117,   782  },
        {SECP256K1_PIPPENGER_POINT_SIZE(4),  SECP256K1_PIPPENGER_FIXED_SIZE(4),  100,   1158 },
        {SECP256K1_PIPPENGER_POINT_SIZE(5),  SECP256K1_PIPPENGER_FIXED_SIZE(5),  86,    1837 },
        {SECP256K1_PIPPENGER_POINT_SIZE(6),  SECP256K1_PIPPENGER_FIXED_SIZE(6),  77,    3013 },
        {SECP256K1_PIPPENGER_POINT_SIZE(7),  SECP256K1_PIPPENGER_FIXED_SIZE(7),  72,    4845 },
        {SECP256K1_PIPPENGER_POINT_SIZE(8),  SECP256K1_PIPPENGER_FIXED_SIZE(8),  69,    8775 },
        {SECP256K1_PIPPENGER_POINT_SIZE(9),  SECP256K1_PIPPENGER_FIXED_SIZE(9),  73,    14373},
        {SECP256K1_PIPPENGER_POINT_SIZE(10), SECP256K1_PIPPENGER_FIXED_SIZE(10), 78,    26442},
        {SECP256K1_PIPPENGER_POINT_SIZE(11), SECP256K1_PIPPENGER_FIXED_SIZE(11), 80,    48783},
        {SECP256K1_PIPPENGER_POINT_SIZE(12), SECP256K1_PIPPENGER_FIXED_SIZE(12), 106,   88289},
    };
    

    I’ve visualized the benchmarks for all algorithms (see https://github.com/siv2r/secp256k1/commit/311938612268866bd3dd0cf6c1d006e418518b1d for the diagrams). I’m planning to do a deeper dive into the regression model fit to ensure they align with these results. Will share any useful findings here

  13. siv2r commented at 2:11 pm on December 27, 2025: contributor

    I did some analysis on the linear regression model for the CD values (see https://github.com/siv2r/secp256k1/commit/b5985a60454e68c7f88bfe7a9d41b89a47f2b530). I basically ran ./bench_ecmult calib twice. Used the first run to compute the linear regression model, and the second run to test it against new benchmark values.

    static const struct secp256k1_ecmult_multi_abcd secp256k1_ecmult_multi_abcds[SECP256K1_ECMULT_MULTI_NUM_ALGOS] = {
        {0,                                     0,                                     1000,  0     },
        {SECP256K1_STRAUSS_POINT_SIZE,          0,                                     112,   173  },
        {SECP256K1_PIPPENGER_POINT_SIZE(1),  SECP256K1_PIPPENGER_FIXED_SIZE(1),  197,   285  },
        {SECP256K1_PIPPENGER_POINT_SIZE(2),  SECP256K1_PIPPENGER_FIXED_SIZE(2),  152,   480  },
        {SECP256K1_PIPPENGER_POINT_SIZE(3),  SECP256K1_PIPPENGER_FIXED_SIZE(3),  117,   767  },
        {SECP256K1_PIPPENGER_POINT_SIZE(4),  SECP256K1_PIPPENGER_FIXED_SIZE(4),  100,   1167 },
        {SECP256K1_PIPPENGER_POINT_SIZE(5),  SECP256K1_PIPPENGER_FIXED_SIZE(5),  86,    1887 },
        {SECP256K1_PIPPENGER_POINT_SIZE(6),  SECP256K1_PIPPENGER_FIXED_SIZE(6),  78,    3023 },
        {SECP256K1_PIPPENGER_POINT_SIZE(7),  SECP256K1_PIPPENGER_FIXED_SIZE(7),  73,    4906 },
        {SECP256K1_PIPPENGER_POINT_SIZE(8),  SECP256K1_PIPPENGER_FIXED_SIZE(8),  70,    8889 },
        {SECP256K1_PIPPENGER_POINT_SIZE(9),  SECP256K1_PIPPENGER_FIXED_SIZE(9),  74,    14544},
        {SECP256K1_PIPPENGER_POINT_SIZE(10), SECP256K1_PIPPENGER_FIXED_SIZE(10), 79,    26764},
        {SECP256K1_PIPPENGER_POINT_SIZE(11), SECP256K1_PIPPENGER_FIXED_SIZE(11), 84,    49179},
        {SECP256K1_PIPPENGER_POINT_SIZE(12), SECP256K1_PIPPENGER_FIXED_SIZE(12), 106,   89518},
    };
    

    Metric Definitions

    Metric     Meaning                               Good Value for Benchmarking
    R²         How well the linear model fits (0-1)  > 0.95 (strong linear relationship)
    Std Error  Uncertainty in slope estimate         Low relative to slope
    p-value    Probability slope = 0 by chance       < 0.05 (statistically significant)
    Slope      Time increase per additional n (μs)   Positive, algorithm-dependent
    Intercept  Fixed overhead time (μs)              Algorithm-dependent

    Per-Algorithm Metrics

    Algorithm     R²        Std Error  p-value   Slope   Intercept
    STRAUSS       0.983536  0.1660     1.29e-25  6.2956  -1917.23
    PIPPENGER_1   0.999995  0.0039     6.29e-73  8.5860  -133.20
    PIPPENGER_2   0.999992  0.0036     9.98e-71  6.7066  -115.04
    PIPPENGER_3   0.999993  0.0026     8.15e-71  4.9746  -34.11
    PIPPENGER_4   0.999996  0.0015     5.34e-75  4.1772  -13.27
    PIPPENGER_5   0.999995  0.0015     1.16e-73  3.5473  56.56
    PIPPENGER_6   0.999991  0.0019     2.30e-69  3.1035  105.03
    PIPPENGER_7   0.999994  0.0013     4.69e-72  2.7083  219.62
    PIPPENGER_8   0.999989  0.0016     2.64e-68  2.4879  414.58
    PIPPENGER_9   0.999977  0.0021     3.76e-64  2.2872  719.46
    PIPPENGER_10  0.999953  0.0028     5.02e-60  2.1705  1287.92
    PIPPENGER_11  0.999847  0.0049     4.73e-53  2.0592  2321.53
    PIPPENGER_12  0.999583  0.0075     3.62e-47  1.9331  4153.43

    The linear fit ($R^2$) and standard error for STRAUSS are relatively worse. We can improve this by using smaller batch sizes when calibrating Strauss. I also see negative intercepts (i.e., D values) for PIPPENGER_1..4, which should be positive. I think this could also be addressed by capping these algorithms to medium batch sizes during calibration. Otherwise, the calibrated values are excellent!

  14. fjahr force-pushed on Jan 20, 2026
  15. fjahr force-pushed on Jan 20, 2026
  16. fjahr commented at 4:29 pm on January 20, 2026: contributor
    Addressed the comments on the implementation code and rebased, thanks a lot for the review @siv2r ! I need a little more time to think about the calibration again, but will comment on that asap.
  17. fjahr force-pushed on Jan 25, 2026
  18. fjahr commented at 5:22 pm on January 25, 2026: contributor

    @siv2r Thanks again for the review! The hint about using narrower ranges for each of the algorithms in the calibration was great; I didn’t think of it before, and it pretty much fixed all the problems I ran into with the calibration results. I took these suggestions with some smaller additional tweaks, specifically for Strauss, which still had quite a bit of variance. I now see pretty stable calibration results across multiple runs, and the measurements between the different benchmarks run more smoothly than in any of my previous iterations.

    These improvements have given me a lot more confidence in these changes, so taking this out of draft status too.

  19. fjahr marked this as ready for review on Jan 25, 2026
  20. fjahr force-pushed on Feb 3, 2026
  21. fjahr commented at 10:37 am on February 3, 2026: contributor
    While looking at older PRs I randomly found #638, and comparing the overlapping code turned up a small bug in my pippenger size calculation.
  22. fjahr force-pushed on Feb 3, 2026
  23. in src/ecmult.h:79 in 4b059914c4
    84+ * For each algorithm, memory usage is modeled as m(x) = A*x + B and
    85+ * running time as c(x) = C*x + D, where x is the batch size. This
    86+ * function finds the algorithm that minimizes time per operation
    87+ * C + D/x at the maximum batch size x = (mem_limit - B) / A.
    88+ *
    89+ * Returns: The optimal batch size, or 0 if memory is insufficient.
    


    siv2r commented at 4:33 pm on March 23, 2026:

    This function doesn’t return 0 when memory is insufficient. It returns ECMULT_MAX_POINTS_PER_BATCH. We need to update the docs here. Something like:

    The optimal batch size. Falls back to ECMULT_MAX_POINTS_PER_BATCH
    using the trivial algorithm when no Strauss/Pippenger algorithm fits.
    

    fjahr commented at 10:34 am on March 24, 2026:
    Thanks, I fixed it and took your suggestion.
  24. fjahr force-pushed on Mar 24, 2026
  25. fjahr commented at 10:35 am on March 24, 2026: contributor
    Addressed @siv2r ’s feedback and rebased, I had kind of lost track of that, sorry.
  26. ecmult: Refactor ecmult algo selection cb5545a2f8
  27. bench: Tooling for ecmult algo selection abcd calibration 5050b5e097
  28. fjahr force-pushed on Mar 24, 2026
  29. in src/ecmult_impl.h:848 in 5050b5e097
    1006+    const secp256k1_scalar *scalars,
    1007+    const secp256k1_scalar *scalar_g,
    1008+    size_t mem_limit
    1009+) {
    1010+    secp256k1_ecmult_multi_algo algo = secp256k1_ecmult_multi_select(mem_limit, n_points);
    1011+    return secp256k1_ecmult_multi_internal(error_callback, algo, r, n_points, points, scalars, scalar_g);
    


    siv2r commented at 9:30 am on March 26, 2026:

    The old secp256k1_ecmult_multi_var had an internal batch loop: it split n points into ceil(n / max_points_per_batch) chunks based on scratch capacity, ran Strauss or Pippenger on each chunk, and accumulated results. The new secp256k1_ecmult_multi passes n_points directly as batch_size to secp256k1_ecmult_multi_select with no splitting. All points are processed in one shot.

    This means when n_points * A + B > mem_limit for every algorithm, select falls back to TRIVIAL. On a 64-bit system, Strauss needs ~1,414 bytes/point and even the cheapest Pippenger (window 1) needs ~760 bytes/point. So for test_ecmult_multi_batching with n_points = 200:

    mem_limit  Strauss (needs 283 KB for 200 pts)  Pippenger w1 (needs 153 KB for 200 pts)  Selected
    1 KB       exceeds                             exceeds                                  TRIVIAL
    4 KB       exceeds                             exceeds                                  TRIVIAL
    16 KB      exceeds                             exceeds                                  TRIVIAL
    64 KB      exceeds                             exceeds                                  TRIVIAL
    256 KB     exceeds                             fits                                     Pippenger
    1 MB+      fits                                fits                                     best fit

    The old test (176 points, scratch sized from 1 to 176) used TRIVIAL in only 2 of 178 calls (the two explicit edge cases), and every loop iteration exercised Strauss or Pippenger via batching. The new test hits TRIVIAL for 4 of the 7 memory limits; it still passes (TRIVIAL is correct), but it no longer exercises the efficient algorithms under constrained memory.

    One fix: re-introduce the batch loop in secp256k1_ecmult_multi using the existing secp256k1_ecmult_multi_batch_size to compute chunk size, then call secp256k1_ecmult_multi_internal per chunk. This also means modules that want to manage their own batching (e.g., batch verification) could call _internal directly.

    Alternatively, if callers are always expected to provide a sufficient mem_limit, the current design works. Since ecmult_multi isn’t a public API, that’s not unreasonable. But most callers probably won’t know what “sufficient memory” means for a given n_points; they’d expect ecmult_multi to handle that for them (see also this discussion about exposing mem_limit to musig callers). I’m slightly leaning toward letting ecmult_multi do the batching like the old code, but will probably know more once caller requirements become clearer.


    siv2r commented at 12:12 pm on March 26, 2026:

    One downside of having ecmult_multi do internal batching (like the old code) is that callers lose visibility into which algorithm is actually being used. In particular, it’s hard to detect a silent fallback to TRIVIAL. The current design avoids this since callers can call _select beforehand to inspect the choice.

    That said, I think we can get the best of both worlds. If ecmult_multi handled batching internally, callers who just want the result would use it directly. Callers who want more control (inspect the algorithm, force a specific one, handle batching themselves) would use _select + _multi_internal instead.

    Right now, ecmult_multi is basically a thin wrapper over _select + _multi_internal, so adding batching to it would give the two functions more distinct roles. What do you think?

  30. in src/ecmult_impl.h:572 in 5050b5e097
    611-    }
    612+/* Strauss per-point memory */
    613+#define SECP256K1_STRAUSS_POINT_SIZE \
    614+    ((sizeof(secp256k1_ge) + sizeof(secp256k1_fe)) * ECMULT_TABLE_SIZE(WINDOW_A) \
    615+     + sizeof(struct secp256k1_strauss_point_state) \
    616+     + sizeof(secp256k1_gej) + sizeof(secp256k1_scalar))
    


    siv2r commented at 10:32 am on March 26, 2026:
    #define SECP256K1_STRAUSS_POINT_SIZE \
        ((sizeof(secp256k1_ge) + sizeof(secp256k1_fe)) * ECMULT_TABLE_SIZE(WINDOW_A) \
         + sizeof(struct secp256k1_strauss_point_state) \
         + sizeof(secp256k1_gej))
    

    We should remove sizeof(secp256k1_scalar), as ecmult_multi_strauss now passes the scalars pointer directly to ecmult_multi_strauss_wnaf without allocating it on the heap (the old code used to do this)

  31. in src/bench_ecmult.c:154 in 5050b5e097
    149+        if (n >= 300) iters = base_iters / 2;
    150+        if (iters < 100) iters = 100;
    151+
    152+        t_start = gettime_i64();
    153+        for (iter = 0; iter < iters; iter++) {
    154+            secp256k1_ecmult_multi_internal(&data->ctx->error_callback, algo,
    


    siv2r commented at 11:05 am on March 26, 2026:
    Should we wrap this in CHECK()?
  32. in src/bench_ecmult.c:189 in 5050b5e097
    184+            if (n >= 15000) iters = base_iters / 100;
    185+            if (iters < 3) iters = 3;
    186+
    187+            t_start = gettime_i64();
    188+            for (iter = 0; iter < iters; iter++) {
    189+                secp256k1_ecmult_multi_internal(&data->ctx->error_callback, algo,
    


    siv2r commented at 11:06 am on March 26, 2026:
    Should we wrap this in CHECK()?
  33. in src/ecmult.h:96 in 5050b5e097
    102+ * Among algorithms that fit within mem_limit for the given batch_size,
    103+ * selects the one that minimizes time per operation C + D/batch_size.
    104+ *
    105+ * Returns: The optimal algorithm identifier.
    106+ */
    107+static secp256k1_ecmult_multi_algo secp256k1_ecmult_multi_select(
    


    siv2r commented at 12:16 pm on March 26, 2026:
    nit: I think it’d be nice to document that _batch_size and _select may disagree on their choice of optimal algorithm.
  34. in src/ecmult_impl.h:906 in 5050b5e097
    1064+                                                    points, scalars, scalar_g, 11);
    1065+        case SECP256K1_ECMULT_MULTI_ALGO_PIPPENGER_12:
    1066+            return secp256k1_ecmult_multi_pippenger(error_callback, r, n_points,
    1067+                                                    points, scalars, scalar_g, 12);
    1068+        default:
    1069+            return secp256k1_ecmult_multi_trivial(r, n_points, points, scalars, scalar_g);
    


    siv2r commented at 12:34 pm on March 26, 2026:
    default:
        VERIFY_CHECK(0); /* unreachable with valid algo values */
        return secp256k1_ecmult_multi_trivial(r, n_points, points, scalars, scalar_g);
    

    Since the enum covers exactly 14 values (0-13) and _select only returns valid values, this branch should be unreachable. If it ever runs, it likely means a bug in the selection logic, and silently falling back to TRIVIAL would make that very hard to notice. This would catch bugs in debug builds while keeping the fallback in production.

  35. in src/tests.c:1 in 5050b5e097


    siv2r commented at 2:15 pm on March 26, 2026:

    The PR description acknowledges that test coverage is currently lower than before. Here’s a list that might be helpful when adding tests back:

    • Test each algorithm individually via _multi_internal (TRIVIAL, STRAUSS, PIPPENGER 1-12), verifying results against secp256k1_ecmult for small inputs (n=0, 1, 2)
    • Structured edge case tests: all-infinity points, all-zero scalars, cancelling pairs (negated scalar with same point, same scalar with negated point), sum-to-zero across multiple points
    • Boundary tests for _batch_size: mem_limit=0 should return ECMULT_MAX_POINTS_PER_BATCH (TRIVIAL fallback), mem_limit=1 similarly. For larger values, verify the returned batch_size leads to a successful _multi call
    • Property tests for _select: with large mem_limit, _select(mem_limit, 10) should pick STRAUSS (fastest for tiny batches), _select(mem_limit, 100000) should pick a PIPPENGER variant (fastest for large batches), _select(0, n) should return TRIVIAL for any n.
      • I’m wondering if we can also define a range of points where each algorithm transitions to the next, and assert those transitions
  36. siv2r commented at 6:41 pm on March 26, 2026: contributor

    I re-ran my old analysis on the new ABCD values. The results are very positive, and there’s a clear improvement in the STRAUSS values: the linear fit is better now (0.983 -> 0.998).

    static const struct secp256k1_ecmult_multi_abcd secp256k1_ecmult_multi_abcds[SECP256K1_ECMULT_MULTI_NUM_ALGOS] = {
        {0,                                     0,                                     1000,  0     },
        {SECP256K1_STRAUSS_POINT_SIZE,          0,                                     103,   274  },
        {SECP256K1_PIPPENGER_POINT_SIZE(1),  SECP256K1_PIPPENGER_FIXED_SIZE(1),  195,   417  },
        {SECP256K1_PIPPENGER_POINT_SIZE(2),  SECP256K1_PIPPENGER_FIXED_SIZE(2),  147,   602  },
        {SECP256K1_PIPPENGER_POINT_SIZE(3),  SECP256K1_PIPPENGER_FIXED_SIZE(3),  117,   888  },
        {SECP256K1_PIPPENGER_POINT_SIZE(4),  SECP256K1_PIPPENGER_FIXED_SIZE(4),  100,   1390 },
        {SECP256K1_PIPPENGER_POINT_SIZE(5),  SECP256K1_PIPPENGER_FIXED_SIZE(5),  86,    2227 },
        {SECP256K1_PIPPENGER_POINT_SIZE(6),  SECP256K1_PIPPENGER_FIXED_SIZE(6),  76,    3755 },
        {SECP256K1_PIPPENGER_POINT_SIZE(7),  SECP256K1_PIPPENGER_FIXED_SIZE(7),  67,    6626 },
        {SECP256K1_PIPPENGER_POINT_SIZE(8),  SECP256K1_PIPPENGER_FIXED_SIZE(8),  62,    10976},
        {SECP256K1_PIPPENGER_POINT_SIZE(9),  SECP256K1_PIPPENGER_FIXED_SIZE(9),  57,    19909},
        {SECP256K1_PIPPENGER_POINT_SIZE(10), SECP256K1_PIPPENGER_FIXED_SIZE(10), 54,    36574},
        {SECP256K1_PIPPENGER_POINT_SIZE(11), SECP256K1_PIPPENGER_FIXED_SIZE(11), 50,    72630},
        {SECP256K1_PIPPENGER_POINT_SIZE(12), SECP256K1_PIPPENGER_FIXED_SIZE(12), 46,    135404},
    };
    

    Metric Definitions

    Metric     Meaning                               Good Value for Benchmarking
    R²         How well the linear model fits (0-1)  > 0.95 (strong linear relationship)
    Std Error  Uncertainty in slope estimate         Low relative to slope
    p-value    Probability slope = 0 by chance       < 0.05 (statistically significant)
    Slope      Time increase per additional n (μs)   Positive, algorithm-dependent
    Intercept  Fixed overhead time (μs)              Algorithm-dependent

    Per-Algorithm Metrics

    Algorithm     R²        Std Error  p-value   Slope   Intercept
    STRAUSS       0.998256  0.0468     2.77e-26  4.5477  -15.70
    PIPPENGER_1   0.999981  0.0110     6.54e-25  7.6313  16.23
    PIPPENGER_2   0.999932  0.0131     1.38e-30  5.7699  21.74
    PIPPENGER_3   0.999973  0.0066     1.98e-33  4.6724  27.37
    PIPPENGER_4   0.999987  0.0040     3.30e-33  3.9572  45.39
    PIPPENGER_5   0.999988  0.0033     9.93e-36  3.4059  80.97
    PIPPENGER_6   0.999979  0.0044     1.02e-24  2.9971  123.83
    PIPPENGER_7   0.999960  0.0057     4.08e-21  2.6388  220.59
    PIPPENGER_8   0.999989  0.0027     1.18e-23  2.4487  353.68
    PIPPENGER_9   0.999995  0.0019     8.61e-20  2.2403  700.71
    PIPPENGER_10  0.999992  0.0028     7.05e-14  2.1095  1444.03
    PIPPENGER_11  0.999914  0.0109     3.41e-07  1.9767  2804.01
    PIPPENGER_12  0.999997  0.0033     1.14e-03  1.8235  5310.41





This is a metadata mirror of the GitHub repository bitcoin-core/secp256k1. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-03-29 14:15 UTC
