General batch verification API context #1087

issue siv2r opened this issue on March 13, 2022
  1. siv2r commented at 3:26 am on March 13, 2022: contributor

    context: #760 (comment)

    I am trying to implement a PoC for the API proposed above. I have the following batch_verify object in mind.

     typedef struct {
         secp256k1_scratch *sigs_data; /* (sig, msg, pk) */
         size_t len;
         size_t capacity; /* equals (sigs_data->max_size)/(64 + 32 + sizeof(secp256k1_xonly_pubkey)) */
         int result;
     } schnorrsig_batch_verify;

     typedef struct {
         secp256k1_scratch *tweaks_data; /* (parity, tweaked_key, tweak32, pubkey) */
         size_t len;
         size_t capacity;
         int result;
     } tweaked_key_batch_verify;

     typedef struct {
         unsigned char chacha_seed[32];  /* for generating common randomizers (1, a2, a3 ... au) */
         secp256k1_scalar randomizer_cache[2];
         schnorrsig_batch_verify sigs_data;
         tweaked_key_batch_verify tweaked_keys_data;
     } batch_verify_struct;
    

    I plan to use a scratch object to store the data (schnorrsig or tweaks), since that allows us to keep adding new data (using batch_add_sig and batch_add_xpubkey_tweak) and grow the batch object accordingly. However, this batch object doesn’t seem compatible with the ecmult_pippenger_batch or ecmult_strauss_batch function calls.
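
    For concreteness, here is a minimal sketch of the append operation I have in mind. batch_add_sig, its error handling, and the entry layout are hypothetical; secp256k1_scratch_alloc is the library’s existing scratch-space allocator.

     static int batch_add_sig(const secp256k1_callback *error_callback, batch_verify_struct *batch,
                              const unsigned char *sig64, const unsigned char *msg32,
                              const secp256k1_xonly_pubkey *pk) {
         schnorrsig_batch_verify *sigs = &batch->sigs_data;
         unsigned char *entry;
         if (sigs->len >= sigs->capacity) {
             return 0; /* batch is full */
         }
         /* append (sig, msg, pk) as one contiguous entry on the scratch space */
         entry = (unsigned char *)secp256k1_scratch_alloc(error_callback, sigs->sigs_data, 64 + 32 + sizeof(*pk));
         if (entry == NULL) {
             return 0; /* scratch space exhausted */
         }
         memcpy(entry, sig64, 64);
         memcpy(entry + 64, msg32, 32);
         memcpy(entry + 96, pk, sizeof(*pk));
         sigs->len++;
         return 1;
     }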

    Both Pippenger and Strauss take the following arguments:

    • void *cbdata -> contains the required data
    • secp256k1_scratch *scratch -> newly allocated scratch space where scalars and points are loaded for the multi-multiplication

    But this batch object already has the required data in a scratch space. Maybe use another scratch space for loading the scalars and points? Won’t that increase memory usage? Also, does this API require a new module, or would including these functions in the schnorrsig module suffice?

  2. jonasnick commented at 10:45 pm on March 13, 2022: contributor

    This is a start. Ideally, the batch object does not hold signatures, messages and the like. Instead, only scalars and points are stored on the batch object’s scratch space. In order to avoid allocating space again for scalars and points in ecmult_strauss_batch and ecmult_pippenger_batch, we need to refactor these functions (and others) so that we can tell them that the scalars and points already exist on the scratch space. For this idea (and your idea, btw) to work, the scratch space provided to the batch object must be exclusively owned by the batch object and must not be touched by the user until batch verification is over. Another drawback is that we must represent points as secp256k1_gej because that’s required by Strauss, whereas secp256k1_ge would be sufficient for Pippenger.

    We also need to keep in mind that we cannot compute the Schnorr batch verification randomizer by hashing all signatures, public keys and messages as before. We simply don’t know all of them yet when secp256k1_batch_add_sig is called to add a single (randomized) scalar to the batch object’s scratch space. A very simple approach (though not close to being optimally efficient) is to have a (tagged) secp256k1_sha256 object in the batch object that hashes everything seen so far. Every time we need a fresh randomizer, a copy of the sha object is made and finalized. This approach of computing the randomizers only from the input available so far would be covered by the proof here.
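
    A minimal sketch of that copy-and-finalize step, assuming the batch object additionally carries a running secp256k1_sha256 field named sha (the helper itself is hypothetical):

     /* Derive a fresh randomizer from everything hashed so far. Copying the
      * sha256 midstate keeps the running hash usable for later additions. */
     static void batch_get_randomizer(const batch_verify_struct *batch, secp256k1_scalar *a) {
         secp256k1_sha256 sha_cpy = batch->sha; /* copy; do not finalize the original */
         unsigned char buf32[32];
         int overflow;
         secp256k1_sha256_finalize(&sha_cpy, buf32);
         secp256k1_scalar_set_b32(a, buf32, &overflow);
         /* this sketch ignores overflow; it occurs with negligible probability for a hash output */
     }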

  3. jonasnick commented at 8:54 pm on March 14, 2022: contributor

    @real-or-random pointed out to me that there is a simpler solution at the cost of requiring more memory. What I had assumed above is that we compute the randomizer immediately in order to store only the sum of the scalars that are multiplied with G (cf. (s_1 + a_2⋅s_2 + ... + a_u⋅s_u)⋅G in BIP-340). If we instead store each individual such scalar, we can delay computing the randomizers until right before batch verifying.

    We can do this by having the batch object store a secp256k1_sha256 object. Every time something is added to the batch, we write the input data (e.g. message, public key and signature, per the BIP-340 batch verification recommendation) into the sha object, but do not randomize the scalars yet. Only right before batch verifying do we finalize the sha object to obtain a hash for seeding the CSPRNG. Each scalar is then multiplied by a randomizer (the right randomizer, to be precise).
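
    A sketch of that delayed randomization, run right before verification. It assumes the sha object has absorbed every input added so far, scalars[] holds the unrandomized s_i, and secp256k1_scalar_chacha20 (from the earlier batch verification work in #760) derives two scalars per call from a seed and an index; everything else here is hypothetical.

     static void batch_randomize_scalars(batch_verify_struct *batch, secp256k1_scalar *scalars, size_t n) {
         size_t i;
         /* one finalize over all inputs seen; this seeds the CSPRNG */
         secp256k1_sha256_finalize(&batch->sha, batch->chacha_seed);
         /* a_1 = 1 per BIP-340, so scalars[0] is left untouched */
         for (i = 1; i < n; i++) {
             if (i % 2 == 1) {
                 /* chacha20 outputs two randomizers per call; cache both */
                 secp256k1_scalar_chacha20(&batch->randomizer_cache[0], &batch->randomizer_cache[1],
                                           batch->chacha_seed, i / 2);
             }
             secp256k1_scalar_mul(&scalars[i], &scalars[i], &batch->randomizer_cache[(i - 1) % 2]);
         }
     }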

  4. real-or-random commented at 2:09 pm on March 17, 2022: contributor

    Hm yeah, right, but then we’ll need to store the scalars, as you point out, and I’m not sure that’s worth the hassle.

    So we’ll need to keep some O(u) things around anyway:

    • The pubkeys P_i
    • The nonces r_i

    Adding the s_i would be a 50% increase (going from two stored values per term to three) and could mean that we support smaller batch sizes for a given amount of memory.

    On the other hand, if you have enough memory, this argument won’t apply. Moreover, the caller may keep the s_i (and also the other inputs) around anyway. With an API that requires the caller not to touch these until the computation has been finalized, we could save a lot of memory (and copying). But that API would be harder to use correctly. Now that I say this, maybe a “streaming” API is actually the wrong approach and it should be just a single call as in the BIP. That’s simple and stateless.
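
    For concreteness, a single-call API in the spirit of the BIP could look like the following sketch (the function name and exact parameters are hypothetical, not an existing libsecp256k1 API):

     /* Hypothetical one-shot batch verification: all inputs are passed at once,
      * so the randomizers can be derived from everything, exactly as in BIP-340.
      * Returns 1 iff all n signatures are valid. */
     int secp256k1_schnorrsig_verify_batch(
         const secp256k1_context *ctx,
         secp256k1_scratch_space *scratch,             /* working memory for ecmult_multi */
         const unsigned char *const *sig64,            /* array of n 64-byte signatures */
         const unsigned char *const *msg32,            /* array of n 32-byte messages */
         const secp256k1_xonly_pubkey *const *pubkeys, /* array of n x-only public keys */
         size_t n
     );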

  5. siv2r commented at 9:14 pm on March 26, 2022: contributor

    Sorry for the delay.

    I took a look at the scratch allocations done by pippenger_batch and strauss_batch. A shared format for the scratch space (allocated with scalars and points), which ecmult_multi_var could pass to either pippenger_batch or strauss_batch (for the multi-multiplication), seems infeasible. I can think of two options now:

    1. refactor the scratch allocations done in pippenger_batch and strauss_batch to support a shared format.
    2. avoid a shared format for the scratch space and use only one of pippenger_batch or strauss_batch for the multi-multiplication.

    Is option 1 the right approach?


    Batch object’s Scratch Space Initialization: the user calls batch *batch_verify_init(const secp256k1_context *ctx, size_t n_terms)

     batch *batch_verify_init(const secp256k1_context *ctx, size_t n_terms) {
         batch *ret = checked_malloc(&ctx->error_callback, sizeof(batch));
         size_t scratch_size = strauss_scratch_size(n_terms) + STRAUSS_SCRATCH_OBJECTS*16;
         ret->scratch = scratch_create(&ctx->error_callback, scratch_size);
         /* allocate space for n_terms (scalar, point) pairs on the scratch space (implementation info below) */
         /* other necessary batch object allocations */
         return ret;
     }
    

    Here, we create the scratch memory required for n_terms Strauss points, since it is always greater than the scratch memory required for n_terms Pippenger points. The ecmult benchmark uses a similar approach (see here).

    Allocating scratch memory for n_terms (scalar, point) pairs:

    • Format 1: for supporting strauss_batch we need to do the following (see here):

        /* both of these are implemented using scratch_alloc() */
        ret.scratch->points = scratch_alloc(n_terms * sizeof(secp256k1_gej));
        ret.scratch->scalars = scratch_alloc(n_terms * sizeof(secp256k1_scalar));

    • Format 2: for supporting pippenger_batch we need to do the following (see here):

        ret.scratch->points = scratch_alloc((2*n_terms + 2) * sizeof(secp256k1_ge));
        ret.scratch->scalars = scratch_alloc((2*n_terms + 2) * sizeof(secp256k1_scalar));
    

    If we use format 1, we can’t call pippenger_batch; if we use format 2, we can’t call strauss_batch. This is the format issue I was talking about earlier.

  6. jonasnick commented at 4:12 pm on March 29, 2022: contributor

    If we use format 1, we can’t call pippenger_batch;

    That’s not true if the Pippenger algorithm is refactored appropriately. The algorithm would know that n_terms * sizeof(secp256k1_gej) and n_terms * sizeof(secp256k1_scalar) are already on the scratch space. Therefore, it would only allocate (n_terms + 2) * sizeof(secp256k1_ge) and (n_terms + 2) * sizeof(secp256k1_scalar).
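
    A rough sketch of the allocation arithmetic under that refactoring (the helper and the preloaded flag are hypothetical):

     /* If preloaded is nonzero, n_terms (gej, scalar) pairs already sit on the
      * scratch space (put there by the batch object), so Pippenger only needs
      * its remaining working set of (n_terms + 2) ge points and scalars. */
     static size_t pippenger_extra_alloc_size(size_t n_terms, int preloaded) {
         size_t entries = preloaded ? n_terms + 2 : 2*n_terms + 2;
         return entries * (sizeof(secp256k1_ge) + sizeof(secp256k1_scalar));
     }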

    Besides, I like the idea that the batch object creates its own scratch space.

  7. jonasnick commented at 8:36 pm on April 26, 2022: contributor

    @real-or-random

    maybe a “streaming” API is actually the wrong approach

    There’s another advantage to having a single call instead of a streaming API. In general, developers want to know approximately how long a particular function call takes. With the “streaming” API, one cannot predict when a call to batch_add_* will be fast and when it will take much, much longer.

    and it should be just a single call as in the BIP

    If there were a way to do this that allows multiple objects to be batch verified and is extensible, it would be worth exploring. I just don’t see how.

  8. siv2r cross-referenced this on Aug 21, 2022 from issue Add an experimental batch module by siv2r
  9. jonasnick cross-referenced this on Aug 22, 2022 from issue bip-340: reduce size of randomizers to 128 bit and provide argument by jonasnick
  10. jonasnick cross-referenced this on Aug 23, 2022 from issue bip-340: reduce size of randomizers to 128 bit and provide argument by jonasnick
  11. real-or-random cross-referenced this on May 10, 2023 from issue Rework or get rid of scratch space by real-or-random
