Bitcoin Development Mailing List
* [bitcoindev] Benchmarking Bitcoin Script Evaluation for the Varops Budget (GSR)
@ 2025-11-07 15:50 'Julian' via Bitcoin Development Mailing List
  2025-11-10 14:46 ` 'Russell O'Connor' via Bitcoin Development Mailing List
  0 siblings, 1 reply; 3+ messages in thread
From: 'Julian' via Bitcoin Development Mailing List @ 2025-11-07 15:50 UTC (permalink / raw)
  To: Bitcoin Development Mailing List


Hello everyone interested in Great Script Restoration and the Varops Budget,

The main concern that led to the disabling of many opcodes in v0.3.1 was 
the risk of denial-of-service attacks through excessive computational time 
and memory usage in Bitcoin script execution. To mitigate these risks, we 
propose to generalize the sigops budget in a new Tapscript leaf version and 
apply it to all operations before attempting to restore any computationally 
expensive operations or lifting any other script limits.

Similar to the sigops budget (which is applied to each input individually), 
the varops budget is based on transaction weight: a larger transaction has 
proportionally more compute units available. Currently, the budget is set 
to 5,200 units per weight unit of the transaction.
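
For example, a transaction of 1,000 weight units would then have 
1,000 × 5,200 = 5,200,000 compute units of varops budget available.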

The varops cost of each opcode depends on the length of its arguments and 
on how it acts on the data: whether it copies, compares, moves, hashes, or 
does arithmetic. More details can be found in the BIP: 
https://github.com/rustyrussell/bips/blob/guilt/varops/bip-unknown-varops-budget.mediawiki

To validate that this approach is working and that the free parameters are 
reasonable, we need to understand how it constrains script execution and 
what the worst-case scripts are.

=== Benchmark Methodology ===

For simplicity, we benchmark the script evaluation of block-sized scripts 
with the goal of finding the slowest possible script to validate. Such a 
block-sized script is limited by:

- Size: 4M weight units

- Varops budget: 20.8B compute units (4M × 5,200)

To construct and execute such a large script, a short opcode sequence is 
repeated until one of the two limits is exhausted. For example, a loop of 
OP_DUP OP_DROP takes an initial stack element and repeatedly copies and 
drops it until either the maximum size or the varops budget is reached. 
Computationally intensive operations like arithmetic or hashing on large 
numbers are generally bound by the varops budget, while faster operations 
like stack manipulation or arithmetic on small numbers are bound by the 
block size limit.
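
As a rough sketch of how such a loop script could be assembled (this 
assumes Bitcoin Core's CScript API and a placeholder per-iteration cost 
model, not the prototype's actual construction code):

#include <script/script.h>  // CScript, OP_DUP, OP_DROP (Bitcoin Core)
#include <cstddef>
#include <cstdint>

CScript BuildDupDropScript(size_t element_size)
{
    const size_t max_script_size = 4'000'000;      // ~4M weight units
    const int64_t varops_budget = 20'800'000'000;  // 4M x 5,200
    // Placeholder cost model: assume copying and dropping an n-byte
    // element costs on the order of n units each.
    const int64_t cost_per_iteration = 2 * int64_t(element_size);

    CScript script;
    int64_t budget_used = 0;
    while (script.size() + 2 <= max_script_size &&
           budget_used + cost_per_iteration <= varops_budget) {
        script << OP_DUP << OP_DROP;  // one loop iteration, 2 bytes
        budget_used += cost_per_iteration;
    }
    return script;
}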

For simple operations like hashing (1 in → 1 out), we create a loop like:
OP_SHA256 OP_DROP OP_DUP (repeated) 

Other operations require different patterns to restore the stack. For bit 
operations (2 in → 1 out):
OP_DUP OP_AND OP_DROP OP_DUP (repeated) 
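
For instance, assuming the initial stack holds two copies of an element x, 
one iteration of this pattern returns the stack to its starting state:
[x, x] → OP_DUP → [x, x, x] → OP_AND → [x, x] → OP_DROP → [x] → OP_DUP → [x, x]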

These scripts act on initial stack elements of various sizes. The initial 
elements are placed onto the stack “for free” for simplicity and to make 
the budget more conservative. In reality, these elements would need to be 
pushed onto the stack first, consuming additional space and varops budget.

=== Baseline: Signature Validation ===

Currently, the theoretical limit for sigops in one block is:
4M weight units / 50 weight units per sig = 80,000 signature checks per 
block 

Using nanobench, we measure how long it takes to execute 
pubkey.VerifySchnorr(sighash, sig) 80,000 times. On a modern CPU, this 
takes between one and two seconds.
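That corresponds to roughly 12 to 25 microseconds per individual Schnorr 
verification (one to two seconds divided by 80,000 checks).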

If we want the varops budget to keep script execution no slower than the 
worst-case signature validation time, we need to collect benchmarks from 
various machines and architectures. This is especially important for 
hashing operations, where computational time does not scale linearly and 
depends on the implementation, which varies between chips and 
architectures.

=== How to Help ===

To collect more data, we would like to run benchmarks on various machines. 
You can run the benchmark by:

1. Checking out the GSR prototype implementation branch:

https://github.com/jmoik/bitcoin/tree/gsr

2. Compiling with benchmarks enabled (-DBUILD_BENCH=ON)

3. Running the benchmark:

./build/bin/bench_varops --file bench_varops_data.csv

This will store the results in a CSV file and predict a maximum value for 
the varops budget specifically for your machine, based on your Schnorr 
checksig times and the slowest varops-limited script. It would be very 
helpful if you shared your results so we can analyze the data across 
different systems and verify whether the budget is working well or has to 
be adjusted!

Cheers

Julian


* Re: [bitcoindev] Benchmarking Bitcoin Script Evaluation for the Varops Budget (GSR)
  2025-11-07 15:50 [bitcoindev] Benchmarking Bitcoin Script Evaluation for the Varops Budget (GSR) 'Julian' via Bitcoin Development Mailing List
@ 2025-11-10 14:46 ` 'Russell O'Connor' via Bitcoin Development Mailing List
  2025-11-28 13:09   ` 'Julian' via Bitcoin Development Mailing List
  0 siblings, 1 reply; 3+ messages in thread
From: 'Russell O'Connor' via Bitcoin Development Mailing List @ 2025-11-10 14:46 UTC (permalink / raw)
  To: Bitcoin Development Mailing List; +Cc: Julian


My understanding is that in order to avoid block assembly becoming an
NP-hard packing problem, there must be only one dimension of constraint
solving.  However, AFAICT, in your tapscript V2 code you have both the new
varops constraint and the original sigops constraint.

FWIW, in Simplicity we reuse the same budget mechanism introduced in
tapscript (V1) with our cost calculations (though our costs are computed
statically instead of dynamically at runtime for better or for worse).


* Re: [bitcoindev] Benchmarking Bitcoin Script Evaluation for the Varops Budget (GSR)
  2025-11-10 14:46 ` 'Russell O'Connor' via Bitcoin Development Mailing List
@ 2025-11-28 13:09   ` 'Julian' via Bitcoin Development Mailing List
  0 siblings, 0 replies; 3+ messages in thread
From: 'Julian' via Bitcoin Development Mailing List @ 2025-11-28 13:09 UTC (permalink / raw)
  To: Bitcoin Development Mailing List



Hi Russell,

Thanks for taking a look at the code.

In interpreter.cpp the static function EvalChecksigTapscript(...) is 
responsible for subtracting from execdata.m_validation_weight_left. For the 
original SigVersion::TAPSCRIPT this is still the case, but Tapscript v2 is 
implemented as a new SigVersion::TAPSCRIPT_V2 and therefore does not take 
the original sigops constraint into account (there is an if condition right 
above, checking for the SigVersion).

The new varops budget replaces this sigops constraint and is contained in 
the new EvalScript(...) overload. Currently it only subtracts from the 
budget if the checksig succeeds, but I think this should be moved up a 
statement so that the varops cost is always subtracted, making the cost 
calculation more static.
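
To illustrate the structure described above, here is a minimal, 
self-contained sketch (not the actual interpreter.cpp code; the constants 
and the v2 cost value are placeholders):

#include <cstdint>

enum class SigVersion { TAPSCRIPT, TAPSCRIPT_V2 };

struct ExecData {
    int64_t validation_weight_left;  // legacy sigops budget (tapscript v1)
    int64_t varops_budget_left;      // new varops budget (tapscript v2)
};

// Placeholder constants for illustration only.
constexpr int64_t VALIDATION_WEIGHT_PER_SIGOP_PASSED = 50;
constexpr int64_t CHECKSIG_VAROPS_COST = 1;  // not the BIP's actual value

// Returns false if the relevant budget is exhausted.
bool ChargeChecksig(ExecData& execdata, SigVersion sigversion)
{
    if (sigversion == SigVersion::TAPSCRIPT) {
        // v1: only the original sigops/witness-size budget applies.
        execdata.validation_weight_left -= VALIDATION_WEIGHT_PER_SIGOP_PASSED;
        return execdata.validation_weight_left >= 0;
    }
    // v2: charge the varops cost up front, whether or not the signature
    // later verifies, so the cost accounting stays static.
    execdata.varops_budget_left -= CHECKSIG_VAROPS_COST;
    return execdata.varops_budget_left >= 0;
}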

The changes have not been reviewed in depth and I am looking for someone 
interested in helping me with that.



