* [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
@ 2025-10-02 20:42 PortlandHODL
2025-10-02 22:19 ` Andrew Poelstra
` (6 more replies)
0 siblings, 7 replies; 46+ messages in thread
From: PortlandHODL @ 2025-10-02 20:42 UTC (permalink / raw)
To: Bitcoin Development Mailing List
Proposing: a softfork such that, after block height (n), the creation of outpoints
with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
This is my gathering of information per BIP 2.
After researching the number of existing outpoints that would have
violated the proposed rule, I found exactly 169, of which only 8 are
non-OP_RETURN. After 15 years without a discovered use for 'large'
ScriptPubkeys, I think the reward of keeping them valid at the consensus
level is lower than the risk of their abuse.
Reasons for:
   - Makes DoS blocks that would have any significant negative impact on
     the network likely impossible to create.
   - Leaves enough room for hooks long term.
   - Would substantially reduce the divergence between consensus and
     relay policy.
   - Incredibly little use onchain, as evidenced above.
   - Could possibly reduce codebase complexity. Legacy Script is largely
     considered a mess; though this isn't a complete disablement, it
     should reduce the problematic surface area.
   - Would make it harder to use the ScriptPubkey as a 'large'
     datacarrier.
   - Possible UTXO set size bloat reduction.
Reasons against:
   - Bitcoin could need it in the future? Quantum?
   - Users could just create more outpoints.
Thoughts?
source of onchain data
<https://github.com/portlandhodl/portlandhodl/blob/main/greater_520_pubkeys.csv>
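For concreteness, a minimal sketch of the proposed rule (the names, the
placeholder activation height, and the helper are illustrative assumptions,
not actual Bitcoin Core code):

    MAX_SCRIPT_PUBKEY_SIZE = 520   # proposed limit, in bytes
    ACTIVATION_HEIGHT = 900_000    # placeholder for the softfork height (n)

    def outputs_valid(block_height: int, script_pubkeys: list[bytes]) -> bool:
        """Return True if every newly created output obeys the proposed limit."""
        if block_height < ACTIVATION_HEIGHT:
            return True  # the rule only applies after activation
        return all(len(spk) <= MAX_SCRIPT_PUBKEY_SIZE for spk in script_pubkeys)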
PortlandHODL
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-02 20:42 [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus PortlandHODL
@ 2025-10-02 22:19 ` Andrew Poelstra
2025-10-02 22:46 ` Andrew Poelstra
2025-10-02 22:47 ` 'moonsettler' via Bitcoin Development Mailing List
2025-10-02 22:27 ` Brandon Black
` (5 subsequent siblings)
6 siblings, 2 replies; 46+ messages in thread
From: Andrew Poelstra @ 2025-10-02 22:19 UTC (permalink / raw)
To: PortlandHODL; +Cc: Bitcoin Development Mailing List
On Thu, Oct 02, 2025 at 01:42:06PM -0700, PortlandHODL wrote:
> Proposing: Softfork to after (n) block height; the creation of outpoints
> with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
>
Personally, I like this. Unlike restrictions on opcode behavior or
witness data, it is impossible for there to be any existing UTXOs which
"might turn out to need" scriptpubkeys greater than 520 bytes. In a
post-covenant world I suppose this could change.
There is a risk of confiscation of coins which have pre-signed but
unpublished transactions spending them to new outputs with large
scriptPubKeys. Due to long-standing standardness rules, and the presence
of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
such transactions exist.
In any case, if confiscation is a worry, as always we can exempt the
current UTXO set from the rule -- if you are only spending outputs that
existed prior to the new rule, your new UTXOs are allowed to be large.
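A minimal sketch of that exemption, assuming a way to look up the creation
height of each output a transaction spends (names are hypothetical):

    def size_rule_applies(spent_output_heights: list[int],
                          activation_height: int) -> bool:
        """The >520-byte limit applies only if the transaction spends at
        least one output created at or after the activation height."""
        return any(h >= activation_height for h in spent_output_heights)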
I would even suggest going lower than 520 bytes.
--
Andrew Poelstra
Director, Blockstream Research
Email: apoelstra at wpsoftware.net
Web: https://www.wpsoftware.net/andrew
The sun is always shining in space
-Justin Lewis-Webster
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-02 20:42 [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus PortlandHODL
2025-10-02 22:19 ` Andrew Poelstra
@ 2025-10-02 22:27 ` Brandon Black
2025-10-03 1:21 ` [bitcoindev] " /dev /fd0
` (4 subsequent siblings)
6 siblings, 0 replies; 46+ messages in thread
From: Brandon Black @ 2025-10-02 22:27 UTC (permalink / raw)
To: PortlandHODL; +Cc: Bitcoin Development Mailing List
Love this idea.
I think "users will just use more outputs" is the one argument against. But with witness size not limited in this way, I don't see that being a problem.
If this avoids any of the fiddliness involved in avoiding DoS in the Great Consensus Cleanup (GCC), I think we should do it.
Best,
Brandon
--Brandon, sent by an Android
Oct 2, 2025 15:00:22 PortlandHODL <admin@qrsnap.io>:
> Proposing: Softfork to after (n) block height; the creation of outpoints with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
>
> This is my gathering of information per BIP 0002
>
> After doing some research into the number of outpoints that would have violated the proposed rule there are exactly 169 outpoints. With only 8 being non OP_RETURN. I think after 15 years and not having discovered use for 'large' ScriptPubkeys; the reward for not invalidating them at the consensus level is lower than the risk of their abuse.
> * *Reasons for
> * *Makes DoS blocks likely impossible to create that would have any sufficient negative impact on the network.
> * Leaves enough room for hooks long term
> * Would substantially reduce the divergence between consensus and relay policy
> * Incredibly little use onchain as evidenced above.
> * Could possibly reduce codebase complexity. Legacy Script is largely considered a mess though this isn't a complete disablement it should reduce the total surface that is problematic.
> * Would make it harder to use the ScriptPubkey as a 'large' datacarrier.
> * Possible UTXO set size bloat reduction.
>
> * *Reasons Against *
> * Bitcoin could need it in the future? Quantum?
> * Users could just create more outpoints.
> Thoughts?
>
> source of onchain data [https://github.com/portlandhodl/portlandhodl/blob/main/greater_520_pubkeys.csv]
>
> PortlandHODL
>
>
> --
> You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com[https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com?utm_medium=email&utm_source=footer].
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-02 22:19 ` Andrew Poelstra
@ 2025-10-02 22:46 ` Andrew Poelstra
2025-10-02 22:47 ` 'moonsettler' via Bitcoin Development Mailing List
1 sibling, 0 replies; 46+ messages in thread
From: Andrew Poelstra @ 2025-10-02 22:46 UTC (permalink / raw)
To: PortlandHODL; +Cc: Bitcoin Development Mailing List
On Thu, Oct 02, 2025 at 10:19:43PM +0000, Andrew Poelstra wrote:
> On Thu, Oct 02, 2025 at 01:42:06PM -0700, PortlandHODL wrote:
> > Proposing: Softfork to after (n) block height; the creation of outpoints
> > with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
> >
>
> Personally, I like this. Unlike restrictions on opcode behavior or
> witness data, it is impossible for there to be any existing UTXOs which
> "might turn out to need" scriptpubkeys greater than 520 bytes. In a
> post-covenant world I suppose this could change.
>
> There is a risk of confiscation of coins which have pre-signed but
> unpublished transactions spending them to new outputs with large
> scriptPubKeys. Due to long-standing standardness rules, and the presence
> of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> such transactions exist.
>
To add to this -- if we whitelisted existing UTXOs to preserve the
validity of pre-signed transactions, this still might not be enough;
there could be arbitrarily long chains of pre-signed transaction.
This is still possible to overcome -- we whitelist all existing UTXOs,
their descendants (UTXOs created from transactions which only spend
existing UTXOs), and so on. The result would be that from the point
of activation, new coinbase outputs would have limited size, as would
their children, and so on, and the limit would spread outward.
I don't think this is a great idea -- it would be technically hard to
implement and slow deployment indefinitely. But I am bringing it up
so people are aware that it's possible to address the confiscation
issue, no matter how rigid you are about it.
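A rough sketch of that whitelist propagation, assuming a set of outpoints
seeded with the UTXO set at activation (all names are illustrative):

    def process_tx(whitelist: set, txid: str, spent_outpoints: list,
                   n_outputs: int, is_coinbase: bool) -> bool:
        """Return True if this tx's outputs are exempt from the size limit,
        and add them to the whitelist so their descendants stay exempt."""
        if is_coinbase:
            return False  # post-activation coinbases (and descendants) are limited
        if all(op in whitelist for op in spent_outpoints):
            for vout in range(n_outputs):
                whitelist.add((txid, vout))
            return True
        return False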
--
Andrew Poelstra
Director, Blockstream Research
Email: apoelstra at wpsoftware.net
Web: https://www.wpsoftware.net/andrew
The sun is always shining in space
-Justin Lewis-Webster
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-02 22:19 ` Andrew Poelstra
2025-10-02 22:46 ` Andrew Poelstra
@ 2025-10-02 22:47 ` 'moonsettler' via Bitcoin Development Mailing List
2025-10-03 7:11 ` Garlo Nicon
1 sibling, 1 reply; 46+ messages in thread
From: 'moonsettler' via Bitcoin Development Mailing List @ 2025-10-02 22:47 UTC (permalink / raw)
To: Andrew Poelstra; +Cc: PortlandHODL, Bitcoin Development Mailing List
Hi All,
Agreed, this is something we should consider.
> I would even suggest going lower than 520 bytes.
200 should be enough.
Whether this should apply to OP_RETURN (nulldata) outputs or not is something I can't make up my mind on.
BR,
moonsettler
Sent with Proton Mail secure email.
On Friday, October 3rd, 2025 at 12:31 AM, Andrew Poelstra <apoelstra@wpsoftware.net> wrote:
> On Thu, Oct 02, 2025 at 01:42:06PM -0700, PortlandHODL wrote:
>
> > Proposing: Softfork to after (n) block height; the creation of outpoints
> > with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
>
>
> Personally, I like this. Unlike restrictions on opcode behavior or
> witness data, it is impossible for there to be any existing UTXOs which
> "might turn out to need" scriptpubkeys greater than 520 bytes. In a
> post-covenant world I suppose this could change.
>
> There is a risk of confiscation of coins which have pre-signed but
> unpublished transactions spending them to new outputs with large
> scriptPubKeys. Due to long-standing standardness rules, and the presence
> of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> such transactions exist.
>
> In any case, if confiscation is a worry, as always we can exempt the
> current UTXO set from the rule -- if you are only spending outputs that
> existed prior to the new rule, your new UTXOs are allowed to be large.
>
>
> I would even suggest going lower than 520 bytes.
>
>
> --
> Andrew Poelstra
> Director, Blockstream Research
> Email: apoelstra at wpsoftware.net
> Web: https://www.wpsoftware.net/andrew
>
> The sun is always shining in space
> -Justin Lewis-Webster
>
> --
> You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/aN76f2wKPHFcj8qt%40mail.wpsoftware.net.
* [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-02 20:42 [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus PortlandHODL
2025-10-02 22:19 ` Andrew Poelstra
2025-10-02 22:27 ` Brandon Black
@ 2025-10-03 1:21 ` /dev /fd0
2025-10-03 10:46 ` 'moonsettler' via Bitcoin Development Mailing List
2025-10-03 13:59 ` Andrew Poelstra
2025-10-03 13:21 ` [bitcoindev] " Peter Todd
` (3 subsequent siblings)
6 siblings, 2 replies; 46+ messages in thread
From: /dev /fd0 @ 2025-10-03 1:21 UTC (permalink / raw)
To: Bitcoin Development Mailing List
Hi portlandhodl,
We can't predict future usage, so it would be great if this limit were
restricted to OP_RETURN outputs. While there is no real use for a
scriptPubKey larger than 520 bytes, as shown in the data you shared, it is
possible that users will create more OP_RETURN outputs after this change.
That does not affect the UTXO set, but it will cost more and economically
discourage the use of multiple OP_RETURN outputs.
/dev/fd0
floppy disk guy
On Friday, October 3, 2025 at 3:29:24 AM UTC+5:30 PortlandHODL wrote:
> Proposing: Softfork to after (n) block height; the creation of outpoints
> with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
>
> This is my gathering of information per BIP 0002
>
> After doing some research into the number of outpoints that would have
> violated the proposed rule there are exactly 169 outpoints. With only 8
> being non OP_RETURN. I think after 15 years and not having discovered use
> for 'large' ScriptPubkeys; the reward for not invalidating them at the
> consensus level is lower than the risk of their abuse.
>
> -
> *Reasons for *
> - Makes DoS blocks likely impossible to create that would have any
> sufficient negative impact on the network.
> - Leaves enough room for hooks long term
> - Would substantially reduce the divergence between consensus and
> relay policy
> - Incredibly little use onchain as evidenced above.
> - Could possibly reduce codebase complexity. Legacy Script is
> largely considered a mess though this isn't a complete disablement it
> should reduce the total surface that is problematic.
> - Would make it harder to use the ScriptPubkey as a 'large'
> datacarrier.
> - Possible UTXO set size bloat reduction.
>
> - *Reasons Against *
> - Bitcoin could need it in the future? Quantum?
> - Users could just create more outpoints.
>
> Thoughts?
>
> source of onchain data
> <https://github.com/portlandhodl/portlandhodl/blob/main/greater_520_pubkeys.csv>
>
> PortlandHODL
>
>
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-02 22:47 ` 'moonsettler' via Bitcoin Development Mailing List
@ 2025-10-03 7:11 ` Garlo Nicon
0 siblings, 0 replies; 46+ messages in thread
From: Garlo Nicon @ 2025-10-03 7:11 UTC (permalink / raw)
To: moonsettler
Cc: Andrew Poelstra, PortlandHODL, Bitcoin Development Mailing List
> 200 should be enough.
Maybe. But "520" is a battle-tested value, when it comes to the maximum
allowed stack push. Picking "520" should be safe enough, and it has a
higher chances to be accepted as a new consensus rule. Also, if it turns
out, that a lower limit, like "200" is enough, then it can be lowered later
(but bumping it would be much harder).
> If this should apply to OP_RETURN (nulldata) or not, is something I can't
make my mind up on.
I think it should be applied everywhere. And if someone needs a larger
OP_RETURN, that Script can be taken, wrapped into a TapScript branch,
and included in any Taproot address.
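For reference, the standard output templates are all far below either
proposed limit; a quick tally of their well-known scriptPubKey sizes in
bytes, listed here only for context:

    SPK_SIZES = {
        "P2PK (compressed)": 35,   # <33-byte pubkey> OP_CHECKSIG
        "P2PKH":             25,   # OP_DUP OP_HASH160 <20> OP_EQUALVERIFY OP_CHECKSIG
        "P2SH":              23,   # OP_HASH160 <20> OP_EQUAL
        "P2WPKH":            22,   # OP_0 <20>
        "P2WSH":             34,   # OP_0 <32>
        "P2TR":              34,   # OP_1 <32>
        "bare 3-of-3 multisig (compressed)": 105,
    }
    assert all(size <= 200 for size in SPK_SIZES.values())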
On Fri, 3 Oct 2025 at 00:49, 'moonsettler' via Bitcoin Development Mailing List <
bitcoindev@googlegroups.com> wrote:
> Hi All,
>
> Agreed, this is something we should consider.
>
> > I would even suggest going lower than 520 bytes.
>
> 200 should be enough.
>
> If this should apply to OP_RETURN (nulldata) or not, is something I can't
> make my mind up on.
>
> BR,
> moonsettler
>
> Sent with Proton Mail secure email.
>
> On Friday, October 3rd, 2025 at 12:31 AM, Andrew Poelstra <
> apoelstra@wpsoftware.net> wrote:
>
> > On Thu, Oct 02, 2025 at 01:42:06PM -0700, PortlandHODL wrote:
> >
> > > Proposing: Softfork to after (n) block height; the creation of
> outpoints
> > > with greater than 520 bytes in the ScriptPubkey would be consensus
> invalid.
> >
> >
> > Personally, I like this. Unlike restrictions on opcode behavior or
> > witness data, it is impossible for there to be any existing UTXOs which
> > "might turn out to need" scriptpubkeys greater than 520 bytes. In a
> > post-covenant world I suppose this could change.
> >
> > There is a risk of confiscation of coins which have pre-signed but
> > unpublished transactions spending them to new outputs with large
> > scriptPubKeys. Due to long-standing standardness rules, and the presence
> > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> > such transactions exist.
> >
> > In any case, if confiscation is a worry, as always we can exempt the
> > current UTXO set from the rule -- if you are only spending outputs that
> > existed prior to the new rule, your new UTXOs are allowed to be large.
> >
> >
> > I would even suggest going lower than 520 bytes.
> >
> >
> > --
> > Andrew Poelstra
> > Director, Blockstream Research
> > Email: apoelstra at wpsoftware.net
> > Web: https://www.wpsoftware.net/andrew
> >
> > The sun is always shining in space
> > -Justin Lewis-Webster
> >
> > --
> > You received this message because you are subscribed to the Google
> Groups "Bitcoin Development Mailing List" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to bitcoindev+unsubscribe@googlegroups.com.
> > To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/aN76f2wKPHFcj8qt%40mail.wpsoftware.net
> .
>
> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/FIpHCygrCyfUu_jNgLJumi-06nYm5P6rmUVc01R3SmhdMVbQo9-8Lyxbh5yGUPrHFQRtyYQ_RvgltQNuoulyXmdnuQSklTab_sM5X63FUs4%3D%40protonmail.com
> .
>
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-03 1:21 ` [bitcoindev] " /dev /fd0
@ 2025-10-03 10:46 ` 'moonsettler' via Bitcoin Development Mailing List
2025-10-03 11:26 ` /dev /fd0
2025-10-03 13:35 ` jeremy
2025-10-03 13:59 ` Andrew Poelstra
1 sibling, 2 replies; 46+ messages in thread
From: 'moonsettler' via Bitcoin Development Mailing List @ 2025-10-03 10:46 UTC (permalink / raw)
To: /dev /fd0; +Cc: Bitcoin Development Mailing List
Hi Floppy,
There are only weak arguments for extending this proposal to OP_RETURN, at least none I would normally entertain;
but there are also only weak arguments for explicitly making an exception for OP_RETURN.
People could just add many OP_RETURNs to a transaction, which makes it more cumbersome and marginally more expensive.
BR,
moonsettler
On Friday, October 3rd, 2025 at 10:58 AM, /dev /fd0 <alicexbtong@gmail.com> wrote:
> Hi portlandhodl,
>
> We can't predict future usage, so it would be great if this was restricted to OP_RETURN. While there is no real use for a scriptPubKey larger than 520 bytes as shown in the data you shared, it is possible that users may create more OP_RETURN outputs after this change. It does not affect the UTXO set but will cost more and economically discourage the use of multiple OP_RETURN outputs.
>
> /dev/fd0
> floppy disk guy
> On Friday, October 3, 2025 at 3:29:24 AM UTC+5:30 PortlandHODL wrote:
>
> > Proposing: Softfork to after (n) block height; the creation of outpoints with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
> >
> > This is my gathering of information per BIP 0002
> >
> > After doing some research into the number of outpoints that would have violated the proposed rule there are exactly 169 outpoints. With only 8 being non OP_RETURN. I think after 15 years and not having discovered use for 'large' ScriptPubkeys; the reward for not invalidating them at the consensus level is lower than the risk of their abuse.
> >
> > - Reasons for
> > - Makes DoS blocks likely impossible to create that would have any sufficient negative impact on the network.
> > - Leaves enough room for hooks long term
> > - Would substantially reduce the divergence between consensus and relay policy
> > - Incredibly little use onchain as evidenced above.
> > - Could possibly reduce codebase complexity. Legacy Script is largely considered a mess though this isn't a complete disablement it should reduce the total surface that is problematic.
> > - Would make it harder to use the ScriptPubkey as a 'large' datacarrier.
> > - Possible UTXO set size bloat reduction.
> >
> > - Reasons Against
> > - Bitcoin could need it in the future? Quantum?
> > - Users could just create more outpoints.
> >
> > Thoughts?
> >
> > source of onchain data
> >
> > PortlandHODL
>
> --
> You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/842930fb-bede-408a-8380-776d4be4e094n%40googlegroups.com.
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-03 10:46 ` 'moonsettler' via Bitcoin Development Mailing List
@ 2025-10-03 11:26 ` /dev /fd0
2025-10-03 13:35 ` jeremy
1 sibling, 0 replies; 46+ messages in thread
From: /dev /fd0 @ 2025-10-03 11:26 UTC (permalink / raw)
To: moonsettler; +Cc: Bitcoin Development Mailing List
Hi moonsettler,
> People could just add many OP_RETURNs to a transaction, that makes it
more cumbersome and marginally more expensive.
This is exactly what I wrote in my email and I consider it a positive
thing. I think we are just looking at this proposal from different
perspectives.
/dev/fd0
floppy disk guy
On Fri, Oct 3, 2025 at 4:16 PM moonsettler <moonsettler@protonmail.com>
wrote:
> Hi Floppy,
>
> There are only weak arguments for this proposal to extend to OP_RETURN, at
> least nothing I would normally entertain;
> but also there are weak arguments to make an exception for OP_RETURN
> explicitly.
>
> People could just add many OP_RETURNs to a transaction, that makes it more
> cumbersome and marginally more expensive.
>
> BR,
> moonsettler
>
>
> On Friday, October 3rd, 2025 at 10:58 AM, /dev /fd0 <alicexbtong@gmail.com>
> wrote:
>
> > Hi portlandhodl,
> >
> > We can't predict future usage, so it would be great if this was
> restricted to OP_RETURN. While there is no real use for a scriptPubKey
> larger than 520 bytes as shown in the data you shared, it is possible that
> users may create more OP_RETURN outputs after this change. It does not
> affect the UTXO set but will cost more and economically discourage the use
> of multiple OP_RETURN outputs.
> >
> > /dev/fd0
> > floppy disk guy
> > On Friday, October 3, 2025 at 3:29:24 AM UTC+5:30 PortlandHODL wrote:
> >
> > > Proposing: Softfork to after (n) block height; the creation of
> outpoints with greater than 520 bytes in the ScriptPubkey would be
> consensus invalid.
> > >
> > > This is my gathering of information per BIP 0002
> > >
> > > After doing some research into the number of outpoints that would have
> violated the proposed rule there are exactly 169 outpoints. With only 8
> being non OP_RETURN. I think after 15 years and not having discovered use
> for 'large' ScriptPubkeys; the reward for not invalidating them at the
> consensus level is lower than the risk of their abuse.
> > >
> > > - Reasons for
> > > - Makes DoS blocks likely impossible to create that would have
> any sufficient negative impact on the network.
> > > - Leaves enough room for hooks long term
> > > - Would substantially reduce the divergence between consensus
> and relay policy
> > > - Incredibly little use onchain as evidenced above.
> > > - Could possibly reduce codebase complexity. Legacy Script is
> largely considered a mess though this isn't a complete disablement it
> should reduce the total surface that is problematic.
> > > - Would make it harder to use the ScriptPubkey as a 'large'
> datacarrier.
> > > - Possible UTXO set size bloat reduction.
> > >
> > > - Reasons Against
> > > - Bitcoin could need it in the future? Quantum?
> > > - Users could just create more outpoints.
> > >
> > > Thoughts?
> > >
> > > source of onchain data
> > >
> > > PortlandHODL
> >
> > --
> > You received this message because you are subscribed to the Google
> Groups "Bitcoin Development Mailing List" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to bitcoindev+unsubscribe@googlegroups.com.
> > To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/842930fb-bede-408a-8380-776d4be4e094n%40googlegroups.com
> .
>
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-02 20:42 [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus PortlandHODL
` (2 preceding siblings ...)
2025-10-03 1:21 ` [bitcoindev] " /dev /fd0
@ 2025-10-03 13:21 ` Peter Todd
2025-10-03 16:52 ` 'moonsettler' via Bitcoin Development Mailing List
2025-10-03 15:42 ` Anthony Towns
` (2 subsequent siblings)
6 siblings, 1 reply; 46+ messages in thread
From: Peter Todd @ 2025-10-03 13:21 UTC (permalink / raw)
To: PortlandHODL; +Cc: Bitcoin Development Mailing List
On Thu, Oct 02, 2025 at 01:42:06PM -0700, PortlandHODL wrote:
> Proposing: Softfork to after (n) block height; the creation of outpoints
> with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
>
> This is my gathering of information per BIP 0002
>
> After doing some research into the number of outpoints that would have
> violated the proposed rule there are exactly 169 outpoints. With only 8
> being non OP_RETURN. I think after 15 years and not having discovered use
> for 'large' ScriptPubkeys; the reward for not invalidating them at the
> consensus level is lower than the risk of their abuse.
>
> -
> *Reasons for *
> - Makes DoS blocks likely impossible to create that would have any
> sufficient negative impact on the network.
Further restricting v0 scripts is sufficient to achieve this goal. We do not
need to actually prohibit >520 byte pushes.
> - Leaves enough room for hooks long term
> - Would substantially reduce the divergence between consensus and
> relay policy
> - Incredibly little use onchain as evidenced above.
> - Could possibly reduce codebase complexity. Legacy Script is largely
> considered a mess though this isn't a complete disablement it should reduce
> the total surface that is problematic.
> - Would make it harder to use the ScriptPubkey as a 'large'
> datacarrier.
> - Possible UTXO set size bloat reduction.
>
> - *Reasons Against *
> - Bitcoin could need it in the future? Quantum?
NACK, for exactly this reason. It's hard to predict what kind of math will be
needed for future signature algorithms. With taproot, we include
bare pubkeys in scriptPubKeys for a good reason. It's quite possible that we'll
want to do something similar with >520-byte pubkeys for some future signature
algorithm (e.g. quantum hard) or some other difficult-to-predict technical
upgrade (the spendability of scriptPubKeys over 520 bytes isn't relevant to
this discussion).
> - Users could just create more outpoints.
The second reason for my NACK. It makes no significant difference whether or
not data is contiguous or split across multiple outputs. All the same concerns
about arbitrary data ("spam") exist and will continue to be argued over even if
we do a soft-fork to prohibit this. All we'll have done is use up valuable dev
and political resources.
--
https://petertodd.org 'peter'[:-1]@petertodd.org
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-03 10:46 ` 'moonsettler' via Bitcoin Development Mailing List
2025-10-03 11:26 ` /dev /fd0
@ 2025-10-03 13:35 ` jeremy
1 sibling, 0 replies; 46+ messages in thread
From: jeremy @ 2025-10-03 13:35 UTC (permalink / raw)
To: Bitcoin Development Mailing List
I think that this type of rule is OK if we do it as a "sunsetting"
restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2
years, 5 years, 10 years).
We may yet find a compelling use for larger scriptpubkeys, and to me the
interactions between different key types are non-obvious.
An example of where a big SPK is valuable vs. e.g. Taproot, Segwit, or P2SH is
if there is one big script path required in a two-tx protocol, and the
inclusion price must be paid by the proposer of the first tx. In this case,
we'd want the inclusion guaranteed by the first tx, and then the cost isn't
paid again later (other than the satisfaction cost).
You can argue against this example probably, but it is worth considering
that absence of evidence of use is not evidence of absence of use and I
myself feel that overall our understanding of Bitcoin transaction
programming possibilities is still early. If you don't like this example,
I can give you others (probably).
As such, I'm NACK on a permanent restriction on what could be a valuable
use. But I do think it could be reasonable to set up an auto-renewing
restriction on a 1-2 year basis, and allow it to be removed if we later
decide we want them.
(N.B. this differs from past temporary soft fork proposals, as it's a
restriction on something we think no one will do, which we eventually lift,
rather than the removal, after a time, of an opcode that we expect people
would want to rely on.)
On Friday, October 3, 2025 at 7:03:09 AM UTC-4 moonsettler wrote:
> Hi Floppy,
>
> There are only weak arguments for this proposal to extend to OP_RETURN, at
> least nothing I would normally entertain;
> but also there are weak arguments to make an exception for OP_RETURN
> explicitly.
>
> People could just add many OP_RETURNs to a transaction, that makes it more
> cumbersome and marginally more expensive.
>
> BR,
> moonsettler
>
>
> On Friday, October 3rd, 2025 at 10:58 AM, /dev /fd0 <alice...@gmail.com>
> wrote:
>
> > Hi portlandhodl,
> >
> > We can't predict future usage, so it would be great if this was
> restricted to OP_RETURN. While there is no real use for a scriptPubKey
> larger than 520 bytes as shown in the data you shared, it is possible that
> users may create more OP_RETURN outputs after this change. It does not
> affect the UTXO set but will cost more and economically discourage the use
> of multiple OP_RETURN outputs.
> >
> > /dev/fd0
> > floppy disk guy
> > On Friday, October 3, 2025 at 3:29:24 AM UTC+5:30 PortlandHODL wrote:
> >
> > > Proposing: Softfork to after (n) block height; the creation of
> outpoints with greater than 520 bytes in the ScriptPubkey would be
> consensus invalid.
> > >
> > > This is my gathering of information per BIP 0002
> > >
> > > After doing some research into the number of outpoints that would have
> violated the proposed rule there are exactly 169 outpoints. With only 8
> being non OP_RETURN. I think after 15 years and not having discovered use
> for 'large' ScriptPubkeys; the reward for not invalidating them at the
> consensus level is lower than the risk of their abuse.
> > >
> > > - Reasons for
> > > - Makes DoS blocks likely impossible to create that would have any
> sufficient negative impact on the network.
> > > - Leaves enough room for hooks long term
> > > - Would substantially reduce the divergence between consensus and
> relay policy
> > > - Incredibly little use onchain as evidenced above.
> > > - Could possibly reduce codebase complexity. Legacy Script is largely
> considered a mess though this isn't a complete disablement it should reduce
> the total surface that is problematic.
> > > - Would make it harder to use the ScriptPubkey as a 'large'
> datacarrier.
> > > - Possible UTXO set size bloat reduction.
> > >
> > > - Reasons Against
> > > - Bitcoin could need it in the future? Quantum?
> > > - Users could just create more outpoints.
> > >
> > > Thoughts?
> > >
> > > source of onchain data
> > >
> > > PortlandHODL
> >
> > --
> > You received this message because you are subscribed to the Google
> Groups "Bitcoin Development Mailing List" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to bitcoindev+...@googlegroups.com.
> > To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/842930fb-bede-408a-8380-776d4be4e094n%40googlegroups.com
> .
>
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-03 1:21 ` [bitcoindev] " /dev /fd0
2025-10-03 10:46 ` 'moonsettler' via Bitcoin Development Mailing List
@ 2025-10-03 13:59 ` Andrew Poelstra
2025-10-03 14:18 ` /dev /fd0
1 sibling, 1 reply; 46+ messages in thread
From: Andrew Poelstra @ 2025-10-03 13:59 UTC (permalink / raw)
To: /dev /fd0; +Cc: Bitcoin Development Mailing List
On Thu, Oct 02, 2025 at 06:21:18PM -0700, /dev /fd0 wrote:
>
> We can't predict future usage,
Aside from proof-of-publication (i.e. data storage directly in the UTXO
set) there is no usage of script which can't be equally (or better)
accomplished by using a Segwit v0 or Taproot script.
> so it would be great if this was restricted
> to OP_RETURN. While there is no real use for a scriptPubKey larger than 520
> bytes as shown in the data you shared, it is possible that users may create
> more OP_RETURN outputs after this change. It does not affect the UTXO set
> but will cost more and economically discourage the use of multiple
> OP_RETURN outputs.
>
Restricting it to OP_RETURN would have zero effect on people trying to
use scriptpubkeys for data storage. They would switch to any of the 65
or so other OP_RETURN equivalents, and failing that, switch to
OP_RESERVED, then to OP_FALSE, then to `0 1 EQVERIFY`, and so on. A
restriction that applies specifically to OP_RETURN outputs is no
restriction at all.
--
Andrew Poelstra
Director, Blockstream Research
Email: apoelstra at wpsoftware.net
Web: https://www.wpsoftware.net/andrew
The sun is always shining in space
-Justin Lewis-Webster
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-03 13:59 ` Andrew Poelstra
@ 2025-10-03 14:18 ` /dev /fd0
2025-10-03 14:59 ` Andrew Poelstra
0 siblings, 1 reply; 46+ messages in thread
From: /dev /fd0 @ 2025-10-03 14:18 UTC (permalink / raw)
To: Andrew Poelstra; +Cc: Bitcoin Development Mailing List
Hi Andrew,
> Restricting it to OP_RETURN would have zero effect on people trying to
use scriptpubkeys for data storage.
1. The data shows that nobody is using scriptPubKeys for more than 520
bytes. In fact, people have found new ways to encode data in transactions.
Example: [Merkle path][0] in taproot control block
2. If this applies to all scriptPubKeys, it could negatively affect the
[UTXO set][1] size because multiple outputs is an alternative if someone
really wants to use scriptPubKey for data.
[0]:
https://mempool.space/tx/c5714af322cd2ba94adf3d74325eb17f03d029ad2bf47dc54c3d929833c02628
[1]: https://mainnet.observer/charts/utxoset-size/
/dev/fd0
floppy disk guy
On Fri, Oct 3, 2025 at 7:29 PM Andrew Poelstra <apoelstra@wpsoftware.net>
wrote:
> On Thu, Oct 02, 2025 at 06:21:18PM -0700, /dev /fd0 wrote:
> >
> > We can't predict future usage,
>
> Aside from proof-of-publication (i.e. data storage directly in the UTXO
> set) there is no usage of script which can't be equally (or better)
> accomplished by using a Segwit v0 or Taproot script.
>
> > so it would be great if this was restricted
> > to OP_RETURN. While there is no real use for a scriptPubKey larger than
> 520
> > bytes as shown in the data you shared, it is possible that users may
> create
> > more OP_RETURN outputs after this change. It does not affect the UTXO
> set
> > but will cost more and economically discourage the use of multiple
> > OP_RETURN outputs.
> >
>
> Restricting it to OP_RETURN would have zero effect on people trying to
> use scriptpubkeys for data storage. They would switch to any of the 65
> or so other OP_RETURN equivalents, and failing that, switch to
> OP_RESERVED, then to OP_FALSE, then to `0 1 EQVERIFY`, and so on. A
> restriction that applies specifically to OP_RETURN outputs is no
> restriction at all.
>
>
> --
> Andrew Poelstra
> Director, Blockstream Research
> Email: apoelstra at wpsoftware.net
> Web: https://www.wpsoftware.net/andrew
>
> The sun is always shining in space
> -Justin Lewis-Webster
>
>
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-03 14:18 ` /dev /fd0
@ 2025-10-03 14:59 ` Andrew Poelstra
2025-10-03 16:15 ` Anthony Towns
0 siblings, 1 reply; 46+ messages in thread
From: Andrew Poelstra @ 2025-10-03 14:59 UTC (permalink / raw)
To: /dev /fd0; +Cc: Bitcoin Development Mailing List
On Fri, Oct 03, 2025 at 07:48:38PM +0530, /dev /fd0 wrote:
> Hi Andrew,
>
> > Restricting it to OP_RETURN would have zero effect on people trying to
> use scriptpubkeys for data storage.
>
> 1. The data shows that nobody is using scriptPubKeys for more than 520
> bytes. In fact, people have found new ways to encode data in transactions.
> Example: [Merkle path][0] in taproot control block
>
I'm relieved to hear this -- if you must embed data it is much cheaper
to do so in witness data, exactly because this data puts less load on
the network (in particular it does not need to be stored by non-archival
nodes).
Unfortunately, the evidence from the current "filters" debate, wherein
the current 80-byte policy limit is filtering transactions that actually
appear in blocks, suggests that we just need to wait for the "on-chain
bitcoin spam" market to have a shift in sentiment before we have people
blowing past 520 bytes and beyond.
Adding a hard consensus limit seems harmless, and will put a hard
barrier against any such sentiment shifts.
If "it's cheaper to use witness data" were enough of a barrier, nobody
would be using OP_RETURN outputs today except for opentimestamps and
maybe some other super-low-load applications.
> 2. If this applies to all scriptPubKeys, it could negatively affect the
> [UTXO set][1] size because multiple outputs is an alternative if someone
> really wants to use scriptPubKey for data.
>
Good point! But if they are forced to use multiple outputs this will
increase the cost for them even further (and force them to split up
their data, which may force some technical pain even if the network
fees aren't enough).
I'm no spammer sociologist, but at some point if we can force the cost
difference between witness spam and UTXO-set spam high enough, nobody
will choose the latter, right?
And if not -- one of the most serious problems with spam is that it
muscles out protocols like LN or Ark by out-spending them on block
space, preventing them from gaining the network effects they would
need to spend a comparable amount. Every marginal cost we add to
spammers increases the delta by which they need to out-spend.
> [0]:
> https://mempool.space/tx/c5714af322cd2ba94adf3d74325eb17f03d029ad2bf47dc54c3d929833c02628
> [1]: https://mainnet.observer/charts/utxoset-size/
>
--
Andrew Poelstra
Director, Blockstream Research
Email: apoelstra at wpsoftware.net
Web: https://www.wpsoftware.net/andrew
The sun is always shining in space
-Justin Lewis-Webster
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-02 20:42 [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus PortlandHODL
` (3 preceding siblings ...)
2025-10-03 13:21 ` [bitcoindev] " Peter Todd
@ 2025-10-03 15:42 ` Anthony Towns
2025-10-03 20:02 ` Luke Dashjr
2025-10-15 20:04 ` [bitcoindev] " Casey Rodarmor
6 siblings, 0 replies; 46+ messages in thread
From: Anthony Towns @ 2025-10-03 15:42 UTC (permalink / raw)
To: PortlandHODL; +Cc: Bitcoin Development Mailing List
On Thu, Oct 02, 2025 at 01:42:06PM -0700, PortlandHODL wrote:
> Proposing: Softfork to after (n) block height; the creation of outpoints
> with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
> - Leaves enough room for hooks long term
> - Bitcoin could need it in the future?
One place where large scriptPubKeys could be useful is in script caching.
Suppose we have a future where complicated smart contracts are common;
eg perhaps some future version of lightning implemented using opcodes
from the great-script-restoration has a 9,000 byte script that is
used for every uncooperative close, and that lightning is so prevalent that
uncooperative closes are common.
In that scenario, we might like to be able to cache the 9,000 byte
script, and just invoke it by reference. One way to do that would
be to hardcode that script into consensus and soft-fork it in as a
new opcode. A more flexible alternative, however, would be to put that
script in our existing database, ie the utxo set, and look it up via its
36-byte txid/vout reference. To avoid permanently bloating the utxo set,
we could make such outputs expire after perhaps 100k blocks, and perhaps
increase the "weight" of creating such utxos by 10x, so that it's only
economical if the script is going to be used ~40x before it expires.
Using the utxo set here rather than creating a new database makes upgrades
easier; you don't have to rescan blocks to populate the script cache
database once you upgrade to a node version supporting script caching.
So I think there's potential uses for this flexibility that it wouldn't
be wise to just throw away.
(If you restricted the change to only applying to scripts that used
non-push operators, that would probably still provide upgrade flexibility
while also preventing potential script abuses. But it wouldn't do anything
to prevent publishing data)
> - Possible UTXO set size bloat reduction.
I don't think this works -- breaking up a scriptPubKey across multiple
utxos increases the utxo set bloat significantly, as in addition to the
scriptPubKey, each utxo includes a key (the 36 bytes for txid and vout),
an amount (8 bytes), a coinbase flag and a height (4 bytes), and likely
additional indexing data to keep lookups efficient.
If you're putting 10kB into the utxo set, then that's perhaps 50B of
overhead for a single entry (ie 0.5%); if you have to split it into 20x
500B entries with 50B overhead each, that's 1kB of overhead in total
(ie 10%).
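The same arithmetic as a tiny sketch (the ~50-byte per-entry overhead is
the rough figure used above, not a measured constant):

    OVERHEAD = 50                  # approx bytes of per-UTXO overhead
    DATA = 10_000                  # bytes of scriptPubKey data to store

    one_entry   = DATA + OVERHEAD                # 10,050 bytes (~0.5% overhead)
    twenty_500b = 20 * (DATA // 20 + OVERHEAD)   # 11,000 bytes (~10% overhead)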
> - Would make it harder to use the ScriptPubkey as a 'large'
> datacarrier.
This seems to be a bad goal to me; ie one that doesn't achieve anything
positive in reducing the bad things you want to prevent, but does make
things worse for other users you want to support. Breaking up data
and recovering it is straightforward, and already supported by various
Bitcoin-specific systems already; all breaking up the data achieves is
to use up slightly more resources. If the data being sent is already
economically marginal, that may result in less data being sent --
but only a similar reduction to what you'd get if fees increased at a
similar rate. When the data storage use case is not economically marginal,
it will instead just result in less resources remaining available for
whatever monetary activity is still taking place.
As far as the "but contiguous data will be regulated more strictly"
argument goes; I don't think "your honour, my offensive content has
strings of 4d0802 every 520 bytes, and as they say: if the data doesn't
flow, you must let me go" is an argument that will fly. Having the data
be separated by longer strings or otherwise structured differently isn't
a bigger difference between an image in a bmp, a jpg, or one dumped
in a zip file or mime-encoded, and none of those will let you avoid a
regulator's ire.
Cheers,
aj
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-03 14:59 ` Andrew Poelstra
@ 2025-10-03 16:15 ` Anthony Towns
2025-10-05 9:59 ` Guus Ellenkamp
0 siblings, 1 reply; 46+ messages in thread
From: Anthony Towns @ 2025-10-03 16:15 UTC (permalink / raw)
To: Andrew Poelstra, Bitcoin Development Mailing List
On Fri, Oct 03, 2025 at 02:59:32PM +0000, Andrew Poelstra wrote:
> If "it's cheaper to use witness data" were enough of a barrier, nobody
> would be using OP_RETURN outputs today except for opentimestamps and
> maybe some other super-low-load applications.
It isn't cheaper to use witness data until you're publishing more than
~143 bytes of data, due to the overhead of the setup transaction. (It's
also not cheaper if you want extremely easy proof of publication of
the data)
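A rough sketch of where that break-even comes from, using the standard
weight accounting (output bytes count 4 weight units each, witness bytes 1
each); the fixed setup-transaction overhead here is a free parameter, and
~430 WU is only the value implied by the ~143-byte figure, not a number
taken from this thread:

    SETUP_OVERHEAD_WU = 430        # assumed weight of the extra setup/commit tx

    def op_return_weight(d: int) -> int:
        return 4 * d               # d bytes in an output: 4 WU per byte

    def witness_weight(d: int) -> int:
        return d + SETUP_OVERHEAD_WU   # d bytes as witness data plus setup cost

    # witness becomes cheaper once 4*d > d + overhead, i.e. d > overhead / 3
    break_even = SETUP_OVERHEAD_WU / 3   # ~143 bytes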
Excluding the couple of months between the topic of increasing the
OP_RETURN limit was raised on this list and the increased limit was
merged into Bitcoin Core master, there have, in fact, been very few
OP_RETURN outputs generated that are above the ~143B size. In particular,
between blocks 900k and 915,843 I get:
  15,003,149 total OP_RETURN outputs
         131 OP_RETURN outputs larger than 83 bytes
          81 OP_RETURN outputs of 144 bytes or more
      19,707 OP_RETURN outputs with non-zero value
cf https://github.com/bitcoin/bitcoin/pull/33453#issuecomment-3341177765
Cheers,
aj
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-03 13:21 ` [bitcoindev] " Peter Todd
@ 2025-10-03 16:52 ` 'moonsettler' via Bitcoin Development Mailing List
0 siblings, 0 replies; 46+ messages in thread
From: 'moonsettler' via Bitcoin Development Mailing List @ 2025-10-03 16:52 UTC (permalink / raw)
To: Peter Todd; +Cc: PortlandHODL, Bitcoin Development Mailing List
> NACK, for exactly this reason. It's hard to predict what kind of math will be
> needed in the future for future signature algorithms. With taproot, we include
> bare pubkeys in scriptPubKeys for a good reason. It's quite possible that we'll
> want to do something similar with >520byte pubkeys for some future signature
>
> algorithm (e.g. quantum hard) or some other difficult to predict technical
> upgrade (the spendableness of scriptPubKeys with >520bytes isn't relevant to
>
> this discussion).
No matter how large a pubkey script you need, you can just delegate to the
witness if you have a cryptographically secure hash function.
Hard to even imagine needing anywhere near 4096 bits for that.
The going assumption for quantum algos is they could halve the bit strength of
a hash function, but SHA512 seems quite robust even under worst assumptions.
And it's not enough to find ANY collision for a script or some Merkle root.
Putting the unlocking conditions into the UTXO set does not seem like a healthy
idea to me anyhow.
BR,
moonsettler
PS:
No hard opinion on temporary vs final restrictions. I wouldn't worry about it.
Sent with Proton Mail secure email.
On Friday, October 3rd, 2025 at 5:51 PM, Peter Todd <pete@petertodd.org> wrote:
> On Thu, Oct 02, 2025 at 01:42:06PM -0700, PortlandHODL wrote:
>
> > Proposing: Softfork to after (n) block height; the creation of outpoints
> > with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
> >
> > This is my gathering of information per BIP 0002
> >
> > After doing some research into the number of outpoints that would have
> > violated the proposed rule there are exactly 169 outpoints. With only 8
> > being non OP_RETURN. I think after 15 years and not having discovered use
> > for 'large' ScriptPubkeys; the reward for not invalidating them at the
> > consensus level is lower than the risk of their abuse.
> >
> > -
> > *Reasons for *
> > - Makes DoS blocks likely impossible to create that would have any
> > sufficient negative impact on the network.
>
>
> Further restricting v0 scripts is sufficient to achieve this goal. We do not
> need to actually prohibit >520 byte pushes.
>
> > - Leaves enough room for hooks long term
> > - Would substantially reduce the divergence between consensus and
> > relay policy
> > - Incredibly little use onchain as evidenced above.
> > - Could possibly reduce codebase complexity. Legacy Script is largely
> > considered a mess though this isn't a complete disablement it should reduce
> > the total surface that is problematic.
> > - Would make it harder to use the ScriptPubkey as a 'large'
> > datacarrier.
> > - Possible UTXO set size bloat reduction.
> >
> > - *Reasons Against *
> > - Bitcoin could need it in the future? Quantum?
>
>
> NACK, for exactly this reason. It's hard to predict what kind of math will be
> needed in the future for future signature algorithms. With taproot, we include
> bare pubkeys in scriptPubKeys for a good reason. It's quite possible that we'll
> want to do something similar with >520byte pubkeys for some future signature
>
> algorithm (e.g. quantum hard) or some other difficult to predict technical
> upgrade (the spendableness of scriptPubKeys with >520bytes isn't relevant to
>
> this discussion).
>
> > - Users could just create more outpoints.
>
>
> The second reason for my NACK. It makes no significant difference whether or
> not data is contiguous or split across multiple outputs. All the same concerns
> about arbitrary data ("spam") exist and will continue to be argued over even if
> we do a soft-fork to prohibit this. All we'll done is have used up valuable dev
> and political resources.
>
> --
> https://petertodd.org 'peter'[:-1]@petertodd.org
>
> --
> You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/aN_N4i4zZ5Dt8TdG%40petertodd.org.
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/ONFWceYdQT0aizGwn2vyyzdr2RZ9GlQ7vAfNfIRRO_IGsTaX-l3bghNiygjXmccG8UJO_7pxrAr2ZKbUrlvNrAZ83EfyPjzuAR26J7xp4bw%3D%40protonmail.com.
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-02 20:42 [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus PortlandHODL
` (4 preceding siblings ...)
2025-10-03 15:42 ` Anthony Towns
@ 2025-10-03 20:02 ` Luke Dashjr
2025-10-03 20:52 ` /dev /fd0
2025-10-08 15:03 ` Greg Tonoski
2025-10-15 20:04 ` [bitcoindev] " Casey Rodarmor
6 siblings, 2 replies; 46+ messages in thread
From: Luke Dashjr @ 2025-10-03 20:02 UTC (permalink / raw)
To: bitcoindev
[-- Attachment #1: Type: text/plain, Size: 4620 bytes --]
If we're going this route, we should just close all the gaps for the
immediate future:
- Limit (new) scriptPubKeys to 83 bytes or less. 34 doesn't seem
terrible. UTXOs are a huge cost to nodes, we should always keep them as
small as possible. Anything else can be hashed (if SHA256 is broken, we
need a hardfork anyway).
- Limit script data pushes to 256 bytes, with an exception for BIP16
redeem scripts.
- Make undefined witness/taproot versions invalid, including the annex
and OP_SUCCESS*. To make any legitimate usage of them, we need a
softfork anyway (see below about expiring this).
- Limit taproot control block to 257 bytes (128 scripts max), or at
least way less than it currently is. 340e36 scripts is completely
unrealistic.
- Make OP_IF invalid inside Tapscript. It should be unnecessary with
taproot, and has only(?) seen abuse.
We can do these all together in a temporary softfork that self-expires
after a year or two. This would buy time to come up with longer-term
solutions, and observe how it impacts the real world. Since it expires,
other softforks making use of upgradable mechanisms can just wait it out
for those mechanisms to become available again - therefore we basically
lose nothing. (This is intended to buy us time, not as a permanent fix.)
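As a rough sketch of what a self-expiring rule could look like (all heights and limits
below are hypothetical placeholders, not a concrete proposal):

    # Hypothetical parameters for illustration only.
    ACTIVATION_HEIGHT = 1_000_000
    EXPIRY_HEIGHT = ACTIVATION_HEIGHT + 2 * 52_560   # roughly two years of blocks
    MAX_NEW_SPK_SIZE = 83

    def temporary_rules_ok(block_height: int, created_spks: list) -> bool:
        # Outside the window the extra rules simply stop applying, so later
        # soft forks can reclaim the restricted space.
        if not (ACTIVATION_HEIGHT <= block_height < EXPIRY_HEIGHT):
            return True
        return all(len(spk) <= MAX_NEW_SPK_SIZE for spk in created_spks)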
Alternatively, but much more complex, we could redesign the block weight
metric so the above limits could be exceeded, but at a higher
weight-per-byte; perhaps weigh data 25% more per byte beyond the
expected size. This could also be a temporary softfork, perhaps with a
rolling window, so future softforks could be free to lower weights
should they be needed.
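A sketch of the kind of weight adjustment described above; the expected size and the
25% surcharge are placeholder numbers taken from the text, not a worked-out design:

    # Hypothetical numbers: 4 WU per byte as today, +25% only on the excess.
    EXPECTED_SPK_SIZE = 34      # e.g. a P2WSH or P2TR output
    SURCHARGE = 0.25

    def output_weight(spk_len: int) -> int:
        excess = max(0, spk_len - EXPECTED_SPK_SIZE)
        return 4 * spk_len + int(4 * SURCHARGE * excess)   # excess bytes cost 5 WU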
Another idea might be to increase the weight based on
coin-days-destroyed/coin-age, so rapid churn has a higher feerate than
occasional settlements. But this risks encouraging UTXO bloat, so needs
careful consideration to proceed further.
Happy to throw together a BIP and/or code if there's community support
for this.
Luke
On 10/2/25 16:42, PortlandHODL wrote:
> Proposing: Softfork to after (n) block height; the creation of
> outpoints with greater than 520 bytes in the ScriptPubkey would be
> consensus invalid.
>
> This is my gathering of information per BIP 0002
>
> After doing some research into the number of outpoints that would have
> violated the proposed rule there are exactly 169 outpoints. With only
> 8 being non OP_RETURN. I think after 15 years and not having
> discovered use for 'large' ScriptPubkeys; the reward for not
> invalidating them at the consensus level is lower than the risk of
> their abuse.
>
> * *Reasons for
> *
> o Makes DoS blocks likely impossible to create that would have
> any sufficient negative impact on the network.
> o Leaves enough room for hooks long term
> o Would substantially reduce the divergence between consensus
> and relay policy
> o Incredibly little use onchain as evidenced above.
> o Could possibly reduce codebase complexity. Legacy Script is
> largely considered a mess though this isn't a complete
> disablement it should reduce the total surface that is
> problematic.
> o Would make it harder to use the ScriptPubkey as a 'large'
> datacarrier.
> o Possible UTXO set size bloat reduction.
>
> * *Reasons Against *
> o Bitcoin could need it in the future? Quantum?
> o Users could just create more outpoints.
>
> Thoughts?
>
> source of onchain data
> <https://github.com/portlandhodl/portlandhodl/blob/main/greater_520_pubkeys.csv>
>
> PortlandHODL
>
> --
> You received this message because you are subscribed to the Google
> Groups "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com
> <https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com?utm_medium=email&utm_source=footer>.
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/001afe1d-0282-4c68-8b1c-ebcc778f57b0%40dashjr.org.
[-- Attachment #2: Type: text/html, Size: 6147 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-03 20:02 ` Luke Dashjr
@ 2025-10-03 20:52 ` /dev /fd0
2025-10-04 23:12 ` jeremy
2025-10-08 15:03 ` Greg Tonoski
1 sibling, 1 reply; 46+ messages in thread
From: /dev /fd0 @ 2025-10-03 20:52 UTC (permalink / raw)
To: Luke Dashjr; +Cc: bitcoindev
[-- Attachment #1: Type: text/plain, Size: 5674 bytes --]
Hi Luke,
> We can do these all together in a temporary softfork that self-expires
after a year or two.
That sounds reasonable and it could work if we can agree on the specifics
of this proposal. As Jeremy also mentioned in his email, we could set up an
auto-renewing restriction lasting 1–2 years with the option to remove it
later if we decide we want to.
/dev/fd0
floppy disk guy
On Sat, Oct 4, 2025 at 1:39 AM Luke Dashjr <luke@dashjr.org> wrote:
> If we're going this route, we should just close all the gaps for the
> immediate future:
>
> - Limit (new) scriptPubKeys to 83 bytes or less. 34 doesn't seem terrible.
> UTXOs are a huge cost to nodes, we should always keep them as small as
> possible. Anything else can be hashed (if SHA256 is broken, we need a
> hardfork anyway).
>
> - Limit script data pushes to 256 bytes, with an exception for BIP16
> redeem scripts.
>
> - Make undefined witness/taproot versions invalid, including the annex and
> OP_SUCCESS*. To make any legitimate usage of them, we need a softfork
> anyway (see below about expiring this).
>
> - Limit taproot control block to 257 bytes (128 scripts max), or at least
> way less than it currently is. 340e36 scripts is completely unrealistic.
>
> - Make OP_IF invalid inside Tapscript. It should be unnecessary with
> taproot, and has only(?) seen abuse.
>
> We can do these all together in a temporary softfork that self-expires
> after a year or two. This would buy time to come up with longer-term
> solutions, and observe how it impacts the real world. Since it expires,
> other softforks making use of upgradable mechanisms can just wait it out
> for those mechanisms to become available again - therefore we basically
> lose nothing. (This is intended to buy us time, not as a permanent fix.)
>
> Alternatively, but much more complex, we could redesign the block weight
> metric so the above limits could be exceeded, but at a higher
> weight-per-byte; perhaps weigh data 25% more per byte beyond the expected
> size. This could also be a temporary softfork, perhaps with a rolling
> window, so future softforks could be free to lower weights should they be
> needed.
>
> Another idea might be to increase the weight based on
> coin-days-destroyed/coin-age, so rapid churn has a higher feerate than
> occasional settlements. But this risks encouraging UTXO bloat, so needs
> careful consideration to proceed further.
>
> Happy to throw together a BIP and/or code if there's community support for
> this.
>
> Luke
>
>
> On 10/2/25 16:42, PortlandHODL wrote:
>
> Proposing: Softfork to after (n) block height; the creation of outpoints
> with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
>
> This is my gathering of information per BIP 0002
>
> After doing some research into the number of outpoints that would have
> violated the proposed rule there are exactly 169 outpoints. With only 8
> being non OP_RETURN. I think after 15 years and not having discovered use
> for 'large' ScriptPubkeys; the reward for not invalidating them at the
> consensus level is lower than the risk of their abuse.
>
> -
> *Reasons for *
> - Makes DoS blocks likely impossible to create that would have any
> sufficient negative impact on the network.
> - Leaves enough room for hooks long term
> - Would substantially reduce the divergence between consensus and
> relay policy
> - Incredibly little use onchain as evidenced above.
> - Could possibly reduce codebase complexity. Legacy Script is
> largely considered a mess though this isn't a complete disablement it
> should reduce the total surface that is problematic.
> - Would make it harder to use the ScriptPubkey as a 'large'
> datacarrier.
> - Possible UTXO set size bloat reduction.
>
> - *Reasons Against *
> - Bitcoin could need it in the future? Quantum?
> - Users could just create more outpoints.
>
> Thoughts?
>
> source of onchain data
> <https://github.com/portlandhodl/portlandhodl/blob/main/greater_520_pubkeys.csv>
>
> PortlandHODL
>
> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com
> <https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/001afe1d-0282-4c68-8b1c-ebcc778f57b0%40dashjr.org
> <https://groups.google.com/d/msgid/bitcoindev/001afe1d-0282-4c68-8b1c-ebcc778f57b0%40dashjr.org?utm_medium=email&utm_source=footer>
> .
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CALiT-Zo8wiZGCFeMwfd92zptw_cKz7ajMOjFWW%3DrdS9by3zYHQ%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 7414 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-03 20:52 ` /dev /fd0
@ 2025-10-04 23:12 ` jeremy
2025-10-05 10:59 ` Luke Dashjr
0 siblings, 1 reply; 46+ messages in thread
From: jeremy @ 2025-10-04 23:12 UTC (permalink / raw)
To: Bitcoin Development Mailing List
[-- Attachment #1.1: Type: text/plain, Size: 6691 bytes --]
- Limit taproot control block to 257 bytes (128 scripts max), or at least
way less than it currently is. 340e36 scripts is completely unrealistic.
This is a misunderstanding of the purpose of the taptree depth limit, which is not to
bound the number of elements directly.
It's a bound on the huffman encoding to optimize for on-chain cost with
many scripts and known likelihood of execution.
So the right way to constrain taproot is by bounding the minimum
probability of script execution. E.g., if a leaf has a one-in-4-billion chance of
executing, then you'd need depth 32.
128 depth was chosen because if a branch is (2^128 -1)/2^128 unlikely to
execute, then it's negligibly likely, the same order of probability as
being able to e.g. brute force a key.
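To make the relationship concrete, a small sketch relating depth, control block size,
and the minimum execution probability a depth limit implies (standard taproot control
block sizing assumed: 33 bytes plus 32 bytes per level of depth):

    import math

    def control_block_size(depth: int) -> int:
        # 1 leaf-version/parity byte + 32-byte internal key + 32 bytes per level.
        return 33 + 32 * depth

    def depth_for_probability(p: float) -> int:
        # Depth a Huffman-style tree assigns to a leaf executed with probability p.
        return math.ceil(math.log2(1 / p))

    print(control_block_size(7))            # 257 bytes -> at most 2**7 = 128 leaves
    print(control_block_size(128))          # 4129 bytes -> up to 2**128 (~3.4e38) leaves
    print(depth_for_probability(1 / 4e9))   # 32, matching the one-in-4-billion example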
On Friday, October 3, 2025 at 6:46:16 PM UTC-4 /dev /fd0 wrote:
> Hi Luke,
>
> > We can do these all together in a temporary softfork that self-expires
> after a year or two.
>
> That sounds reasonable and it could work if we can agree on the specifics
> of this proposal. As Jeremy also mentioned in his email, we could set up an
> auto-renewing restriction lasting 1–2 years with the option to remove it
> later if we decide we want to.
>
> /dev/fd0
> floppy disk guy
>
> On Sat, Oct 4, 2025 at 1:39 AM Luke Dashjr <lu...@dashjr.org> wrote:
>
>> If we're going this route, we should just close all the gaps for the
>> immediate future:
>>
>> - Limit (new) scriptPubKeys to 83 bytes or less. 34 doesn't seem
>> terrible. UTXOs are a huge cost to nodes, we should always keep them as
>> small as possible. Anything else can be hashed (if SHA256 is broken, we
>> need a hardfork anyway).
>>
>> - Limit script data pushes to 256 bytes, with an exception for BIP16
>> redeem scripts.
>>
>> - Make undefined witness/taproot versions invalid, including the annex
>> and OP_SUCCESS*. To make any legitimate usage of them, we need a softfork
>> anyway (see below about expiring this).
>>
>> - Limit taproot control block to 257 bytes (128 scripts max), or at least
>> way less than it currently is. 340e36 scripts is completely unrealistic.
>>
>> - Make OP_IF invalid inside Tapscript. It should be unnecessary with
>> taproot, and has only(?) seen abuse.
>>
>> We can do these all together in a temporary softfork that self-expires
>> after a year or two. This would buy time to come up with longer-term
>> solutions, and observe how it impacts the real world. Since it expires,
>> other softforks making use of upgradable mechanisms can just wait it out
>> for those mechanisms to become available again - therefore we basically
>> lose nothing. (This is intended to buy us time, not as a permanent fix.)
>>
>> Alternatively, but much more complex, we could redesign the block weight
>> metric so the above limits could be exceeded, but at a higher
>> weight-per-byte; perhaps weigh data 25% more per byte beyond the expected
>> size. This could also be a temporary softfork, perhaps with a rolling
>> window, so future softforks could be free to lower weights should they be
>> needed.
>>
>> Another idea might be to increase the weight based on
>> coin-days-destroyed/coin-age, so rapid churn has a higher feerate than
>> occasional settlements. But this risks encouraging UTXO bloat, so needs
>> careful consideration to proceed further.
>>
>> Happy to throw together a BIP and/or code if there's community support
>> for this.
>>
>> Luke
>>
>>
>> On 10/2/25 16:42, PortlandHODL wrote:
>>
>> Proposing: Softfork to after (n) block height; the creation of outpoints
>> with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
>>
>> This is my gathering of information per BIP 0002
>>
>> After doing some research into the number of outpoints that would have
>> violated the proposed rule there are exactly 169 outpoints. With only 8
>> being non OP_RETURN. I think after 15 years and not having discovered use
>> for 'large' ScriptPubkeys; the reward for not invalidating them at the
>> consensus level is lower than the risk of their abuse.
>>
>> -
>> *Reasons for *
>> - Makes DoS blocks likely impossible to create that would have any
>> sufficient negative impact on the network.
>> - Leaves enough room for hooks long term
>> - Would substantially reduce the divergence between consensus and
>> relay policy
>> - Incredibly little use onchain as evidenced above.
>> - Could possibly reduce codebase complexity. Legacy Script is
>> largely considered a mess though this isn't a complete disablement it
>> should reduce the total surface that is problematic.
>> - Would make it harder to use the ScriptPubkey as a 'large'
>> datacarrier.
>> - Possible UTXO set size bloat reduction.
>>
>> - *Reasons Against *
>> - Bitcoin could need it in the future? Quantum?
>> - Users could just create more outpoints.
>>
>> Thoughts?
>>
>> source of onchain data
>> <https://github.com/portlandhodl/portlandhodl/blob/main/greater_520_pubkeys.csv>
>>
>> PortlandHODL
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Bitcoin Development Mailing List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to bitcoindev+...@googlegroups.com.
>> To view this discussion visit
>> https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com
>> <https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Bitcoin Development Mailing List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to bitcoindev+...@googlegroups.com.
>>
> To view this discussion visit
>> https://groups.google.com/d/msgid/bitcoindev/001afe1d-0282-4c68-8b1c-ebcc778f57b0%40dashjr.org
>> <https://groups.google.com/d/msgid/bitcoindev/001afe1d-0282-4c68-8b1c-ebcc778f57b0%40dashjr.org?utm_medium=email&utm_source=footer>
>> .
>>
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/1e0f9843-0f08-4dea-b037-24df38bf8ed0n%40googlegroups.com.
[-- Attachment #1.2: Type: text/html, Size: 9562 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-03 16:15 ` Anthony Towns
@ 2025-10-05 9:59 ` Guus Ellenkamp
0 siblings, 0 replies; 46+ messages in thread
From: Guus Ellenkamp @ 2025-10-05 9:59 UTC (permalink / raw)
To: bitcoindev
If there are really so few OP_RETURN outputs of more than 144 bytes, then
why increase the limit if that change is so controversial? It seems
people who want to use a larger OP_RETURN size do it anyway, even with
the current default limits.
On 10/4/25 00:15, Anthony Towns wrote:
> On Fri, Oct 03, 2025 at 02:59:32PM +0000, Andrew Poelstra wrote:
>> If "it's cheaper to use witness data" were enough of a barrier, nobody
>> would be using OP_RETURN outputs today except for opentimestamps and
>> maybe some other super-low-load applications.
> It isn't cheaper to use witness data until you're publishing more than
> ~143 bytes of data, due to the overhead of the setup transaction. (It's
> also not cheaper if you want extremely easy proof of publication of
> the data)
>
> Excluding the couple of months between the topic of increasing the
> OP_RETURN limit was raised on this list and the increased limit was
> merged into Bitcoin Core master, there have, in fact, been very few
> OP_RETURN outputs generated that are above the ~143B size. In particular,
> between blocks 900k and 915,843 I get:
>
> 15,003,149 total OP_RETURN outputs
> 131 OP_RETURN outputs larger than 83 bytes
> 81 OP_RETURN outputs of 144 bytes or more
> 19,707 OP_RETURN outputs with non-zero value
>
> cf https://github.com/bitcoin/bitcoin/pull/33453#issuecomment-3341177765
>
> Cheers,
> aj
>
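A rough back-of-the-envelope check of that ~143-byte breakeven; the setup overhead
figure below is an assumption chosen to match the quoted number, not the exact weight
accounting used above:

    SETUP_OVERHEAD_WU = 430          # assumed weight of the extra setup transaction

    def op_return_cost(n: int) -> int:
        return 4 * n                 # output bytes cost 4 WU each

    def witness_cost(n: int) -> int:
        return n + SETUP_OVERHEAD_WU # witness bytes cost 1 WU each, plus setup

    # Breakeven where 4n = n + overhead, i.e. n = overhead / 3 (~143 bytes here).
    print(SETUP_OVERHEAD_WU / 3)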
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/83c68b1f-7b92-4a28-a79a-02d56eff2c84%40activediscovery.net.
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-04 23:12 ` jeremy
@ 2025-10-05 10:59 ` Luke Dashjr
0 siblings, 0 replies; 46+ messages in thread
From: Luke Dashjr @ 2025-10-05 10:59 UTC (permalink / raw)
To: bitcoindev
[-- Attachment #1: Type: text/plain, Size: 8353 bytes --]
Yes, sorry if I was unclear. The temporary restriction of 257 B is
ultimately based on the size, which doesn't accommodate that design
ideal. It's a tradeoff until a better solution is implemented. While it
might not be optimal in all cases to have 128 scripts, the fact remains
that size/depth _allows for it_. (And 128 depth is still unrealistic,
even if you don't like the script-count framing.)
Luke
On 10/4/25 19:12, jeremy wrote:
>
> - Limit taproot control block to 257 bytes (128 scripts max), or at
> least way less than it currently is. 340e36 scripts is completely
> unrealistic.
>
>
> this is a misunderstanding of taptree's depth purpose, which is not to
> bound the number of elements directly.
>
> It's a bound on the huffman encoding to optimize for on-chain cost
> with many scripts and known likelihood of execution.
>
> So the right way to constrain taproot is by bounding the minimum
> probability of script execution. E.g., if it's one-in-4 billion chance
> of executing, then you'd need depth 32.
>
> 128 depth was chosen because if a branch is (2^128 -1)/2^128 unlikely
> to execute, then it's negligibly likely, the same order of probability
> as being able to e.g. brute force a key.
>
> On Friday, October 3, 2025 at 6:46:16 PM UTC-4 /dev /fd0 wrote:
>
> Hi Luke,
>
> > We can do these all together in a temporary softfork that
> self-expires after a year or two.
>
> That sounds reasonable and it could work if we can agree on the
> specifics of this proposal. As Jeremy also mentioned in his email,
> we could set up an auto-renewing restriction lasting 1–2 years
> with the option to remove it later if we decide we want to.
>
> /dev/fd0
> floppy disk guy
>
> On Sat, Oct 4, 2025 at 1:39 AM Luke Dashjr <lu...@dashjr.org> wrote:
>
> If we're going this route, we should just close all the gaps
> for the immediate future:
>
> - Limit (new) scriptPubKeys to 83 bytes or less. 34 doesn't
> seem terrible. UTXOs are a huge cost to nodes, we should
> always keep them as small as possible. Anything else can be
> hashed (if SHA256 is broken, we need a hardfork anyway).
>
> - Limit script data pushes to 256 bytes, with an exception for
> BIP16 redeem scripts.
>
> - Make undefined witness/taproot versions invalid, including
> the annex and OP_SUCCESS*. To make any legitimate usage of
> them, we need a softfork anyway (see below about expiring this).
>
> - Limit taproot control block to 257 bytes (128 scripts max),
> or at least way less than it currently is. 340e36 scripts is
> completely unrealistic.
>
> - Make OP_IF invalid inside Tapscript. It should be
> unnecessary with taproot, and has only(?) seen abuse.
>
> We can do these all together in a temporary softfork that
> self-expires after a year or two. This would buy time to come
> up with longer-term solutions, and observe how it impacts the
> real world. Since it expires, other softforks making use of
> upgradable mechanisms can just wait it out for those
> mechanisms to become available again - therefore we basically
> lose nothing. (This is intended to buy us time, not as a
> permanent fix.)
>
> Alternatively, but much more complex, we could redesign the
> block weight metric so the above limits could be exceeded, but
> at a higher weight-per-byte; perhaps weigh data 25% more per
> byte beyond the expected size. This could also be a temporary
> softfork, perhaps with a rolling window, so future softforks
> could be free to lower weights should they be needed.
>
> Another idea might be to increase the weight based on
> coin-days-destroyed/coin-age, so rapid churn has a higher
> feerate than occasional settlements. But this risks
> encouraging UTXO bloat, so needs careful consideration to
> proceed further.
>
> Happy to throw together a BIP and/or code if there's community
> support for this.
>
> Luke
>
>
> On 10/2/25 16:42, PortlandHODL wrote:
>> Proposing: Softfork to after (n) block height; the creation
>> of outpoints with greater than 520 bytes in the ScriptPubkey
>> would be consensus invalid.
>>
>> This is my gathering of information per BIP 0002
>>
>> After doing some research into the number of outpoints that
>> would have violated the proposed rule there are exactly 169
>> outpoints. With only 8 being non OP_RETURN. I think after 15
>> years and not having discovered use for 'large'
>> ScriptPubkeys; the reward for not invalidating them at the
>> consensus level is lower than the risk of their abuse.
>>
>> * *Reasons for
>> *
>> o Makes DoS blocks likely impossible to create that
>> would have any sufficient negative impact on the network.
>> o Leaves enough room for hooks long term
>> o Would substantially reduce the divergence between
>> consensus and relay policy
>> o Incredibly little use onchain as evidenced above.
>> o Could possibly reduce codebase complexity. Legacy
>> Script is largely considered a mess though this isn't
>> a complete disablement it should reduce the total
>> surface that is problematic.
>> o Would make it harder to use the ScriptPubkey as a
>> 'large' datacarrier.
>> o Possible UTXO set size bloat reduction.
>>
>> * *Reasons Against *
>> o Bitcoin could need it in the future? Quantum?
>> o Users could just create more outpoints.
>>
>> Thoughts?
>>
>> source of onchain data
>> <https://github.com/portlandhodl/portlandhodl/blob/main/greater_520_pubkeys.csv>
>>
>> PortlandHODL
>>
>> --
>> You received this message because you are subscribed to the
>> Google Groups "Bitcoin Development Mailing List" group.
>> To unsubscribe from this group and stop receiving emails from
>> it, send an email to bitcoindev+...@googlegroups.com.
>> To view this discussion visit
>> https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com
>> <https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com?utm_medium=email&utm_source=footer>.
> --
> You received this message because you are subscribed to the
> Google Groups "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from
> it, send an email to bitcoindev+...@googlegroups.com.
>
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/001afe1d-0282-4c68-8b1c-ebcc778f57b0%40dashjr.org
> <https://groups.google.com/d/msgid/bitcoindev/001afe1d-0282-4c68-8b1c-ebcc778f57b0%40dashjr.org?utm_medium=email&utm_source=footer>.
>
> --
> You received this message because you are subscribed to the Google
> Groups "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send
> an email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/1e0f9843-0f08-4dea-b037-24df38bf8ed0n%40googlegroups.com
> <https://groups.google.com/d/msgid/bitcoindev/1e0f9843-0f08-4dea-b037-24df38bf8ed0n%40googlegroups.com?utm_medium=email&utm_source=footer>.
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/0585bba3-fb88-41d6-b86c-167774c14eb9%40dashjr.org.
[-- Attachment #2: Type: text/html, Size: 13348 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-03 20:02 ` Luke Dashjr
2025-10-03 20:52 ` /dev /fd0
@ 2025-10-08 15:03 ` Greg Tonoski
2025-10-08 18:15 ` Keagan McClelland
1 sibling, 1 reply; 46+ messages in thread
From: Greg Tonoski @ 2025-10-08 15:03 UTC (permalink / raw)
To: bitcoindev
[-- Attachment #1: Type: text/plain, Size: 6432 bytes --]
I'm for all the consensus proposals below and would further specify:
- limiting the maximum size of the scriptPubKey of a transaction to 67
bytes (to keep supporting P2PK and avoid impacting usability/compatibility
of old, deprecated software and risk of similar corner cases); good
riddance to P2MS;
- limit the maximum size of script data pushes to 73 bytes (for the same
reason, i.e. to keep supporting P2PK inputs: a signature with its encoding
overhead; also P2PKH) "with an exception for BIP16 redeem scripts [which
may embed multiple public keys for multisig]",
- rule out "OP_FALSE OP_IF" (CVE-2023-50428),
- discontinue P2SPAM (the one that repurposed the mnemonic OP_RETURN and
was standardized in 2014, commit a79342479f577013f2fd2573fb32585d6f4981b3
<https://github.com/bitcoinknots/bitcoin/commit/a79342479f577013f2fd2573fb32585d6f4981b3>
).
BTW I think we should also consider a consensus-level limit on the maximum
size of the so-called Witness field (3600 bytes max. by policy.h) along
with max. size (80 bytes max. by policy.h) and max. count of its items (100
by policy.h). Any suggestions, anybody?
--
Greg Tonoski
On Fri, Oct 3, 2025 at 10:09 PM Luke Dashjr <luke@dashjr.org> wrote:
> If we're going this route, we should just close all the gaps for the
> immediate future:
>
> - Limit (new) scriptPubKeys to 83 bytes or less. 34 doesn't seem terrible.
> UTXOs are a huge cost to nodes, we should always keep them as small as
> possible. Anything else can be hashed (if SHA256 is broken, we need a
> hardfork anyway).
>
> - Limit script data pushes to 256 bytes, with an exception for BIP16
> redeem scripts.
>
> - Make undefined witness/taproot versions invalid, including the annex and
> OP_SUCCESS*. To make any legitimate usage of them, we need a softfork
> anyway (see below about expiring this).
>
> - Limit taproot control block to 257 bytes (128 scripts max), or at least
> way less than it currently is. 340e36 scripts is completely unrealistic.
>
> - Make OP_IF invalid inside Tapscript. It should be unnecessary with
> taproot, and has only(?) seen abuse.
>
> We can do these all together in a temporary softfork that self-expires
> after a year or two. This would buy time to come up with longer-term
> solutions, and observe how it impacts the real world. Since it expires,
> other softforks making use of upgradable mechanisms can just wait it out
> for those mechanisms to become available again - therefore we basically
> lose nothing. (This is intended to buy us time, not as a permanent fix.)
>
> Alternatively, but much more complex, we could redesign the block weight
> metric so the above limits could be exceeded, but at a higher
> weight-per-byte; perhaps weigh data 25% more per byte beyond the expected
> size. This could also be a temporary softfork, perhaps with a rolling
> window, so future softforks could be free to lower weights should they be
> needed.
>
> Another idea might be to increase the weight based on
> coin-days-destroyed/coin-age, so rapid churn has a higher feerate than
> occasional settlements. But this risks encouraging UTXO bloat, so needs
> careful consideration to proceed further.
>
> Happy to throw together a BIP and/or code if there's community support for
> this.
>
> Luke
>
>
> On 10/2/25 16:42, PortlandHODL wrote:
>
> Proposing: Softfork to after (n) block height; the creation of outpoints
> with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
>
> This is my gathering of information per BIP 0002
>
> After doing some research into the number of outpoints that would have
> violated the proposed rule there are exactly 169 outpoints. With only 8
> being non OP_RETURN. I think after 15 years and not having discovered use
> for 'large' ScriptPubkeys; the reward for not invalidating them at the
> consensus level is lower than the risk of their abuse.
>
> -
> *Reasons for *
> - Makes DoS blocks likely impossible to create that would have any
> sufficient negative impact on the network.
> - Leaves enough room for hooks long term
> - Would substantially reduce the divergence between consensus and
> relay policy
> - Incredibly little use onchain as evidenced above.
> - Could possibly reduce codebase complexity. Legacy Script is
> largely considered a mess though this isn't a complete disablement it
> should reduce the total surface that is problematic.
> - Would make it harder to use the ScriptPubkey as a 'large'
> datacarrier.
> - Possible UTXO set size bloat reduction.
>
> - *Reasons Against *
> - Bitcoin could need it in the future? Quantum?
> - Users could just create more outpoints.
>
> Thoughts?
>
> source of onchain data
> <https://github.com/portlandhodl/portlandhodl/blob/main/greater_520_pubkeys.csv>
>
> PortlandHODL
>
> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com
> <https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/001afe1d-0282-4c68-8b1c-ebcc778f57b0%40dashjr.org
> <https://groups.google.com/d/msgid/bitcoindev/001afe1d-0282-4c68-8b1c-ebcc778f57b0%40dashjr.org?utm_medium=email&utm_source=footer>
> .
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAMHHROwJA5N%3DSfejF353oWFFsKVCtk5cpr9QkOv%2BZcqGDFz6oA%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 8247 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-08 15:03 ` Greg Tonoski
@ 2025-10-08 18:15 ` Keagan McClelland
0 siblings, 0 replies; 46+ messages in thread
From: Keagan McClelland @ 2025-10-08 18:15 UTC (permalink / raw)
To: Greg Tonoski; +Cc: bitcoindev
[-- Attachment #1: Type: text/plain, Size: 8427 bytes --]
Hard NACK on capping the witness size, as that would effectively ban large
scripts even in the P2SH wrapper, which undermines Bitcoin's ability to be
effectively programmable money.
The issue is that if you do that, you effectively make script unusable
for complex scripting or anything related to ZKPs. At that point you may as
well remove script altogether and just make Bitcoin a key-only
currency, which I think would be silly. I think making Bitcoin safely more
programmable should be the goal, not hamstringing what can be done with
script by capping the witness size. The "spam" (which I'll remind people is
an incoherent idea in a leaderless system) is a symptom of the inability
of a robust fee market to develop for block space.
I am hesitant to limit the scriptPubKey all the way down to 67 bytes.
Although it may be compelling to tighten it up as restrictively as
possible, if we find a reason to increase it again, it either has to be
done via a hard fork or via a significantly more complicated and subversive
mechanism. I think a gradual tightening based off of concrete observations
in the wild is a much more prudent approach.
Keags
On Wed, Oct 8, 2025 at 10:24 AM Greg Tonoski <greg.tonoski@gmail.com> wrote:
> I'm for all the consensus proposals below and would further specify:
> - limiting the maximum size of the scriptPubKey of a transaction to 67
> bytes (to keep supporting P2PK and avoid impacting usability/compatibility
> of old, deprecated software and risk of similar corner cases); good
> riddance to P2MS;
> - limit the maximum size of script data pushes to 73 bytes (for the same
> reason, i.e. keep supporting for P2PK input: signature with its encoding
> overhead; also P2PKH) "with an exception for BIP16 redeem scripts [which
> may embed multiple public keys for multisig]",
> - rule out "OP_FALSE OP_IF" (CVE-2023-50428),
> - discontinue P2SPAM (the one that repurposed the mnemonic OP_RETURN and
> was standardized in 2014, commit a79342479f577013f2fd2573fb32585d6f4981b3
> <https://github.com/bitcoinknots/bitcoin/commit/a79342479f577013f2fd2573fb32585d6f4981b3>
> ).
>
> BTW I think we should also consider consensus-wise limit on the maximum
> size of the so-called Witness field (3600 bytes max. by policy.h) along
> with max. size (80 bytes max. by policy.h) and max. count of its items (100
> by policy.h). Any suggestions, anybody?
>
> --
> Greg Tonoski
>
> On Fri, Oct 3, 2025 at 10:09 PM Luke Dashjr <luke@dashjr.org> wrote:
>
>> If we're going this route, we should just close all the gaps for the
>> immediate future:
>>
>> - Limit (new) scriptPubKeys to 83 bytes or less. 34 doesn't seem
>> terrible. UTXOs are a huge cost to nodes, we should always keep them as
>> small as possible. Anything else can be hashed (if SHA256 is broken, we
>> need a hardfork anyway).
>>
>> - Limit script data pushes to 256 bytes, with an exception for BIP16
>> redeem scripts.
>>
>> - Make undefined witness/taproot versions invalid, including the annex
>> and OP_SUCCESS*. To make any legitimate usage of them, we need a softfork
>> anyway (see below about expiring this).
>>
>> - Limit taproot control block to 257 bytes (128 scripts max), or at least
>> way less than it currently is. 340e36 scripts is completely unrealistic.
>>
>> - Make OP_IF invalid inside Tapscript. It should be unnecessary with
>> taproot, and has only(?) seen abuse.
>>
>> We can do these all together in a temporary softfork that self-expires
>> after a year or two. This would buy time to come up with longer-term
>> solutions, and observe how it impacts the real world. Since it expires,
>> other softforks making use of upgradable mechanisms can just wait it out
>> for those mechanisms to become available again - therefore we basically
>> lose nothing. (This is intended to buy us time, not as a permanent fix.)
>>
>> Alternatively, but much more complex, we could redesign the block weight
>> metric so the above limits could be exceeded, but at a higher
>> weight-per-byte; perhaps weigh data 25% more per byte beyond the expected
>> size. This could also be a temporary softfork, perhaps with a rolling
>> window, so future softforks could be free to lower weights should they be
>> needed.
>>
>> Another idea might be to increase the weight based on
>> coin-days-destroyed/coin-age, so rapid churn has a higher feerate than
>> occasional settlements. But this risks encouraging UTXO bloat, so needs
>> careful consideration to proceed further.
>>
>> Happy to throw together a BIP and/or code if there's community support
>> for this.
>>
>> Luke
>>
>>
>> On 10/2/25 16:42, PortlandHODL wrote:
>>
>> Proposing: Softfork to after (n) block height; the creation of outpoints
>> with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
>>
>> This is my gathering of information per BIP 0002
>>
>> After doing some research into the number of outpoints that would have
>> violated the proposed rule there are exactly 169 outpoints. With only 8
>> being non OP_RETURN. I think after 15 years and not having discovered use
>> for 'large' ScriptPubkeys; the reward for not invalidating them at the
>> consensus level is lower than the risk of their abuse.
>>
>> -
>> *Reasons for *
>> - Makes DoS blocks likely impossible to create that would have any
>> sufficient negative impact on the network.
>> - Leaves enough room for hooks long term
>> - Would substantially reduce the divergence between consensus and
>> relay policy
>> - Incredibly little use onchain as evidenced above.
>> - Could possibly reduce codebase complexity. Legacy Script is
>> largely considered a mess though this isn't a complete disablement it
>> should reduce the total surface that is problematic.
>> - Would make it harder to use the ScriptPubkey as a 'large'
>> datacarrier.
>> - Possible UTXO set size bloat reduction.
>>
>> - *Reasons Against *
>> - Bitcoin could need it in the future? Quantum?
>> - Users could just create more outpoints.
>>
>> Thoughts?
>>
>> source of onchain data
>> <https://github.com/portlandhodl/portlandhodl/blob/main/greater_520_pubkeys.csv>
>>
>> PortlandHODL
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Bitcoin Development Mailing List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to bitcoindev+unsubscribe@googlegroups.com.
>> To view this discussion visit
>> https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com
>> <https://groups.google.com/d/msgid/bitcoindev/6f6b570f-7f9d-40c0-a771-378eb2c0c701n%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Bitcoin Development Mailing List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to bitcoindev+unsubscribe@googlegroups.com.
>> To view this discussion visit
>> https://groups.google.com/d/msgid/bitcoindev/001afe1d-0282-4c68-8b1c-ebcc778f57b0%40dashjr.org
>> <https://groups.google.com/d/msgid/bitcoindev/001afe1d-0282-4c68-8b1c-ebcc778f57b0%40dashjr.org?utm_medium=email&utm_source=footer>
>> .
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/CAMHHROwJA5N%3DSfejF353oWFFsKVCtk5cpr9QkOv%2BZcqGDFz6oA%40mail.gmail.com
> <https://groups.google.com/d/msgid/bitcoindev/CAMHHROwJA5N%3DSfejF353oWFFsKVCtk5cpr9QkOv%2BZcqGDFz6oA%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CALeFGL0PDjtRt2rfbY4gTkoc%2B5oNQ0mn_obraE7PrtHuNYFpQw%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 10586 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-02 20:42 [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus PortlandHODL
` (5 preceding siblings ...)
2025-10-03 20:02 ` Luke Dashjr
@ 2025-10-15 20:04 ` Casey Rodarmor
2025-10-16 0:06 ` Greg Maxwell
6 siblings, 1 reply; 46+ messages in thread
From: Casey Rodarmor @ 2025-10-15 20:04 UTC (permalink / raw)
To: Bitcoin Development Mailing List
[-- Attachment #1.1: Type: text/plain, Size: 2622 bytes --]
I think that "Bitcoin could need it in the future?" might be a good enough
reason not to do this.
Script pubkeys are the only variable-length transaction fields which can be
covered by input signatures, which might make them useful for future soft
forks. I can imagine confidential asset schemes or post-quantum coin recovery
schemes requiring large proofs in the outputs, where the validity of the proof
determines whether or not the transaction is valid, and thus requires the
proofs to be in the outputs rather than just a hash commitment.
On Thursday, October 2, 2025 at 2:59:24 PM UTC-7 PortlandHODL wrote:
> Proposing: Softfork to after (n) block height; the creation of outpoints
> with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
>
> This is my gathering of information per BIP 0002
>
> After doing some research into the number of outpoints that would have
> violated the proposed rule there are exactly 169 outpoints. With only 8
> being non OP_RETURN. I think after 15 years and not having discovered use
> for 'large' ScriptPubkeys; the reward for not invalidating them at the
> consensus level is lower than the risk of their abuse.
>
> -
> *Reasons for *
> - Makes DoS blocks likely impossible to create that would have any
> sufficient negative impact on the network.
> - Leaves enough room for hooks long term
> - Would substantially reduce the divergence between consensus and
> relay policy
> - Incredibly little use onchain as evidenced above.
> - Could possibly reduce codebase complexity. Legacy Script is
> largely considered a mess though this isn't a complete disablement it
> should reduce the total surface that is problematic.
> - Would make it harder to use the ScriptPubkey as a 'large'
> datacarrier.
> - Possible UTXO set size bloat reduction.
>
> - *Reasons Against *
> - Bitcoin could need it in the future? Quantum?
> - Users could just create more outpoints.
>
> Thoughts?
>
> source of onchain data
> <https://github.com/portlandhodl/portlandhodl/blob/main/greater_520_pubkeys.csv>
>
> PortlandHODL
>
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/961e3c3a-a627-4a07-ae81-eb01f7a375a1n%40googlegroups.com.
[-- Attachment #1.2: Type: text/html, Size: 3339 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-15 20:04 ` [bitcoindev] " Casey Rodarmor
@ 2025-10-16 0:06 ` Greg Maxwell
2025-10-17 17:07 ` Brandon Black
0 siblings, 1 reply; 46+ messages in thread
From: Greg Maxwell @ 2025-10-16 0:06 UTC (permalink / raw)
To: Casey Rodarmor; +Cc: Bitcoin Development Mailing List
[-- Attachment #1: Type: text/plain, Size: 4089 bytes --]
That concern is why the annex exists, I believe. But taproot aside, that
is a point.
But also given that there are essentially no violations and no reason to
expect any I'm not sure the proposal is worth time relative to fixes of
actual moderately serious DOS attack issues.
I guess a fair point is that given the ongoing progress towards consensus
rules being the boundary of what gets mined, it would be nice to prevent
big outputs that would bloat the utxo set. OTOH any output over 10k is
already pruned in implementations (as spending it is consensus invalid), so
the gap here is really just between 520 and 10k.
But even if jumbo outputs were being created today I think they'd still be
a less pressing issue than several of the other consensus cleanup issues.
On Wed, Oct 15, 2025 at 11:45 PM Casey Rodarmor <casey@rodarmor.com> wrote:
> I think that "Bitcoin could need it in the future?" might be a good enough
> reason not to do this.
>
> Script pubkeys are the only variable-length transaction fields which can be
> covered by input signatures, which might make them useful for future soft
> forks. I can imagine confidential asset schemes or post-quantum coin
> recovery
> schemes requiring large proofs in the outputs, where the validity of the
> proof
> determined whether or not the transaction is valid, and thus require the
> proofs to be in the outputs, and not just a hash commitment.
> On Thursday, October 2, 2025 at 2:59:24 PM UTC-7 PortlandHODL wrote:
>
>> Proposing: Softfork to after (n) block height; the creation of outpoints
>> with greater than 520 bytes in the ScriptPubkey would be consensus invalid.
>>
>> This is my gathering of information per BIP 0002
>>
>> After doing some research into the number of outpoints that would have
>> violated the proposed rule there are exactly 169 outpoints. With only 8
>> being non OP_RETURN. I think after 15 years and not having discovered use
>> for 'large' ScriptPubkeys; the reward for not invalidating them at the
>> consensus level is lower than the risk of their abuse.
>>
>> -
>> *Reasons for *
>> - Makes DoS blocks likely impossible to create that would have any
>> sufficient negative impact on the network.
>> - Leaves enough room for hooks long term
>> - Would substantially reduce the divergence between consensus and
>> relay policy
>> - Incredibly little use onchain as evidenced above.
>> - Could possibly reduce codebase complexity. Legacy Script is
>> largely considered a mess though this isn't a complete disablement it
>> should reduce the total surface that is problematic.
>> - Would make it harder to use the ScriptPubkey as a 'large'
>> datacarrier.
>> - Possible UTXO set size bloat reduction.
>>
>> - *Reasons Against *
>> - Bitcoin could need it in the future? Quantum?
>> - Users could just create more outpoints.
>>
>> Thoughts?
>>
>> source of onchain data
>> <https://github.com/portlandhodl/portlandhodl/blob/main/greater_520_pubkeys.csv>
>>
>> PortlandHODL
>>
>> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/961e3c3a-a627-4a07-ae81-eb01f7a375a1n%40googlegroups.com
> <https://groups.google.com/d/msgid/bitcoindev/961e3c3a-a627-4a07-ae81-eb01f7a375a1n%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAAS2fgQG%3DMT7MM-_LdWWepxisci1i%2B7TprBq0EH2PQ4mAs73Ew%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 5076 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-16 0:06 ` Greg Maxwell
@ 2025-10-17 17:07 ` Brandon Black
2025-10-17 18:05 ` 'Antoine Poinsot' via Bitcoin Development Mailing List
0 siblings, 1 reply; 46+ messages in thread
From: Brandon Black @ 2025-10-17 17:07 UTC (permalink / raw)
To: Bitcoin Development Mailing List
On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> But also given that there are essentially no violations and no reason to
> expect any I'm not sure the proposal is worth time relative to fixes of
> actual moderately serious DOS attack issues.
I believe this limit would also stop most (all?) of PortlandHODL's
DoS blocks without having to make some of the other changes in GCC. I
think it's worthwhile to compare this approach to those proposed by
Antoine in solving these DoS vectors.
Best,
--Brandon
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-17 17:07 ` Brandon Black
@ 2025-10-17 18:05 ` 'Antoine Poinsot' via Bitcoin Development Mailing List
2025-10-18 1:01 ` Antoine Riard
2025-10-20 15:22 ` Greg Maxwell
0 siblings, 2 replies; 46+ messages in thread
From: 'Antoine Poinsot' via Bitcoin Development Mailing List @ 2025-10-17 18:05 UTC (permalink / raw)
To: Brandon Black; +Cc: Bitcoin Development Mailing List
Hi,
This approach was discussed last year when evaluating the best way to mitigate DoS blocks in terms
of gains compared to confiscatory surface. Limiting the size of created scriptPubKeys is not a
sufficient mitigation on its own, and has a non-trivial confiscatory surface.
One of the goals of BIP54 is to address objections to Matt's earlier proposal, notably the (in my
opinion reasonable) confiscation concerns voiced by Russell O'Connor. Limiting the size of
scriptPubKeys would in this regard be moving in the opposite direction.
Various approaches of limiting the size of spent scriptPubKeys were discussed, in forms that would
mitigate the confiscatory surface, to adopt in addition to (what eventually became) the BIP54 sigops
limit. However i decided against including this additional measure in BIP54 because:
- of the inherent complexity of the discussed schemes, which would make it hard to reason about
constructing transactions spending legacy inputs, and equally hard to evaluate the reduction of
the confiscatory surface;
- more importantly, there are steep diminishing returns to piling on more mitigations. The BIP54
limit on its own prevents an externally-motivated attacker from *unevenly* stalling the network
for dozens of minutes, and a revenue-maximizing miner from regularly stalling its competition
for dozens of seconds, at a minimized cost in confiscatory surface. Additional mitigations reduce
the worst case validation time by a smaller factor at a higher cost in terms of confiscatory
surface. It "feels right" to further reduce those numbers, but it's less clear what the tangible
gains would be.
Furthermore, it's always possible to get the biggest bang for our buck in a first step and go the
extra mile in a later, more controversial, soft fork. I previously floated the idea of a "cleanup
v2" in private discussions, and i think besides a reduction of the maximum scriptPubKey size it
should feature a consensus-enforced maximum transaction size for the reasons stated here:
https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8. I wouldn't hold my
breath on such a "cleanup v2", but it may be useful to have it documented somewhere.
I'm trying to not go into much details regarding which mitigations were considered in designing
BIP54, because they are tightly related to the design of various DoS blocks. But i'm always happy to
rehash the decisions made there and (re-)consider alternative approaches on the semi-private Delving
thread [0] dedicated to this purpose. Feel free to ping me to get access if i know you.
Best,
Antoine Poinsot
[0]: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <freedom@reardencode.com> wrote:
>
>
> On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
>
> > But also given that there are essentially no violations and no reason to
> > expect any I'm not sure the proposal is worth time relative to fixes of
> > actual moderately serious DOS attack issues.
>
>
> I believe this limit would also stop most (all?) of PortlandHODL's
> DoSblocks without having to make some of the other changes in GCC. I
> think it's worthwhile to compare this approach to those proposed by
> Antoine in solving these DoS vectors.
>
> Best,
>
> --Brandon
>
> --
> You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/OAoV-Uev9IosyhtUCyeIhclsVq-xUBZgGFROALaCKZkEFRNWSqbfDsVyiXnZ8B1TxKpfxmaULuwe4WpGHLI_iMdvPr5B0gM0nDvlwrKjChc%3D%40protonmail.com.
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-17 18:05 ` 'Antoine Poinsot' via Bitcoin Development Mailing List
@ 2025-10-18 1:01 ` Antoine Riard
2025-10-18 4:03 ` Greg Maxwell
2025-10-20 15:22 ` Greg Maxwell
1 sibling, 1 reply; 46+ messages in thread
From: Antoine Riard @ 2025-10-18 1:01 UTC (permalink / raw)
To: Bitcoin Development Mailing List
[-- Attachment #1.1: Type: text/plain, Size: 6697 bytes --]
Hi list,
Thanks to the annex being covered by the signature, I don't see how the concern about limiting
the extensibility of Bitcoin Script with future (post-quantum) cryptographic schemes holds.
Previous proposals for the annex were deliberately designed with variable-length fields
to flexibly accommodate a wide range of things.
I believe there is one thing that has not yet been proposed to limit unpredictable bursts
of spam on the blockchain, namely congestion control of categories of outputs (e.g. "fat"
scriptpubkeys). Let P be a block period, T a type of scriptpubkey, and L a limiting
threshold for the number of T occurrences during the period P. Beyond the L threshold, any
additional T scriptpubkey makes the block invalid. Or alternatively, any additional
transaction creating or spending a T output must pay some weight penalty...
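For concreteness, here is a minimal Python sketch of the first variant (hard invalidity beyond
the threshold), with the period P fixed at a single block for simplicity; the constants and
data layout are purely illustrative, not proposed values:

FAT_SPK_THRESHOLD = 520       # bytes; outputs at or above this count as type T (illustrative)
MAX_FAT_SPKS_PER_BLOCK = 10   # the limit L, with the period P fixed at one block (illustrative)

def block_respects_congestion_limit(block_txs):
    """Return False if the block creates more than L 'fat' scriptPubKeys."""
    fat_outputs = 0
    for tx in block_txs:
        for spk in tx["output_scripts"]:   # raw scriptPubKey bytes for each output
            if len(spk) >= FAT_SPK_THRESHOLD:
                fat_outputs += 1
                if fat_outputs > MAX_FAT_SPKS_PER_BLOCK:
                    return False           # block would be invalid under this scheme
    return True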
Congestion control, which of course comes with its lot of shenanigans, is not a very novel
idea, as I believe it has been floated a few times in the context of Lightning to solve mass
closures, where channels priced out at the current feerate would have their safety timelocks
scale up.
There would no longer be any need to come to social consensus on what is quantitatively "spam"
or not. The blockchain would automatically throttle out block-space-spamming transactions.
Qualitative spam is another question: for anyone who has ever read Shannon's theory of
communication, the only effective measure can be to limit the size of the data payload. But we
are probably quickly back to a non-mathematically-solvable linguistic question again [0].
Anyway, in the sleeping pond of consensus-fix fishes, I'm more in favor of prioritizing
a timewarp fix and limiting DoS-prone spends of old redeem scripts, rather than
shooting ourselves in the foot with ill-designed "spam" consensus mitigations.
[0] If you have the soul of a logician, it would be an interesting demonstration to
establish that we cannot come up with mathematical or cryptographic consensus means
to solve qualitative "spam", which in a very pure sense is a linguistic issue.
Best,
Antoine
OTS hash: 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a écrit :
> Hi,
>
> This approach was discussed last year when evaluating the best way to
> mitigate DoS blocks in terms
> of gains compared to confiscatory surface. Limiting the size of created
> scriptPubKeys is not a
> sufficient mitigation on its own, and has a non-trivial confiscatory
> surface.
>
> One of the goal of BIP54 is to address objections to Matt's earlier
> proposal, notably the (in my
> opinion reasonable) confiscation concerns voiced by Russell O'Connor.
> Limiting the size of
> scriptPubKeys would in this regard be moving in the opposite direction.
>
> Various approaches of limiting the size of spent scriptPubKeys were
> discussed, in forms that would
> mitigate the confiscatory surface, to adopt in addition to (what
> eventually became) the BIP54 sigops
> limit. However i decided against including this additional measure in
> BIP54 because:
> - of the inherent complexity of the discussed schemes, which would make it
> hard to reason about
> constructing transactions spending legacy inputs, and equally hard to
> evaluate the reduction of
> the confiscatory surface;
> - more importantly, there is steep diminishing returns to piling on more
> mitigations. The BIP54
> limit on its own prevents an externally-motivated attacker from *unevenly*
> stalling the network
> for dozens of minutes, and a revenue-maximizing miner from regularly
> stalling its competitions
> for dozens of seconds, at a minimized cost in confiscatory surface.
> Additional mitigations reduce
> the worst case validation time by a smaller factor at a higher cost in
> terms of confiscatory
> surface. It "feels right" to further reduce those numbers, but it's less
> clear what the tangible
> gains would be.
>
> Furthermore, it's always possible to get the biggest bang for our buck in
> a first step and going the
> extra mile in a later, more controversial, soft fork. I previously floated
> the idea of a "cleanup
> v2" in private discussions, and i think besides a reduction of the maximum
> scriptPubKey size it
> should feature a consensus-enforced maximum transaction size for the
> reasons stated here:
>
> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
> I wouldn't hold my
> breath on such a "cleanup v2", but it may be useful to have it documented
> somewhere.
>
> I'm trying to not go into much details regarding which mitigations were
> considered in designing
> BIP54, because they are tightly related to the design of various DoS
> blocks. But i'm always happy to
> rehash the decisions made there and (re-)consider alternative approaches
> on the semi-private Delving
> thread [0] dedicated to this purpose. Feel free to ping me to get access
> if i know you.
>
> Best,
> Antoine Poinsot
>
> [0]: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
>
>
>
>
> On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
> fre...@reardencode.com> wrote:
>
> >
> >
> > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> >
> > > But also given that there are essentially no violations and no reason
> to
> > > expect any I'm not sure the proposal is worth time relative to fixes of
> > > actual moderately serious DOS attack issues.
> >
> >
> > I believe this limit would also stop most (all?) of PortlandHODL's
> > DoSblocks without having to make some of the other changes in GCC. I
> > think it's worthwhile to compare this approach to those proposed by
> > Antoine in solving these DoS vectors.
> >
> > Best,
> >
> > --Brandon
> >
> > --
> > You received this message because you are subscribed to the Google
> Groups "Bitcoin Development Mailing List" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to bitcoindev+...@googlegroups.com.
> > To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/5135a031-a94e-49b9-ab31-a1eb48875ff2n%40googlegroups.com.
[-- Attachment #1.2: Type: text/html, Size: 8553 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-18 1:01 ` Antoine Riard
@ 2025-10-18 4:03 ` Greg Maxwell
2025-10-18 12:06 ` PortlandHODL
0 siblings, 1 reply; 46+ messages in thread
From: Greg Maxwell @ 2025-10-18 4:03 UTC (permalink / raw)
To: Antoine Riard; +Cc: Bitcoin Development Mailing List
[-- Attachment #1: Type: text/plain, Size: 8778 bytes --]
Limits on block construction that cross transactions make it harder to
accurately estimate fees and greatly complicate optimal block
construction-- the latter being important because smarter and more
compute-heavy mining code generating higher profits is a
pro-centralization factor.
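To illustrate the point, here is a toy Python sketch (not real miner code, and ignoring
package/ancestor handling entirely) of how a per-block cap on some category of transactions
forces a template builder to deviate from simple feerate ordering, which in turn makes fee
estimation for that category unreliable:

def build_template(mempool, max_weight, capped_limit):
    # mempool: list of dicts like {"fee": sats, "weight": wu, "capped": bool}
    selected, weight, capped_used = [], 0, 0
    for tx in sorted(mempool, key=lambda t: t["fee"] / t["weight"], reverse=True):
        if weight + tx["weight"] > max_weight:
            continue
        if tx["capped"] and capped_used >= capped_limit:
            continue        # skipped despite paying a top feerate
        selected.append(tx)
        weight += tx["weight"]
        capped_used += int(tx["capped"])
    return selected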
In terms of effectiveness, the "spam" will just make itself
indistinguishable from the most common transaction traffic from the
perspective of such metrics-- and might well drive up "spam" levels,
because the higher embedding cost may make some of them use more
transactions. The competition for these buckets by other traffic could
make it effectively a block size reduction even against very boring,
ordinary transactions. ... which is probably not what most people want.
I think it's important to keep in mind that Bitcoin fee levels even at
0.1 sat/vB are far beyond what other hosting services and other blockchains
cost-- so anyone still embedding data in Bitcoin *really* wants to be there
for some reason and isn't too fee-sensitive, or else they'd already be
using something else... some are even in favor of higher costs, since the
high fees are what create the scarcity needed for their seigniorage.
But yeah I think your comments on priorities are correct.
On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoine.riard@gmail.com>
wrote:
> Hi list,
>
> Thanks to the annex covered by the signature, I don't see how the concern
> about limiting
> the extensibility of bitcoin script with future (post-quantum)
> cryptographic schemes.
> Previous proposal of the annex were deliberately designed with
> variable-length fields
> to flexibly accomodate a wide range of things.
>
> I believe there is one thing that has not been proposed to limit
> unpredictable utterance
> of spams on the blockchain, namely congestion control of categories of
> outputs (e.g "fat"
> scriptpubkeys). Let's say P a block period, T a type of scriptpubkey and L
> a limiting
> threshold for the number of T occurences during the period P. Beyond the L
> threshold, any
> additional T scriptpubkey is making the block invalid. Or alternatively,
> any additional
> T generating / spending transaction must pay some weight penalty...
>
> Congestion control, which of course comes with its lot of shenanigans, is
> not very a novel
> idea as I believe it has been floated few times in the context of
> lightning to solve mass
> closure, where channels out-priced at current feerate would have their
> safety timelocks scale
> ups.
>
> No need anymore to come to social consensus on what is quantitative "spam"
> or not. The blockchain
> would automatically throttle out the block space spamming transaction.
> Qualitative spam it's another
> question, for anyone who has ever read shannon's theory of communication
> only effective thing can
> be to limit the size of data payload. But probably we're kickly back to a
> non-mathematically solvable
> linguistical question again [0].
>
> Anyway, in the sleeping pond of consensus fixes fishes, I'm more in favor
> of prioritizing
> a timewarp fix and limiting dosy spends by old redeem scripts, rather than
> engaging in shooting
> ourselves in the foot with ill-designed "spam" consensus mitigations.
>
> [0] If you have a soul of logician, it would be an interesting
> demonstration to come with
> to establish that we cannot come up with mathematically or
> cryptographically consensus means
> to solve qualitative "spam", which in a very pure sense is a linguistical
> issue.
>
> Best,
> Antoine
> OTS hash: 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
> Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a écrit :
>
>> Hi,
>>
>> This approach was discussed last year when evaluating the best way to
>> mitigate DoS blocks in terms
>> of gains compared to confiscatory surface. Limiting the size of created
>> scriptPubKeys is not a
>> sufficient mitigation on its own, and has a non-trivial confiscatory
>> surface.
>>
>> One of the goal of BIP54 is to address objections to Matt's earlier
>> proposal, notably the (in my
>> opinion reasonable) confiscation concerns voiced by Russell O'Connor.
>> Limiting the size of
>> scriptPubKeys would in this regard be moving in the opposite direction.
>>
>> Various approaches of limiting the size of spent scriptPubKeys were
>> discussed, in forms that would
>> mitigate the confiscatory surface, to adopt in addition to (what
>> eventually became) the BIP54 sigops
>> limit. However i decided against including this additional measure in
>> BIP54 because:
>> - of the inherent complexity of the discussed schemes, which would make
>> it hard to reason about
>> constructing transactions spending legacy inputs, and equally hard to
>> evaluate the reduction of
>> the confiscatory surface;
>> - more importantly, there is steep diminishing returns to piling on more
>> mitigations. The BIP54
>> limit on its own prevents an externally-motivated attacker from
>> *unevenly* stalling the network
>> for dozens of minutes, and a revenue-maximizing miner from regularly
>> stalling its competitions
>> for dozens of seconds, at a minimized cost in confiscatory surface.
>> Additional mitigations reduce
>> the worst case validation time by a smaller factor at a higher cost in
>> terms of confiscatory
>> surface. It "feels right" to further reduce those numbers, but it's less
>> clear what the tangible
>> gains would be.
>>
>> Furthermore, it's always possible to get the biggest bang for our buck in
>> a first step and going the
>> extra mile in a later, more controversial, soft fork. I previously
>> floated the idea of a "cleanup
>> v2" in private discussions, and i think besides a reduction of the
>> maximum scriptPubKey size it
>> should feature a consensus-enforced maximum transaction size for the
>> reasons stated here:
>>
>> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
>> I wouldn't hold my
>> breath on such a "cleanup v2", but it may be useful to have it documented
>> somewhere.
>>
>> I'm trying to not go into much details regarding which mitigations were
>> considered in designing
>> BIP54, because they are tightly related to the design of various DoS
>> blocks. But i'm always happy to
>> rehash the decisions made there and (re-)consider alternative approaches
>> on the semi-private Delving
>> thread [0] dedicated to this purpose. Feel free to ping me to get access
>> if i know you.
>>
>> Best,
>> Antoine Poinsot
>>
>> [0]: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
>>
>>
>>
>>
>> On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
>> fre...@reardencode.com> wrote:
>>
>> >
>> >
>> > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
>> >
>> > > But also given that there are essentially no violations and no reason
>> to
>> > > expect any I'm not sure the proposal is worth time relative to fixes
>> of
>> > > actual moderately serious DOS attack issues.
>> >
>> >
>> > I believe this limit would also stop most (all?) of PortlandHODL's
>> > DoSblocks without having to make some of the other changes in GCC. I
>> > think it's worthwhile to compare this approach to those proposed by
>> > Antoine in solving these DoS vectors.
>> >
>> > Best,
>> >
>> > --Brandon
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> Groups "Bitcoin Development Mailing List" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> an email to bitcoindev+...@googlegroups.com.
>> > To view this discussion visit
>> https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/5135a031-a94e-49b9-ab31-a1eb48875ff2n%40googlegroups.com
> <https://groups.google.com/d/msgid/bitcoindev/5135a031-a94e-49b9-ab31-a1eb48875ff2n%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAAS2fgTeerZWCJeMDFXF9sR4vowGsVcbp3M_FypDfSZW2qLZ%2BQ%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 10328 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-18 4:03 ` Greg Maxwell
@ 2025-10-18 12:06 ` PortlandHODL
2025-10-18 16:44 ` Greg Tonoski
` (2 more replies)
0 siblings, 3 replies; 46+ messages in thread
From: PortlandHODL @ 2025-10-18 12:06 UTC (permalink / raw)
To: Bitcoin Development Mailing List
[-- Attachment #1.1: Type: text/plain, Size: 17644 bytes --]
Hey,
First, thank you to everyone who responded, and please continue to do so.
There were many thought-provoking responses and this did shift my
perspective quite a bit from the original post, which in and of itself was
the goal to a degree.
I am currently only going to respond to the current concerns. Acks,
though I like them, will be ignored unless new discoveries are included.
Tl;dr (Portland's perspective)
- Confiscation is a problem because of presigned transactions
- DoS mitigation could also occur through marking UTXOs as unspendable if > 520 bytes; this
would preserve the proof of publication.
- Timeout / sunset logic is compelling
- The (n) value of acceptable bytes is contentious, with the lowest suggested limit being 67
- Congestion control is worth a look?
Next steps:
- Deeper discussion at the individual level: Antoine Poinsot and GCC
overlap?
- Write an implementation.
- Decide whether to pursue a BIP.
Responses
Andrew Poelstra:
> There is a risk of confiscation of coins which have pre-signed but
> unpublished transactions spending them to new outputs with large
> scriptPubKeys. Due to long-standing standardness rules, and the presence
> of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> such transactions exist.
PortlandHODL: This is a risk that can be incurred and is likely not possible
to mitigate, as there could be chains of transactions; even when
recursively iterating over a chain there is a chance that a presigned
transaction breaks this rule. Every idea I have had, from block redemption
limits on prevouts onward, seems to just be a coverage issue where you can
make the confiscation less likely but not completely mitigated.
Second, there are already TXs that have effectively been confiscated at the
policy level (P2SH cleanstack violation), where the user cannot find any
miner with a policy that accepts these into their mempool. (3 years)
/dev /fd0
> so it would be great if this was restricted to OP_RETURN
PortlandHODL: I reject this completely, as this would remove the UTXO-set
omission for the scriptPubkey and encourage miners to subvert the OP_RETURN
restriction and instead just use another opcode. This also does not address
some of the most important factors such as DoS mitigation and legacy Script
attack-surface reduction.
Peter Todd
> NACK ...
PortlandHODL: You NACK'd for the same reasons that I stated in my OP,
without including any additional context or reasoning.
jeremy
> I think that this type of rule is OK if we do it as a "sunsetting"
restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2
years, 5 years, 10 years).
If action is taken, this is the most reasonable approach. Alleviating
confiscatory concerns through deferral.
> You can argue against this example probably, but it is worth considering
that absence of evidence of use is not evidence of absence of use and I
myself feel that overall our understanding of Bitcoin transaction
programming possibilities is still early. If you don't like this example,
I can give you others (probably).
Agreed, and this also falls into the reasoning for deciding to utilize point
1 in your response. My thoughts on this would be along the lines of proof
of publication, as this change only has the effect of stripping away the
executable portion of a script between 521 and 10,000 bytes, or the
published data portion if > 10,000 bytes; the same data could likely
be published in chunked segments using outpoints.
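As a purely illustrative sketch of that alternative (not a specification; the category names
and exact boundaries here are mine), newly created outputs would be classified at creation
time rather than invalidating the block:

MAX_SCRIPT_SIZE = 10_000          # existing consensus limit on executable script size

def classify_created_output(spk: bytes) -> str:
    """Hypothetical classification at output-creation time (illustrative only)."""
    if len(spk) <= 520:
        return "spendable"            # unaffected by the idea floated here
    elif len(spk) <= MAX_SCRIPT_SIZE:
        return "publish-only"         # data stays published, but the output would be
                                      # treated as unspendable instead of invalidating
                                      # the block that created it
    else:
        return "already-unspendable"  # larger than MAX_SCRIPT_SIZE today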
Andrew Poelstra:
> Aside from proof-of-publication (i.e. data storage directly in the UTXO
> set) there is no usage of script which can't be equally (or better)
> accomplished by using a Segwit v0 or Taproot script.
This sums up the majority of the future use-case concern.
Anthony Towns:
> (If you restricted the change to only applying to scripts that used
non-push operators, that would probably still provide upgrade flexibility
while also preventing potential script abuses. But it wouldn't do anything
to prevent publishing data)
Could this not be done as segments in multiple outpoints using a
coordination outpoint? I fail to see why publication proof must be in a
single chunk. This does, however, bring another alternative to mind:
just making these outpoints unspendable rather than invalidating the block
that includes them...
> As far as the "but contiguous data will be regulated more strictly"
argument goes; I don't think "your honour, my offensive content has
strings of 4d0802 every 520 bytes
Correct, this was never meant to resolve this issue.
Luke Dashjr:
> If we're going this route, we should just close all the gaps for the
immediate future:
To put it nicely, this is completely beyond the scope of what is being
proposed.
Guus Ellenkamp:
> If there are really so few OP_RETURN outputs more than 144 bytes, then
why increase the limit if that change is so controversial? It seems
people who want to use a larger OP_RETURN size do it anyway, even with
the current default limits.
Completely off-topic and irrelevant.
Greg Tonoski:
> Limiting the maximum size of the scriptPubKey of a transaction to 67
bytes.
This leaves no room to deal with broken hashing algorithms and very little
future upgradability for hooks. The rest of these points should be merged
with Luke's response and either hijack my thread or start a new one with the
increased scope; any approach I take will only be related to the
scriptPubkey.
Keagan McClelland:
> Hard NACK on capping the witness size as that would effectively ban large
scripts even in the P2SH wrapper which undermines Bitcoin's ability to be
an effectively programmable money.
This has nothing to do with the witness size or even the P2SH wrapper.
Casey Rodarmor:
> I think that "Bitcoin could need it in the future?" might be a good enough
reason not to do this.
> Script pubkeys are the only variable-length transaction fields which can
be
covered by input signatures, which might make them useful for future soft
forks. I can imagine confidential asset schemes or post-quantum coin
recovery
schemes requiring large proofs in the outputs, where the validity of the
proof
determined whether or not the transaction is valid, and thus require the
proofs to be in the outputs, and not just a hash commitment.
Would the ability to publish the data alone be enough? For example, make the
output unspendable but allow for the existence of the bytes to be covered
by the signature?
Antoine Poinsot:
> Limiting the size of created scriptPubKeys is not a sufficient mitigation
on its own
I fail to see how this would not be sufficient. To DoS you need two things:
inputs with scriptPubkey redemptions plus heavy opcodes that require unique
checks. For example, DUPing a stack element again and again doesn't work. This
then leads to the next part: with the proposed (n) limit, the number of unique
complex operations you can pack per input is bounded.
> One of the goal of BIP54 is to address objections to Matt's earlier
proposal, notably the (in my
opinion reasonable) confiscation concerns voiced by Russell O'Connor.
Limiting the size of
scriptPubKeys would in this regard be moving in the opposite direction.
Some notes: I would actually go as far as to say the confiscation risk is
higher with the TX limit proposed in BIP54, as we actually have proof of
redemption of TXs that break that rule, and the input set to do this already
exists on-chain; no need to even wonder about the whole presigned-transaction concern.
bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
Please let me know if I am incorrect on any of this.
> Furthermore, it's always possible to get the biggest bang for our buck in
a first step
Agreed on bang for the buck regarding DoS.
My final point here would be that I would like to discuss this more; this
response is based on a first read of yours and could be incomplete
or incorrect. This is just my in-the-moment response.
Antoine Riard:
> Anyway, in the sleeping pond of consensus fixes fishes, I'm more in favor
of prioritizing
a timewarp fix and limiting dosy spends by old redeem scripts
The idea of congestion control is interesting, but this solution should
significantly reduce the total DoS severity of known vectors.
On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
> Limits on block construction that cross transactions make it harder to
> accurately estimate fees and greatly complicate optimal block
> construction-- the latter being important because smarter and more
> computer powered mining code generating higher profits is a pro
> centralization factor.
>
> In terms of effectiveness the "spam" will just make itself
> indistinguishable from the most common transaction traffic from the
> perspective of such metrics-- and might well drive up "spam" levels
> because the higher embedding cost may make some of them use more
> transactions. The competition for these buckets by other traffic could
> make it effectively a block size reduction even against very boring
> ordinary transactions. ... which is probably not what most people want.
>
> I think it's important to keep in mind that bitcoin fee levels even at
> 0.1s/vb are far beyond what other hosting services and other blockchains
> cost-- so anyone still embedding data in bitcoin *really* want to be there
> for some reason and aren't too fee sensitive or else they'd already be
> using something else... some are even in favor of higher costs since the
> high fees are what create the scarcity needed for their seigniorage.
>
> But yeah I think your comments on priorities are correct.
>
>
> On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com> wrote:
>
>> Hi list,
>>
>> Thanks to the annex covered by the signature, I don't see how the concern
>> about limiting
>> the extensibility of bitcoin script with future (post-quantum)
>> cryptographic schemes.
>> Previous proposal of the annex were deliberately designed with
>> variable-length fields
>> to flexibly accomodate a wide range of things.
>>
>> I believe there is one thing that has not been proposed to limit
>> unpredictable utterance
>> of spams on the blockchain, namely congestion control of categories of
>> outputs (e.g "fat"
>> scriptpubkeys). Let's say P a block period, T a type of scriptpubkey and
>> L a limiting
>> threshold for the number of T occurences during the period P. Beyond the
>> L threshold, any
>> additional T scriptpubkey is making the block invalid. Or alternatively,
>> any additional
>> T generating / spending transaction must pay some weight penalty...
>>
>> Congestion control, which of course comes with its lot of shenanigans, is
>> not very a novel
>> idea as I believe it has been floated few times in the context of
>> lightning to solve mass
>> closure, where channels out-priced at current feerate would have their
>> safety timelocks scale
>> ups.
>>
>> No need anymore to come to social consensus on what is quantitative
>> "spam" or not. The blockchain
>> would automatically throttle out the block space spamming transaction.
>> Qualitative spam it's another
>> question, for anyone who has ever read shannon's theory of communication
>> only effective thing can
>> be to limit the size of data payload. But probably we're kickly back to a
>> non-mathematically solvable
>> linguistical question again [0].
>>
>> Anyway, in the sleeping pond of consensus fixes fishes, I'm more in favor
>> of prioritizing
>> a timewarp fix and limiting dosy spends by old redeem scripts, rather
>> than engaging in shooting
>> ourselves in the foot with ill-designed "spam" consensus mitigations.
>>
>> [0] If you have a soul of logician, it would be an interesting
>> demonstration to come with
>> to establish that we cannot come up with mathematically or
>> cryptographically consensus means
>> to solve qualitative "spam", which in a very pure sense is a linguistical
>> issue.
>>
>> Best,
>> Antoine
>> OTS hash: 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
>> Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a écrit :
>>
>>> Hi,
>>>
>>> This approach was discussed last year when evaluating the best way to
>>> mitigate DoS blocks in terms
>>> of gains compared to confiscatory surface. Limiting the size of created
>>> scriptPubKeys is not a
>>> sufficient mitigation on its own, and has a non-trivial confiscatory
>>> surface.
>>>
>>> One of the goal of BIP54 is to address objections to Matt's earlier
>>> proposal, notably the (in my
>>> opinion reasonable) confiscation concerns voiced by Russell O'Connor.
>>> Limiting the size of
>>> scriptPubKeys would in this regard be moving in the opposite direction.
>>>
>>> Various approaches of limiting the size of spent scriptPubKeys were
>>> discussed, in forms that would
>>> mitigate the confiscatory surface, to adopt in addition to (what
>>> eventually became) the BIP54 sigops
>>> limit. However i decided against including this additional measure in
>>> BIP54 because:
>>> - of the inherent complexity of the discussed schemes, which would make
>>> it hard to reason about
>>> constructing transactions spending legacy inputs, and equally hard to
>>> evaluate the reduction of
>>> the confiscatory surface;
>>> - more importantly, there is steep diminishing returns to piling on more
>>> mitigations. The BIP54
>>> limit on its own prevents an externally-motivated attacker from
>>> *unevenly* stalling the network
>>> for dozens of minutes, and a revenue-maximizing miner from regularly
>>> stalling its competitions
>>> for dozens of seconds, at a minimized cost in confiscatory surface.
>>> Additional mitigations reduce
>>> the worst case validation time by a smaller factor at a higher cost in
>>> terms of confiscatory
>>> surface. It "feels right" to further reduce those numbers, but it's less
>>> clear what the tangible
>>> gains would be.
>>>
>>> Furthermore, it's always possible to get the biggest bang for our buck
>>> in a first step and going the
>>> extra mile in a later, more controversial, soft fork. I previously
>>> floated the idea of a "cleanup
>>> v2" in private discussions, and i think besides a reduction of the
>>> maximum scriptPubKey size it
>>> should feature a consensus-enforced maximum transaction size for the
>>> reasons stated here:
>>>
>>> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
>>> I wouldn't hold my
>>> breath on such a "cleanup v2", but it may be useful to have it
>>> documented somewhere.
>>>
>>> I'm trying to not go into much details regarding which mitigations were
>>> considered in designing
>>> BIP54, because they are tightly related to the design of various DoS
>>> blocks. But i'm always happy to
>>> rehash the decisions made there and (re-)consider alternative approaches
>>> on the semi-private Delving
>>> thread [0] dedicated to this purpose. Feel free to ping me to get access
>>> if i know you.
>>>
>>> Best,
>>> Antoine Poinsot
>>>
>>> [0]:
>>> https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
>>>
>>>
>>>
>>>
>>> On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
>>> fre...@reardencode.com> wrote:
>>>
>>> >
>>> >
>>> > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
>>> >
>>> > > But also given that there are essentially no violations and no
>>> reason to
>>> > > expect any I'm not sure the proposal is worth time relative to fixes
>>> of
>>> > > actual moderately serious DOS attack issues.
>>> >
>>> >
>>> > I believe this limit would also stop most (all?) of PortlandHODL's
>>> > DoSblocks without having to make some of the other changes in GCC. I
>>> > think it's worthwhile to compare this approach to those proposed by
>>> > Antoine in solving these DoS vectors.
>>> >
>>> > Best,
>>> >
>>> > --Brandon
>>> >
>>> > --
>>> > You received this message because you are subscribed to the Google
>>> Groups "Bitcoin Development Mailing List" group.
>>> > To unsubscribe from this group and stop receiving emails from it, send
>>> an email to bitcoindev+...@googlegroups.com.
>>> > To view this discussion visit
>>> https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.
>>>
>>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Bitcoin Development Mailing List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to bitcoindev+...@googlegroups.com.
>>
> To view this discussion visit
>> https://groups.google.com/d/msgid/bitcoindev/5135a031-a94e-49b9-ab31-a1eb48875ff2n%40googlegroups.com
>> <https://groups.google.com/d/msgid/bitcoindev/5135a031-a94e-49b9-ab31-a1eb48875ff2n%40googlegroups.com?utm_medium=email&utm_source=footer>
>> .
>>
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/78475572-3e52-44e4-8116-8f1a917995a4n%40googlegroups.com.
[-- Attachment #1.2: Type: text/html, Size: 20637 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-18 12:06 ` PortlandHODL
@ 2025-10-18 16:44 ` Greg Tonoski
2025-10-18 16:54 ` /dev /fd0
2025-10-22 8:07 ` 'moonsettler' via Bitcoin Development Mailing List
2 siblings, 0 replies; 46+ messages in thread
From: Greg Tonoski @ 2025-10-18 16:44 UTC (permalink / raw)
To: Bitcoin Development Mailing List
[-- Attachment #1: Type: text/plain, Size: 973 bytes --]
>
> > Limiting the maximum size of the scriptPubKey of a transaction to 67
> bytes.
>
> This leave no room to deal with broken hashing algorithms and very little
> future upgradability for hooks.
>
Can I ask for an example of such hooks for which room for "future
upgradability" may be needed, please? I am not familiar with the subject
and would like to learn more about it in order to evaluate the argument.
I disagree with the premise that a larger maximum size of scriptPubKey is
necessary for dealing with "broken hashing algorithms". Besides, I would
suggest the YAGNI principle.
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAMHHROzPZwh2boUW_cgMZZUVm5hK%2BSi0OHWLMQRL8a720EsMOw%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 1512 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-18 12:06 ` PortlandHODL
2025-10-18 16:44 ` Greg Tonoski
@ 2025-10-18 16:54 ` /dev /fd0
2025-10-22 8:07 ` 'moonsettler' via Bitcoin Development Mailing List
2 siblings, 0 replies; 46+ messages in thread
From: /dev /fd0 @ 2025-10-18 16:54 UTC (permalink / raw)
To: PortlandHODL; +Cc: Bitcoin Development Mailing List
[-- Attachment #1: Type: text/plain, Size: 19322 bytes --]
Hi PortlandHODL,
> PortlandHODL: I reject this completely as this would remove the UTXOset
omission for the scriptPubkey
Your proposed solution would affect the UTXO set negatively if someone is
really motivated to use the scriptpubkey for arbitrary data. They will use
multiple outputs, as people do with [DNS records][0].
> and encourage miners to subvert the OP_RETURN restriction and instead
just use another op_code
What would motivate users to follow this approach, considering that storing
data in the witness is cheaper?
[0]: https://asherfalcon.com/blog/posts/2
[1]: https://docs.ordinals.com/guides/batch-inscribing.html
/dev/fd0
floppy disk guy
On Sat, Oct 18, 2025 at 6:45 PM PortlandHODL <admin@qrsnap.io> wrote:
> Hey,
>
> First, thank you to everyone who responded, and please continue to do so.
> There were many thought provoking responses and this did shift my
> perspective quite a bit from the original post, which in of itself was the
> goal to a degree.
>
> I am currently only going to respond to all of the current concerns. Acks;
> though I like them will be ignored unless new discoveries are included.
>
> Tl;dr (Portlands Perspective)
> - Confiscation is a problem because of presigned transactions
> - DoS mitigation could also occur through marking UTXOs as unspendable if
> > 520 bytes, this would preserve the proof of publication.
> - Timeout / Sunset logic is compelling
> - The (n) value of acceptable needed bytes is contentious with the lower
> suggested limit being 67
> - Congestion control is worth a look?
>
> Next Step:
> - Deeper discussion at the individual level: Antoine Poinsot and GCC
> overlap?
> - Write an implementation.
> - Decide to pursue BIP
>
> Responses
>
> Andrew Poelstra:
> > There is a risk of confiscation of coins which have pre-signed but
> > unpublished transactions spending them to new outputs with large
> > scriptPubKeys. Due to long-standing standardness rules, and the presence
> > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> > such transactions exist.
>
> PortlandHODL: This is a risk that can be incurred and likely not possible
> to mitigate as there could be possible chains of transactions so even when
> recursively iterating over a chain there is a chance that a presigned
> breaks this rule. Every idea I have had from block redemption limits on
> prevouts seems to just be a coverage issue where you can make the
> confiscation less likely but not completely mitigated.
>
> Second, there are already TXs that effectively have been confiscated at
> the policy level (P2SH Cleanstack violation) where the user can not find
> any miner with a policy to accept these into their mempool. (3 years)
>
> /dev /fd0
> > so it would be great if this was restricted to OP_RETURN
>
> PortlandHODL: I reject this completely as this would remove the UTXOset
> omission for the scriptPubkey and encourage miners to subvert the OP_RETURN
> restriction and instead just use another op_code, this also do not hit on
> some of the most important factors such as DoS mitigation and legacy script
> attack surface reduction.
>
> Peter Todd
> > NACK ...
>
> PortlandHODL: You NACK'd for the same reasons that I stated in my OP,
> without including any additional context or reasoning.
>
> jeremy
> > I think that this type of rule is OK if we do it as a "sunsetting"
> restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2
> years, 5 years, 10 years).
>
> If action is taken, this is the most reasonable approach. Alleviating
> confiscatory concerns through deferral.
>
> > You can argue against this example probably, but it is worth considering
> that absence of evidence of use is not evidence of absence of use and I
> myself feel that overall our understanding of Bitcoin transaction
> programming possibilities is still early. If you don't like this example,
> I can give you others (probably).
>
> Agreed and this also falls into the reasoning for deciding to utilize
> point 1 in your response. My thoughts on this would be along the lines of
> proof of publication as this change only has the effect of stripping away
> the executable portion of a script between 521 and 10_000 bytes or the
> published data portion if > 10_000 bytes which the same data could likely
> be published in chunked segments using outpoints.
>
> Andrew Poelstra:
> > Aside from proof-of-publication (i.e. data storage directly in the UTXO
> > set) there is no usage of script which can't be equally (or better)
> > accomplished by using a Segwit v0 or Taproot script.
>
> This sums up the majority of future usecase concern
>
> Anthony Towns:
> > (If you restricted the change to only applying to scripts that used
> non-push operators, that would probably still provide upgrade flexibility
> while also preventing potential script abuses. But it wouldn't do anything
> to prevent publishing data)
>
> Could this not be done as segments in multiple outpoints using a
> coordination outpoint? I fail to see why publication proof must be in a
> single chunk. This does though however bring another alternative to mind,
> just making these outpoints unspendable but not invalidate the block
> through inclusion...
>
> > As far as the "but contiguous data will be regulated more strictly"
> argument goes; I don't think "your honour, my offensive content has
> strings of 4d0802 every 520 bytes
>
> Correct, this was never meant to resolve this issue.
>
> Luke Dashjr:
> > If we're going this route, we should just close all the gaps for the
> immediate future:
>
> To put it nicely, this is completely beyond the scope of what is being
> proposed.
>
> Guus Ellenkamp:
> > If there are really so few OP_RETURN outputs more than 144 bytes, then
> why increase the limit if that change is so controversial? It seems
> people who want to use a larger OP_RETURN size do it anyway, even with
> the current default limits.
>
> Completely off topic and irrelevant
>
> Greg Tonoski:
> > Limiting the maximum size of the scriptPubKey of a transaction to 67
> bytes.
>
> This leave no room to deal with broken hashing algorithms and very little
> future upgradability for hooks. The rest of these points should be merged
> with Lukes response and either hijack my thread or start a new one with the
> increased scope, any approach I take will only be related to the
> ScriptPubkey
>
> Keagan McClelland:
> > Hard NACK on capping the witness size as that would effectively ban
> large scripts even in the P2SH wrapper which undermines Bitcoin's ability
> to be an effectively programmable money.
>
> This has nothing to do with the witness size or even the P2SH wrapper
>
> Casey Rodarmor:
> > I think that "Bitcoin could need it in the future?" might be a good
> enough
> reason not to do this.
>
> > Script pubkeys are the only variable-length transaction fields which can
> be
> covered by input signatures, which might make them useful for future soft
> forks. I can imagine confidential asset schemes or post-quantum coin
> recovery
> schemes requiring large proofs in the outputs, where the validity of the
> proof
> determined whether or not the transaction is valid, and thus require the
> proofs to be in the outputs, and not just a hash commitment.
>
> Would the ability to publish the data alone be enough? Example make the
> output unspendable but allow for the existence of the bytes to be covered
> through the signature?
>
>
> Antoine Poinsot:
> > Limiting the size of created scriptPubKeys is not a sufficient
> mitigation on its own
> I fail to see how this would not be sufficient? To DoS you need 2 things
> inputs with ScriptPubkey redemptions + heavy op_codes that require unique
> checks. Example DUPing stack element again and again doesn't work. This
> then leads to the next part is you could get up to unique complex
> operations with the current (n) limit included per input.
>
> > One of the goal of BIP54 is to address objections to Matt's earlier
> proposal, notably the (in my
> opinion reasonable) confiscation concerns voiced by Russell O'Connor.
> Limiting the size of
> scriptPubKeys would in this regard be moving in the opposite direction.
>
> Some notes is I would actually go as far as to say the confiscation risk
> is higher with the TX limit proposed in BIP54 as we actually have proof of
> redemption of TXs that break that rule and the input set to do this already
> exists on-chain no need to even wonder about the whole presigned.
> bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
>
> Please let me know if I am incorrect on any of this.
>
> > Furthermore, it's always possible to get the biggest bang for our buck
> in a first step
>
> Agreed on bang for the buck regarding DoS.
>
> My final point here would be that I would like to discuss more, and this
> is response is from the initial view of your response and could be
> incomplete or incorrect, This is just my in the moment response.
>
> Antoine Riard:
> > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
> favor of prioritizing
> a timewarp fix and limiting dosy spends by old redeem scripts
>
> The idea of congestion control is interesting, but this solution should
> significantly reduce the total DoS severity of known vectors.
>
> On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
>
>> Limits on block construction that cross transactions make it harder to
>> accurately estimate fees and greatly complicate optimal block
>> construction-- the latter being important because smarter and more
>> computer powered mining code generating higher profits is a pro
>> centralization factor.
>>
>> In terms of effectiveness the "spam" will just make itself
>> indistinguishable from the most common transaction traffic from the
>> perspective of such metrics-- and might well drive up "spam" levels
>> because the higher embedding cost may make some of them use more
>> transactions. The competition for these buckets by other traffic could
>> make it effectively a block size reduction even against very boring
>> ordinary transactions. ... which is probably not what most people want.
>>
>> I think it's important to keep in mind that bitcoin fee levels even at
>> 0.1s/vb are far beyond what other hosting services and other blockchains
>> cost-- so anyone still embedding data in bitcoin *really* want to be there
>> for some reason and aren't too fee sensitive or else they'd already be
>> using something else... some are even in favor of higher costs since the
>> high fees are what create the scarcity needed for their seigniorage.
>>
>> But yeah I think your comments on priorities are correct.
>>
>>
>> On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com>
>> wrote:
>>
>>> Hi list,
>>>
>>> Thanks to the annex covered by the signature, I don't see how the
>>> concern about limiting
>>> the extensibility of bitcoin script with future (post-quantum)
>>> cryptographic schemes.
>>> Previous proposal of the annex were deliberately designed with
>>> variable-length fields
>>> to flexibly accomodate a wide range of things.
>>>
>>> I believe there is one thing that has not been proposed to limit
>>> unpredictable utterance
>>> of spams on the blockchain, namely congestion control of categories of
>>> outputs (e.g "fat"
>>> scriptpubkeys). Let's say P a block period, T a type of scriptpubkey and
>>> L a limiting
>>> threshold for the number of T occurences during the period P. Beyond the
>>> L threshold, any
>>> additional T scriptpubkey is making the block invalid. Or alternatively,
>>> any additional
>>> T generating / spending transaction must pay some weight penalty...
>>>
>>> Congestion control, which of course comes with its lot of shenanigans,
>>> is not very a novel
>>> idea as I believe it has been floated few times in the context of
>>> lightning to solve mass
>>> closure, where channels out-priced at current feerate would have their
>>> safety timelocks scale
>>> ups.
>>>
>>> No need anymore to come to social consensus on what is quantitative
>>> "spam" or not. The blockchain
>>> would automatically throttle out the block space spamming transaction.
>>> Qualitative spam it's another
>>> question, for anyone who has ever read shannon's theory of communication
>>> only effective thing can
>>> be to limit the size of data payload. But probably we're kickly back to
>>> a non-mathematically solvable
>>> linguistical question again [0].
>>>
>>> Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
>>> favor of prioritizing
>>> a timewarp fix and limiting dosy spends by old redeem scripts, rather
>>> than engaging in shooting
>>> ourselves in the foot with ill-designed "spam" consensus mitigations.
>>>
>>> [0] If you have a soul of logician, it would be an interesting
>>> demonstration to come with
>>> to establish that we cannot come up with mathematically or
>>> cryptographically consensus means
>>> to solve qualitative "spam", which in a very pure sense is a
>>> linguistical issue.
>>>
>>> Best,
>>> Antoine
>>> OTS hash:
>>> 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
>>> Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a écrit :
>>>
>>>> Hi,
>>>>
>>>> This approach was discussed last year when evaluating the best way to
>>>> mitigate DoS blocks in terms
>>>> of gains compared to confiscatory surface. Limiting the size of created
>>>> scriptPubKeys is not a
>>>> sufficient mitigation on its own, and has a non-trivial confiscatory
>>>> surface.
>>>>
>>>> One of the goal of BIP54 is to address objections to Matt's earlier
>>>> proposal, notably the (in my
>>>> opinion reasonable) confiscation concerns voiced by Russell O'Connor.
>>>> Limiting the size of
>>>> scriptPubKeys would in this regard be moving in the opposite direction.
>>>>
>>>> Various approaches of limiting the size of spent scriptPubKeys were
>>>> discussed, in forms that would
>>>> mitigate the confiscatory surface, to adopt in addition to (what
>>>> eventually became) the BIP54 sigops
>>>> limit. However i decided against including this additional measure in
>>>> BIP54 because:
>>>> - of the inherent complexity of the discussed schemes, which would make
>>>> it hard to reason about
>>>> constructing transactions spending legacy inputs, and equally hard to
>>>> evaluate the reduction of
>>>> the confiscatory surface;
>>>> - more importantly, there is steep diminishing returns to piling on
>>>> more mitigations. The BIP54
>>>> limit on its own prevents an externally-motivated attacker from
>>>> *unevenly* stalling the network
>>>> for dozens of minutes, and a revenue-maximizing miner from regularly
>>>> stalling its competitions
>>>> for dozens of seconds, at a minimized cost in confiscatory surface.
>>>> Additional mitigations reduce
>>>> the worst case validation time by a smaller factor at a higher cost in
>>>> terms of confiscatory
>>>> surface. It "feels right" to further reduce those numbers, but it's
>>>> less clear what the tangible
>>>> gains would be.
>>>>
>>>> Furthermore, it's always possible to get the biggest bang for our buck
>>>> in a first step and going the
>>>> extra mile in a later, more controversial, soft fork. I previously
>>>> floated the idea of a "cleanup
>>>> v2" in private discussions, and i think besides a reduction of the
>>>> maximum scriptPubKey size it
>>>> should feature a consensus-enforced maximum transaction size for the
>>>> reasons stated here:
>>>>
>>>> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
>>>> I wouldn't hold my
>>>> breath on such a "cleanup v2", but it may be useful to have it
>>>> documented somewhere.
>>>>
>>>> I'm trying to not go into much details regarding which mitigations were
>>>> considered in designing
>>>> BIP54, because they are tightly related to the design of various DoS
>>>> blocks. But i'm always happy to
>>>> rehash the decisions made there and (re-)consider alternative
>>>> approaches on the semi-private Delving
>>>> thread [0] dedicated to this purpose. Feel free to ping me to get
>>>> access if i know you.
>>>>
>>>> Best,
>>>> Antoine Poinsot
>>>>
>>>> [0]:
>>>> https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
>>>>
>>>>
>>>>
>>>>
>>>> On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
>>>> fre...@reardencode.com> wrote:
>>>>
>>>> >
>>>> >
>>>> > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
>>>> >
>>>> > > But also given that there are essentially no violations and no
>>>> reason to
>>>> > > expect any I'm not sure the proposal is worth time relative to
>>>> fixes of
>>>> > > actual moderately serious DOS attack issues.
>>>> >
>>>> >
>>>> > I believe this limit would also stop most (all?) of PortlandHODL's
>>>> > DoSblocks without having to make some of the other changes in GCC. I
>>>> > think it's worthwhile to compare this approach to those proposed by
>>>> > Antoine in solving these DoS vectors.
>>>> >
>>>> > Best,
>>>> >
>>>> > --Brandon
>>>> >
>>>> > --
>>>> > You received this message because you are subscribed to the Google
>>>> Groups "Bitcoin Development Mailing List" group.
>>>> > To unsubscribe from this group and stop receiving emails from it,
>>>> send an email to bitcoindev+...@googlegroups.com.
>>>> > To view this discussion visit
>>>> https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.
>>>>
>>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Bitcoin Development Mailing List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to bitcoindev+...@googlegroups.com.
>>>
>> To view this discussion visit
>>> https://groups.google.com/d/msgid/bitcoindev/5135a031-a94e-49b9-ab31-a1eb48875ff2n%40googlegroups.com
>>> <https://groups.google.com/d/msgid/bitcoindev/5135a031-a94e-49b9-ab31-a1eb48875ff2n%40googlegroups.com?utm_medium=email&utm_source=footer>
>>> .
>>>
>> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/78475572-3e52-44e4-8116-8f1a917995a4n%40googlegroups.com
> <https://groups.google.com/d/msgid/bitcoindev/78475572-3e52-44e4-8116-8f1a917995a4n%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CALiT-ZpYav87WPn-hrFEcSrvLet95_B%3DMPzqi6kk%3D_nSnQj1VQ%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 21517 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-17 18:05 ` 'Antoine Poinsot' via Bitcoin Development Mailing List
2025-10-18 1:01 ` Antoine Riard
@ 2025-10-20 15:22 ` Greg Maxwell
2025-10-21 19:05 ` Garlo Nicon
1 sibling, 1 reply; 46+ messages in thread
From: Greg Maxwell @ 2025-10-20 15:22 UTC (permalink / raw)
To: Antoine Poinsot; +Cc: Brandon Black, Bitcoin Development Mailing List
[-- Attachment #1: Type: text/plain, Size: 8659 bytes --]
Perhaps it's also worth explicitly pointing out for people following at
home how this proposal has a very real confiscation risk: bare
multisigs can easily violate the proposed limit-- if uncompressed points
are used an "of 8" policy is sufficient, otherwise I think 16 keys are needed,
but both are within the limit of 20 in CHECKMULTISIG. This is much worse than
other confiscation concerns that have gummed up most (all?) other cleanup
proposals, because rather than requiring some very contrived thing that
probably no one would have ever done except as lols, bare multisig is a
thing that has actually seen real use and could have been created by
someone doing something completely boring... doubly so because the
inadvertent P2SH script size limit may have explicitly pushed people into
using bare CMS for a large policy when otherwise bare CMS is at least a
little weird.
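For readers who want to check the arithmetic, a quick back-of-the-envelope calculation
(assuming n <= 16 so the OP_n opcode is a single byte) reproduces those numbers:

def bare_cms_spk_size(n_keys: int, compressed: bool) -> int:
    """Approximate size of a bare m-of-n CHECKMULTISIG scriptPubKey, for n <= 16."""
    key_len = 33 if compressed else 65
    # OP_m + n * (push opcode + key) + OP_n + OP_CHECKMULTISIG
    return 1 + n_keys * (1 + key_len) + 1 + 1

print(bare_cms_spk_size(8, compressed=False))   # 531 bytes: "of 8" with uncompressed keys
print(bare_cms_spk_size(16, compressed=True))   # 547 bytes: "of 16" with compressed keys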
As an aside, some of the confiscatory concerns in proposals along these
lines could be greatly mitigated if the rule were only applied to
transactions which either have no post-activation active nLockTime or have
at least one input with a post-activation height. Such a move could also be
done incrementally: limiting it for new coins first, then, after giving a
longer period to unearth any confiscation risk, applying it more generally
if none arises. It still wouldn't completely eliminate the confiscation
risk, since there could be an unconfirmed *chain* of transactions, but
perhaps a more limited rule would be easier to argue has an insignificant
risk. Similarly, other such carveouts could be made for more likely script
forms.
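As a sketch of how such a carve-out might be checked (the transaction and
UTXO-view field names here are hypothetical, and the locktime handling is
simplified):

    ACTIVATION_HEIGHT = 900_000  # hypothetical activation height

    def new_limit_applies(tx, utxo_view):
        # The locktime is only "active" if some input has a non-final sequence.
        locktime_active = any(txin.nSequence != 0xffffffff for txin in tx.vin)
        post_activation_locktime = (
            locktime_active
            and tx.nLockTime < 500_000_000          # height-based, not time-based
            and tx.nLockTime >= ACTIVATION_HEIGHT
        )
        # Any input created at or after activation means the transaction could
        # not have been pre-signed before the rule activated.
        post_activation_input = any(
            utxo_view.coin(txin.prevout).height >= ACTIVATION_HEIGHT
            for txin in tx.vin
        )
        return (not post_activation_locktime) or post_activation_input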
One even more conservative possibility would be to trace the "maximum reorg
height" (MRH) of every output, which would be the height of the highest
coinbase transaction in its causal history. If a transaction has any input
with an MRH which is post-activation then it couldn't be part of an
unconfirmed chain that predated the rule activation. The biggest downside
is that implementations don't currently track this metric in their UTXO set,
and doing so would add a few bytes to each UTXO entry and require a complete
resync/reindex in order to enforce the rule. I believe this would
essentially eliminate the confiscation risk.
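For illustration, the bookkeeping could look something like the following
(a sketch with made-up field names, not an existing implementation):

    def coinbase_output_mrh(block_height):
        # A coinbase output's MRH is the height of the block that created it.
        return block_height

    def spend_output_mrh(spent_coins):
        # Any other output inherits the highest MRH among the coins its
        # transaction spends: the highest coinbase height in its causal history.
        return max(coin.mrh for coin in spent_coins)

    def rule_enforceable(spent_coins, activation_height):
        # If any spent coin has a post-activation MRH, the transaction cannot
        # be part of an unconfirmed chain that predates activation.
        return any(coin.mrh >= activation_height for coin in spent_coins)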
I'd generally say I still think the proposal has little value relative to
the inherent costs of any consensus rule change, and potentially has an
unacceptable confiscation risk-- MRH tracking might make that acceptable,
but comes at a high cost which I think would clearly not be justified.
OTOH, MRH tracking might also be attractive for other cleanup proposals
having similar confiscation risk issues, and if so then its cost may be
worthwhile.
I say it has little value because, while keeping crap out of the UTXO set
has extremely high value, anyone who wants to stuff crap in will just use
multiple crappy outputs instead, which is even worse than using a single big
one, especially given that >10k scriptPubKeys are already unspendable,
prunable, and already pruned by implementations. Worse because each distinct
output has overheads, and because even a non-OP_RETURN output becomes
prunable if it's over 10k today, while a dozen 520-byte outputs wouldn't be
prunable under this proposal. So, paradoxically, this limit might increase
the amount of non-prunable data. A variant that just made >520 unspendable
would be better in this respect, but I doubt it would at all satisfy the
proponents' motivations. Likewise, one that expanded the threshold to more
like 1350 bytes would at least avoid the bare CMS concerns, but would
probably also not satisfy the proposal's proponents.
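As a back-of-the-envelope illustration of that paradox (the payload size and
per-output overhead below are rough assumptions, only meant to show the
direction of the effect):

    PAYLOAD = 12_000                    # bytes someone wants to publish
    PER_OUTPUT_OVERHEAD = 32 + 4 + 8    # rough: txid ref + vout index + amount

    # Today: a single >10k scriptPubKey is unspendable and pruned from the
    # UTXO set, so it costs nothing there long term.
    utxo_cost_single_big = 0

    # Under the proposed limit: the same payload split into 500-byte chunks,
    # each below 520 bytes, all of which stay in the UTXO set indefinitely.
    n_outputs = -(-PAYLOAD // 500)                      # 24 outputs
    utxo_cost_split = n_outputs * (500 + PER_OUTPUT_OVERHEAD)

    print(utxo_cost_single_big, utxo_cost_split)        # 0 vs 13056 bytes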
On Fri, Oct 17, 2025 at 6:45 PM 'Antoine Poinsot' via Bitcoin Development
Mailing List <bitcoindev@googlegroups.com> wrote:
> Hi,
>
> This approach was discussed last year when evaluating the best way to
> mitigate DoS blocks in terms
> of gains compared to confiscatory surface. Limiting the size of created
> scriptPubKeys is not a
> sufficient mitigation on its own, and has a non-trivial confiscatory
> surface.
>
> One of the goal of BIP54 is to address objections to Matt's earlier
> proposal, notably the (in my
> opinion reasonable) confiscation concerns voiced by Russell O'Connor.
> Limiting the size of
> scriptPubKeys would in this regard be moving in the opposite direction.
>
> Various approaches of limiting the size of spent scriptPubKeys were
> discussed, in forms that would
> mitigate the confiscatory surface, to adopt in addition to (what
> eventually became) the BIP54 sigops
> limit. However i decided against including this additional measure in
> BIP54 because:
> - of the inherent complexity of the discussed schemes, which would make it
> hard to reason about
> constructing transactions spending legacy inputs, and equally hard to
> evaluate the reduction of
> the confiscatory surface;
> - more importantly, there is steep diminishing returns to piling on more
> mitigations. The BIP54
> limit on its own prevents an externally-motivated attacker from
> *unevenly* stalling the network
> for dozens of minutes, and a revenue-maximizing miner from regularly
> stalling its competitions
> for dozens of seconds, at a minimized cost in confiscatory surface.
> Additional mitigations reduce
> the worst case validation time by a smaller factor at a higher cost in
> terms of confiscatory
> surface. It "feels right" to further reduce those numbers, but it's less
> clear what the tangible
> gains would be.
>
> Furthermore, it's always possible to get the biggest bang for our buck in
> a first step and going the
> extra mile in a later, more controversial, soft fork. I previously floated
> the idea of a "cleanup
> v2" in private discussions, and i think besides a reduction of the maximum
> scriptPubKey size it
> should feature a consensus-enforced maximum transaction size for the
> reasons stated here:
>
> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
> I wouldn't hold my
> breath on such a "cleanup v2", but it may be useful to have it documented
> somewhere.
>
> I'm trying to not go into much details regarding which mitigations were
> considered in designing
> BIP54, because they are tightly related to the design of various DoS
> blocks. But i'm always happy to
> rehash the decisions made there and (re-)consider alternative approaches
> on the semi-private Delving
> thread [0] dedicated to this purpose. Feel free to ping me to get access
> if i know you.
>
> Best,
> Antoine Poinsot
>
> [0]: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
>
>
>
>
> On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
> freedom@reardencode.com> wrote:
>
> >
> >
> > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> >
> > > But also given that there are essentially no violations and no reason
> to
> > > expect any I'm not sure the proposal is worth time relative to fixes of
> > > actual moderately serious DOS attack issues.
> >
> >
> > I believe this limit would also stop most (all?) of PortlandHODL's
> > DoSblocks without having to make some of the other changes in GCC. I
> > think it's worthwhile to compare this approach to those proposed by
> > Antoine in solving these DoS vectors.
> >
> > Best,
> >
> > --Brandon
> >
> > --
> > You received this message because you are subscribed to the Google
> Groups "Bitcoin Development Mailing List" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to bitcoindev+unsubscribe@googlegroups.com.
> > To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.
>
> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/OAoV-Uev9IosyhtUCyeIhclsVq-xUBZgGFROALaCKZkEFRNWSqbfDsVyiXnZ8B1TxKpfxmaULuwe4WpGHLI_iMdvPr5B0gM0nDvlwrKjChc%3D%40protonmail.com
> .
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAAS2fgQEdVVcb%3DDfP7XoRxfXfq1unKBD0joffddOuTsn2Zmcng%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 10430 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-20 15:22 ` Greg Maxwell
@ 2025-10-21 19:05 ` Garlo Nicon
0 siblings, 0 replies; 46+ messages in thread
From: Garlo Nicon @ 2025-10-21 19:05 UTC (permalink / raw)
To: Greg Maxwell
Cc: Antoine Poinsot, Brandon Black, Bitcoin Development Mailing List
[-- Attachment #1: Type: text/plain, Size: 11189 bytes --]
> bare multisigs can easily violate the proposed limit
Good point. I can imagine a Script which could allow something like a
1-of-460 multisig: if all public keys are hashed, it takes 21 bytes to push
any given hash, so pushing 460 hashes takes 460*21 = 9660 bytes. Then, to
clean up afterwards, each OP_2DROP consumes two elements, so dropping the
460 leftover hashes takes 230 OP_2DROPs, i.e. another 230 bytes.
I think it could look something like this:
OP_TOALTSTACK <9660 bytes of hash pushes> OP_FROMALTSTACK OP_PICK OP_TOALTSTACK
<OP_2DROP*230>
OP_DUP OP_HASH160 OP_FROMALTSTACK OP_EQUALVERIFY OP_CHECKSIG
And then, this is how it could be executed:
<sig> <pubkey> <number> //input stack
<sig> <pubkey> //toaltstack
<sig> <pubkey> <lots_of_hashes> //pushing hashes
<sig> <pubkey> <lots_of_hashes> <number> //pushing number
<sig> <pubkey> <lots_of_hashes> <hash> //picking hash
<sig> <pubkey> <lots_of_hashes> //toaltstack
<sig> <pubkey> //dropping the rest
<sig> <pubkey> <pubkey> //dup
<sig> <pubkey> <hash> //hash160
<sig> <pubkey> <hash> <hash> //fromaltstack
<sig> <pubkey> //equalverify
OP_TRUE //checksig
Which means that it would take 9660 bytes, plus 230 bytes of dropping, so
it sums up to 9890 bytes. Then, 9 more bytes are consumed by the rest of the
envelope, and that leaves 101 bytes of the 10,000-byte script limit for a
signature and public key. Seems doable. But I didn't check it in regtest yet.
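The byte counting can be double-checked quickly (a sketch that only
reproduces the arithmetic above, not the script's semantics):

    N_HASHES = 460
    HASH_PUSH = 1 + 20          # push opcode + 20-byte HASH160
    MAX_SCRIPT_SIZE = 10_000    # size limit on a spendable script

    hashes = N_HASHES * HASH_PUSH    # 9660 bytes of hash pushes
    drops = N_HASHES // 2            # 230 OP_2DROP opcodes = 230 bytes
    envelope = 9                     # TOALTSTACK, FROMALTSTACK, PICK, TOALTSTACK,
                                     # DUP, HASH160, FROMALTSTACK, EQUALVERIFY, CHECKSIG
    total = hashes + drops + envelope
    print(total, MAX_SCRIPT_SIZE - total)   # 9899 bytes used, 101 to spare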
pon., 20 paź 2025 o 18:48 Greg Maxwell <gmaxwell@gmail.com> napisał(a):
> Perhaps it's also worth explicitly pointing out for people following at
> home how this proposal has a very real confiscation risk: bare
> multisigs can easily violate the proposed limit-- if uncompressed points
> are used an "of 8" policy is sufficient, otherwise I think 16 is needed but
> both are within the limit of 20 in checkmultisig. This is much worse than
> other confiscation concerns that have gummed up most (all?) other cleanup
> proposals, because rather than requiring some very contrived thing that
> probably no one would have ever done except as lols bare multisigs is a
> thing that has actually seen real use and could have been created by
> someone doing something completely boring... doubly so because the
> inadvertent P2SH script size limit may have explicitly pushed people into
> using bare CMS for a large policy when otherwise bare CMS is at least a
> little weird.
>
> Aside, some of the confiscatory concerns could be greatly mitigated in
> proposals along these lines could be greatly mitigated if the rule was only
> applied to transactions which either have no post-activation active
> nLocktime or have at least one input with a post activation height. Such a
> move could also be done incrementally, limiting it for new coins and then
> after giving a longer period to unearth any confiscation risk applying it
> more generally if none arises. It still wouldn't completely eliminate a
> confiscation risk as there could be an unconfirmed *chain* of transactions,
> but perhaps a more limited rule would be easier to argue had an
> insignificant risk. Similarly, other such carveouts could be made for more
> likely script forms.
>
> One even more conservative possibility would be to trace the "maximum
> reorg height" (MRH) of every output, which would be the height of the
> highest coinbase transaction in its casual history. If a transaction has
> any input with a MRH which is post-activation then it couldn't be part of
> an unconfirmed chain that predated the rule activation. The biggest
> downside is that implementations don't currently track this metric in their
> utxo set and doing so would add a few bytes to each utxo entry and a
> complete resync/reindex in order to enforce the rule. I believe this would
> essentially eliminate the confiscation risk.
>
> I'd generally say I still think the proposal has little value relative to
> the inherent costs of any consensus rule change and potentially has an
> unacceptable confiscation risk-- MRH tracking might make that acceptable,
> but comes at a high cost which I think would clearly not be justified.
> OTOH, MRH tracking might also be attractive for other cleanup proposals
> having the similar confiscation risk issues and if so then its cost may be
> worthwhile.
>
> I say it has little value because while keeping crap out of the UTXO set
> has extremely high value, anyone who wants to stuff crap in will just use
> multiple crappy outputs instead which is even worse than using a single big
> one especially given that >10k is already unspendable, prunable, and
> already pruned by implementations. Worse because each distinct output has
> overheads and because even non-op-return becomes prunable if its over 10k
> now, while a dozen 520 byte outputs wouldn't be prunable under this
> proposal. So, paradoxically this limit might increase the amount of
> non-prunable data. A variant that just made >520 unspendable would be
> better in this respect, but I doubt it would at all satisfy the proponents'
> motivations. Likewise, one that expanded the threshold to more like 1350
> would at least avoid the bare CMS concerns but also would probably not
> satisfy the proposals proponents.
>
>
>
> On Fri, Oct 17, 2025 at 6:45 PM 'Antoine Poinsot' via Bitcoin Development
> Mailing List <bitcoindev@googlegroups.com> wrote:
>
>> Hi,
>>
>> This approach was discussed last year when evaluating the best way to
>> mitigate DoS blocks in terms
>> of gains compared to confiscatory surface. Limiting the size of created
>> scriptPubKeys is not a
>> sufficient mitigation on its own, and has a non-trivial confiscatory
>> surface.
>>
>> One of the goal of BIP54 is to address objections to Matt's earlier
>> proposal, notably the (in my
>> opinion reasonable) confiscation concerns voiced by Russell O'Connor.
>> Limiting the size of
>> scriptPubKeys would in this regard be moving in the opposite direction.
>>
>> Various approaches of limiting the size of spent scriptPubKeys were
>> discussed, in forms that would
>> mitigate the confiscatory surface, to adopt in addition to (what
>> eventually became) the BIP54 sigops
>> limit. However i decided against including this additional measure in
>> BIP54 because:
>> - of the inherent complexity of the discussed schemes, which would make
>> it hard to reason about
>> constructing transactions spending legacy inputs, and equally hard to
>> evaluate the reduction of
>> the confiscatory surface;
>> - more importantly, there is steep diminishing returns to piling on more
>> mitigations. The BIP54
>> limit on its own prevents an externally-motivated attacker from
>> *unevenly* stalling the network
>> for dozens of minutes, and a revenue-maximizing miner from regularly
>> stalling its competitions
>> for dozens of seconds, at a minimized cost in confiscatory surface.
>> Additional mitigations reduce
>> the worst case validation time by a smaller factor at a higher cost in
>> terms of confiscatory
>> surface. It "feels right" to further reduce those numbers, but it's
>> less clear what the tangible
>> gains would be.
>>
>> Furthermore, it's always possible to get the biggest bang for our buck in
>> a first step and going the
>> extra mile in a later, more controversial, soft fork. I previously
>> floated the idea of a "cleanup
>> v2" in private discussions, and i think besides a reduction of the
>> maximum scriptPubKey size it
>> should feature a consensus-enforced maximum transaction size for the
>> reasons stated here:
>>
>> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
>> I wouldn't hold my
>> breath on such a "cleanup v2", but it may be useful to have it documented
>> somewhere.
>>
>> I'm trying to not go into much details regarding which mitigations were
>> considered in designing
>> BIP54, because they are tightly related to the design of various DoS
>> blocks. But i'm always happy to
>> rehash the decisions made there and (re-)consider alternative approaches
>> on the semi-private Delving
>> thread [0] dedicated to this purpose. Feel free to ping me to get access
>> if i know you.
>>
>> Best,
>> Antoine Poinsot
>>
>> [0]: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
>>
>>
>>
>>
>> On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
>> freedom@reardencode.com> wrote:
>>
>> >
>> >
>> > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
>> >
>> > > But also given that there are essentially no violations and no reason
>> to
>> > > expect any I'm not sure the proposal is worth time relative to fixes
>> of
>> > > actual moderately serious DOS attack issues.
>> >
>> >
>> > I believe this limit would also stop most (all?) of PortlandHODL's
>> > DoSblocks without having to make some of the other changes in GCC. I
>> > think it's worthwhile to compare this approach to those proposed by
>> > Antoine in solving these DoS vectors.
>> >
>> > Best,
>> >
>> > --Brandon
>> >
>> > --
>> > You received this message because you are subscribed to the Google
>> Groups "Bitcoin Development Mailing List" group.
>> > To unsubscribe from this group and stop receiving emails from it, send
>> an email to bitcoindev+unsubscribe@googlegroups.com.
>> > To view this discussion visit
>> https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "Bitcoin Development Mailing List" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to bitcoindev+unsubscribe@googlegroups.com.
>> To view this discussion visit
>> https://groups.google.com/d/msgid/bitcoindev/OAoV-Uev9IosyhtUCyeIhclsVq-xUBZgGFROALaCKZkEFRNWSqbfDsVyiXnZ8B1TxKpfxmaULuwe4WpGHLI_iMdvPr5B0gM0nDvlwrKjChc%3D%40protonmail.com
>> .
>>
> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/CAAS2fgQEdVVcb%3DDfP7XoRxfXfq1unKBD0joffddOuTsn2Zmcng%40mail.gmail.com
> <https://groups.google.com/d/msgid/bitcoindev/CAAS2fgQEdVVcb%3DDfP7XoRxfXfq1unKBD0joffddOuTsn2Zmcng%40mail.gmail.com?utm_medium=email&utm_source=footer>
> .
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAN7kyNi2xxEY1LZTb_WMXKtFiDf8Epi3VN7HLhsNimOAEMH1xg%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 13597 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-18 12:06 ` PortlandHODL
2025-10-18 16:44 ` Greg Tonoski
2025-10-18 16:54 ` /dev /fd0
@ 2025-10-22 8:07 ` 'moonsettler' via Bitcoin Development Mailing List
2025-10-27 23:44 ` Michael Tidwell
2 siblings, 1 reply; 46+ messages in thread
From: 'moonsettler' via Bitcoin Development Mailing List @ 2025-10-22 8:07 UTC (permalink / raw)
To: PortlandHODL; +Cc: Bitcoin Development Mailing List
> Confiscation is a problem because of presigned transactions
Allow 10000 bytes of total scriptPubKey size in each block, counting only those outputs that are larger than x (520 as proposed).
The code change is pretty minimal relative to the most obvious implementation of the original rule.
That makes it technically non-confiscatory. Still non-standard, but if anyone out there has so obnoxiously foot-gunned themselves, they can't claim they were rugged by the devs.
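Roughly, the block-level check could look like this (a sketch; the block and
transaction field names are illustrative):

    LARGE_SPK_THRESHOLD = 520        # bytes; outputs above this count toward the budget
    BLOCK_LARGE_SPK_BUDGET = 10_000  # bytes of "large" scriptPubKeys allowed per block

    def block_respects_budget(block):
        used = sum(
            len(txout.scriptPubKey)
            for tx in block.vtx
            for txout in tx.vout
            if len(txout.scriptPubKey) > LARGE_SPK_THRESHOLD
        )
        return used <= BLOCK_LARGE_SPK_BUDGET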
BR,
moonsettler
On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <admin@qrsnap.io> wrote:
> Hey,
>
> First, thank you to everyone who responded, and please continue to do so. There were many thought provoking responses and this did shift my perspective quite a bit from the original post, which in of itself was the goal to a degree.
>
> I am currently only going to respond to all of the current concerns. Acks; though I like them will be ignored unless new discoveries are included.
>
> Tl;dr (Portlands Perspective)
> - Confiscation is a problem because of presigned transactions
> - DoS mitigation could also occur through marking UTXOs as unspendable if > 520 bytes, this would preserve the proof of publication.
> - Timeout / Sunset logic is compelling
> - The (n) value of acceptable needed bytes is contentious with the lower suggested limit being 67
> - Congestion control is worth a look?
>
> Next Step:
> - Deeper discussion at the individual level: Antoine Poinsot and GCC overlap?
> - Write an implementation.
> - Decide to pursue BIP
>
> Responses
>
> Andrew Poelstra:
> > There is a risk of confiscation of coins which have pre-signed but
> > unpublished transactions spending them to new outputs with large
> > scriptPubKeys. Due to long-standing standardness rules, and the presence
> > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> > such transactions exist.
>
> PortlandHODL: This is a risk that can be incurred and likely not possible to mitigate as there could be possible chains of transactions so even when recursively iterating over a chain there is a chance that a presigned breaks this rule. Every idea I have had from block redemption limits on prevouts seems to just be a coverage issue where you can make the confiscation less likely but not completely mitigated.
>
> Second, there are already TXs that effectively have been confiscated at the policy level (P2SH Cleanstack violation) where the user can not find any miner with a policy to accept these into their mempool. (3 years)
>
> /dev /fd0
> > so it would be great if this was restricted to OP_RETURN
>
> PortlandHODL: I reject this completely as this would remove the UTXOset omission for the scriptPubkey and encourage miners to subvert the OP_RETURN restriction and instead just use another op_code, this also do not hit on some of the most important factors such as DoS mitigation and legacy script attack surface reduction.
>
> Peter Todd
> > NACK ...
>
> PortlandHODL: You NACK'd for the same reasons that I stated in my OP, without including any additional context or reasoning.
>
> jeremy
> > I think that this type of rule is OK if we do it as a "sunsetting" restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2 years, 5 years, 10 years).
>
> If action is taken, this is the most reasonable approach. Alleviating confiscatory concerns through deferral.
>
> > You can argue against this example probably, but it is worth considering that absence of evidence of use is not evidence of absence of use and I myself feel that overall our understanding of Bitcoin transaction programming possibilities is still early. If you don't like this example, I can give you others (probably).
>
> Agreed and this also falls into the reasoning for deciding to utilize point 1 in your response. My thoughts on this would be along the lines of proof of publication as this change only has the effect of stripping away the executable portion of a script between 521 and 10_000 bytes or the published data portion if > 10_000 bytes which the same data could likely be published in chunked segments using outpoints.
>
> Andrew Poelstra:
> > Aside from proof-of-publication (i.e. data storage directly in the UTXO
> > set) there is no usage of script which can't be equally (or better)
> > accomplished by using a Segwit v0 or Taproot script.
>
> This sums up the majority of future usecase concern
>
> Anthony Towns:
> > (If you restricted the change to only applying to scripts that used
> non-push operators, that would probably still provide upgrade flexibility
> while also preventing potential script abuses. But it wouldn't do anything
> to prevent publishing data)
>
> Could this not be done as segments in multiple outpoints using a coordination outpoint? I fail to see why publication proof must be in a single chunk. This does though however bring another alternative to mind, just making these outpoints unspendable but not invalidate the block through inclusion...
>
> > As far as the "but contiguous data will be regulated more strictly"
> argument goes; I don't think "your honour, my offensive content has
> strings of 4d0802 every 520 bytes
>
> Correct, this was never meant to resolve this issue.
>
> Luke Dashjr:
> > If we're going this route, we should just close all the gaps for the immediate future:
>
> To put it nicely, this is completely beyond the scope of what is being proposed.
>
> Guus Ellenkamp:
> > If there are really so few OP_RETURN outputs more than 144 bytes, then
> why increase the limit if that change is so controversial? It seems
> people who want to use a larger OP_RETURN size do it anyway, even with
> the current default limits.
>
> Completely off topic and irrelevant
>
> Greg Tonoski:
> > Limiting the maximum size of the scriptPubKey of a transaction to 67 bytes.
>
> This leave no room to deal with broken hashing algorithms and very little future upgradability for hooks. The rest of these points should be merged with Lukes response and either hijack my thread or start a new one with the increased scope, any approach I take will only be related to the ScriptPubkey
>
> Keagan McClelland:
> > Hard NACK on capping the witness size as that would effectively ban large scripts even in the P2SH wrapper which undermines Bitcoin's ability to be an effectively programmable money.
>
> This has nothing to do with the witness size or even the P2SH wrapper
>
> Casey Rodarmor:
> > I think that "Bitcoin could need it in the future?" might be a good enough
> reason not to do this.
>
> > Script pubkeys are the only variable-length transaction fields which can be
> covered by input signatures, which might make them useful for future soft
> forks. I can imagine confidential asset schemes or post-quantum coin recovery
> schemes requiring large proofs in the outputs, where the validity of the proof
> determined whether or not the transaction is valid, and thus require the
> proofs to be in the outputs, and not just a hash commitment.
>
> Would the ability to publish the data alone be enough? Example make the output unspendable but allow for the existence of the bytes to be covered through the signature?
>
>
> Antoine Poinsot:
> > Limiting the size of created scriptPubKeys is not a sufficient mitigation on its own
> I fail to see how this would not be sufficient? To DoS you need 2 things inputs with ScriptPubkey redemptions + heavy op_codes that require unique checks. Example DUPing stack element again and again doesn't work. This then leads to the next part is you could get up to unique complex operations with the current (n) limit included per input.
>
> > One of the goal of BIP54 is to address objections to Matt's earlier proposal, notably the (in my
> opinion reasonable) confiscation concerns voiced by Russell O'Connor. Limiting the size of
> scriptPubKeys would in this regard be moving in the opposite direction.
>
> Some notes is I would actually go as far as to say the confiscation risk is higher with the TX limit proposed in BIP54 as we actually have proof of redemption of TXs that break that rule and the input set to do this already exists on-chain no need to even wonder about the whole presigned. bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
>
> Please let me know if I am incorrect on any of this.
>
> > Furthermore, it's always possible to get the biggest bang for our buck in a first step
>
> Agreed on bang for the buck regarding DoS.
>
> My final point here would be that I would like to discuss more, and this is response is from the initial view of your response and could be incomplete or incorrect, This is just my in the moment response.
>
> Antoine Riard:
> > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in favor of prioritizing
> a timewarp fix and limiting dosy spends by old redeem scripts
>
> The idea of congestion control is interesting, but this solution should significantly reduce the total DoS severity of known vectors.
>
> On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
>
> > Limits on block construction that cross transactions make it harder to accurately estimate fees and greatly complicate optimal block construction-- the latter being important because smarter and more computer powered mining code generating higher profits is a pro centralization factor.
> >
> > In terms of effectiveness the "spam" will just make itself indistinguishable from the most common transaction traffic from the perspective of such metrics-- and might well drive up "spam" levels because the higher embedding cost may make some of them use more transactions. The competition for these buckets by other traffic could make it effectively a block size reduction even against very boring ordinary transactions. ... which is probably not what most people want.
> >
> > I think it's important to keep in mind that bitcoin fee levels even at 0.1s/vb are far beyond what other hosting services and other blockchains cost-- so anyone still embedding data in bitcoin *really* want to be there for some reason and aren't too fee sensitive or else they'd already be using something else... some are even in favor of higher costs since the high fees are what create the scarcity needed for their seigniorage.
> >
> > But yeah I think your comments on priorities are correct.
> >
> >
> >
> > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com> wrote:
> >
> > > Hi list,
> > >
> > > Thanks to the annex covered by the signature, I don't see how the concern about limiting
> > > the extensibility of bitcoin script with future (post-quantum) cryptographic schemes.
> > > Previous proposal of the annex were deliberately designed with variable-length fields
> > > to flexibly accomodate a wide range of things.
> > >
> > > I believe there is one thing that has not been proposed to limit unpredictable utterance
> > > of spams on the blockchain, namely congestion control of categories of outputs (e.g "fat"
> > > scriptpubkeys). Let's say P a block period, T a type of scriptpubkey and L a limiting
> > > threshold for the number of T occurences during the period P. Beyond the L threshold, any
> > > additional T scriptpubkey is making the block invalid. Or alternatively, any additional
> > > T generating / spending transaction must pay some weight penalty...
> > >
> > > Congestion control, which of course comes with its lot of shenanigans, is not very a novel
> > > idea as I believe it has been floated few times in the context of lightning to solve mass
> > > closure, where channels out-priced at current feerate would have their safety timelocks scale
> > > ups.
> > >
> > > No need anymore to come to social consensus on what is quantitative "spam" or not. The blockchain
> > > would automatically throttle out the block space spamming transaction. Qualitative spam it's another
> > > question, for anyone who has ever read shannon's theory of communication only effective thing can
> > > be to limit the size of data payload. But probably we're kickly back to a non-mathematically solvable
> > > linguistical question again [0].
> > >
> > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in favor of prioritizing
> > > a timewarp fix and limiting dosy spends by old redeem scripts, rather than engaging in shooting
> > > ourselves in the foot with ill-designed "spam" consensus mitigations.
> > >
> > > [0] If you have a soul of logician, it would be an interesting demonstration to come with
> > > to establish that we cannot come up with mathematically or cryptographically consensus means
> > > to solve qualitative "spam", which in a very pure sense is a linguistical issue.
> > >
> > > Best,
> > > Antoine
> > > OTS hash: 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
> > > Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a écrit :
> > >
> > > > Hi,
> > > >
> > > > This approach was discussed last year when evaluating the best way to mitigate DoS blocks in terms
> > > > of gains compared to confiscatory surface. Limiting the size of created scriptPubKeys is not a
> > > > sufficient mitigation on its own, and has a non-trivial confiscatory surface.
> > > >
> > > > One of the goal of BIP54 is to address objections to Matt's earlier proposal, notably the (in my
> > > > opinion reasonable) confiscation concerns voiced by Russell O'Connor. Limiting the size of
> > > > scriptPubKeys would in this regard be moving in the opposite direction.
> > > >
> > > > Various approaches of limiting the size of spent scriptPubKeys were discussed, in forms that would
> > > > mitigate the confiscatory surface, to adopt in addition to (what eventually became) the BIP54 sigops
> > > > limit. However i decided against including this additional measure in BIP54 because:
> > > > - of the inherent complexity of the discussed schemes, which would make it hard to reason about
> > > > constructing transactions spending legacy inputs, and equally hard to evaluate the reduction of
> > > > the confiscatory surface;
> > > > - more importantly, there is steep diminishing returns to piling on more mitigations. The BIP54
> > > > limit on its own prevents an externally-motivated attacker from *unevenly* stalling the network
> > > > for dozens of minutes, and a revenue-maximizing miner from regularly stalling its competitions
> > > > for dozens of seconds, at a minimized cost in confiscatory surface. Additional mitigations reduce
> > > > the worst case validation time by a smaller factor at a higher cost in terms of confiscatory
> > > > surface. It "feels right" to further reduce those numbers, but it's less clear what the tangible
> > > > gains would be.
> > > >
> > > > Furthermore, it's always possible to get the biggest bang for our buck in a first step and going the
> > > > extra mile in a later, more controversial, soft fork. I previously floated the idea of a "cleanup
> > > > v2" in private discussions, and i think besides a reduction of the maximum scriptPubKey size it
> > > > should feature a consensus-enforced maximum transaction size for the reasons stated here:
> > > > https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8. I wouldn't hold my
> > > > breath on such a "cleanup v2", but it may be useful to have it documented somewhere.
> > > >
> > > > I'm trying to not go into much details regarding which mitigations were considered in designing
> > > > BIP54, because they are tightly related to the design of various DoS blocks. But i'm always happy to
> > > > rehash the decisions made there and (re-)consider alternative approaches on the semi-private Delving
> > > > thread [0] dedicated to this purpose. Feel free to ping me to get access if i know you.
> > > >
> > > > Best,
> > > > Antoine Poinsot
> > > >
> > > > [0]: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
> > > >
> > > >
> > > >
> > > >
> > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <fre...@reardencode.com> wrote:
> > > >
> > > > >
> > > > >
> > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> > > > >
> > > > > > But also given that there are essentially no violations and no reason to
> > > > > > expect any I'm not sure the proposal is worth time relative to fixes of
> > > > > > actual moderately serious DOS attack issues.
> > > > >
> > > > >
> > > > > I believe this limit would also stop most (all?) of PortlandHODL's
> > > > > DoSblocks without having to make some of the other changes in GCC. I
> > > > > think it's worthwhile to compare this approach to those proposed by
> > > > > Antoine in solving these DoS vectors.
> > > > >
> > > > > Best,
> > > > >
> > > > > --Brandon
> > > > >
> > > > > --
> > > > > You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
> > > > > To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+...@googlegroups.com.
> > > > > To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.
> > >
> > > --
> > > You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
> > > To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+...@googlegroups.com.
> >
> > > To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/5135a031-a94e-49b9-ab31-a1eb48875ff2n%40googlegroups.com.
>
> --
> You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/78475572-3e52-44e4-8116-8f1a917995a4n%40googlegroups.com.
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/o5Ewtxc0zflnX5G_CXkntdAO8ZRi_seKPovZ-bJs8Lq-CY-ClLMyINd-eQ0vcIETWMdD5fCE_31HbTBy3U7iEopZjT0H5LNTXTc3eAL6hLE%3D%40protonmail.com.
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-22 8:07 ` 'moonsettler' via Bitcoin Development Mailing List
@ 2025-10-27 23:44 ` Michael Tidwell
2025-10-30 2:26 ` Greg Maxwell
0 siblings, 1 reply; 46+ messages in thread
From: Michael Tidwell @ 2025-10-27 23:44 UTC (permalink / raw)
To: Bitcoin Development Mailing List
[-- Attachment #1.1: Type: text/plain, Size: 21174 bytes --]
> MRH tracking might make that acceptable, but comes at a high cost which I think would clearly not be justified.
Greg, I want to ask/challenge how bad this really is. It seems like a
generally reusable primitive that could make other upgrades more feasible,
ones that have the same strict confiscation risk profile.
IIUC, the major pain is one big reindex cost plus a few bytes per UTXO?
Poelstra,
> I don't think this is a great idea -- it would be technically hard to implement and slow deployment indefinitely.
I would like to know how much of a deal breaker this is in your opinion. Is
MRH tracking off the table? Given the hypothetical presigned transactions
that may exist using P2MS, is that a strong enough reason to require
something like MRH tracking?
Greg,
> So, paradoxically this limit might increase the amount of non-prunable data
I believe you're pointing out the idea of non-economically-rational
spammers? We already see actors ignoring cheaper witness inscription
methods. If spam shifts to many sub-520-byte fake-pubkey outputs (which I
believe is less harmful than stamps), that imo is a separate UTXO cost
discussion (like a SF to add weight to outputs). Anywho, this point alone
doesn't seem sufficient as a clear negative for someone opposed to the
proposal.
Thanks,
Tidwell
On Wednesday, October 22, 2025 at 5:55:58 AM UTC-4 moonsettler wrote:
> > Confiscation is a problem because of presigned transactions
>
> Allow 10000 bytes of total scriptPubKey size in each block counting only
> those outputs that are larger than x (520 as proposed).
> The code change is pretty minimal from the most obvious implementation of
> the original rule.
>
> That makes it technically non-confiscatory. Still non-standard, but if
> anyone out there so obnoxiously foot-gunned themselves, they can't claim
> they were rugged by the devs.
>
> BR,
> moonsettler
>
> On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <ad...@qrsnap.io>
> wrote:
>
> > Hey,
> >
> > First, thank you to everyone who responded, and please continue to do
> so. There were many thought provoking responses and this did shift my
> perspective quite a bit from the original post, which in of itself was the
> goal to a degree.
> >
> > I am currently only going to respond to all of the current concerns.
> Acks; though I like them will be ignored unless new discoveries are
> included.
> >
> > Tl;dr (Portlands Perspective)
> > - Confiscation is a problem because of presigned transactions
> > - DoS mitigation could also occur through marking UTXOs as unspendable
> if > 520 bytes, this would preserve the proof of publication.
> > - Timeout / Sunset logic is compelling
> > - The (n) value of acceptable needed bytes is contentious with the lower
> suggested limit being 67
> > - Congestion control is worth a look?
> >
> > Next Step:
> > - Deeper discussion at the individual level: Antoine Poinsot and GCC
> overlap?
> > - Write an implementation.
> > - Decide to pursue BIP
> >
> > Responses
> >
> > Andrew Poelstra:
> > > There is a risk of confiscation of coins which have pre-signed but
> > > unpublished transactions spending them to new outputs with large
> > > scriptPubKeys. Due to long-standing standardness rules, and the
> presence
> > > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> > > such transactions exist.
> >
> > PortlandHODL: This is a risk that can be incurred and likely not
> possible to mitigate as there could be possible chains of transactions so
> even when recursively iterating over a chain there is a chance that a
> presigned breaks this rule. Every idea I have had from block redemption
> limits on prevouts seems to just be a coverage issue where you can make the
> confiscation less likely but not completely mitigated.
> >
> > Second, there are already TXs that effectively have been confiscated at
> the policy level (P2SH Cleanstack violation) where the user can not find
> any miner with a policy to accept these into their mempool. (3 years)
> >
> > /dev /fd0
> > > so it would be great if this was restricted to OP_RETURN
> >
> > PortlandHODL: I reject this completely as this would remove the UTXOset
> omission for the scriptPubkey and encourage miners to subvert the OP_RETURN
> restriction and instead just use another op_code, this also do not hit on
> some of the most important factors such as DoS mitigation and legacy script
> attack surface reduction.
> >
> > Peter Todd
> > > NACK ...
> >
> > PortlandHODL: You NACK'd for the same reasons that I stated in my OP,
> without including any additional context or reasoning.
> >
> > jeremy
> > > I think that this type of rule is OK if we do it as a "sunsetting"
> restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2
> years, 5 years, 10 years).
> >
> > If action is taken, this is the most reasonable approach. Alleviating
> confiscatory concerns through deferral.
> >
> > > You can argue against this example probably, but it is worth
> considering that absence of evidence of use is not evidence of absence of
> use and I myself feel that overall our understanding of Bitcoin transaction
> programming possibilities is still early. If you don't like this example, I
> can give you others (probably).
> >
> > Agreed and this also falls into the reasoning for deciding to utilize
> point 1 in your response. My thoughts on this would be along the lines of
> proof of publication as this change only has the effect of stripping away
> the executable portion of a script between 521 and 10_000 bytes or the
> published data portion if > 10_000 bytes which the same data could likely
> be published in chunked segments using outpoints.
> >
> > Andrew Poelstra:
> > > Aside from proof-of-publication (i.e. data storage directly in the UTXO
> > > set) there is no usage of script which can't be equally (or better)
> > > accomplished by using a Segwit v0 or Taproot script.
> >
> > This sums up the majority of future usecase concern
> >
> > Anthony Towns:
> > > (If you restricted the change to only applying to scripts that used
> > non-push operators, that would probably still provide upgrade flexibility
> > while also preventing potential script abuses. But it wouldn't do
> anything
> > to prevent publishing data)
> >
> > Could this not be done as segments in multiple outpoints using a
> coordination outpoint? I fail to see why publication proof must be in a
> single chunk. This does though however bring another alternative to mind,
> just making these outpoints unspendable but not invalidate the block
> through inclusion...
> >
> > > As far as the "but contiguous data will be regulated more strictly"
> > argument goes; I don't think "your honour, my offensive content has
> > strings of 4d0802 every 520 bytes
> >
> > Correct, this was never meant to resolve this issue.
> >
> > Luke Dashjr:
> > > If we're going this route, we should just close all the gaps for the
> immediate future:
> >
> > To put it nicely, this is completely beyond the scope of what is being
> proposed.
> >
> > Guus Ellenkamp:
> > > If there are really so few OP_RETURN outputs more than 144 bytes, then
> > why increase the limit if that change is so controversial? It seems
> > people who want to use a larger OP_RETURN size do it anyway, even with
> > the current default limits.
> >
> > Completely off topic and irrelevant
> >
> > Greg Tonoski:
> > > Limiting the maximum size of the scriptPubKey of a transaction to 67
> bytes.
> >
> > This leave no room to deal with broken hashing algorithms and very
> little future upgradability for hooks. The rest of these points should be
> merged with Lukes response and either hijack my thread or start a new one
> with the increased scope, any approach I take will only be related to the
> ScriptPubkey
> >
> > Keagan McClelland:
> > > Hard NACK on capping the witness size as that would effectively ban
> large scripts even in the P2SH wrapper which undermines Bitcoin's ability
> to be an effectively programmable money.
> >
> > This has nothing to do with the witness size or even the P2SH wrapper
> >
> > Casey Rodarmor:
> > > I think that "Bitcoin could need it in the future?" might be a good
> enough
> > reason not to do this.
> >
> > > Script pubkeys are the only variable-length transaction fields which
> can be
> > covered by input signatures, which might make them useful for future soft
> > forks. I can imagine confidential asset schemes or post-quantum coin
> recovery
> > schemes requiring large proofs in the outputs, where the validity of the
> proof
> > determined whether or not the transaction is valid, and thus require the
> > proofs to be in the outputs, and not just a hash commitment.
> >
> > Would the ability to publish the data alone be enough? Example make the
> output unspendable but allow for the existence of the bytes to be covered
> through the signature?
> >
> >
> > Antoine Poinsot:
> > > Limiting the size of created scriptPubKeys is not a sufficient
> mitigation on its own
> > I fail to see how this would not be sufficient? To DoS you need 2 things
> inputs with ScriptPubkey redemptions + heavy op_codes that require unique
> checks. Example DUPing stack element again and again doesn't work. This
> then leads to the next part is you could get up to unique complex
> operations with the current (n) limit included per input.
> >
> > > One of the goal of BIP54 is to address objections to Matt's earlier
> proposal, notably the (in my
> > opinion reasonable) confiscation concerns voiced by Russell O'Connor.
> Limiting the size of
> > scriptPubKeys would in this regard be moving in the opposite direction.
> >
> > Some notes is I would actually go as far as to say the confiscation risk
> is higher with the TX limit proposed in BIP54 as we actually have proof of
> redemption of TXs that break that rule and the input set to do this already
> exists on-chain no need to even wonder about the whole presigned.
> bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
> >
> > Please let me know if I am incorrect on any of this.
> >
> > > Furthermore, it's always possible to get the biggest bang for our buck
> in a first step
> >
> > Agreed on bang for the buck regarding DoS.
> >
> > My final point here would be that I would like to discuss more, and this
> is response is from the initial view of your response and could be
> incomplete or incorrect, This is just my in the moment response.
> >
> > Antoine Riard:
> > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
> favor of prioritizing
> > a timewarp fix and limiting dosy spends by old redeem scripts
> >
> > The idea of congestion control is interesting, but this solution should
> significantly reduce the total DoS severity of known vectors.
> >
> > On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
> >
> > > Limits on block construction that cross transactions make it harder to
> accurately estimate fees and greatly complicate optimal block
> construction-- the latter being important because smarter and more computer
> powered mining code generating higher profits is a pro centralization
> factor.
> > >
> > > In terms of effectiveness the "spam" will just make itself
> indistinguishable from the most common transaction traffic from the
> perspective of such metrics-- and might well drive up "spam" levels because
> the higher embedding cost may make some of them use more transactions. The
> competition for these buckets by other traffic could make it effectively a
> block size reduction even against very boring ordinary transactions. ...
> which is probably not what most people want.
> > >
> > > I think it's important to keep in mind that bitcoin fee levels even at
> 0.1s/vb are far beyond what other hosting services and other blockchains
> cost-- so anyone still embedding data in bitcoin *really* want to be there
> for some reason and aren't too fee sensitive or else they'd already be
> using something else... some are even in favor of higher costs since the
> high fees are what create the scarcity needed for their seigniorage.
> > >
> > > But yeah I think your comments on priorities are correct.
> > >
> > >
> > >
> > > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com>
> wrote:
> > >
> > > > Hi list,
> > > >
> > > > Thanks to the annex covered by the signature, I don't see how the
> concern about limiting
> > > > the extensibility of bitcoin script with future (post-quantum)
> cryptographic schemes.
> > > > Previous proposal of the annex were deliberately designed with
> variable-length fields
> > > > to flexibly accomodate a wide range of things.
> > > >
> > > > I believe there is one thing that has not been proposed to limit
> unpredictable utterance
> > > > of spams on the blockchain, namely congestion control of categories
> of outputs (e.g "fat"
> > > > scriptpubkeys). Let's say P a block period, T a type of scriptpubkey
> and L a limiting
> > > > threshold for the number of T occurences during the period P. Beyond
> the L threshold, any
> > > > additional T scriptpubkey is making the block invalid. Or
> alternatively, any additional
> > > > T generating / spending transaction must pay some weight penalty...
> > > >
> > > > Congestion control, which of course comes with its lot of
> shenanigans, is not very a novel
> > > > idea as I believe it has been floated few times in the context of
> lightning to solve mass
> > > > closure, where channels out-priced at current feerate would have
> their safety timelocks scale
> > > > ups.
> > > >
> > > > No need anymore to come to social consensus on what is quantitative
> "spam" or not. The blockchain
> > > > would automatically throttle out the block space spamming
> transaction. Qualitative spam it's another
> > > > question, for anyone who has ever read shannon's theory of
> communication only effective thing can
> > > > be to limit the size of data payload. But probably we're kickly back
> to a non-mathematically solvable
> > > > linguistical question again [0].
> > > >
> > > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
> favor of prioritizing
> > > > a timewarp fix and limiting dosy spends by old redeem scripts,
> rather than engaging in shooting
> > > > ourselves in the foot with ill-designed "spam" consensus mitigations.
> > > >
> > > > [0] If you have a soul of logician, it would be an interesting
> demonstration to come with
> > > > to establish that we cannot come up with mathematically or
> cryptographically consensus means
> > > > to solve qualitative "spam", which in a very pure sense is a
> linguistical issue.
> > > >
> > > > Best,
> > > > Antoine
> > > > OTS hash:
> 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
> > > > Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a
> écrit :
> > > >
> > > > > Hi,
> > > > >
> > > > > This approach was discussed last year when evaluating the best way
> to mitigate DoS blocks in terms
> > > > > of gains compared to confiscatory surface. Limiting the size of
> created scriptPubKeys is not a
> > > > > sufficient mitigation on its own, and has a non-trivial
> confiscatory surface.
> > > > >
> > > > > One of the goal of BIP54 is to address objections to Matt's
> earlier proposal, notably the (in my
> > > > > opinion reasonable) confiscation concerns voiced by Russell
> O'Connor. Limiting the size of
> > > > > scriptPubKeys would in this regard be moving in the opposite
> direction.
> > > > >
> > > > > Various approaches of limiting the size of spent scriptPubKeys
> were discussed, in forms that would
> > > > > mitigate the confiscatory surface, to adopt in addition to (what
> eventually became) the BIP54 sigops
> > > > > limit. However i decided against including this additional measure
> in BIP54 because:
> > > > > - of the inherent complexity of the discussed schemes, which would
> make it hard to reason about
> > > > > constructing transactions spending legacy inputs, and equally hard
> to evaluate the reduction of
> > > > > the confiscatory surface;
> > > > > - more importantly, there is steep diminishing returns to piling
> on more mitigations. The BIP54
> > > > > limit on its own prevents an externally-motivated attacker from
> *unevenly* stalling the network
> > > > > for dozens of minutes, and a revenue-maximizing miner from
> regularly stalling its competitions
> > > > > for dozens of seconds, at a minimized cost in confiscatory
> surface. Additional mitigations reduce
> > > > > the worst case validation time by a smaller factor at a higher
> cost in terms of confiscatory
> > > > > surface. It "feels right" to further reduce those numbers, but
> it's less clear what the tangible
> > > > > gains would be.
> > > > >
> > > > > Furthermore, it's always possible to get the biggest bang for our
> buck in a first step and going the
> > > > > extra mile in a later, more controversial, soft fork. I previously
> floated the idea of a "cleanup
> > > > > v2" in private discussions, and i think besides a reduction of the
> maximum scriptPubKey size it
> > > > > should feature a consensus-enforced maximum transaction size for
> the reasons stated here:
> > > > >
> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
> I wouldn't hold my
> > > > > breath on such a "cleanup v2", but it may be useful to have it
> documented somewhere.
> > > > >
> > > > > I'm trying to not go into much details regarding which mitigations
> were considered in designing
> > > > > BIP54, because they are tightly related to the design of various
> DoS blocks. But i'm always happy to
> > > > > rehash the decisions made there and (re-)consider alternative
> approaches on the semi-private Delving
> > > > > thread [0] dedicated to this purpose. Feel free to ping me to get
> access if i know you.
> > > > >
> > > > > Best,
> > > > > Antoine Poinsot
> > > > >
> > > > > [0]:
> https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
> fre...@reardencode.com> wrote:
> > > > >
> > > > > >
> > > > > >
> > > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> > > > > >
> > > > > > > But also given that there are essentially no violations and no
> reason to
> > > > > > > expect any I'm not sure the proposal is worth time relative to
> fixes of
> > > > > > > actual moderately serious DOS attack issues.
> > > > > >
> > > > > >
> > > > > > I believe this limit would also stop most (all?) of
> PortlandHODL's
> > > > > > DoSblocks without having to make some of the other changes in
> GCC. I
> > > > > > think it's worthwhile to compare this approach to those proposed
> by
> > > > > > Antoine in solving these DoS vectors.
> > > > > >
> > > > > > Best,
> > > > > >
> > > > > > --Brandon
> > > > > >
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/f4755fb6-b031-4c60-b304-f123ba2ff473n%40googlegroups.com.
[-- Attachment #1.2: Type: text/html, Size: 25830 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-27 23:44 ` Michael Tidwell
@ 2025-10-30 2:26 ` Greg Maxwell
2025-10-30 3:36 ` Michael Tidwell
2025-10-30 16:10 ` [bitcoindev] " Tom Harding
0 siblings, 2 replies; 46+ messages in thread
From: Greg Maxwell @ 2025-10-30 2:26 UTC (permalink / raw)
To: Michael Tidwell; +Cc: Bitcoin Development Mailing List
[-- Attachment #1: Type: text/plain, Size: 24093 bytes --]
"A few bytes" might be on the order of forever 10% increase in the UTXO set
size, plus a full from-network resync of all pruned nodes and a full (e.g.
most of day outage) reindex of all unpruned nodes. Not insignificant but
also not nothing. Such a portion of the existing utxo size is not from
outputs over 520 bytes in size, so as a scheme for utxo set size reduction
the addition of MHT tracking would probably make it a failure.
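For a rough sense of scale, here is a back-of-envelope sketch; the UTXO
count (~180 million) and average serialized chainstate entry size (~60
bytes) are assumed round numbers for illustration, not measurements from
this thread:

UTXO_COUNT = 180_000_000      # assumed number of unspent outputs
AVG_ENTRY_BYTES = 60          # assumed average serialized chainstate entry
EXTRA_BYTES_PER_UTXO = 6      # e.g. a creation-height / exemption marker

current_size = UTXO_COUNT * AVG_ENTRY_BYTES
extra_size = UTXO_COUNT * EXTRA_BYTES_PER_UTXO
print(f"current chainstate ~{current_size / 1e9:.1f} GB")
print(f"extra tracking     ~{extra_size / 1e9:.1f} GB "
      f"(+{100 * extra_size / current_size:.0f}%)")
# ~10.8 GB today vs ~1.1 GB of added tracking data, i.e. roughly +10%
# under these assumed figures.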
Also there is some risk of creating a new scarce asset class: txouts
consisting of primordial coins that aren't subject to the new rules...
sounds like the sort of thing that NFT degens would absolutely love. That
might not be an issue *generally* for some change with confiscation risk,
but for a change that is specifically intended to lobotomize bitcoin to
make it less useful to NFT degens, maybe not such a great idea. :P
I mentioned it at all because I thought it could potentially be of some
use; I'm just more skeptical of it in the current context. Also, luke-jr
and crew have moved on to propose even more invasive changes than just
limiting the script size, which I anticipated, and which have much more
significant issues. Just size-limiting outputs likely doesn't harm any
interests or usages-- and so probably could be viable if the confiscation
issue were addressed, but it also doesn't stick it to people transacting
in ways the priests of ocean mining dislike.
> I believe you're pointing out the idea of non economically-rational
spammers?
I think it's a mistake to conclude the spammers are economically
irrational-- they're often just responding to different economics which
may be less legible to your analysis. In particular, NFT degens prefer the
high cost of transactions as a thing that makes their tokens scarce and
gives them value-- otherwise they wouldn't be swapping one encoding for a
less efficient one, they'd just be using another blockchain (perhaps their
own) entirely.
On Thu, Oct 30, 2025 at 1:16 AM Michael Tidwell <mtidwell021@gmail.com>
wrote:
> > MRH tracking might make that acceptable, but comes at a high cost which
> I think would clearly not be justified.
>
> Greg, I want to ask/challenge how bad this is, this seems like a generally
> reusable primitive that could make other upgrades more feasible that also
> have the same strict confiscation risk profile.
> IIUC, the major pain is, 1 big reindex cost + a few bytes per utxo?
>
> Poelstra,
>
> > I don't think this is a great idea -- it would be technically hard to
> implement and slow deployment indefinitely.
>
> I would like to know how much of a deal breaker this is in your opinion.
> Is MRH tracking off the table? In terms of the hypothetical presigned
> transactions that may exist using P2MS, is this a hard enough reason to
> require a MRH idea?
>
> Greg,
>
> > So, paradoxically this limit might increase the amount of non-prunable
> data
>
> I believe you're pointing out the idea of non economically-rational
> spammers? We already see actors ignoring cheaper witness inscription
> methods. If spam shifts to many sub-520 fake pubkey outputs (which I
> believe is less harmful than stamps), that imo is a separate UTXO cost
> discussion. (like a SF to add weight to outputs). Anywho, this point alone
> doesn't seem sufficient to add as a clear negative reason for someone
> opposed to the proposal.
>
> Thanks,
> Tidwell
> On Wednesday, October 22, 2025 at 5:55:58 AM UTC-4 moonsettler wrote:
>
>> > Confiscation is a problem because of presigned transactions
>>
>> Allow 10000 bytes of total scriptPubKey size in each block counting only
>> those outputs that are larger than x (520 as proposed).
>> The code change is pretty minimal from the most obvious implementation of
>> the original rule.
>>
>> That makes it technically non-confiscatory. Still non-standard, but if
>> anyone out there so obnoxiously foot-gunned themselves, they can't claim
>> they were rugged by the devs.
>>
>> BR,
>> moonsettler
>>
>> On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <ad...@qrsnap.io>
>> wrote:
>>
>> > Hey,
>> >
>> > First, thank you to everyone who responded, and please continue to do
>> so. There were many thought provoking responses and this did shift my
>> perspective quite a bit from the original post, which in of itself was the
>> goal to a degree.
>> >
>> > I am currently only going to respond to all of the current concerns.
>> Acks; though I like them will be ignored unless new discoveries are
>> included.
>> >
>> > Tl;dr (Portlands Perspective)
>> > - Confiscation is a problem because of presigned transactions
>> > - DoS mitigation could also occur through marking UTXOs as unspendable
>> if > 520 bytes, this would preserve the proof of publication.
>> > - Timeout / Sunset logic is compelling
>> > - The (n) value of acceptable needed bytes is contentious with the
>> lower suggested limit being 67
>> > - Congestion control is worth a look?
>> >
>> > Next Step:
>> > - Deeper discussion at the individual level: Antoine Poinsot and GCC
>> overlap?
>> > - Write an implementation.
>> > - Decide to pursue BIP
>> >
>> > Responses
>> >
>> > Andrew Poelstra:
>> > > There is a risk of confiscation of coins which have pre-signed but
>> > > unpublished transactions spending them to new outputs with large
>> > > scriptPubKeys. Due to long-standing standardness rules, and the
>> presence
>> > > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that
>> any
>> > > such transactions exist.
>> >
>> > PortlandHODL: This is a risk that can be incurred and likely not
>> possible to mitigate as there could be possible chains of transactions so
>> even when recursively iterating over a chain there is a chance that a
>> presigned breaks this rule. Every idea I have had from block redemption
>> limits on prevouts seems to just be a coverage issue where you can make the
>> confiscation less likely but not completely mitigated.
>> >
>> > Second, there are already TXs that effectively have been confiscated at
>> the policy level (P2SH Cleanstack violation) where the user can not find
>> any miner with a policy to accept these into their mempool. (3 years)
>> >
>> > /dev /fd0
>> > > so it would be great if this was restricted to OP_RETURN
>> >
>> > PortlandHODL: I reject this completely as this would remove the UTXOset
>> omission for the scriptPubkey and encourage miners to subvert the OP_RETURN
>> restriction and instead just use another op_code, this also do not hit on
>> some of the most important factors such as DoS mitigation and legacy script
>> attack surface reduction.
>> >
>> > Peter Todd
>> > > NACK ...
>> >
>> > PortlandHODL: You NACK'd for the same reasons that I stated in my OP,
>> without including any additional context or reasoning.
>> >
>> > jeremy
>> > > I think that this type of rule is OK if we do it as a "sunsetting"
>> restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2
>> years, 5 years, 10 years).
>> >
>> > If action is taken, this is the most reasonable approach. Alleviating
>> confiscatory concerns through deferral.
>> >
>> > > You can argue against this example probably, but it is worth
>> considering that absence of evidence of use is not evidence of absence of
>> use and I myself feel that overall our understanding of Bitcoin transaction
>> programming possibilities is still early. If you don't like this example, I
>> can give you others (probably).
>> >
>> > Agreed and this also falls into the reasoning for deciding to utilize
>> point 1 in your response. My thoughts on this would be along the lines of
>> proof of publication as this change only has the effect of stripping away
>> the executable portion of a script between 521 and 10_000 bytes or the
>> published data portion if > 10_000 bytes which the same data could likely
>> be published in chunked segments using outpoints.
>> >
>> > Andrew Poelstra:
>> > > Aside from proof-of-publication (i.e. data storage directly in the
>> UTXO
>> > > set) there is no usage of script which can't be equally (or better)
>> > > accomplished by using a Segwit v0 or Taproot script.
>> >
>> > This sums up the majority of future usecase concern
>> >
>> > Anthony Towns:
>> > > (If you restricted the change to only applying to scripts that used
>> > non-push operators, that would probably still provide upgrade
>> flexibility
>> > while also preventing potential script abuses. But it wouldn't do
>> anything
>> > to prevent publishing data)
>> >
>> > Could this not be done as segments in multiple outpoints using a
>> coordination outpoint? I fail to see why publication proof must be in a
>> single chunk. This does though however bring another alternative to mind,
>> just making these outpoints unspendable but not invalidate the block
>> through inclusion...
>> >
>> > > As far as the "but contiguous data will be regulated more strictly"
>> > argument goes; I don't think "your honour, my offensive content has
>> > strings of 4d0802 every 520 bytes
>> >
>> > Correct, this was never meant to resolve this issue.
>> >
>> > Luke Dashjr:
>> > > If we're going this route, we should just close all the gaps for the
>> immediate future:
>> >
>> > To put it nicely, this is completely beyond the scope of what is being
>> proposed.
>> >
>> > Guus Ellenkamp:
>> > > If there are really so few OP_RETURN outputs more than 144 bytes,
>> then
>> > why increase the limit if that change is so controversial? It seems
>> > people who want to use a larger OP_RETURN size do it anyway, even with
>> > the current default limits.
>> >
>> > Completely off topic and irrelevant
>> >
>> > Greg Tonoski:
>> > > Limiting the maximum size of the scriptPubKey of a transaction to 67
>> bytes.
>> >
>> > This leave no room to deal with broken hashing algorithms and very
>> little future upgradability for hooks. The rest of these points should be
>> merged with Lukes response and either hijack my thread or start a new one
>> with the increased scope, any approach I take will only be related to the
>> ScriptPubkey
>> >
>> > Keagan McClelland:
>> > > Hard NACK on capping the witness size as that would effectively ban
>> large scripts even in the P2SH wrapper which undermines Bitcoin's ability
>> to be an effectively programmable money.
>> >
>> > This has nothing to do with the witness size or even the P2SH wrapper
>> >
>> > Casey Rodarmor:
>> > > I think that "Bitcoin could need it in the future?" might be a good
>> enough
>> > reason not to do this.
>> >
>> > > Script pubkeys are the only variable-length transaction fields which
>> can be
>> > covered by input signatures, which might make them useful for future
>> soft
>> > forks. I can imagine confidential asset schemes or post-quantum coin
>> recovery
>> > schemes requiring large proofs in the outputs, where the validity of
>> the proof
>> > determined whether or not the transaction is valid, and thus require
>> the
>> > proofs to be in the outputs, and not just a hash commitment.
>> >
>> > Would the ability to publish the data alone be enough? Example make the
>> output unspendable but allow for the existence of the bytes to be covered
>> through the signature?
>> >
>> >
>> > Antoine Poinsot:
>> > > Limiting the size of created scriptPubKeys is not a sufficient
>> mitigation on its own
>> > I fail to see how this would not be sufficient? To DoS you need 2
>> things inputs with ScriptPubkey redemptions + heavy op_codes that require
>> unique checks. Example DUPing stack element again and again doesn't work.
>> This then leads to the next part is you could get up to unique complex
>> operations with the current (n) limit included per input.
>> >
>> > > One of the goal of BIP54 is to address objections to Matt's earlier
>> proposal, notably the (in my
>> > opinion reasonable) confiscation concerns voiced by Russell O'Connor.
>> Limiting the size of
>> > scriptPubKeys would in this regard be moving in the opposite direction.
>> >
>> > Some notes is I would actually go as far as to say the confiscation
>> risk is higher with the TX limit proposed in BIP54 as we actually have
>> proof of redemption of TXs that break that rule and the input set to do
>> this already exists on-chain no need to even wonder about the whole
>> presigned. bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
>> >
>> > Please let me know if I am incorrect on any of this.
>> >
>> > > Furthermore, it's always possible to get the biggest bang for our
>> buck in a first step
>> >
>> > Agreed on bang for the buck regarding DoS.
>> >
>> > My final point here would be that I would like to discuss more, and
>> this is response is from the initial view of your response and could be
>> incomplete or incorrect, This is just my in the moment response.
>> >
>> > Antoine Riard:
>> > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
>> favor of prioritizing
>> > a timewarp fix and limiting dosy spends by old redeem scripts
>> >
>> > The idea of congestion control is interesting, but this solution should
>> significantly reduce the total DoS severity of known vectors.
>> >
>> > On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
>> >
>> > > Limits on block construction that cross transactions make it harder
>> to accurately estimate fees and greatly complicate optimal block
>> construction-- the latter being important because smarter and more computer
>> powered mining code generating higher profits is a pro centralization
>> factor.
>> > >
>> > > In terms of effectiveness the "spam" will just make itself
>> indistinguishable from the most common transaction traffic from the
>> perspective of such metrics-- and might well drive up "spam" levels because
>> the higher embedding cost may make some of them use more transactions. The
>> competition for these buckets by other traffic could make it effectively a
>> block size reduction even against very boring ordinary transactions. ...
>> which is probably not what most people want.
>> > >
>> > > I think it's important to keep in mind that bitcoin fee levels even
>> at 0.1s/vb are far beyond what other hosting services and other blockchains
>> cost-- so anyone still embedding data in bitcoin *really* want to be there
>> for some reason and aren't too fee sensitive or else they'd already be
>> using something else... some are even in favor of higher costs since the
>> high fees are what create the scarcity needed for their seigniorage.
>> > >
>> > > But yeah I think your comments on priorities are correct.
>> > >
>> > >
>> > >
>> > > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com>
>> wrote:
>> > >
>> > > > Hi list,
>> > > >
>> > > > Thanks to the annex covered by the signature, I don't see how the
>> concern about limiting
>> > > > the extensibility of bitcoin script with future (post-quantum)
>> cryptographic schemes.
>> > > > Previous proposal of the annex were deliberately designed with
>> variable-length fields
>> > > > to flexibly accomodate a wide range of things.
>> > > >
>> > > > I believe there is one thing that has not been proposed to limit
>> unpredictable utterance
>> > > > of spams on the blockchain, namely congestion control of categories
>> of outputs (e.g "fat"
>> > > > scriptpubkeys). Let's say P a block period, T a type of
>> scriptpubkey and L a limiting
>> > > > threshold for the number of T occurences during the period P.
>> Beyond the L threshold, any
>> > > > additional T scriptpubkey is making the block invalid. Or
>> alternatively, any additional
>> > > > T generating / spending transaction must pay some weight penalty...
>> > > >
>> > > > Congestion control, which of course comes with its lot of
>> shenanigans, is not very a novel
>> > > > idea as I believe it has been floated few times in the context of
>> lightning to solve mass
>> > > > closure, where channels out-priced at current feerate would have
>> their safety timelocks scale
>> > > > ups.
>> > > >
>> > > > No need anymore to come to social consensus on what is quantitative
>> "spam" or not. The blockchain
>> > > > would automatically throttle out the block space spamming
>> transaction. Qualitative spam it's another
>> > > > question, for anyone who has ever read shannon's theory of
>> communication only effective thing can
>> > > > be to limit the size of data payload. But probably we're kickly
>> back to a non-mathematically solvable
>> > > > linguistical question again [0].
>> > > >
>> > > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
>> favor of prioritizing
>> > > > a timewarp fix and limiting dosy spends by old redeem scripts,
>> rather than engaging in shooting
>> > > > ourselves in the foot with ill-designed "spam" consensus
>> mitigations.
>> > > >
>> > > > [0] If you have a soul of logician, it would be an interesting
>> demonstration to come with
>> > > > to establish that we cannot come up with mathematically or
>> cryptographically consensus means
>> > > > to solve qualitative "spam", which in a very pure sense is a
>> linguistical issue.
>> > > >
>> > > > Best,
>> > > > Antoine
>> > > > OTS hash:
>> 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
>> > > > Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a
>> écrit :
>> > > >
>> > > > > Hi,
>> > > > >
>> > > > > This approach was discussed last year when evaluating the best
>> way to mitigate DoS blocks in terms
>> > > > > of gains compared to confiscatory surface. Limiting the size of
>> created scriptPubKeys is not a
>> > > > > sufficient mitigation on its own, and has a non-trivial
>> confiscatory surface.
>> > > > >
>> > > > > One of the goal of BIP54 is to address objections to Matt's
>> earlier proposal, notably the (in my
>> > > > > opinion reasonable) confiscation concerns voiced by Russell
>> O'Connor. Limiting the size of
>> > > > > scriptPubKeys would in this regard be moving in the opposite
>> direction.
>> > > > >
>> > > > > Various approaches of limiting the size of spent scriptPubKeys
>> were discussed, in forms that would
>> > > > > mitigate the confiscatory surface, to adopt in addition to (what
>> eventually became) the BIP54 sigops
>> > > > > limit. However i decided against including this additional
>> measure in BIP54 because:
>> > > > > - of the inherent complexity of the discussed schemes, which
>> would make it hard to reason about
>> > > > > constructing transactions spending legacy inputs, and equally
>> hard to evaluate the reduction of
>> > > > > the confiscatory surface;
>> > > > > - more importantly, there is steep diminishing returns to piling
>> on more mitigations. The BIP54
>> > > > > limit on its own prevents an externally-motivated attacker from
>> *unevenly* stalling the network
>> > > > > for dozens of minutes, and a revenue-maximizing miner from
>> regularly stalling its competitions
>> > > > > for dozens of seconds, at a minimized cost in confiscatory
>> surface. Additional mitigations reduce
>> > > > > the worst case validation time by a smaller factor at a higher
>> cost in terms of confiscatory
>> > > > > surface. It "feels right" to further reduce those numbers, but
>> it's less clear what the tangible
>> > > > > gains would be.
>> > > > >
>> > > > > Furthermore, it's always possible to get the biggest bang for our
>> buck in a first step and going the
>> > > > > extra mile in a later, more controversial, soft fork. I
>> previously floated the idea of a "cleanup
>> > > > > v2" in private discussions, and i think besides a reduction of
>> the maximum scriptPubKey size it
>> > > > > should feature a consensus-enforced maximum transaction size for
>> the reasons stated here:
>> > > > >
>> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
>> I wouldn't hold my
>> > > > > breath on such a "cleanup v2", but it may be useful to have it
>> documented somewhere.
>> > > > >
>> > > > > I'm trying to not go into much details regarding which
>> mitigations were considered in designing
>> > > > > BIP54, because they are tightly related to the design of various
>> DoS blocks. But i'm always happy to
>> > > > > rehash the decisions made there and (re-)consider alternative
>> approaches on the semi-private Delving
>> > > > > thread [0] dedicated to this purpose. Feel free to ping me to get
>> access if i know you.
>> > > > >
>> > > > > Best,
>> > > > > Antoine Poinsot
>> > > > >
>> > > > > [0]:
>> https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
>> > > > >
>> > > > >
>> > > > >
>> > > > >
>> > > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
>> fre...@reardencode.com> wrote:
>> > > > >
>> > > > > >
>> > > > > >
>> > > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
>> > > > > >
>> > > > > > > But also given that there are essentially no violations and
>> no reason to
>> > > > > > > expect any I'm not sure the proposal is worth time relative
>> to fixes of
>> > > > > > > actual moderately serious DOS attack issues.
>> > > > > >
>> > > > > >
>> > > > > > I believe this limit would also stop most (all?) of
>> PortlandHODL's
>> > > > > > DoSblocks without having to make some of the other changes in
>> GCC. I
>> > > > > > think it's worthwhile to compare this approach to those
>> proposed by
>> > > > > > Antoine in solving these DoS vectors.
>> > > > > >
>> > > > > > Best,
>> > > > > >
>> > > > > > --Brandon
>> > > > > >
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAAS2fgSUqFs-74m-we6sb5oY2%3D9R6LDVN_iFqtX%2Bz_d3hx0nnw%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 27784 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-30 2:26 ` Greg Maxwell
@ 2025-10-30 3:36 ` Michael Tidwell
2025-10-30 6:15 ` Greg Maxwell
2025-10-30 16:10 ` [bitcoindev] " Tom Harding
1 sibling, 1 reply; 46+ messages in thread
From: Michael Tidwell @ 2025-10-30 3:36 UTC (permalink / raw)
To: Bitcoin Development Mailing List
[-- Attachment #1.1: Type: text/plain, Size: 26714 bytes --]
Greg,
> Also some risk of creating a new scarce asset class.
Well, Casey Rodarmor is in the thread, so lol maybe.
Anyway, point taken. I want to be 100% sure I understand the
hypotheticals: there could be an off-chain, presigned transaction that
needs more than 520 bytes for the scriptPubKey and, as Poelstra said, it
could even form a chain of presigned transactions under some complex,
previously unknown scheme that only becomes public after this change is
made. Can you confirm?
Would it also be a worry that a chain of transactions using said utxo
could commit to some bizarre scheme, for instance a taproot utxo that is
later presigned to commit back to a P2MS output larger than 520 bytes? If
so, I think I get it: you're saying that to essentially guarantee no
confiscation we'd never be able to upgrade old UTXOs, and we'd need to
track them forever to prevent unlikely edge cases?
Does the presigned chain at least stop needing to be tracked once the given
UTXO co-mingles with a post-update coinbase utxo?
If so, this is indeed complex! This seems pretty insane, both for the
implementation complexity and for how unlikely the edge cases are. Has
Core ever made an acceptable-risk decision to upgrade while protecting
on-chain utxos but not hypothetical unpublished ones?
Aren't we going to run into the same situation if we do an opcode cleanup
in the future, if people have presigned/committed to opcodes that are no
longer consensus valid?
Tidwell
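
A minimal sketch of the kind of per-UTXO exemption tracking the questions
above are probing, assuming one possible rule (not a concrete proposal
from this thread): an over-520-byte output stays valid only while every
input of the transaction creating it is itself exempt, so the exemption
propagates through presigned chains and ends as soon as post-activation
coins are mixed in:

MAX_SPK = 520

# utxo_set maps outpoint -> {"spk_len": int, "exempt": bool}; entries that
# already exist at activation are seeded with exempt=True. This whole rule
# is a hypothetical for illustration, not anyone's actual proposal.
def apply_tx(utxo_set, tx):
    inputs = [utxo_set[op] for op in tx["inputs"]]
    all_exempt = bool(inputs) and all(u["exempt"] for u in inputs)

    for n, spk in enumerate(tx["output_spks"]):
        if len(spk) > MAX_SPK and not all_exempt:
            raise ValueError("over-limit output from non-exempt inputs")
        # Exemption survives only while the whole ancestry is exempt;
        # co-mingling with any post-activation coin ends it downstream.
        utxo_set[(tx["txid"], n)] = {"spk_len": len(spk),
                                     "exempt": all_exempt}

    for op in tx["inputs"]:
        del utxo_set[op]

On that reading the co-mingling question answers itself: once any
post-activation coin enters the ancestry, the exemption (and the need to
keep tracking it) ends for that line, but every other UTXO still carries
the extra flag forever.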
On Wednesday, October 29, 2025 at 10:32:10 PM UTC-4 Greg Maxwell wrote:
> "A few bytes" might be on the order of forever 10% increase in the UTXO
> set size, plus a full from-network resync of all pruned nodes and a full
> (e.g. most of day outage) reindex of all unpruned nodes. Not
> insignificant but also not nothing. Such a portion of the existing utxo
> size is not from outputs over 520 bytes in size, so as a scheme for utxo
> set size reduction the addition of MHT tracking would probably make it a
> failure.
>
> Also some risk of creating some new scarce asset class, txouts consisting
> of primordial coins that aren't subject to the new rules... sounds like the
> sort of thing that NFT degens would absolutely love. That might not be an
> issue *generally* for some change with confiscation risk, but for a change
> that is specifically intended to lobotomize bitcoin to make it less useful
> to NFT degens, maybe not such a great idea. :P
>
> I mentioned it at all because I thought it could potentially be of some
> use, I'm just more skeptical of it for the current context. Also luke-jr
> and crew has moved on to actually propose even more invasive changes than
> just limiting the script size, which I anticipated, and has much more
> significant issues. Just size limiting outputs likely doesn't harm any
> interests or usages-- and so probably could be viable if the confiscation
> issue was addressed, but it also doesn't stick it to people transacting in
> ways the priests of ocean mining dislike.
>
> > I believe you're pointing out the idea of non economically-rational
> spammers?
>
> I think it's a mistake to conclude the spammers are economically
> irrational-- they're often just responding to different economics which may
> be less legible to your analysis. In particular, NFT degens prefer the
> high cost of transactions as a thing that makes their tokens scarce and
> gives them value. -- otherwise they wouldn't be swapping for one less
> efficient encoding for another, they're just be using another blockchain
> (perhaps their own) entirely.
>
>
>
>
> On Thu, Oct 30, 2025 at 1:16 AM Michael Tidwell <mtidw...@gmail.com>
> wrote:
>
>> > MRH tracking might make that acceptable, but comes at a high cost which
>> I think would clearly not be justified.
>>
>> Greg, I want to ask/challenge how bad this is, this seems like a
>> generally reusable primitive that could make other upgrades more feasible
>> that also have the same strict confiscation risk profile.
>> IIUC, the major pain is, 1 big reindex cost + a few bytes per utxo?
>>
>> Poelstra,
>>
>> > I don't think this is a great idea -- it would be technically hard to
>> implement and slow deployment indefinitely.
>>
>> I would like to know how much of a deal breaker this is in your opinion.
>> Is MRH tracking off the table? In terms of the hypothetical presigned
>> transactions that may exist using P2MS, is this a hard enough reason to
>> require a MRH idea?
>>
>> Greg,
>>
>> > So, paradoxically this limit might increase the amount of non-prunable
>> data
>>
>> I believe you're pointing out the idea of non economically-rational
>> spammers? We already see actors ignoring cheaper witness inscription
>> methods. If spam shifts to many sub-520 fake pubkey outputs (which I
>> believe is less harmful than stamps), that imo is a separate UTXO cost
>> discussion. (like a SF to add weight to outputs). Anywho, this point alone
>> doesn't seem sufficient to add as a clear negative reason for someone
>> opposed to the proposal.
>>
>> Thanks,
>> Tidwell
>> On Wednesday, October 22, 2025 at 5:55:58 AM UTC-4 moonsettler wrote:
>>
>>> > Confiscation is a problem because of presigned transactions
>>>
>>> Allow 10000 bytes of total scriptPubKey size in each block counting only
>>> those outputs that are larger than x (520 as proposed).
>>> The code change is pretty minimal from the most obvious implementation
>>> of the original rule.
>>>
>>> That makes it technically non-confiscatory. Still non-standard, but if
>>> anyone out there so obnoxiously foot-gunned themselves, they can't claim
>>> they were rugged by the devs.
>>>
>>> BR,
>>> moonsettler
>>>
>>> On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <
>>> ad...@qrsnap.io> wrote:
>>>
>>> > Hey,
>>> >
>>> > First, thank you to everyone who responded, and please continue to do
>>> so. There were many thought provoking responses and this did shift my
>>> perspective quite a bit from the original post, which in of itself was the
>>> goal to a degree.
>>> >
>>> > I am currently only going to respond to all of the current concerns.
>>> Acks; though I like them will be ignored unless new discoveries are
>>> included.
>>> >
>>> > Tl;dr (Portlands Perspective)
>>> > - Confiscation is a problem because of presigned transactions
>>> > - DoS mitigation could also occur through marking UTXOs as unspendable
>>> if > 520 bytes, this would preserve the proof of publication.
>>> > - Timeout / Sunset logic is compelling
>>> > - The (n) value of acceptable needed bytes is contentious with the
>>> lower suggested limit being 67
>>> > - Congestion control is worth a look?
>>> >
>>> > Next Step:
>>> > - Deeper discussion at the individual level: Antoine Poinsot and GCC
>>> overlap?
>>> > - Write an implementation.
>>> > - Decide to pursue BIP
>>> >
>>> > Responses
>>> >
>>> > Andrew Poelstra:
>>> > > There is a risk of confiscation of coins which have pre-signed but
>>> > > unpublished transactions spending them to new outputs with large
>>> > > scriptPubKeys. Due to long-standing standardness rules, and the
>>> presence
>>> > > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that
>>> any
>>> > > such transactions exist.
>>> >
>>> > PortlandHODL: This is a risk that can be incurred and likely not
>>> possible to mitigate as there could be possible chains of transactions so
>>> even when recursively iterating over a chain there is a chance that a
>>> presigned breaks this rule. Every idea I have had from block redemption
>>> limits on prevouts seems to just be a coverage issue where you can make the
>>> confiscation less likely but not completely mitigated.
>>> >
>>> > Second, there are already TXs that effectively have been confiscated
>>> at the policy level (P2SH Cleanstack violation) where the user can not find
>>> any miner with a policy to accept these into their mempool. (3 years)
>>> >
>>> > /dev /fd0
>>> > > so it would be great if this was restricted to OP_RETURN
>>> >
>>> > PortlandHODL: I reject this completely as this would remove the
>>> UTXOset omission for the scriptPubkey and encourage miners to subvert the
>>> OP_RETURN restriction and instead just use another op_code, this also do
>>> not hit on some of the most important factors such as DoS mitigation and
>>> legacy script attack surface reduction.
>>> >
>>> > Peter Todd
>>> > > NACK ...
>>> >
>>> > PortlandHODL: You NACK'd for the same reasons that I stated in my OP,
>>> without including any additional context or reasoning.
>>> >
>>> > jeremy
>>> > > I think that this type of rule is OK if we do it as a "sunsetting"
>>> restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2
>>> years, 5 years, 10 years).
>>> >
>>> > If action is taken, this is the most reasonable approach. Alleviating
>>> confiscatory concerns through deferral.
>>> >
>>> > > You can argue against this example probably, but it is worth
>>> considering that absence of evidence of use is not evidence of absence of
>>> use and I myself feel that overall our understanding of Bitcoin transaction
>>> programming possibilities is still early. If you don't like this example, I
>>> can give you others (probably).
>>> >
>>> > Agreed and this also falls into the reasoning for deciding to utilize
>>> point 1 in your response. My thoughts on this would be along the lines of
>>> proof of publication as this change only has the effect of stripping away
>>> the executable portion of a script between 521 and 10_000 bytes or the
>>> published data portion if > 10_000 bytes which the same data could likely
>>> be published in chunked segments using outpoints.
>>> >
>>> > Andrew Poelstra:
>>> > > Aside from proof-of-publication (i.e. data storage directly in the
>>> UTXO
>>> > > set) there is no usage of script which can't be equally (or better)
>>> > > accomplished by using a Segwit v0 or Taproot script.
>>> >
>>> > This sums up the majority of future usecase concern
>>> >
>>> > Anthony Towns:
>>> > > (If you restricted the change to only applying to scripts that used
>>> > non-push operators, that would probably still provide upgrade
>>> flexibility
>>> > while also preventing potential script abuses. But it wouldn't do
>>> anything
>>> > to prevent publishing data)
>>> >
>>> > Could this not be done as segments in multiple outpoints using a
>>> coordination outpoint? I fail to see why publication proof must be in a
>>> single chunk. This does though however bring another alternative to mind,
>>> just making these outpoints unspendable but not invalidate the block
>>> through inclusion...
>>> >
>>> > > As far as the "but contiguous data will be regulated more strictly"
>>> > argument goes; I don't think "your honour, my offensive content has
>>> > strings of 4d0802 every 520 bytes
>>> >
>>> > Correct, this was never meant to resolve this issue.
>>> >
>>> > Luke Dashjr:
>>> > > If we're going this route, we should just close all the gaps for the
>>> immediate future:
>>> >
>>> > To put it nicely, this is completely beyond the scope of what is being
>>> proposed.
>>> >
>>> > Guus Ellenkamp:
>>> > > If there are really so few OP_RETURN outputs more than 144 bytes,
>>> then
>>> > why increase the limit if that change is so controversial? It seems
>>> > people who want to use a larger OP_RETURN size do it anyway, even with
>>> > the current default limits.
>>> >
>>> > Completely off topic and irrelevant
>>> >
>>> > Greg Tonoski:
>>> > > Limiting the maximum size of the scriptPubKey of a transaction to 67
>>> bytes.
>>> >
>>> > This leave no room to deal with broken hashing algorithms and very
>>> little future upgradability for hooks. The rest of these points should be
>>> merged with Lukes response and either hijack my thread or start a new one
>>> with the increased scope, any approach I take will only be related to the
>>> ScriptPubkey
>>> >
>>> > Keagan McClelland:
>>> > > Hard NACK on capping the witness size as that would effectively ban
>>> large scripts even in the P2SH wrapper which undermines Bitcoin's ability
>>> to be an effectively programmable money.
>>> >
>>> > This has nothing to do with the witness size or even the P2SH wrapper
>>> >
>>> > Casey Rodarmor:
>>> > > I think that "Bitcoin could need it in the future?" might be a good
>>> enough
>>> > reason not to do this.
>>> >
>>> > > Script pubkeys are the only variable-length transaction fields which
>>> can be
>>> > covered by input signatures, which might make them useful for future
>>> soft
>>> > forks. I can imagine confidential asset schemes or post-quantum coin
>>> recovery
>>> > schemes requiring large proofs in the outputs, where the validity of
>>> the proof
>>> > determined whether or not the transaction is valid, and thus require
>>> the
>>> > proofs to be in the outputs, and not just a hash commitment.
>>> >
>>> > Would the ability to publish the data alone be enough? Example make
>>> the output unspendable but allow for the existence of the bytes to be
>>> covered through the signature?
>>> >
>>> >
>>> > Antoine Poinsot:
>>> > > Limiting the size of created scriptPubKeys is not a sufficient
>>> mitigation on its own
>>> > I fail to see how this would not be sufficient? To DoS you need 2
>>> things inputs with ScriptPubkey redemptions + heavy op_codes that require
>>> unique checks. Example DUPing stack element again and again doesn't work.
>>> This then leads to the next part is you could get up to unique complex
>>> operations with the current (n) limit included per input.
>>> >
>>> > > One of the goal of BIP54 is to address objections to Matt's earlier
>>> proposal, notably the (in my
>>> > opinion reasonable) confiscation concerns voiced by Russell O'Connor.
>>> Limiting the size of
>>> > scriptPubKeys would in this regard be moving in the opposite
>>> direction.
>>> >
>>> > Some notes is I would actually go as far as to say the confiscation
>>> risk is higher with the TX limit proposed in BIP54 as we actually have
>>> proof of redemption of TXs that break that rule and the input set to do
>>> this already exists on-chain no need to even wonder about the whole
>>> presigned. bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
>>> >
>>> > Please let me know if I am incorrect on any of this.
>>> >
>>> > > Furthermore, it's always possible to get the biggest bang for our
>>> buck in a first step
>>> >
>>> > Agreed on bang for the buck regarding DoS.
>>> >
>>> > My final point here would be that I would like to discuss more, and
>>> this is response is from the initial view of your response and could be
>>> incomplete or incorrect, This is just my in the moment response.
>>> >
>>> > Antoine Riard:
>>> > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
>>> favor of prioritizing
>>> > a timewarp fix and limiting dosy spends by old redeem scripts
>>> >
>>> > The idea of congestion control is interesting, but this solution
>>> should significantly reduce the total DoS severity of known vectors.
>>> >
>>> > On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
>>> >
>>> > > Limits on block construction that cross transactions make it harder
>>> to accurately estimate fees and greatly complicate optimal block
>>> construction-- the latter being important because smarter and more computer
>>> powered mining code generating higher profits is a pro centralization
>>> factor.
>>> > >
>>> > > In terms of effectiveness the "spam" will just make itself
>>> indistinguishable from the most common transaction traffic from the
>>> perspective of such metrics-- and might well drive up "spam" levels because
>>> the higher embedding cost may make some of them use more transactions. The
>>> competition for these buckets by other traffic could make it effectively a
>>> block size reduction even against very boring ordinary transactions. ...
>>> which is probably not what most people want.
>>> > >
>>> > > I think it's important to keep in mind that bitcoin fee levels even
>>> at 0.1s/vb are far beyond what other hosting services and other blockchains
>>> cost-- so anyone still embedding data in bitcoin *really* want to be there
>>> for some reason and aren't too fee sensitive or else they'd already be
>>> using something else... some are even in favor of higher costs since the
>>> high fees are what create the scarcity needed for their seigniorage.
>>> > >
>>> > > But yeah I think your comments on priorities are correct.
>>> > >
>>> > >
>>> > >
>>> > > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com>
>>> wrote:
>>> > >
>>> > > > Hi list,
>>> > > >
>>> > > > Thanks to the annex covered by the signature, I don't see how the
>>> concern about limiting
>>> > > > the extensibility of bitcoin script with future (post-quantum)
>>> cryptographic schemes.
>>> > > > Previous proposal of the annex were deliberately designed with
>>> variable-length fields
>>> > > > to flexibly accomodate a wide range of things.
>>> > > >
>>> > > > I believe there is one thing that has not been proposed to limit
>>> unpredictable utterance
>>> > > > of spams on the blockchain, namely congestion control of
>>> categories of outputs (e.g "fat"
>>> > > > scriptpubkeys). Let's say P a block period, T a type of
>>> scriptpubkey and L a limiting
>>> > > > threshold for the number of T occurences during the period P.
>>> Beyond the L threshold, any
>>> > > > additional T scriptpubkey is making the block invalid. Or
>>> alternatively, any additional
>>> > > > T generating / spending transaction must pay some weight
>>> penalty...
>>> > > >
>>> > > > Congestion control, which of course comes with its lot of
>>> shenanigans, is not very a novel
>>> > > > idea as I believe it has been floated few times in the context of
>>> lightning to solve mass
>>> > > > closure, where channels out-priced at current feerate would have
>>> their safety timelocks scale
>>> > > > ups.
>>> > > >
>>> > > > No need anymore to come to social consensus on what is
>>> quantitative "spam" or not. The blockchain
>>> > > > would automatically throttle out the block space spamming
>>> transaction. Qualitative spam it's another
>>> > > > question, for anyone who has ever read shannon's theory of
>>> communication only effective thing can
>>> > > > be to limit the size of data payload. But probably we're kickly
>>> back to a non-mathematically solvable
>>> > > > linguistical question again [0].
>>> > > >
>>> > > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more
>>> in favor of prioritizing
>>> > > > a timewarp fix and limiting dosy spends by old redeem scripts,
>>> rather than engaging in shooting
>>> > > > ourselves in the foot with ill-designed "spam" consensus
>>> mitigations.
>>> > > >
>>> > > > [0] If you have a soul of logician, it would be an interesting
>>> demonstration to come with
>>> > > > to establish that we cannot come up with mathematically or
>>> cryptographically consensus means
>>> > > > to solve qualitative "spam", which in a very pure sense is a
>>> linguistical issue.
>>> > > >
>>> > > > Best,
>>> > > > Antoine
>>> > > > OTS hash:
>>> 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
>>> > > > Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a
>>> écrit :
>>> > > >
>>> > > > > Hi,
>>> > > > >
>>> > > > > This approach was discussed last year when evaluating the best
>>> way to mitigate DoS blocks in terms
>>> > > > > of gains compared to confiscatory surface. Limiting the size of
>>> created scriptPubKeys is not a
>>> > > > > sufficient mitigation on its own, and has a non-trivial
>>> confiscatory surface.
>>> > > > >
>>> > > > > One of the goal of BIP54 is to address objections to Matt's
>>> earlier proposal, notably the (in my
>>> > > > > opinion reasonable) confiscation concerns voiced by Russell
>>> O'Connor. Limiting the size of
>>> > > > > scriptPubKeys would in this regard be moving in the opposite
>>> direction.
>>> > > > >
>>> > > > > Various approaches of limiting the size of spent scriptPubKeys
>>> were discussed, in forms that would
>>> > > > > mitigate the confiscatory surface, to adopt in addition to (what
>>> eventually became) the BIP54 sigops
>>> > > > > limit. However i decided against including this additional
>>> measure in BIP54 because:
>>> > > > > - of the inherent complexity of the discussed schemes, which
>>> would make it hard to reason about
>>> > > > > constructing transactions spending legacy inputs, and equally
>>> hard to evaluate the reduction of
>>> > > > > the confiscatory surface;
>>> > > > > - more importantly, there is steep diminishing returns to piling
>>> on more mitigations. The BIP54
>>> > > > > limit on its own prevents an externally-motivated attacker from
>>> *unevenly* stalling the network
>>> > > > > for dozens of minutes, and a revenue-maximizing miner from
>>> regularly stalling its competitions
>>> > > > > for dozens of seconds, at a minimized cost in confiscatory
>>> surface. Additional mitigations reduce
>>> > > > > the worst case validation time by a smaller factor at a higher
>>> cost in terms of confiscatory
>>> > > > > surface. It "feels right" to further reduce those numbers, but
>>> it's less clear what the tangible
>>> > > > > gains would be.
>>> > > > >
>>> > > > > Furthermore, it's always possible to get the biggest bang for
>>> our buck in a first step and going the
>>> > > > > extra mile in a later, more controversial, soft fork. I
>>> previously floated the idea of a "cleanup
>>> > > > > v2" in private discussions, and i think besides a reduction of
>>> the maximum scriptPubKey size it
>>> > > > > should feature a consensus-enforced maximum transaction size for
>>> the reasons stated here:
>>> > > > >
>>> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
>>> I wouldn't hold my
>>> > > > > breath on such a "cleanup v2", but it may be useful to have it
>>> documented somewhere.
>>> > > > >
>>> > > > > I'm trying to not go into much details regarding which
>>> mitigations were considered in designing
>>> > > > > BIP54, because they are tightly related to the design of various
>>> DoS blocks. But i'm always happy to
>>> > > > > rehash the decisions made there and (re-)consider alternative
>>> approaches on the semi-private Delving
>>> > > > > thread [0] dedicated to this purpose. Feel free to ping me to
>>> get access if i know you.
>>> > > > >
>>> > > > > Best,
>>> > > > > Antoine Poinsot
>>> > > > >
>>> > > > > [0]:
>>> https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > >
>>> > > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
>>> fre...@reardencode.com> wrote:
>>> > > > >
>>> > > > > >
>>> > > > > >
>>> > > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
>>> > > > > >
>>> > > > > > > But also given that there are essentially no violations and
>>> no reason to
>>> > > > > > > expect any I'm not sure the proposal is worth time relative
>>> to fixes of
>>> > > > > > > actual moderately serious DOS attack issues.
>>> > > > > >
>>> > > > > >
>>> > > > > > I believe this limit would also stop most (all?) of
>>> PortlandHODL's
>>> > > > > > DoSblocks without having to make some of the other changes in
>>> GCC. I
>>> > > > > > think it's worthwhile to compare this approach to those
>>> proposed by
>>> > > > > > Antoine in solving these DoS vectors.
>>> > > > > >
>>> > > > > > Best,
>>> > > > > >
>>> > > > > > --Brandon
>>> > > > > >
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/c208e054-b85a-4a5c-9193-c28ef0d225c5n%40googlegroups.com.
[-- Attachment #1.2: Type: text/html, Size: 31212 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-30 3:36 ` Michael Tidwell
@ 2025-10-30 6:15 ` Greg Maxwell
2025-10-30 8:55 ` Bitcoin Error Log
2025-10-30 20:27 ` [bitcoindev] Policy restrictions Was: " 'Russell O'Connor' via Bitcoin Development Mailing List
0 siblings, 2 replies; 46+ messages in thread
From: Greg Maxwell @ 2025-10-30 6:15 UTC (permalink / raw)
To: Michael Tidwell; +Cc: Bitcoin Development Mailing List
[-- Attachment #1: Type: text/plain, Size: 30733 bytes --]
Prior softforks have stuck to using the more explicit "forward
compatibility" mechanisms -- e.g. OP_NOP3, a higher transaction version
number, or similar things that had no purpose (and would literally do
nothing), saw ~no use, and were non-standard, or scripts that just anyone
could have immediately taken at any time (e.g. funds free for the
collecting rather than something secure). In those cases I think people
have felt that the long discussion leading up to a softfork was enough to
acceptably mitigate the risk. Tapscript was specifically designed to make
upgrades even safer and easier by making it so that the mere presence of
any forward-compat opcode (OP_SUCCESSn) makes the whole script insecure
until that opcode is given real semantics.
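
Concretely, under BIP342 the check for unknown opcodes happens while the
script is being decoded, before any execution, so a single OP_SUCCESSx
byte makes the whole tapscript unconditionally spendable. A simplified
sketch of that rule (not Bitcoin Core's actual code):

# OP_SUCCESSx opcodes per BIP342: 80, 98, 126-129, 131-134, 137-138,
# 141-142, 149-153, 187-254.
OP_SUCCESS = {0x50, 0x62, *range(0x7e, 0x82), *range(0x83, 0x87),
              0x89, 0x8a, 0x8d, 0x8e, *range(0x95, 0x9a),
              *range(0xbb, 0xff)}

def tapscript_is_anyone_can_spend(script: bytes) -> bool:
    i = 0
    while i < len(script):
        op = script[i]
        if op in OP_SUCCESS:
            return True        # whole script validates unconditionally
        # skip pushed data so it isn't misread as opcodes
        if 1 <= op <= 75:
            i += 1 + op
        elif op == 76:         # OP_PUSHDATA1
            i += 2 + script[i + 1]
        elif op == 77:         # OP_PUSHDATA2
            i += 3 + int.from_bytes(script[i + 1:i + 3], "little")
        elif op == 78:         # OP_PUSHDATA4
            i += 5 + int.from_bytes(script[i + 1:i + 5], "little")
        else:
            i += 1
    return False

So nobody can have real funds secured behind an OP_SUCCESSx script today,
which is what makes later redefining those opcodes non-confiscatory, in
contrast to shrinking the allowed size of scriptPubKeys that were
previously valid and usable.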
The proposal to limit scriptpubkey size is worse because longer scripts
had purposes and real use (e.g. larger multisigs), and unlike NOP3 or tx
versions (where you could be argued to deserve issues if you did something
so weird and abused a forward-compat mechanism), people running into a 520
limit could have been doing something pretty boring. In fact I see that my
own watching wallets have some scriptpubkeys beyond that size (big
multisigs)-- though I don't *think* any are still in use, even I'm not
absolutely sure that such a restriction wouldn't confiscate some of my own
funds, and it's a pain in the rear to check, having to bring offline stuff
online, etc.
Confiscation isn't limited to timelocks either, since the victims may
simply not know about the consensus change and, while they could move
their coins, they don't. One of the big advantages many people see in
Bitcoin is that you can put your keys in a time capsule in the foundation
of your home and trust that they're still going to be there and you'll be
able to use your coins a decade later... that you don't have to watch out
for banks drilling your safe deposit boxes or people putting public
notices in classified ads laying claim to your property.
I don't even think bitcoin has ever policy-restricted something that was
in active use, much less softforked out something like that. I wouldn't
say it is impossible, but I think on balance it would call for a notice
period long enough that any reasonable person could have taken notice,
taken action, or at least spoken up. And since there is no requirement to
monitor the network, and that is part of bitcoin's value proposition, the
amount of time considered reasonable ought to be quite long. Which is also
at odds with the emergency-measures position being taken by proponents of
such changes.
(Those emergency claims I think are entirely unjustified in any case: even
if you accept the worst version of their narrative, with the historical
chain being made _illegal_, one could simply produce node software that
starts from a well-known embedded utxo snapshot and doesn't process
historical blocks. Such a thing would in principle be a reduction in the
security model, but balanced against the practical and realistic impact of
potentially confiscating coins I think it looks pretty fine by comparison.
It would also be fully consensus compatible, assuming no reorg below that
point, and can be done right now by anyone who cares in a totally
permissionless and coercion-free manner.)
On Thu, Oct 30, 2025 at 5:13 AM Michael Tidwell <mtidwell021@gmail.com>
wrote:
> Greg,
>
> > Also some risk of creating a new scarce asset class.
>
> Well, Casey Rodarmor is in the thread, so lol maybe.
>
> Anyway, point taken. I want to be 100% sure I understand the
> hypotheticals: there could be an off-chain, presigned, transactions that
> needs more than 520 bytes for the scriptPubKey and, as Poelstra said, could
> even form a chain of presigned transactions under some complex, previously
> unknown, scheme that only becomes public after this change is made. Can you
> confirm?
>
> Would it also be a worry that a chain of transactions using said utxo
> could commit to some bizarre scheme, for instance a taproot transaction
> utxo that later is presigned committed back to P2MS larger than 520 bytes?
> If so, I think I get it, you're saying to essentially guarantee no
> confiscation we'd never be able to upgrade old UTXOs and we'd need to track
> them forever to prevent unlikely edge cases?
> Does the presigned chain at least stop needing to be tracked once the
> given UTXO co-mingles with a post-update coinbase utxo?
>
> If so, this is indeed complex! This seems pretty insane both for the
> complexity of implementing and the unlikely edge cases. Has Core ever made
> a decision of (acceptable risk) to upgrade with protection of onchain utxos
> but not hypothetical unpublished ones?
> Aren't we going to run into the same situation if we do an op code clean
> up in the future if we had people presign/commit to op codes that are no
> longer consensus valid?
>
> Tidwell
>
> On Wednesday, October 29, 2025 at 10:32:10 PM UTC-4 Greg Maxwell wrote:
>
>> "A few bytes" might be on the order of forever 10% increase in the UTXO
>> set size, plus a full from-network resync of all pruned nodes and a full
>> (e.g. most of day outage) reindex of all unpruned nodes. Not
>> insignificant but also not nothing. Such a portion of the existing utxo
>> size is not from outputs over 520 bytes in size, so as a scheme for utxo
>> set size reduction the addition of MHT tracking would probably make it a
>> failure.
>>
>> Also some risk of creating some new scarce asset class, txouts consisting
>> of primordial coins that aren't subject to the new rules... sounds like the
>> sort of thing that NFT degens would absolutely love. That might not be an
>> issue *generally* for some change with confiscation risk, but for a change
>> that is specifically intended to lobotomize bitcoin to make it less useful
>> to NFT degens, maybe not such a great idea. :P
>>
>> I mentioned it at all because I thought it could potentially be of some
>> use, I'm just more skeptical of it for the current context. Also luke-jr
>> and crew has moved on to actually propose even more invasive changes than
>> just limiting the script size, which I anticipated, and has much more
>> significant issues. Just size limiting outputs likely doesn't harm any
>> interests or usages-- and so probably could be viable if the confiscation
>> issue was addressed, but it also doesn't stick it to people transacting in
>> ways the priests of ocean mining dislike.
>>
>> > I believe you're pointing out the idea of non economically-rational
>> spammers?
>>
>> I think it's a mistake to conclude the spammers are economically
>> irrational-- they're often just responding to different economics which may
>> be less legible to your analysis. In particular, NFT degens prefer the
>> high cost of transactions as a thing that makes their tokens scarce and
>> gives them value. -- otherwise they wouldn't be swapping for one less
>> efficient encoding for another, they're just be using another blockchain
>> (perhaps their own) entirely.
>>
>>
>>
>>
>> On Thu, Oct 30, 2025 at 1:16 AM Michael Tidwell <mtidw...@gmail.com>
>> wrote:
>>
>>> > MRH tracking might make that acceptable, but comes at a high cost
>>> which I think would clearly not be justified.
>>>
>>> Greg, I want to ask/challenge how bad this is, this seems like a
>>> generally reusable primitive that could make other upgrades more feasible
>>> that also have the same strict confiscation risk profile.
>>> IIUC, the major pain is, 1 big reindex cost + a few bytes per utxo?
>>>
>>> Poelstra,
>>>
>>> > I don't think this is a great idea -- it would be technically hard to
>>> implement and slow deployment indefinitely.
>>>
>>> I would like to know how much of a deal breaker this is in your opinion.
>>> Is MRH tracking off the table? In terms of the hypothetical presigned
>>> transactions that may exist using P2MS, is this a hard enough reason to
>>> require a MRH idea?
>>>
>>> Greg,
>>>
>>> > So, paradoxically this limit might increase the amount of non-prunable
>>> data
>>>
>>> I believe you're pointing out the idea of non economically-rational
>>> spammers? We already see actors ignoring cheaper witness inscription
>>> methods. If spam shifts to many sub-520 fake pubkey outputs (which I
>>> believe is less harmful than stamps), that imo is a separate UTXO cost
>>> discussion. (like a SF to add weight to outputs). Anywho, this point alone
>>> doesn't seem sufficient to add as a clear negative reason for someone
>>> opposed to the proposal.
>>>
>>> Thanks,
>>> Tidwell
>>> On Wednesday, October 22, 2025 at 5:55:58 AM UTC-4 moonsettler wrote:
>>>
>>>> > Confiscation is a problem because of presigned transactions
>>>>
>>>> Allow 10000 bytes of total scriptPubKey size in each block counting
>>>> only those outputs that are larger than x (520 as proposed).
>>>> The code change is pretty minimal from the most obvious implementation
>>>> of the original rule.
>>>>
>>>> That makes it technically non-confiscatory. Still non-standard, but if
>>>> anyone out there so obnoxiously foot-gunned themselves, they can't claim
>>>> they were rugged by the devs.
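For concreteness, the per-block budget quoted above could be expressed
roughly as follows (a minimal Python sketch with assumed constants and
data shapes, not an actual implementation):

    SPK_SIZE_LIMIT = 520        # outputs larger than this count toward the budget
    BLOCK_SPK_BUDGET = 10_000   # total bytes of such outputs allowed per block

    def block_within_budget(transactions) -> bool:
        """transactions: iterable of lists of output scriptPubKeys (bytes)."""
        oversized_total = 0
        for outputs in transactions:
            for spk in outputs:
                if len(spk) > SPK_SIZE_LIMIT:
                    oversized_total += len(spk)
        return oversized_total <= BLOCK_SPK_BUDGET

    # Example: a single 9,000-byte output fits; twenty 600-byte outputs do not.
    assert block_within_budget([[b"\x6a" * 9_000]])
    assert not block_within_budget([[b"\x00" * 600] for _ in range(20)])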
>>>>
>>>> BR,
>>>> moonsettler
>>>>
>>>> On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <
>>>> ad...@qrsnap.io> wrote:
>>>>
>>>> > Hey,
>>>> >
>>>> > First, thank you to everyone who responded, and please continue to do
>>>> so. There were many thought provoking responses and this did shift my
>>>> perspective quite a bit from the original post, which in of itself was the
>>>> goal to a degree.
>>>> >
>>>> > I am currently only going to respond to all of the current concerns.
>>>> Acks; though I like them will be ignored unless new discoveries are
>>>> included.
>>>> >
>>>> > Tl;dr (Portlands Perspective)
>>>> > - Confiscation is a problem because of presigned transactions
>>>> > - DoS mitigation could also occur through marking UTXOs as
>>>> unspendable if > 520 bytes, this would preserve the proof of publication.
>>>> > - Timeout / Sunset logic is compelling
>>>> > - The (n) value of acceptable needed bytes is contentious with the
>>>> lower suggested limit being 67
>>>> > - Congestion control is worth a look?
>>>> >
>>>> > Next Step:
>>>> > - Deeper discussion at the individual level: Antoine Poinsot and GCC
>>>> overlap?
>>>> > - Write an implementation.
>>>> > - Decide to pursue BIP
>>>> >
>>>> > Responses
>>>> >
>>>> > Andrew Poelstra:
>>>> > > There is a risk of confiscation of coins which have pre-signed but
>>>> > > unpublished transactions spending them to new outputs with large
>>>> > > scriptPubKeys. Due to long-standing standardness rules, and the
>>>> presence
>>>> > > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that
>>>> any
>>>> > > such transactions exist.
>>>> >
>>>> > PortlandHODL: This is a risk that can be incurred and likely not
>>>> possible to mitigate as there could be possible chains of transactions so
>>>> even when recursively iterating over a chain there is a chance that a
>>>> presigned breaks this rule. Every idea I have had from block redemption
>>>> limits on prevouts seems to just be a coverage issue where you can make the
>>>> confiscation less likely but not completely mitigated.
>>>> >
>>>> > Second, there are already TXs that effectively have been confiscated
>>>> at the policy level (P2SH Cleanstack violation) where the user can not find
>>>> any miner with a policy to accept these into their mempool. (3 years)
>>>> >
>>>> > /dev /fd0
>>>> > > so it would be great if this was restricted to OP_RETURN
>>>> >
>>>> > PortlandHODL: I reject this completely as this would remove the
>>>> UTXOset omission for the scriptPubkey and encourage miners to subvert the
>>>> OP_RETURN restriction and instead just use another op_code, this also do
>>>> not hit on some of the most important factors such as DoS mitigation and
>>>> legacy script attack surface reduction.
>>>> >
>>>> > Peter Todd
>>>> > > NACK ...
>>>> >
>>>> > PortlandHODL: You NACK'd for the same reasons that I stated in my OP,
>>>> without including any additional context or reasoning.
>>>> >
>>>> > jeremy
>>>> > > I think that this type of rule is OK if we do it as a "sunsetting"
>>>> restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2
>>>> years, 5 years, 10 years).
>>>> >
>>>> > If action is taken, this is the most reasonable approach. Alleviating
>>>> confiscatory concerns through deferral.
>>>> >
>>>> > > You can argue against this example probably, but it is worth
>>>> considering that absence of evidence of use is not evidence of absence of
>>>> use and I myself feel that overall our understanding of Bitcoin transaction
>>>> programming possibilities is still early. If you don't like this example, I
>>>> can give you others (probably).
>>>> >
>>>> > Agreed and this also falls into the reasoning for deciding to utilize
>>>> point 1 in your response. My thoughts on this would be along the lines of
>>>> proof of publication as this change only has the effect of stripping away
>>>> the executable portion of a script between 521 and 10_000 bytes or the
>>>> published data portion if > 10_000 bytes which the same data could likely
>>>> be published in chunked segments using outpoints.
>>>> >
>>>> > Andrew Poelstra:
>>>> > > Aside from proof-of-publication (i.e. data storage directly in the
>>>> UTXO
>>>> > > set) there is no usage of script which can't be equally (or better)
>>>> > > accomplished by using a Segwit v0 or Taproot script.
>>>> >
>>>> > This sums up the majority of future usecase concern
>>>> >
>>>> > Anthony Towns:
>>>> > > (If you restricted the change to only applying to scripts that used
>>>> > non-push operators, that would probably still provide upgrade
>>>> flexibility
>>>> > while also preventing potential script abuses. But it wouldn't do
>>>> anything
>>>> > to prevent publishing data)
>>>> >
>>>> > Could this not be done as segments in multiple outpoints using a
>>>> coordination outpoint? I fail to see why publication proof must be in a
>>>> single chunk. This does though however bring another alternative to mind,
>>>> just making these outpoints unspendable but not invalidate the block
>>>> through inclusion...
>>>> >
>>>> > > As far as the "but contiguous data will be regulated more strictly"
>>>> > argument goes; I don't think "your honour, my offensive content has
>>>> > strings of 4d0802 every 520 bytes
>>>> >
>>>> > Correct, this was never meant to resolve this issue.
>>>> >
>>>> > Luke Dashjr:
>>>> > > If we're going this route, we should just close all the gaps for
>>>> the immediate future:
>>>> >
>>>> > To put it nicely, this is completely beyond the scope of what is
>>>> being proposed.
>>>> >
>>>> > Guus Ellenkamp:
>>>> > > If there are really so few OP_RETURN outputs more than 144 bytes,
>>>> then
>>>> > why increase the limit if that change is so controversial? It seems
>>>> > people who want to use a larger OP_RETURN size do it anyway, even
>>>> with
>>>> > the current default limits.
>>>> >
>>>> > Completely off topic and irrelevant
>>>> >
>>>> > Greg Tonoski:
>>>> > > Limiting the maximum size of the scriptPubKey of a transaction to
>>>> 67 bytes.
>>>> >
>>>> > This leave no room to deal with broken hashing algorithms and very
>>>> little future upgradability for hooks. The rest of these points should be
>>>> merged with Lukes response and either hijack my thread or start a new one
>>>> with the increased scope, any approach I take will only be related to the
>>>> ScriptPubkey
>>>> >
>>>> > Keagan McClelland:
>>>> > > Hard NACK on capping the witness size as that would effectively ban
>>>> large scripts even in the P2SH wrapper which undermines Bitcoin's ability
>>>> to be an effectively programmable money.
>>>> >
>>>> > This has nothing to do with the witness size or even the P2SH wrapper
>>>> >
>>>> > Casey Rodarmor:
>>>> > > I think that "Bitcoin could need it in the future?" might be a good
>>>> enough
>>>> > reason not to do this.
>>>> >
>>>> > > Script pubkeys are the only variable-length transaction fields
>>>> which can be
>>>> > covered by input signatures, which might make them useful for future
>>>> soft
>>>> > forks. I can imagine confidential asset schemes or post-quantum coin
>>>> recovery
>>>> > schemes requiring large proofs in the outputs, where the validity of
>>>> the proof
>>>> > determined whether or not the transaction is valid, and thus require
>>>> the
>>>> > proofs to be in the outputs, and not just a hash commitment.
>>>> >
>>>> > Would the ability to publish the data alone be enough? Example make
>>>> the output unspendable but allow for the existence of the bytes to be
>>>> covered through the signature?
>>>> >
>>>> >
>>>> > Antoine Poinsot:
>>>> > > Limiting the size of created scriptPubKeys is not a sufficient
>>>> mitigation on its own
>>>> > I fail to see how this would not be sufficient? To DoS you need 2
>>>> things inputs with ScriptPubkey redemptions + heavy op_codes that require
>>>> unique checks. Example DUPing stack element again and again doesn't work.
>>>> This then leads to the next part is you could get up to unique complex
>>>> operations with the current (n) limit included per input.
>>>> >
>>>> > > One of the goal of BIP54 is to address objections to Matt's earlier
>>>> proposal, notably the (in my
>>>> > opinion reasonable) confiscation concerns voiced by Russell O'Connor.
>>>> Limiting the size of
>>>> > scriptPubKeys would in this regard be moving in the opposite
>>>> direction.
>>>> >
>>>> > Some notes is I would actually go as far as to say the confiscation
>>>> risk is higher with the TX limit proposed in BIP54 as we actually have
>>>> proof of redemption of TXs that break that rule and the input set to do
>>>> this already exists on-chain no need to even wonder about the whole
>>>> presigned. bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
>>>> >
>>>> > Please let me know if I am incorrect on any of this.
>>>> >
>>>> > > Furthermore, it's always possible to get the biggest bang for our
>>>> buck in a first step
>>>> >
>>>> > Agreed on bang for the buck regarding DoS.
>>>> >
>>>> > My final point here would be that I would like to discuss more, and
>>>> this is response is from the initial view of your response and could be
>>>> incomplete or incorrect, This is just my in the moment response.
>>>> >
>>>> > Antoine Riard:
>>>> > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
>>>> favor of prioritizing
>>>> > a timewarp fix and limiting dosy spends by old redeem scripts
>>>> >
>>>> > The idea of congestion control is interesting, but this solution
>>>> should significantly reduce the total DoS severity of known vectors.
>>>> >
>>>> > On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
>>>> >
>>>> > > Limits on block construction that cross transactions make it harder
>>>> to accurately estimate fees and greatly complicate optimal block
>>>> construction-- the latter being important because smarter and more computer
>>>> powered mining code generating higher profits is a pro centralization
>>>> factor.
>>>> > >
>>>> > > In terms of effectiveness the "spam" will just make itself
>>>> indistinguishable from the most common transaction traffic from the
>>>> perspective of such metrics-- and might well drive up "spam" levels because
>>>> the higher embedding cost may make some of them use more transactions. The
>>>> competition for these buckets by other traffic could make it effectively a
>>>> block size reduction even against very boring ordinary transactions. ...
>>>> which is probably not what most people want.
>>>> > >
>>>> > > I think it's important to keep in mind that bitcoin fee levels even
>>>> at 0.1s/vb are far beyond what other hosting services and other blockchains
>>>> cost-- so anyone still embedding data in bitcoin *really* want to be there
>>>> for some reason and aren't too fee sensitive or else they'd already be
>>>> using something else... some are even in favor of higher costs since the
>>>> high fees are what create the scarcity needed for their seigniorage.
>>>> > >
>>>> > > But yeah I think your comments on priorities are correct.
>>>> > >
>>>> > >
>>>> > >
>>>> > > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com>
>>>> wrote:
>>>> > >
>>>> > > > Hi list,
>>>> > > >
>>>> > > > Thanks to the annex covered by the signature, I don't see how the
>>>> concern about limiting
>>>> > > > the extensibility of bitcoin script with future (post-quantum)
>>>> cryptographic schemes.
>>>> > > > Previous proposal of the annex were deliberately designed with
>>>> variable-length fields
>>>> > > > to flexibly accomodate a wide range of things.
>>>> > > >
>>>> > > > I believe there is one thing that has not been proposed to limit
>>>> unpredictable utterance
>>>> > > > of spams on the blockchain, namely congestion control of
>>>> categories of outputs (e.g "fat"
>>>> > > > scriptpubkeys). Let's say P a block period, T a type of
>>>> scriptpubkey and L a limiting
>>>> > > > threshold for the number of T occurences during the period P.
>>>> Beyond the L threshold, any
>>>> > > > additional T scriptpubkey is making the block invalid. Or
>>>> alternatively, any additional
>>>> > > > T generating / spending transaction must pay some weight
>>>> penalty...
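The congestion-control rule quoted above could look roughly like this (a
minimal Python sketch; the period, limit, and "fat output" classifier are
assumptions chosen only for illustration):

    P_BLOCKS = 144              # period length in blocks (assumed)
    L_LIMIT = 1_000             # max occurrences of type T per period (assumed)
    FAT_SPK_THRESHOLD = 520     # bytes; stands in for the output type T here

    def is_fat(spk: bytes) -> bool:
        return len(spk) > FAT_SPK_THRESHOLD

    def period_within_limit(recent_blocks) -> bool:
        """recent_blocks: the last P_BLOCKS blocks, each a list of scriptPubKeys."""
        fat_count = sum(is_fat(spk) for block in recent_blocks for spk in block)
        return fat_count <= L_LIMIT

    def extra_weight(spk: bytes, fat_count_so_far: int) -> int:
        """Alternative variant: charge a weight penalty instead of invalidating."""
        if is_fat(spk) and fat_count_so_far >= L_LIMIT:
            return 4 * len(spk)  # assumed penalty multiplier
        return 0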
>>>> > > >
>>>> > > > Congestion control, which of course comes with its lot of
>>>> shenanigans, is not very a novel
>>>> > > > idea as I believe it has been floated few times in the context of
>>>> lightning to solve mass
>>>> > > > closure, where channels out-priced at current feerate would have
>>>> their safety timelocks scale
>>>> > > > ups.
>>>> > > >
>>>> > > > No need anymore to come to social consensus on what is
>>>> quantitative "spam" or not. The blockchain
>>>> > > > would automatically throttle out the block space spamming
>>>> transaction. Qualitative spam it's another
>>>> > > > question, for anyone who has ever read shannon's theory of
>>>> communication only effective thing can
>>>> > > > be to limit the size of data payload. But probably we're kickly
>>>> back to a non-mathematically solvable
>>>> > > > linguistical question again [0].
>>>> > > >
>>>> > > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more
>>>> in favor of prioritizing
>>>> > > > a timewarp fix and limiting dosy spends by old redeem scripts,
>>>> rather than engaging in shooting
>>>> > > > ourselves in the foot with ill-designed "spam" consensus
>>>> mitigations.
>>>> > > >
>>>> > > > [0] If you have a soul of logician, it would be an interesting
>>>> demonstration to come with
>>>> > > > to establish that we cannot come up with mathematically or
>>>> cryptographically consensus means
>>>> > > > to solve qualitative "spam", which in a very pure sense is a
>>>> linguistical issue.
>>>> > > >
>>>> > > > Best,
>>>> > > > Antoine
>>>> > > > OTS hash:
>>>> 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
>>>> > > > Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a
>>>> écrit :
>>>> > > >
>>>> > > > > Hi,
>>>> > > > >
>>>> > > > > This approach was discussed last year when evaluating the best
>>>> way to mitigate DoS blocks in terms
>>>> > > > > of gains compared to confiscatory surface. Limiting the size of
>>>> created scriptPubKeys is not a
>>>> > > > > sufficient mitigation on its own, and has a non-trivial
>>>> confiscatory surface.
>>>> > > > >
>>>> > > > > One of the goal of BIP54 is to address objections to Matt's
>>>> earlier proposal, notably the (in my
>>>> > > > > opinion reasonable) confiscation concerns voiced by Russell
>>>> O'Connor. Limiting the size of
>>>> > > > > scriptPubKeys would in this regard be moving in the opposite
>>>> direction.
>>>> > > > >
>>>> > > > > Various approaches of limiting the size of spent scriptPubKeys
>>>> were discussed, in forms that would
>>>> > > > > mitigate the confiscatory surface, to adopt in addition to
>>>> (what eventually became) the BIP54 sigops
>>>> > > > > limit. However i decided against including this additional
>>>> measure in BIP54 because:
>>>> > > > > - of the inherent complexity of the discussed schemes, which
>>>> would make it hard to reason about
>>>> > > > > constructing transactions spending legacy inputs, and equally
>>>> hard to evaluate the reduction of
>>>> > > > > the confiscatory surface;
>>>> > > > > - more importantly, there is steep diminishing returns to
>>>> piling on more mitigations. The BIP54
>>>> > > > > limit on its own prevents an externally-motivated attacker from
>>>> *unevenly* stalling the network
>>>> > > > > for dozens of minutes, and a revenue-maximizing miner from
>>>> regularly stalling its competitions
>>>> > > > > for dozens of seconds, at a minimized cost in confiscatory
>>>> surface. Additional mitigations reduce
>>>> > > > > the worst case validation time by a smaller factor at a higher
>>>> cost in terms of confiscatory
>>>> > > > > surface. It "feels right" to further reduce those numbers, but
>>>> it's less clear what the tangible
>>>> > > > > gains would be.
>>>> > > > >
>>>> > > > > Furthermore, it's always possible to get the biggest bang for
>>>> our buck in a first step and going the
>>>> > > > > extra mile in a later, more controversial, soft fork. I
>>>> previously floated the idea of a "cleanup
>>>> > > > > v2" in private discussions, and i think besides a reduction of
>>>> the maximum scriptPubKey size it
>>>> > > > > should feature a consensus-enforced maximum transaction size
>>>> for the reasons stated here:
>>>> > > > >
>>>> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
>>>> I wouldn't hold my
>>>> > > > > breath on such a "cleanup v2", but it may be useful to have it
>>>> documented somewhere.
>>>> > > > >
>>>> > > > > I'm trying to not go into much details regarding which
>>>> mitigations were considered in designing
>>>> > > > > BIP54, because they are tightly related to the design of
>>>> various DoS blocks. But i'm always happy to
>>>> > > > > rehash the decisions made there and (re-)consider alternative
>>>> approaches on the semi-private Delving
>>>> > > > > thread [0] dedicated to this purpose. Feel free to ping me to
>>>> get access if i know you.
>>>> > > > >
>>>> > > > > Best,
>>>> > > > > Antoine Poinsot
>>>> > > > >
>>>> > > > > [0]:
>>>> https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
>>>> > > > >
>>>> > > > >
>>>> > > > >
>>>> > > > >
>>>> > > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
>>>> fre...@reardencode.com> wrote:
>>>> > > > >
>>>> > > > > >
>>>> > > > > >
>>>> > > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
>>>> > > > > >
>>>> > > > > > > But also given that there are essentially no violations and
>>>> no reason to
>>>> > > > > > > expect any I'm not sure the proposal is worth time relative
>>>> to fixes of
>>>> > > > > > > actual moderately serious DOS attack issues.
>>>> > > > > >
>>>> > > > > >
>>>> > > > > > I believe this limit would also stop most (all?) of
>>>> PortlandHODL's
>>>> > > > > > DoSblocks without having to make some of the other changes in
>>>> GCC. I
>>>> > > > > > think it's worthwhile to compare this approach to those
>>>> proposed by
>>>> > > > > > Antoine in solving these DoS vectors.
>>>> > > > > >
>>>> > > > > > Best,
>>>> > > > > >
>>>> > > > > > --Brandon
>>>> > > > > >
>>>> > > > > > --
>>>> > > > > > You received this message because you are subscribed to the
>>>> Google Groups "Bitcoin Development Mailing List" group.
>>>> > > > > > To unsubscribe from this group and stop receiving emails from
>>>> it, send an email to bitcoindev+...@googlegroups.com.
>>>> > > > > > To view this discussion visit
>>>> https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.
>>>>
>>>> > > >
>>>> > > > --
>>>> > > > You received this message because you are subscribed to the
>>>> Google Groups "Bitcoin Development Mailing List" group.
>>>> > > > To unsubscribe from this group and stop receiving emails from it,
>>>> send an email to bitcoindev+...@googlegroups.com.
>>>> > >
>>>> > > > To view this discussion visit
>>>> https://groups.google.com/d/msgid/bitcoindev/5135a031-a94e-49b9-ab31-a1eb48875ff2n%40googlegroups.com.
>>>>
>>>> >
>>>> > --
>>>> > You received this message because you are subscribed to the Google
>>>> Groups "Bitcoin Development Mailing List" group.
>>>> > To unsubscribe from this group and stop receiving emails from it,
>>>> send an email to bitcoindev+...@googlegroups.com.
>>>> > To view this discussion visit
>>>> https://groups.google.com/d/msgid/bitcoindev/78475572-3e52-44e4-8116-8f1a917995a4n%40googlegroups.com.
>>>>
>>>>
>>> --
>>> You received this message because you are subscribed to the Google
>>> Groups "Bitcoin Development Mailing List" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to bitcoindev+...@googlegroups.com.
>>>
>> To view this discussion visit
>>> https://groups.google.com/d/msgid/bitcoindev/f4755fb6-b031-4c60-b304-f123ba2ff473n%40googlegroups.com
>>> <https://groups.google.com/d/msgid/bitcoindev/f4755fb6-b031-4c60-b304-f123ba2ff473n%40googlegroups.com?utm_medium=email&utm_source=footer>
>>> .
>>>
>> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+unsubscribe@googlegroups.com.
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/c208e054-b85a-4a5c-9193-c28ef0d225c5n%40googlegroups.com
> <https://groups.google.com/d/msgid/bitcoindev/c208e054-b85a-4a5c-9193-c28ef0d225c5n%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAAS2fgSNtO6kpfm0XaBCufyExjJnxg87ttLGUgpUU9pkemZTig%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 34266 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-30 6:15 ` Greg Maxwell
@ 2025-10-30 8:55 ` Bitcoin Error Log
2025-10-30 17:40 ` Greg Maxwell
2025-10-30 20:27 ` [bitcoindev] Policy restrictions Was: " 'Russell O'Connor' via Bitcoin Development Mailing List
1 sibling, 1 reply; 46+ messages in thread
From: Bitcoin Error Log @ 2025-10-30 8:55 UTC (permalink / raw)
To: Bitcoin Development Mailing List
[-- Attachment #1.1: Type: text/plain, Size: 30197 bytes --]
Greg,
One correction: Bitcoin has significantly restricted a proven use case via
policy in the past. Maybe you won't think this qualifies, but it happened
while you were away, so I am curious about your assessment.
During the change to the mempoolfullrbf policy, I tried to stop Bitcoin
from killing the first-seen policy, which had been stable for the entire
history of Bitcoin. I had support from the original author of the change,
from multiple Core devs, and from multiple businesses that provided data
on how they offered zero-conf as a service to users via risk management.
The change was at least clearly demonstrated to be controversial and to
lack real consensus.
I'm happy to admit that no policy is enforceable, and that zero-conf was
"never safe", but we had a system that worked and made Bitcoin more useful
to the people who used it that way. The businesses simply monitored for
double-spends, imposed exposure limits per block, and gated actual
delivery separately from the checkout UX. It worked, now it does not, and
the only reason is a policy change.
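For concreteness, a hypothetical sketch of that kind of gating (the class,
names, and exposure figure are invented for illustration, not any
business's actual system):

    from dataclasses import dataclass, field

    MAX_EXPOSURE_PER_BLOCK = 0.05   # BTC of unconfirmed value per block (assumed)

    @dataclass
    class ZeroConfGate:
        exposure: float = 0.0
        seen_spends: dict = field(default_factory=dict)   # prevout -> txid

        def accept(self, txid: str, spent_outpoints: list, amount: float) -> bool:
            """Accept an unconfirmed payment only if none of its inputs conflict
            with a spend already seen and the per-block exposure budget holds."""
            if any(self.seen_spends.get(op, txid) != txid for op in spent_outpoints):
                return False                 # conflicting spend observed
            if self.exposure + amount > MAX_EXPOSURE_PER_BLOCK:
                return False                 # hold delivery until confirmation
            for op in spent_outpoints:
                self.seen_spends[op] = txid
            self.exposure += amount
            return True

        def on_new_block(self):
            """Reset the exposure budget once a block confirms the receipts."""
            self.exposure = 0.0

    # Example: a second payment double-spending the same outpoint is rejected.
    gate = ZeroConfGate()
    assert gate.accept("tx1", ["utxo:0"], 0.01)
    assert not gate.accept("tx2", ["utxo:0"], 0.01)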
The problem with claiming that policy is not a means of change is that you
must then also admit there is no need for any RBF flags at all, for
arguing about data-spam relay, or for any wide policy to be a concern of
Bitcoin Core at all (particularly when it is applied speculatively or
subjectively).
Thank you, and sorry for the side topic.
~John
On Thursday, October 30, 2025 at 6:40:10 AM UTC Greg Maxwell wrote:
Prior softforks have stuck to using the more explicit "forward
compatibility" mechanisms, so -- e.g. if you use OP_NOP3 or a higher
transaction version number or whatever that had no purpose (and would
literally do nothing), saw ~no use, and was non-standard, or scripts that
just anyone could have immediately taken at any time (e.g. funds free for
the collecting rather than something secure)... then in that case I think
people have felt that the long discussion leading up to a softfork was
enough to acceptably mitigate the risk. Tapscript was specifically
designed to make upgrades even safer and easier by making it so that the
mere presence of any forward compat opcode (OP_SUCCESSn) makes the whole
script insecure until that opcode is in use.
The proposal to limit scriptpubkey size is worse because longer scripts had
purposes and use (e.g. larger multisigs) and unlike some NOP3 or txversions
where you could be argued to deserve issues if you did something so weird
and abused a forward compat mechanism, people running into a 520 limit
could have been pretty boring (and I see my own watching wallets have some
scriptpubkeys beyond that size (big multisigs), in fact-- though I don't
*think* any are still in use, but even I'm not absolutely sure that such a
restriction wouldn't confiscate some of my own funds--- and it's a pain in
the rear to check, having to bring offline stuff online, etc).
Confiscation isn't just limited to timelocks, since the victims of it may
just not know about the consensus change and while they could move their
coins they don't. One of the big advantages many people see in Bitcoin is
that you can put your keys in a time capsule in the foundation of your home
and trust that they're still going to be there and you'll be able to use
your coins a decade later. ... that you don't have to watch out for banks
drilling your safe deposit boxes or people putting public notices in
classified ads laying claim to your property.
I don't even think bitcoin has ever policy restricted something that was in
active use, much less softforked out something like that. I wouldn't say
it was impossible but I think on the balance it would favor a notice period
so that any reasonable person could have taken notice, taken action, or at
least spoke up. But since there is no requirement to monitor and that's
part of bitcoin's value prop the amount of time to consider reasonable
ought to be quite long. Which also is at odds with the emergency measures
position being taken by proponents of such changes.
(which also, I think are just entirely unjustified, even if you accept the
worst version of their narrative with the historical chain being made
_illegal_, one could simply produce node software that starts from a well
known embedded utxo snapshot and doesn't process historical blocks. Such
a thing would be in principle a reduction in the security model, but
balances against the practical and realistic impact of potentially
confiscating coins I think it looks pretty fine by comparison. It would
also be fully consensus compatible, assuming no reorg below that point, and
can be done right now by anyone who cares in a totally permissionless and
coercion free manner)
On Thu, Oct 30, 2025 at 5:13 AM Michael Tidwell <mtidw...@gmail.com> wrote:
Greg,
> Also some risk of creating a new scarce asset class.
Well, Casey Rodarmor is in the thread, so lol maybe.
Anyway, point taken. I want to be 100% sure I understand the hypotheticals:
there could be an off-chain, presigned, transactions that needs more than
520 bytes for the scriptPubKey and, as Poelstra said, could even form a
chain of presigned transactions under some complex, previously unknown,
scheme that only becomes public after this change is made. Can you confirm?
Would it also be a worry that a chain of transactions using said utxo could
commit to some bizarre scheme, for instance a taproot transaction utxo that
later is presigned committed back to P2MS larger than 520 bytes? If so, I
think I get it, you're saying to essentially guarantee no confiscation we'd
never be able to upgrade old UTXOs and we'd need to track them forever to
prevent unlikely edge cases?
Does the presigned chain at least stop needing to be tracked once the given
UTXO co-mingles with a post-update coinbase utxo?
If so, this is indeed complex! This seems pretty insane both for the
complexity of implementing and the unlikely edge cases. Has Core ever made
a decision of (acceptable risk) to upgrade with protection of onchain utxos
but not hypothetical unpublished ones?
Aren't we going to run into the same situation if we do an op code clean up
in the future if we had people presign/commit to op codes that are no
longer consensus valid?
Tidwell
On Wednesday, October 29, 2025 at 10:32:10 PM UTC-4 Greg Maxwell wrote:
"A few bytes" might be on the order of forever 10% increase in the UTXO set
size, plus a full from-network resync of all pruned nodes and a full (e.g.
most of day outage) reindex of all unpruned nodes. Not insignificant but
also not nothing. Such a portion of the existing utxo size is not from
outputs over 520 bytes in size, so as a scheme for utxo set size reduction
the addition of MHT tracking would probably make it a failure.
Also some risk of creating some new scarce asset class, txouts consisting
of primordial coins that aren't subject to the new rules... sounds like the
sort of thing that NFT degens would absolutely love. That might not be an
issue *generally* for some change with confiscation risk, but for a change
that is specifically intended to lobotomize bitcoin to make it less useful
to NFT degens, maybe not such a great idea. :P
I mentioned it at all because I thought it could potentially be of some
use, I'm just more skeptical of it for the current context. Also luke-jr
and crew has moved on to actually propose even more invasive changes than
just limiting the script size, which I anticipated, and has much more
significant issues. Just size limiting outputs likely doesn't harm any
interests or usages-- and so probably could be viable if the confiscation
issue was addressed, but it also doesn't stick it to people transacting in
ways the priests of ocean mining dislike.
> I believe you're pointing out the idea of non economically-rational
spammers?
I think it's a mistake to conclude the spammers are economically
irrational-- they're often just responding to different economics which may
be less legible to your analysis. In particular, NFT degens prefer the
high cost of transactions as a thing that makes their tokens scarce and
gives them value. -- otherwise they wouldn't be swapping for one less
efficient encoding for another, they're just be using another blockchain
(perhaps their own) entirely.
On Thu, Oct 30, 2025 at 1:16 AM Michael Tidwell <mtidw...@gmail.com> wrote:
> MRH tracking might make that acceptable, but comes at a high cost which I
think would clearly not be justified.
Greg, I want to ask/challenge how bad this is, this seems like a generally
reusable primitive that could make other upgrades more feasible that also
have the same strict confiscation risk profile.
IIUC, the major pain is, 1 big reindex cost + a few bytes per utxo?
Poelstra,
> I don't think this is a great idea -- it would be technically hard to
implement and slow deployment indefinitely.
I would like to know how much of a deal breaker this is in your opinion. Is
MRH tracking off the table? In terms of the hypothetical presigned
transactions that may exist using P2MS, is this a hard enough reason to
require a MRH idea?
Greg,
> So, paradoxically this limit might increase the amount of non-prunable
data
I believe you're pointing out the idea of non economically-rational
spammers? We already see actors ignoring cheaper witness inscription
methods. If spam shifts to many sub-520 fake pubkey outputs (which I
believe is less harmful than stamps), that imo is a separate UTXO cost
discussion. (like a SF to add weight to outputs). Anywho, this point alone
doesn't seem sufficient to add as a clear negative reason for someone
opposed to the proposal.
Thanks,
Tidwell
On Wednesday, October 22, 2025 at 5:55:58 AM UTC-4 moonsettler wrote:
> Confiscation is a problem because of presigned transactions
Allow 10000 bytes of total scriptPubKey size in each block counting only
those outputs that are larger than x (520 as proposed).
The code change is pretty minimal from the most obvious implementation of
the original rule.
That makes it technically non-confiscatory. Still non-standard, but if
anyone out there so obnoxiously foot-gunned themselves, they can't claim
they were rugged by the devs.
BR,
moonsettler
On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <ad...@qrsnap.io>
wrote:
> Hey,
>
> First, thank you to everyone who responded, and please continue to do so.
There were many thought provoking responses and this did shift my
perspective quite a bit from the original post, which in of itself was the
goal to a degree.
>
> I am currently only going to respond to all of the current concerns.
Acks; though I like them will be ignored unless new discoveries are
included.
>
> Tl;dr (Portlands Perspective)
> - Confiscation is a problem because of presigned transactions
> - DoS mitigation could also occur through marking UTXOs as unspendable if
> 520 bytes, this would preserve the proof of publication.
> - Timeout / Sunset logic is compelling
> - The (n) value of acceptable needed bytes is contentious with the lower
suggested limit being 67
> - Congestion control is worth a look?
>
> Next Step:
> - Deeper discussion at the individual level: Antoine Poinsot and GCC
overlap?
> - Write an implementation.
> - Decide to pursue BIP
>
> Responses
>
> Andrew Poelstra:
> > There is a risk of confiscation of coins which have pre-signed but
> > unpublished transactions spending them to new outputs with large
> > scriptPubKeys. Due to long-standing standardness rules, and the
presence
> > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> > such transactions exist.
>
> PortlandHODL: This is a risk that can be incurred and likely not possible
to mitigate as there could be possible chains of transactions so even when
recursively iterating over a chain there is a chance that a presigned
breaks this rule. Every idea I have had from block redemption limits on
prevouts seems to just be a coverage issue where you can make the
confiscation less likely but not completely mitigated.
>
> Second, there are already TXs that effectively have been confiscated at
the policy level (P2SH Cleanstack violation) where the user can not find
any miner with a policy to accept these into their mempool. (3 years)
>
> /dev /fd0
> > so it would be great if this was restricted to OP_RETURN
>
> PortlandHODL: I reject this completely as this would remove the UTXOset
omission for the scriptPubkey and encourage miners to subvert the OP_RETURN
restriction and instead just use another op_code, this also do not hit on
some of the most important factors such as DoS mitigation and legacy script
attack surface reduction.
>
> Peter Todd
> > NACK ...
>
> PortlandHODL: You NACK'd for the same reasons that I stated in my OP,
without including any additional context or reasoning.
>
> jeremy
> > I think that this type of rule is OK if we do it as a "sunsetting"
restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2
years, 5 years, 10 years).
>
> If action is taken, this is the most reasonable approach. Alleviating
confiscatory concerns through deferral.
>
> > You can argue against this example probably, but it is worth
considering that absence of evidence of use is not evidence of absence of
use and I myself feel that overall our understanding of Bitcoin transaction
programming possibilities is still early. If you don't like this example, I
can give you others (probably).
>
> Agreed and this also falls into the reasoning for deciding to utilize
point 1 in your response. My thoughts on this would be along the lines of
proof of publication as this change only has the effect of stripping away
the executable portion of a script between 521 and 10_000 bytes or the
published data portion if > 10_000 bytes which the same data could likely
be published in chunked segments using outpoints.
>
> Andrew Poelstra:
> > Aside from proof-of-publication (i.e. data storage directly in the UTXO
> > set) there is no usage of script which can't be equally (or better)
> > accomplished by using a Segwit v0 or Taproot script.
>
> This sums up the majority of future usecase concern
>
> Anthony Towns:
> > (If you restricted the change to only applying to scripts that used
> non-push operators, that would probably still provide upgrade flexibility
> while also preventing potential script abuses. But it wouldn't do
anything
> to prevent publishing data)
>
> Could this not be done as segments in multiple outpoints using a
coordination outpoint? I fail to see why publication proof must be in a
single chunk. This does though however bring another alternative to mind,
just making these outpoints unspendable but not invalidate the block
through inclusion...
>
> > As far as the "but contiguous data will be regulated more strictly"
> argument goes; I don't think "your honour, my offensive content has
> strings of 4d0802 every 520 bytes
>
> Correct, this was never meant to resolve this issue.
>
> Luke Dashjr:
> > If we're going this route, we should just close all the gaps for the
immediate future:
>
> To put it nicely, this is completely beyond the scope of what is being
proposed.
>
> Guus Ellenkamp:
> > If there are really so few OP_RETURN outputs more than 144 bytes, then
> why increase the limit if that change is so controversial? It seems
> people who want to use a larger OP_RETURN size do it anyway, even with
> the current default limits.
>
> Completely off topic and irrelevant
>
> Greg Tonoski:
> > Limiting the maximum size of the scriptPubKey of a transaction to 67
bytes.
>
> This leave no room to deal with broken hashing algorithms and very little
future upgradability for hooks. The rest of these points should be merged
with Lukes response and either hijack my thread or start a new one with the
increased scope, any approach I take will only be related to the
ScriptPubkey
>
> Keagan McClelland:
> > Hard NACK on capping the witness size as that would effectively ban
large scripts even in the P2SH wrapper which undermines Bitcoin's ability
to be an effectively programmable money.
>
> This has nothing to do with the witness size or even the P2SH wrapper
>
> Casey Rodarmor:
> > I think that "Bitcoin could need it in the future?" might be a good
enough
> reason not to do this.
>
> > Script pubkeys are the only variable-length transaction fields which
can be
> covered by input signatures, which might make them useful for future soft
> forks. I can imagine confidential asset schemes or post-quantum coin
recovery
> schemes requiring large proofs in the outputs, where the validity of the
proof
> determined whether or not the transaction is valid, and thus require the
> proofs to be in the outputs, and not just a hash commitment.
>
> Would the ability to publish the data alone be enough? Example make the
output unspendable but allow for the existence of the bytes to be covered
through the signature?
>
>
> Antoine Poinsot:
> > Limiting the size of created scriptPubKeys is not a sufficient
mitigation on its own
> I fail to see how this would not be sufficient? To DoS you need 2 things
inputs with ScriptPubkey redemptions + heavy op_codes that require unique
checks. Example DUPing stack element again and again doesn't work. This
then leads to the next part is you could get up to unique complex
operations with the current (n) limit included per input.
>
> > One of the goal of BIP54 is to address objections to Matt's earlier
proposal, notably the (in my
> opinion reasonable) confiscation concerns voiced by Russell O'Connor.
Limiting the size of
> scriptPubKeys would in this regard be moving in the opposite direction.
>
> Some notes is I would actually go as far as to say the confiscation risk
is higher with the TX limit proposed in BIP54 as we actually have proof of
redemption of TXs that break that rule and the input set to do this already
exists on-chain no need to even wonder about the whole presigned.
bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
>
> Please let me know if I am incorrect on any of this.
>
> > Furthermore, it's always possible to get the biggest bang for our buck
in a first step
>
> Agreed on bang for the buck regarding DoS.
>
> My final point here would be that I would like to discuss more, and this
is response is from the initial view of your response and could be
incomplete or incorrect, This is just my in the moment response.
>
> Antoine Riard:
> > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
favor of prioritizing
> a timewarp fix and limiting dosy spends by old redeem scripts
>
> The idea of congestion control is interesting, but this solution should
significantly reduce the total DoS severity of known vectors.
>
> On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
>
> > Limits on block construction that cross transactions make it harder to
accurately estimate fees and greatly complicate optimal block
construction-- the latter being important because smarter and more computer
powered mining code generating higher profits is a pro centralization
factor.
> >
> > In terms of effectiveness the "spam" will just make itself
indistinguishable from the most common transaction traffic from the
perspective of such metrics-- and might well drive up "spam" levels because
the higher embedding cost may make some of them use more transactions. The
competition for these buckets by other traffic could make it effectively a
block size reduction even against very boring ordinary transactions. ...
which is probably not what most people want.
> >
> > I think it's important to keep in mind that bitcoin fee levels even at
0.1s/vb are far beyond what other hosting services and other blockchains
cost-- so anyone still embedding data in bitcoin *really* want to be there
for some reason and aren't too fee sensitive or else they'd already be
using something else... some are even in favor of higher costs since the
high fees are what create the scarcity needed for their seigniorage.
> >
> > But yeah I think your comments on priorities are correct.
> >
> >
> >
> > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com>
wrote:
> >
> > > Hi list,
> > >
> > > Thanks to the annex covered by the signature, I don't see how the
concern about limiting
> > > the extensibility of bitcoin script with future (post-quantum)
cryptographic schemes.
> > > Previous proposal of the annex were deliberately designed with
variable-length fields
> > > to flexibly accomodate a wide range of things.
> > >
> > > I believe there is one thing that has not been proposed to limit
unpredictable utterance
> > > of spams on the blockchain, namely congestion control of categories
of outputs (e.g "fat"
> > > scriptpubkeys). Let's say P a block period, T a type of scriptpubkey
and L a limiting
> > > threshold for the number of T occurences during the period P. Beyond
the L threshold, any
> > > additional T scriptpubkey is making the block invalid. Or
alternatively, any additional
> > > T generating / spending transaction must pay some weight penalty...
> > >
> > > Congestion control, which of course comes with its lot of
shenanigans, is not very a novel
> > > idea as I believe it has been floated few times in the context of
lightning to solve mass
> > > closure, where channels out-priced at current feerate would have
their safety timelocks scale
> > > ups.
> > >
> > > No need anymore to come to social consensus on what is quantitative
"spam" or not. The blockchain
> > > would automatically throttle out the block space spamming
transaction. Qualitative spam it's another
> > > question, for anyone who has ever read shannon's theory of
communication only effective thing can
> > > be to limit the size of data payload. But probably we're kickly back
to a non-mathematically solvable
> > > linguistical question again [0].
> > >
> > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
favor of prioritizing
> > > a timewarp fix and limiting dosy spends by old redeem scripts, rather
than engaging in shooting
> > > ourselves in the foot with ill-designed "spam" consensus mitigations.
> > >
> > > [0] If you have a soul of logician, it would be an interesting
demonstration to come with
> > > to establish that we cannot come up with mathematically or
cryptographically consensus means
> > > to solve qualitative "spam", which in a very pure sense is a
linguistical issue.
> > >
> > > Best,
> > > Antoine
> > > OTS hash:
6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
> > > Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a écrit
:
> > >
> > > > Hi,
> > > >
> > > > This approach was discussed last year when evaluating the best way
to mitigate DoS blocks in terms
> > > > of gains compared to confiscatory surface. Limiting the size of
created scriptPubKeys is not a
> > > > sufficient mitigation on its own, and has a non-trivial
confiscatory surface.
> > > >
> > > > One of the goal of BIP54 is to address objections to Matt's earlier
proposal, notably the (in my
> > > > opinion reasonable) confiscation concerns voiced by Russell
O'Connor. Limiting the size of
> > > > scriptPubKeys would in this regard be moving in the opposite
direction.
> > > >
> > > > Various approaches of limiting the size of spent scriptPubKeys were
discussed, in forms that would
> > > > mitigate the confiscatory surface, to adopt in addition to (what
eventually became) the BIP54 sigops
> > > > limit. However i decided against including this additional measure
in BIP54 because:
> > > > - of the inherent complexity of the discussed schemes, which would
make it hard to reason about
> > > > constructing transactions spending legacy inputs, and equally hard
to evaluate the reduction of
> > > > the confiscatory surface;
> > > > - more importantly, there is steep diminishing returns to piling on
more mitigations. The BIP54
> > > > limit on its own prevents an externally-motivated attacker from
*unevenly* stalling the network
> > > > for dozens of minutes, and a revenue-maximizing miner from
regularly stalling its competitions
> > > > for dozens of seconds, at a minimized cost in confiscatory surface.
Additional mitigations reduce
> > > > the worst case validation time by a smaller factor at a higher cost
in terms of confiscatory
> > > > surface. It "feels right" to further reduce those numbers, but it's
less clear what the tangible
> > > > gains would be.
> > > >
> > > > Furthermore, it's always possible to get the biggest bang for our
buck in a first step and going the
> > > > extra mile in a later, more controversial, soft fork. I previously
floated the idea of a "cleanup
> > > > v2" in private discussions, and i think besides a reduction of the
maximum scriptPubKey size it
> > > > should feature a consensus-enforced maximum transaction size for
the reasons stated here:
> > > >
https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
I wouldn't hold my
> > > > breath on such a "cleanup v2", but it may be useful to have it
documented somewhere.
> > > >
> > > > I'm trying to not go into much details regarding which mitigations
were considered in designing
> > > > BIP54, because they are tightly related to the design of various
DoS blocks. But i'm always happy to
> > > > rehash the decisions made there and (re-)consider alternative
approaches on the semi-private Delving
> > > > thread [0] dedicated to this purpose. Feel free to ping me to get
access if i know you.
> > > >
> > > > Best,
> > > > Antoine Poinsot
> > > >
> > > > [0]:
https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
> > > >
> > > >
> > > >
> > > >
> > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
fre...@reardencode.com> wrote:
> > > >
> > > > >
> > > > >
> > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> > > > >
> > > > > > But also given that there are essentially no violations and no
reason to
> > > > > > expect any I'm not sure the proposal is worth time relative to
fixes of
> > > > > > actual moderately serious DOS attack issues.
> > > > >
> > > > >
> > > > > I believe this limit would also stop most (all?) of
PortlandHODL's
> > > > > DoSblocks without having to make some of the other changes in
GCC. I
> > > > > think it's worthwhile to compare this approach to those proposed
by
> > > > > Antoine in solving these DoS vectors.
> > > > >
> > > > > Best,
> > > > >
> > > > > --Brandon
> > > > >
> > > > > --
> > > > > You received this message because you are subscribed to the
Google Groups "Bitcoin Development Mailing List" group.
> > > > > To unsubscribe from this group and stop receiving emails from it,
send an email to bitcoindev+...@googlegroups.com.
> > > > > To view this discussion visit
https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.
> > >
> > > --
> > > You received this message because you are subscribed to the Google
Groups "Bitcoin Development Mailing List" group.
> > > To unsubscribe from this group and stop receiving emails from it,
send an email to bitcoindev+...@googlegroups.com.
> >
> > > To view this discussion visit
https://groups.google.com/d/msgid/bitcoindev/5135a031-a94e-49b9-ab31-a1eb48875ff2n%40googlegroups.com.
>
> --
> You received this message because you are subscribed to the Google Groups
"Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
email to bitcoindev+...@googlegroups.com.
> To view this discussion visit
https://groups.google.com/d/msgid/bitcoindev/78475572-3e52-44e4-8116-8f1a917995a4n%40googlegroups.com.
--
You received this message because you are subscribed to the Google Groups
"Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to bitcoindev+...@googlegroups.com.
To view this discussion visit
https://groups.google.com/d/msgid/bitcoindev/f4755fb6-b031-4c60-b304-f123ba2ff473n%40googlegroups.com
<https://groups.google.com/d/msgid/bitcoindev/f4755fb6-b031-4c60-b304-f123ba2ff473n%40googlegroups.com?utm_medium=email&utm_source=footer>
.
--
You received this message because you are subscribed to the Google Groups
"Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to bitcoindev+...@googlegroups.com.
To view this discussion visit
https://groups.google.com/d/msgid/bitcoindev/c208e054-b85a-4a5c-9193-c28ef0d225c5n%40googlegroups.com
<https://groups.google.com/d/msgid/bitcoindev/c208e054-b85a-4a5c-9193-c28ef0d225c5n%40googlegroups.com?utm_medium=email&utm_source=footer>
.
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/09d0aa74-1305-45bd-8da9-03d1506f5784n%40googlegroups.com.
[-- Attachment #1.2: Type: text/html, Size: 36103 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-30 2:26 ` Greg Maxwell
2025-10-30 3:36 ` Michael Tidwell
@ 2025-10-30 16:10 ` Tom Harding
2025-10-30 22:15 ` Doctor Buzz
1 sibling, 1 reply; 46+ messages in thread
From: Tom Harding @ 2025-10-30 16:10 UTC (permalink / raw)
To: Bitcoin Development Mailing List
[-- Attachment #1.1: Type: text/plain, Size: 23666 bytes --]
We should reflect on the goal of minimizing the UTXO set size. Would we as
readily say we should minimize the number of people/entities who hold L1
coins, or the number of ways each person/entity can hold them?
The dire concern with UTXO set size was born in 2012, when the core
Bitcoin software was optimized for mining rather than for holding and
transferring coins. Some geniuses were involved with that change. Satoshi
was not one of them.
On Wednesday, October 29, 2025 at 7:32:10 PM UTC-7 Greg Maxwell wrote:
"A few bytes" might be on the order of forever 10% increase in the UTXO set
size, plus a full from-network resync of all pruned nodes and a full (e.g.
most of day outage) reindex of all unpruned nodes. Not insignificant but
also not nothing. Such a portion of the existing utxo size is not from
outputs over 520 bytes in size, so as a scheme for utxo set size reduction
the addition of MHT tracking would probably make it a failure.
Also some risk of creating some new scarce asset class, txouts consisting
of primordial coins that aren't subject to the new rules... sounds like the
sort of thing that NFT degens would absolutely love. That might not be an
issue *generally* for some change with confiscation risk, but for a change
that is specifically intended to lobotomize bitcoin to make it less useful
to NFT degens, maybe not such a great idea. :P
I mentioned it at all because I thought it could potentially be of some
use, I'm just more skeptical of it for the current context. Also luke-jr
and crew has moved on to actually propose even more invasive changes than
just limiting the script size, which I anticipated, and has much more
significant issues. Just size limiting outputs likely doesn't harm any
interests or usages-- and so probably could be viable if the confiscation
issue was addressed, but it also doesn't stick it to people transacting in
ways the priests of ocean mining dislike.
> I believe you're pointing out the idea of non economically-rational
spammers?
I think it's a mistake to conclude the spammers are economically
irrational-- they're often just responding to different economics which may
be less legible to your analysis. In particular, NFT degens prefer the
high cost of transactions as a thing that makes their tokens scarce and
gives them value. -- otherwise they wouldn't be swapping for one less
efficient encoding for another, they're just be using another blockchain
(perhaps their own) entirely.
On Thu, Oct 30, 2025 at 1:16 AM Michael Tidwell <mtidw...@gmail.com> wrote:
> MRH tracking might make that acceptable, but comes at a high cost which I
think would clearly not be justified.
Greg, I want to ask/challenge how bad this is, this seems like a generally
reusable primitive that could make other upgrades more feasible that also
have the same strict confiscation risk profile.
IIUC, the major pain is, 1 big reindex cost + a few bytes per utxo?
Poelstra,
> I don't think this is a great idea -- it would be technically hard to
implement and slow deployment indefinitely.
I would like to know how much of a deal breaker this is in your opinion. Is
MRH tracking off the table? In terms of the hypothetical presigned
transactions that may exist using P2MS, is this a hard enough reason to
require a MRH idea?
Greg,
> So, paradoxically this limit might increase the amount of non-prunable
data
I believe you're pointing out the idea of non economically-rational
spammers? We already see actors ignoring cheaper witness inscription
methods. If spam shifts to many sub-520 fake pubkey outputs (which I
believe is less harmful than stamps), that imo is a separate UTXO cost
discussion. (like a SF to add weight to outputs). Anywho, this point alone
doesn't seem sufficient to add as a clear negative reason for someone
opposed to the proposal.
Thanks,
Tidwell
On Wednesday, October 22, 2025 at 5:55:58 AM UTC-4 moonsettler wrote:
> Confiscation is a problem because of presigned transactions
Allow 10000 bytes of total scriptPubKey size in each block counting only
those outputs that are larger than x (520 as proposed).
The code change is pretty minimal from the most obvious implementation of
the original rule.
That makes it technically non-confiscatory. Still non-standard, but if
anyone out there so obnoxiously foot-gunned themselves, they can't claim
they were rugged by the devs.
BR,
moonsettler
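As a rough illustration of the block-level budget moonsettler describes above (not an actual patch; the constants simply restate the proposal, and the helper assumes each output's scriptPubKey is available as a bytes object), the check could look something like:

    # Sketch of a per-block budget for oversized scriptPubKeys (illustrative only).
    OVERSIZE_THRESHOLD = 520        # outputs strictly larger than this count toward the budget
    BLOCK_OVERSIZE_BUDGET = 10_000  # max total bytes of oversized scriptPubKeys per block

    def block_within_oversize_budget(block_output_spks):
        """block_output_spks: iterable of scriptPubKeys (bytes) for every output in a block."""
        oversized = sum(len(spk) for spk in block_output_spks
                        if len(spk) > OVERSIZE_THRESHOLD)
        return oversized <= BLOCK_OVERSIZE_BUDGET

Under this form a lone pre-signed transaction with an oversized output remains mineable, which is what makes the rule technically non-confiscatory rather than a flat prohibition.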
On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <ad...@qrsnap.io>
wrote:
> Hey,
>
> First, thank you to everyone who responded, and please continue to do so.
There were many thought provoking responses and this did shift my
perspective quite a bit from the original post, which in of itself was the
goal to a degree.
>
> I am currently only going to respond to all of the current concerns.
Acks; though I like them will be ignored unless new discoveries are
included.
>
> Tl;dr (Portlands Perspective)
> - Confiscation is a problem because of presigned transactions
> - DoS mitigation could also occur through marking UTXOs as unspendable if
> 520 bytes, this would preserve the proof of publication.
> - Timeout / Sunset logic is compelling
> - The (n) value of acceptable needed bytes is contentious with the lower
suggested limit being 67
> - Congestion control is worth a look?
>
> Next Step:
> - Deeper discussion at the individual level: Antoine Poinsot and GCC
overlap?
> - Write an implementation.
> - Decide to pursue BIP
>
> Responses
>
> Andrew Poelstra:
> > There is a risk of confiscation of coins which have pre-signed but
> > unpublished transactions spending them to new outputs with large
> > scriptPubKeys. Due to long-standing standardness rules, and the
presence
> > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> > such transactions exist.
>
> PortlandHODL: This is a risk that can be incurred and likely not possible
to mitigate as there could be possible chains of transactions so even when
recursively iterating over a chain there is a chance that a presigned
breaks this rule. Every idea I have had from block redemption limits on
prevouts seems to just be a coverage issue where you can make the
confiscation less likely but not completely mitigated.
>
> Second, there are already TXs that effectively have been confiscated at
the policy level (P2SH Cleanstack violation) where the user can not find
any miner with a policy to accept these into their mempool. (3 years)
>
> /dev /fd0
> > so it would be great if this was restricted to OP_RETURN
>
> PortlandHODL: I reject this completely as this would remove the UTXOset
omission for the scriptPubkey and encourage miners to subvert the OP_RETURN
restriction and instead just use another op_code, this also do not hit on
some of the most important factors such as DoS mitigation and legacy script
attack surface reduction.
>
> Peter Todd
> > NACK ...
>
> PortlandHODL: You NACK'd for the same reasons that I stated in my OP,
without including any additional context or reasoning.
>
> jeremy
> > I think that this type of rule is OK if we do it as a "sunsetting"
restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2
years, 5 years, 10 years).
>
> If action is taken, this is the most reasonable approach. Alleviating
confiscatory concerns through deferral.
>
> > You can argue against this example probably, but it is worth
considering that absence of evidence of use is not evidence of absence of
use and I myself feel that overall our understanding of Bitcoin transaction
programming possibilities is still early. If you don't like this example, I
can give you others (probably).
>
> Agreed and this also falls into the reasoning for deciding to utilize
point 1 in your response. My thoughts on this would be along the lines of
proof of publication as this change only has the effect of stripping away
the executable portion of a script between 521 and 10_000 bytes or the
published data portion if > 10_000 bytes which the same data could likely
be published in chunked segments using outpoints.
>
> Andrew Poelstra:
> > Aside from proof-of-publication (i.e. data storage directly in the UTXO
> > set) there is no usage of script which can't be equally (or better)
> > accomplished by using a Segwit v0 or Taproot script.
>
> This sums up the majority of future usecase concern
>
> Anthony Towns:
> > (If you restricted the change to only applying to scripts that used
> non-push operators, that would probably still provide upgrade flexibility
> while also preventing potential script abuses. But it wouldn't do
anything
> to prevent publishing data)
>
> Could this not be done as segments in multiple outpoints using a
coordination outpoint? I fail to see why publication proof must be in a
single chunk. This does though however bring another alternative to mind,
just making these outpoints unspendable but not invalidate the block
through inclusion...
>
> > As far as the "but contiguous data will be regulated more strictly"
> argument goes; I don't think "your honour, my offensive content has
> strings of 4d0802 every 520 bytes
>
> Correct, this was never meant to resolve this issue.
>
> Luke Dashjr:
> > If we're going this route, we should just close all the gaps for the
immediate future:
>
> To put it nicely, this is completely beyond the scope of what is being
proposed.
>
> Guus Ellenkamp:
> > If there are really so few OP_RETURN outputs more than 144 bytes, then
> why increase the limit if that change is so controversial? It seems
> people who want to use a larger OP_RETURN size do it anyway, even with
> the current default limits.
>
> Completely off topic and irrelevant
>
> Greg Tonoski:
> > Limiting the maximum size of the scriptPubKey of a transaction to 67
bytes.
>
> This leave no room to deal with broken hashing algorithms and very little
future upgradability for hooks. The rest of these points should be merged
with Lukes response and either hijack my thread or start a new one with the
increased scope, any approach I take will only be related to the
ScriptPubkey
>
> Keagan McClelland:
> > Hard NACK on capping the witness size as that would effectively ban
large scripts even in the P2SH wrapper which undermines Bitcoin's ability
to be an effectively programmable money.
>
> This has nothing to do with the witness size or even the P2SH wrapper
>
> Casey Rodarmor:
> > I think that "Bitcoin could need it in the future?" might be a good
enough
> reason not to do this.
>
> > Script pubkeys are the only variable-length transaction fields which
can be
> covered by input signatures, which might make them useful for future soft
> forks. I can imagine confidential asset schemes or post-quantum coin
recovery
> schemes requiring large proofs in the outputs, where the validity of the
proof
> determined whether or not the transaction is valid, and thus require the
> proofs to be in the outputs, and not just a hash commitment.
>
> Would the ability to publish the data alone be enough? Example make the
output unspendable but allow for the existence of the bytes to be covered
through the signature?
>
>
> Antoine Poinsot:
> > Limiting the size of created scriptPubKeys is not a sufficient
mitigation on its own
> I fail to see how this would not be sufficient? To DoS you need 2 things
inputs with ScriptPubkey redemptions + heavy op_codes that require unique
checks. Example DUPing stack element again and again doesn't work. This
then leads to the next part is you could get up to unique complex
operations with the current (n) limit included per input.
>
> > One of the goal of BIP54 is to address objections to Matt's earlier
proposal, notably the (in my
> opinion reasonable) confiscation concerns voiced by Russell O'Connor.
Limiting the size of
> scriptPubKeys would in this regard be moving in the opposite direction.
>
> Some notes is I would actually go as far as to say the confiscation risk
is higher with the TX limit proposed in BIP54 as we actually have proof of
redemption of TXs that break that rule and the input set to do this already
exists on-chain no need to even wonder about the whole presigned.
bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
>
> Please let me know if I am incorrect on any of this.
>
> > Furthermore, it's always possible to get the biggest bang for our buck
in a first step
>
> Agreed on bang for the buck regarding DoS.
>
> My final point here would be that I would like to discuss more, and this
is response is from the initial view of your response and could be
incomplete or incorrect, This is just my in the moment response.
>
> Antoine Riard:
> > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
favor of prioritizing
> a timewarp fix and limiting dosy spends by old redeem scripts
>
> The idea of congestion control is interesting, but this solution should
significantly reduce the total DoS severity of known vectors.
>
> On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
>
> > Limits on block construction that cross transactions make it harder to
accurately estimate fees and greatly complicate optimal block
construction-- the latter being important because smarter and more computer
powered mining code generating higher profits is a pro centralization
factor.
> >
> > In terms of effectiveness the "spam" will just make itself
indistinguishable from the most common transaction traffic from the
perspective of such metrics-- and might well drive up "spam" levels because
the higher embedding cost may make some of them use more transactions. The
competition for these buckets by other traffic could make it effectively a
block size reduction even against very boring ordinary transactions. ...
which is probably not what most people want.
> >
> > I think it's important to keep in mind that bitcoin fee levels even at
0.1s/vb are far beyond what other hosting services and other blockchains
cost-- so anyone still embedding data in bitcoin *really* want to be there
for some reason and aren't too fee sensitive or else they'd already be
using something else... some are even in favor of higher costs since the
high fees are what create the scarcity needed for their seigniorage.
> >
> > But yeah I think your comments on priorities are correct.
> >
> >
> >
> > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com>
wrote:
> >
> > > Hi list,
> > >
> > > Thanks to the annex covered by the signature, I don't see how the
concern about limiting
> > > the extensibility of bitcoin script with future (post-quantum)
cryptographic schemes.
> > > Previous proposal of the annex were deliberately designed with
variable-length fields
> > > to flexibly accomodate a wide range of things.
> > >
> > > I believe there is one thing that has not been proposed to limit
unpredictable utterance
> > > of spams on the blockchain, namely congestion control of categories
of outputs (e.g "fat"
> > > scriptpubkeys). Let's say P a block period, T a type of scriptpubkey
and L a limiting
> > > threshold for the number of T occurences during the period P. Beyond
the L threshold, any
> > > additional T scriptpubkey is making the block invalid. Or
alternatively, any additional
> > > T generating / spending transaction must pay some weight penalty...
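For concreteness only (this is not Antoine's code; PERIOD_BLOCKS, FAT_LIMIT and is_fat_spk below are assumed stand-ins for his P, L and T), the per-period throttle could be expressed roughly as:

    # Illustrative sketch of the P/T/L congestion-control idea, with assumed example values.
    PERIOD_BLOCKS = 144   # P: length of the period, in blocks (assumed value)
    FAT_LIMIT = 1_000     # L: max occurrences of the throttled category per period (assumed value)

    def is_fat_spk(spk: bytes) -> bool:
        return len(spk) > 520    # T: the scriptPubKey category being throttled (assumed)

    def block_allowed(prior_period_spks, candidate_block_spks):
        """prior_period_spks: scriptPubKeys of all outputs in the previous PERIOD_BLOCKS - 1 blocks.
        candidate_block_spks: scriptPubKeys of all outputs in the block being validated."""
        already = sum(1 for spk in prior_period_spks if is_fat_spk(spk))
        incoming = sum(1 for spk in candidate_block_spks if is_fat_spk(spk))
        # Beyond the L threshold, any additional T output makes the block invalid.
        return already + incoming <= FAT_LIMIT

The weight-penalty variant would instead add extra weight per occurrence above L rather than rejecting the block outright.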
> > >
> > > Congestion control, which of course comes with its lot of
shenanigans, is not very a novel
> > > idea as I believe it has been floated few times in the context of
lightning to solve mass
> > > closure, where channels out-priced at current feerate would have
their safety timelocks scale
> > > ups.
> > >
> > > No need anymore to come to social consensus on what is quantitative
"spam" or not. The blockchain
> > > would automatically throttle out the block space spamming
transaction. Qualitative spam it's another
> > > question, for anyone who has ever read shannon's theory of
communication only effective thing can
> > > be to limit the size of data payload. But probably we're kickly back
to a non-mathematically solvable
> > > linguistical question again [0].
> > >
> > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
favor of prioritizing
> > > a timewarp fix and limiting dosy spends by old redeem scripts, rather
than engaging in shooting
> > > ourselves in the foot with ill-designed "spam" consensus mitigations.
> > >
> > > [0] If you have a soul of logician, it would be an interesting
demonstration to come with
> > > to establish that we cannot come up with mathematically or
cryptographically consensus means
> > > to solve qualitative "spam", which in a very pure sense is a
linguistical issue.
> > >
> > > Best,
> > > Antoine
> > > OTS hash:
6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
> > > Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a écrit
:
> > >
> > > > Hi,
> > > >
> > > > This approach was discussed last year when evaluating the best way
to mitigate DoS blocks in terms
> > > > of gains compared to confiscatory surface. Limiting the size of
created scriptPubKeys is not a
> > > > sufficient mitigation on its own, and has a non-trivial
confiscatory surface.
> > > >
> > > > One of the goal of BIP54 is to address objections to Matt's earlier
proposal, notably the (in my
> > > > opinion reasonable) confiscation concerns voiced by Russell
O'Connor. Limiting the size of
> > > > scriptPubKeys would in this regard be moving in the opposite
direction.
> > > >
> > > > Various approaches of limiting the size of spent scriptPubKeys were
discussed, in forms that would
> > > > mitigate the confiscatory surface, to adopt in addition to (what
eventually became) the BIP54 sigops
> > > > limit. However i decided against including this additional measure
in BIP54 because:
> > > > - of the inherent complexity of the discussed schemes, which would
make it hard to reason about
> > > > constructing transactions spending legacy inputs, and equally hard
to evaluate the reduction of
> > > > the confiscatory surface;
> > > > - more importantly, there is steep diminishing returns to piling on
more mitigations. The BIP54
> > > > limit on its own prevents an externally-motivated attacker from
*unevenly* stalling the network
> > > > for dozens of minutes, and a revenue-maximizing miner from
regularly stalling its competitions
> > > > for dozens of seconds, at a minimized cost in confiscatory surface.
Additional mitigations reduce
> > > > the worst case validation time by a smaller factor at a higher cost
in terms of confiscatory
> > > > surface. It "feels right" to further reduce those numbers, but it's
less clear what the tangible
> > > > gains would be.
> > > >
> > > > Furthermore, it's always possible to get the biggest bang for our
buck in a first step and going the
> > > > extra mile in a later, more controversial, soft fork. I previously
floated the idea of a "cleanup
> > > > v2" in private discussions, and i think besides a reduction of the
maximum scriptPubKey size it
> > > > should feature a consensus-enforced maximum transaction size for
the reasons stated here:
> > > >
https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
I wouldn't hold my
> > > > breath on such a "cleanup v2", but it may be useful to have it
documented somewhere.
> > > >
> > > > I'm trying to not go into much details regarding which mitigations
were considered in designing
> > > > BIP54, because they are tightly related to the design of various
DoS blocks. But i'm always happy to
> > > > rehash the decisions made there and (re-)consider alternative
approaches on the semi-private Delving
> > > > thread [0] dedicated to this purpose. Feel free to ping me to get
access if i know you.
> > > >
> > > > Best,
> > > > Antoine Poinsot
> > > >
> > > > [0]:
https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
> > > >
> > > >
> > > >
> > > >
> > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
fre...@reardencode.com> wrote:
> > > >
> > > > >
> > > > >
> > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> > > > >
> > > > > > But also given that there are essentially no violations and no
reason to
> > > > > > expect any I'm not sure the proposal is worth time relative to
fixes of
> > > > > > actual moderately serious DOS attack issues.
> > > > >
> > > > >
> > > > > I believe this limit would also stop most (all?) of
PortlandHODL's
> > > > > DoSblocks without having to make some of the other changes in
GCC. I
> > > > > think it's worthwhile to compare this approach to those proposed
by
> > > > > Antoine in solving these DoS vectors.
> > > > >
> > > > > Best,
> > > > >
> > > > > --Brandon
> > > > >
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/793073a7-84b2-4b42-a531-e03e30f89ddcn%40googlegroups.com.
[-- Attachment #1.2: Type: text/html, Size: 28781 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-30 8:55 ` Bitcoin Error Log
@ 2025-10-30 17:40 ` Greg Maxwell
0 siblings, 0 replies; 46+ messages in thread
From: Greg Maxwell @ 2025-10-30 17:40 UTC (permalink / raw)
To: Bitcoin Error Log; +Cc: Bitcoin Development Mailing List
[-- Attachment #1: Type: text/plain, Size: 31898 bytes --]
The fullrbf change in Core only occurred *after* a substantial share of miners had
already made the change, and Core updated largely with regret, to reflect reality
and not contribute to compounding the harm. This was clear enough from even a few
minutes' reading of the history, so I'm unclear as to why you're suggesting
otherwise here.
On Thu, Oct 30, 2025 at 11:39 AM Bitcoin Error Log <
bitcoinerrorlog@gmail.com> wrote:
> Greg,
>
> One correction. Bitcoin has significantly restricted a proven use case
> with policy in the past. Maybe you won't think this qualifies, but it
> happened while you were away so I am curious about your assessment.
>
> During the change to mempoolfullrbf policy, I, with support from the
> original author of the change, and with support of multiple Core devs, and
> with the support of multiple businesses providing data on how they provided
> zero-conf as a service to users via risk management, tried to stop Bitcoin
> from killing first-seen policy, which had been stable for all of the
> history of Bitcoin. The change was at least clearly demonstrated as
> controversial, and lacking real consensus.
>
> I'm happy to admit that no policy is enforceable, and that zero-conf was
> "never safe", but we had a system that worked and made Bitcoin more useful
> to people that used it that way. The businesses simply monitored for
> doublespends, imposed exposure limits per block, and gated actual delivery
> separately from checkout UX. It worked and now it does not and the only
> reason was a policy change.
>
> The problem with claiming that policy is not a means of change, is that
> you must also admit the lack of need for any RBF flags at all, or for
> arguing about data spam relay, or for any wide policy to be a concern of
> Bitcoin Core at all. (particularly when speculatively / subjectively
> applied) .
>
> Thank you, and sorry for the side topic.
>
> ~John
>
>
> On Thursday, October 30, 2025 at 6:40:10 AM UTC Greg Maxwell wrote:
>
> Prior softforks have stuck to using the more explicit "forward
> compatibility" mechanisms, so -- e.g. if you use OP_NOP3 or a higher
> transaction version number or whatever that had no purpose (and would
> literally do nothing), saw ~no use, and was non-standard, or scripts that
> just anyone could have immediately taken at any time (e.g. funds free for
> the collecting rather than something secure)... then in that case I think
> people have felt that the long discussion leading up to a softfork was
> enough to acceptably mitigate the risk. Tapscript was specifically
> designed to make upgrades even safer and easier by making it so that the
> mere presence of any forward compat opcode (OP_SUCCESSn) makes the whole
> script insecure until that opcode is in use.
>
> The proposal to limit scriptpubkey size is worse because longer scripts
> had purposes and use (e.g. larger multisigs) and unlike some NOP3 or
> txversions where you could be argued to deserve issues if you did something
> so weird and abused a forward compat mechanism, people running into a 520
> limit could have been pretty boring (and I see my own watching wallets have
> some scriptpubkeys beyond that size (big multisigs), in fact-- though I
> don't *think* any are still in use, but even I'm not absolutely sure that
> such a restriction wouldn't confiscate some of my own funds--- and it's a
> pain in the rear to check, having to bring offline stuff online, etc).
>
> Confiscation isn't just limited to timelocks, since the victims of it may
> just not know about the consensus change and while they could move their
> coins they don't. One of the big advantages many people see in Bitcoin is
> that you can put your keys in a time capsule in the foundation of your home
> and trust that they're still going to be there and you'll be able to use
> your coins a decade later. ... that you don't have to watch out for banks
> drilling your safe deposit boxes or people putting public notices in
> classified ads laying claim to your property.
>
> I don't even think bitcoin has ever policy restricted something that was
> in active use, much less softforked out something like that. I wouldn't
> say it was impossible but I think on the balance it would favor a notice
> period so that any reasonable person could have taken notice, taken action,
> or at least spoke up. But since there is no requirement to monitor and
> that's part of bitcoin's value prop the amount of time to consider
> reasonable ought to be quite long. Which also is at odds with the
> emergency measures position being taken by proponents of such changes.
>
> (which also, I think are just entirely unjustified, even if you accept the
> worst version of their narrative with the historical chain being made
> _illegal_, one could simply produce node software that starts from a well
> known embedded utxo snapshot and doesn't process historical blocks. Such
> a thing would be in principle a reduction in the security model, but
> balances against the practical and realistic impact of potentially
> confiscating coins I think it looks pretty fine by comparison. It would
> also be fully consensus compatible, assuming no reorg below that point, and
> can be done right now by anyone who cares in a totally permissionless and
> coercion free manner)
>
>
>
> On Thu, Oct 30, 2025 at 5:13 AM Michael Tidwell <mtidw...@gmail.com>
> wrote:
>
> Greg,
>
> > Also some risk of creating a new scarce asset class.
>
> Well, Casey Rodarmor is in the thread, so lol maybe.
>
> Anyway, point taken. I want to be 100% sure I understand the
> hypotheticals: there could be an off-chain, presigned, transactions that
> needs more than 520 bytes for the scriptPubKey and, as Poelstra said, could
> even form a chain of presigned transactions under some complex, previously
> unknown, scheme that only becomes public after this change is made. Can you
> confirm?
>
> Would it also be a worry that a chain of transactions using said utxo
> could commit to some bizarre scheme, for instance a taproot transaction
> utxo that later is presigned committed back to P2MS larger than 520 bytes?
> If so, I think I get it, you're saying to essentially guarantee no
> confiscation we'd never be able to upgrade old UTXOs and we'd need to track
> them forever to prevent unlikely edge cases?
> Does the presigned chain at least stop needing to be tracked once the
> given UTXO co-mingles with a post-update coinbase utxo?
>
> If so, this is indeed complex! This seems pretty insane both for the
> complexity of implementing and the unlikely edge cases. Has Core ever made
> a decision of (acceptable risk) to upgrade with protection of onchain utxos
> but not hypothetical unpublished ones?
> Aren't we going to run into the same situation if we do an op code clean
> up in the future if we had people presign/commit to op codes that are no
> longer consensus valid?
>
> Tidwell
>
> On Wednesday, October 29, 2025 at 10:32:10 PM UTC-4 Greg Maxwell wrote:
>
> "A few bytes" might be on the order of forever 10% increase in the UTXO
> set size, plus a full from-network resync of all pruned nodes and a full
> (e.g. most of day outage) reindex of all unpruned nodes. Not
> insignificant but also not nothing. Such a portion of the existing utxo
> size is not from outputs over 520 bytes in size, so as a scheme for utxo
> set size reduction the addition of MHT tracking would probably make it a
> failure.
>
> Also some risk of creating some new scarce asset class, txouts consisting
> of primordial coins that aren't subject to the new rules... sounds like the
> sort of thing that NFT degens would absolutely love. That might not be an
> issue *generally* for some change with confiscation risk, but for a change
> that is specifically intended to lobotomize bitcoin to make it less useful
> to NFT degens, maybe not such a great idea. :P
>
> I mentioned it at all because I thought it could potentially be of some
> use, I'm just more skeptical of it for the current context. Also luke-jr
> and crew has moved on to actually propose even more invasive changes than
> just limiting the script size, which I anticipated, and has much more
> significant issues. Just size limiting outputs likely doesn't harm any
> interests or usages-- and so probably could be viable if the confiscation
> issue was addressed, but it also doesn't stick it to people transacting in
> ways the priests of ocean mining dislike.
>
> > I believe you're pointing out the idea of non economically-rational
> spammers?
>
> I think it's a mistake to conclude the spammers are economically
> irrational-- they're often just responding to different economics which may
> be less legible to your analysis. In particular, NFT degens prefer the
> high cost of transactions as a thing that makes their tokens scarce and
> gives them value. -- otherwise they wouldn't be swapping for one less
> efficient encoding for another, they're just be using another blockchain
> (perhaps their own) entirely.
>
>
>
>
> On Thu, Oct 30, 2025 at 1:16 AM Michael Tidwell <mtidw...@gmail.com>
> wrote:
>
> > MRH tracking might make that acceptable, but comes at a high cost which
> I think would clearly not be justified.
>
> Greg, I want to ask/challenge how bad this is, this seems like a generally
> reusable primitive that could make other upgrades more feasible that also
> have the same strict confiscation risk profile.
> IIUC, the major pain is, 1 big reindex cost + a few bytes per utxo?
>
> Poelstra,
>
> > I don't think this is a great idea -- it would be technically hard to
> implement and slow deployment indefinitely.
>
> I would like to know how much of a deal breaker this is in your opinion.
> Is MRH tracking off the table? In terms of the hypothetical presigned
> transactions that may exist using P2MS, is this a hard enough reason to
> require a MRH idea?
>
> Greg,
>
> > So, paradoxically this limit might increase the amount of non-prunable
> data
>
> I believe you're pointing out the idea of non economically-rational
> spammers? We already see actors ignoring cheaper witness inscription
> methods. If spam shifts to many sub-520 fake pubkey outputs (which I
> believe is less harmful than stamps), that imo is a separate UTXO cost
> discussion. (like a SF to add weight to outputs). Anywho, this point alone
> doesn't seem sufficient to add as a clear negative reason for someone
> opposed to the proposal.
>
> Thanks,
> Tidwell
> On Wednesday, October 22, 2025 at 5:55:58 AM UTC-4 moonsettler wrote:
>
> > Confiscation is a problem because of presigned transactions
>
> Allow 10000 bytes of total scriptPubKey size in each block counting only
> those outputs that are larger than x (520 as proposed).
> The code change is pretty minimal from the most obvious implementation of
> the original rule.
>
> That makes it technically non-confiscatory. Still non-standard, but if
> anyone out there so obnoxiously foot-gunned themselves, they can't claim
> they were rugged by the devs.
>
> BR,
> moonsettler
>
> On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <ad...@qrsnap.io>
> wrote:
>
> > Hey,
> >
> > First, thank you to everyone who responded, and please continue to do
> so. There were many thought provoking responses and this did shift my
> perspective quite a bit from the original post, which in of itself was the
> goal to a degree.
> >
> > I am currently only going to respond to all of the current concerns.
> Acks; though I like them will be ignored unless new discoveries are
> included.
> >
> > Tl;dr (Portlands Perspective)
> > - Confiscation is a problem because of presigned transactions
> > - DoS mitigation could also occur through marking UTXOs as unspendable
> if > 520 bytes, this would preserve the proof of publication.
> > - Timeout / Sunset logic is compelling
> > - The (n) value of acceptable needed bytes is contentious with the lower
> suggested limit being 67
> > - Congestion control is worth a look?
> >
> > Next Step:
> > - Deeper discussion at the individual level: Antoine Poinsot and GCC
> overlap?
> > - Write an implementation.
> > - Decide to pursue BIP
> >
> > Responses
> >
> > Andrew Poelstra:
> > > There is a risk of confiscation of coins which have pre-signed but
> > > unpublished transactions spending them to new outputs with large
> > > scriptPubKeys. Due to long-standing standardness rules, and the
> presence
> > > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> > > such transactions exist.
> >
> > PortlandHODL: This is a risk that can be incurred and likely not
> possible to mitigate as there could be possible chains of transactions so
> even when recursively iterating over a chain there is a chance that a
> presigned breaks this rule. Every idea I have had from block redemption
> limits on prevouts seems to just be a coverage issue where you can make the
> confiscation less likely but not completely mitigated.
> >
> > Second, there are already TXs that effectively have been confiscated at
> the policy level (P2SH Cleanstack violation) where the user can not find
> any miner with a policy to accept these into their mempool. (3 years)
> >
> > /dev /fd0
> > > so it would be great if this was restricted to OP_RETURN
> >
> > PortlandHODL: I reject this completely as this would remove the UTXOset
> omission for the scriptPubkey and encourage miners to subvert the OP_RETURN
> restriction and instead just use another op_code, this also do not hit on
> some of the most important factors such as DoS mitigation and legacy script
> attack surface reduction.
> >
> > Peter Todd
> > > NACK ...
> >
> > PortlandHODL: You NACK'd for the same reasons that I stated in my OP,
> without including any additional context or reasoning.
> >
> > jeremy
> > > I think that this type of rule is OK if we do it as a "sunsetting"
> restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2
> years, 5 years, 10 years).
> >
> > If action is taken, this is the most reasonable approach. Alleviating
> confiscatory concerns through deferral.
> >
> > > You can argue against this example probably, but it is worth
> considering that absence of evidence of use is not evidence of absence of
> use and I myself feel that overall our understanding of Bitcoin transaction
> programming possibilities is still early. If you don't like this example, I
> can give you others (probably).
> >
> > Agreed and this also falls into the reasoning for deciding to utilize
> point 1 in your response. My thoughts on this would be along the lines of
> proof of publication as this change only has the effect of stripping away
> the executable portion of a script between 521 and 10_000 bytes or the
> published data portion if > 10_000 bytes which the same data could likely
> be published in chunked segments using outpoints.
> >
> > Andrew Poelstra:
> > > Aside from proof-of-publication (i.e. data storage directly in the
> UTXO
> > > set) there is no usage of script which can't be equally (or better)
> > > accomplished by using a Segwit v0 or Taproot script.
> >
> > This sums up the majority of future usecase concern
> >
> > Anthony Towns:
> > > (If you restricted the change to only applying to scripts that used
> > non-push operators, that would probably still provide upgrade
> flexibility
> > while also preventing potential script abuses. But it wouldn't do
> anything
> > to prevent publishing data)
> >
> > Could this not be done as segments in multiple outpoints using a
> coordination outpoint? I fail to see why publication proof must be in a
> single chunk. This does though however bring another alternative to mind,
> just making these outpoints unspendable but not invalidate the block
> through inclusion...
> >
> > > As far as the "but contiguous data will be regulated more strictly"
> > argument goes; I don't think "your honour, my offensive content has
> > strings of 4d0802 every 520 bytes
> >
> > Correct, this was never meant to resolve this issue.
> >
> > Luke Dashjr:
> > > If we're going this route, we should just close all the gaps for the
> immediate future:
> >
> > To put it nicely, this is completely beyond the scope of what is being
> proposed.
> >
> > Guus Ellenkamp:
> > > If there are really so few OP_RETURN outputs more than 144 bytes, then
> > why increase the limit if that change is so controversial? It seems
> > people who want to use a larger OP_RETURN size do it anyway, even with
> > the current default limits.
> >
> > Completely off topic and irrelevant
> >
> > Greg Tonoski:
> > > Limiting the maximum size of the scriptPubKey of a transaction to 67
> bytes.
> >
> > This leave no room to deal with broken hashing algorithms and very
> little future upgradability for hooks. The rest of these points should be
> merged with Lukes response and either hijack my thread or start a new one
> with the increased scope, any approach I take will only be related to the
> ScriptPubkey
> >
> > Keagan McClelland:
> > > Hard NACK on capping the witness size as that would effectively ban
> large scripts even in the P2SH wrapper which undermines Bitcoin's ability
> to be an effectively programmable money.
> >
> > This has nothing to do with the witness size or even the P2SH wrapper
> >
> > Casey Rodarmor:
> > > I think that "Bitcoin could need it in the future?" might be a good
> enough
> > reason not to do this.
> >
> > > Script pubkeys are the only variable-length transaction fields which
> can be
> > covered by input signatures, which might make them useful for future
> soft
> > forks. I can imagine confidential asset schemes or post-quantum coin
> recovery
> > schemes requiring large proofs in the outputs, where the validity of the
> proof
> > determined whether or not the transaction is valid, and thus require the
> > proofs to be in the outputs, and not just a hash commitment.
> >
> > Would the ability to publish the data alone be enough? Example make the
> output unspendable but allow for the existence of the bytes to be covered
> through the signature?
> >
> >
> > Antoine Poinsot:
> > > Limiting the size of created scriptPubKeys is not a sufficient
> mitigation on its own
> > I fail to see how this would not be sufficient? To DoS you need 2 things
> inputs with ScriptPubkey redemptions + heavy op_codes that require unique
> checks. Example DUPing stack element again and again doesn't work. This
> then leads to the next part is you could get up to unique complex
> operations with the current (n) limit included per input.
> >
> > > One of the goal of BIP54 is to address objections to Matt's earlier
> proposal, notably the (in my
> > opinion reasonable) confiscation concerns voiced by Russell O'Connor.
> Limiting the size of
> > scriptPubKeys would in this regard be moving in the opposite direction.
> >
> > Some notes is I would actually go as far as to say the confiscation risk
> is higher with the TX limit proposed in BIP54 as we actually have proof of
> redemption of TXs that break that rule and the input set to do this already
> exists on-chain no need to even wonder about the whole presigned.
> bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
> >
> > Please let me know if I am incorrect on any of this.
> >
> > > Furthermore, it's always possible to get the biggest bang for our buck
> in a first step
> >
> > Agreed on bang for the buck regarding DoS.
> >
> > My final point here would be that I would like to discuss more, and this
> is response is from the initial view of your response and could be
> incomplete or incorrect, This is just my in the moment response.
> >
> > Antoine Riard:
> > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
> favor of prioritizing
> > a timewarp fix and limiting dosy spends by old redeem scripts
> >
> > The idea of congestion control is interesting, but this solution should
> significantly reduce the total DoS severity of known vectors.
> >
> > On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
> >
> > > Limits on block construction that cross transactions make it harder to
> accurately estimate fees and greatly complicate optimal block
> construction-- the latter being important because smarter and more computer
> powered mining code generating higher profits is a pro centralization
> factor.
> > >
> > > In terms of effectiveness the "spam" will just make itself
> indistinguishable from the most common transaction traffic from the
> perspective of such metrics-- and might well drive up "spam" levels because
> the higher embedding cost may make some of them use more transactions. The
> competition for these buckets by other traffic could make it effectively a
> block size reduction even against very boring ordinary transactions. ...
> which is probably not what most people want.
> > >
> > > I think it's important to keep in mind that bitcoin fee levels even at
> 0.1s/vb are far beyond what other hosting services and other blockchains
> cost-- so anyone still embedding data in bitcoin *really* want to be there
> for some reason and aren't too fee sensitive or else they'd already be
> using something else... some are even in favor of higher costs since the
> high fees are what create the scarcity needed for their seigniorage.
> > >
> > > But yeah I think your comments on priorities are correct.
> > >
> > >
> > >
> > > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com>
> wrote:
> > >
> > > > Hi list,
> > > >
> > > > Thanks to the annex covered by the signature, I don't see how the
> concern about limiting
> > > > the extensibility of bitcoin script with future (post-quantum)
> cryptographic schemes.
> > > > Previous proposal of the annex were deliberately designed with
> variable-length fields
> > > > to flexibly accomodate a wide range of things.
> > > >
> > > > I believe there is one thing that has not been proposed to limit
> unpredictable utterance
> > > > of spams on the blockchain, namely congestion control of categories
> of outputs (e.g "fat"
> > > > scriptpubkeys). Let's say P a block period, T a type of scriptpubkey
> and L a limiting
> > > > threshold for the number of T occurences during the period P. Beyond
> the L threshold, any
> > > > additional T scriptpubkey is making the block invalid. Or
> alternatively, any additional
> > > > T generating / spending transaction must pay some weight penalty...
> > > >
> > > > Congestion control, which of course comes with its lot of
> shenanigans, is not very a novel
> > > > idea as I believe it has been floated few times in the context of
> lightning to solve mass
> > > > closure, where channels out-priced at current feerate would have
> their safety timelocks scale
> > > > ups.
> > > >
> > > > No need anymore to come to social consensus on what is quantitative
> "spam" or not. The blockchain
> > > > would automatically throttle out the block space spamming
> transaction. Qualitative spam it's another
> > > > question, for anyone who has ever read shannon's theory of
> communication only effective thing can
> > > > be to limit the size of data payload. But probably we're kickly back
> to a non-mathematically solvable
> > > > linguistical question again [0].
> > > >
> > > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
> favor of prioritizing
> > > > a timewarp fix and limiting dosy spends by old redeem scripts,
> rather than engaging in shooting
> > > > ourselves in the foot with ill-designed "spam" consensus
> mitigations.
> > > >
> > > > [0] If you have a soul of logician, it would be an interesting
> demonstration to come with
> > > > to establish that we cannot come up with mathematically or
> cryptographically consensus means
> > > > to solve qualitative "spam", which in a very pure sense is a
> linguistical issue.
> > > >
> > > > Best,
> > > > Antoine
> > > > OTS hash:
> 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
> > > > Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a
> écrit :
> > > >
> > > > > Hi,
> > > > >
> > > > > This approach was discussed last year when evaluating the best way
> to mitigate DoS blocks in terms
> > > > > of gains compared to confiscatory surface. Limiting the size of
> created scriptPubKeys is not a
> > > > > sufficient mitigation on its own, and has a non-trivial
> confiscatory surface.
> > > > >
> > > > > One of the goal of BIP54 is to address objections to Matt's
> earlier proposal, notably the (in my
> > > > > opinion reasonable) confiscation concerns voiced by Russell
> O'Connor. Limiting the size of
> > > > > scriptPubKeys would in this regard be moving in the opposite
> direction.
> > > > >
> > > > > Various approaches of limiting the size of spent scriptPubKeys
> were discussed, in forms that would
> > > > > mitigate the confiscatory surface, to adopt in addition to (what
> eventually became) the BIP54 sigops
> > > > > limit. However i decided against including this additional measure
> in BIP54 because:
> > > > > - of the inherent complexity of the discussed schemes, which would
> make it hard to reason about
> > > > > constructing transactions spending legacy inputs, and equally hard
> to evaluate the reduction of
> > > > > the confiscatory surface;
> > > > > - more importantly, there is steep diminishing returns to piling
> on more mitigations. The BIP54
> > > > > limit on its own prevents an externally-motivated attacker from
> *unevenly* stalling the network
> > > > > for dozens of minutes, and a revenue-maximizing miner from
> regularly stalling its competitions
> > > > > for dozens of seconds, at a minimized cost in confiscatory
> surface. Additional mitigations reduce
> > > > > the worst case validation time by a smaller factor at a higher
> cost in terms of confiscatory
> > > > > surface. It "feels right" to further reduce those numbers, but
> it's less clear what the tangible
> > > > > gains would be.
> > > > >
> > > > > Furthermore, it's always possible to get the biggest bang for our
> buck in a first step and going the
> > > > > extra mile in a later, more controversial, soft fork. I previously
> floated the idea of a "cleanup
> > > > > v2" in private discussions, and i think besides a reduction of the
> maximum scriptPubKey size it
> > > > > should feature a consensus-enforced maximum transaction size for
> the reasons stated here:
> > > > >
> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
> I wouldn't hold my
> > > > > breath on such a "cleanup v2", but it may be useful to have it
> documented somewhere.
> > > > >
> > > > > I'm trying to not go into much details regarding which mitigations
> were considered in designing
> > > > > BIP54, because they are tightly related to the design of various
> DoS blocks. But i'm always happy to
> > > > > rehash the decisions made there and (re-)consider alternative
> approaches on the semi-private Delving
> > > > > thread [0] dedicated to this purpose. Feel free to ping me to get
> access if i know you.
> > > > >
> > > > > Best,
> > > > > Antoine Poinsot
> > > > >
> > > > > [0]:
> https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
> fre...@reardencode.com> wrote:
> > > > >
> > > > > >
> > > > > >
> > > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> > > > > >
> > > > > > > But also given that there are essentially no violations and no
> reason to
> > > > > > > expect any I'm not sure the proposal is worth time relative to
> fixes of
> > > > > > > actual moderately serious DOS attack issues.
> > > > > >
> > > > > >
> > > > > > I believe this limit would also stop most (all?) of
> PortlandHODL's
> > > > > > DoSblocks without having to make some of the other changes in
> GCC. I
> > > > > > think it's worthwhile to compare this approach to those proposed
> by
> > > > > > Antoine in solving these DoS vectors.
> > > > > >
> > > > > > Best,
> > > > > >
> > > > > > --Brandon
> > > > > >
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAAS2fgTKf4wyfnhep7LsmAEad7HRsg5S5VguX9rDrdMUrkkrXQ%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 37267 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* [bitcoindev] Policy restrictions Was: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-30 6:15 ` Greg Maxwell
2025-10-30 8:55 ` Bitcoin Error Log
@ 2025-10-30 20:27 ` 'Russell O'Connor' via Bitcoin Development Mailing List
2025-10-30 22:23 ` [bitcoindev] " 'Russell O'Connor' via Bitcoin Development Mailing List
1 sibling, 1 reply; 46+ messages in thread
From: 'Russell O'Connor' via Bitcoin Development Mailing List @ 2025-10-30 20:27 UTC (permalink / raw)
To: Greg Maxwell, Bitcoin Development Mailing List
[-- Attachment #1: Type: text/plain, Size: 1423 bytes --]
On Thu, Oct 30, 2025 at 2:40 AM Greg Maxwell <gmaxwell@gmail.com> wrote:
> I don't even think bitcoin has ever policy restricted something that was
> in active use, much less softforked out something like that.
>
I invite the Bitcoin lore experts to correct me here, but I recall someone
many years ago finding that their bare multisig funds (likely related to
the Counterparty nonsense) were stuck because a new policy had been enacted
mandating that pubkeys in bare multisigs must all be on-curve points ... or
something like that. I do hope that they have managed to get their funds
recovered by now with direct miner intervention.
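For concreteness (an illustration only; the exact historical policy check may well have differed), a test that a serialized pubkey names an on-curve secp256k1 point could look roughly like:

    # Illustrative "is this pubkey on the secp256k1 curve?" predicate (y^2 = x^3 + 7 mod P).
    P = 2**256 - 2**32 - 977    # secp256k1 field prime

    def pubkey_is_on_curve(pk: bytes) -> bool:
        if len(pk) == 65 and pk[0] == 0x04:          # uncompressed encoding
            x = int.from_bytes(pk[1:33], "big")
            y = int.from_bytes(pk[33:], "big")
            return x < P and y < P and (y * y - (x ** 3 + 7)) % P == 0
        if len(pk) == 33 and pk[0] in (0x02, 0x03):  # compressed encoding
            x = int.from_bytes(pk[1:], "big")
            if x >= P:
                return False
            rhs = (x ** 3 + 7) % P
            y = pow(rhs, (P + 1) // 4, P)            # square-root attempt; works since P % 4 == 3
            return (y * y) % P == rhs                # on-curve iff rhs is a quadratic residue
        return False                                 # any other encoding: reject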
I really ought to vet my claim above by going through my IRC logs and
Bitcoin development history ... but a quicker way is to post a claim
publicly on the internet and wait for someone else to call it out as being
wrong.
Also, I think this type of policy change is quite harmful and shouldn't be
replicated, and should ideally be reverted, assuming my story is correct.
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAMZUoKnyYpfJ1fZRan7BzyGvMitxznoSyCXkjxc2Qy5Z9pNvMA%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 2077 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-30 16:10 ` [bitcoindev] " Tom Harding
@ 2025-10-30 22:15 ` Doctor Buzz
0 siblings, 0 replies; 46+ messages in thread
From: Doctor Buzz @ 2025-10-30 22:15 UTC (permalink / raw)
To: Bitcoin Development Mailing List
[-- Attachment #1.1: Type: text/plain, Size: 27313 bytes --]
> We should reflect on the goal of minimizing UTXO set size. Would we as
easily say we should minimize the number of people/entities who hold L1
coins, or the number of ways each person/entity can hold them?
> The dire concern with UTXO set size was born with the optimization of the
core bitcoin software for mining, rather than for holding and transfers, in
2012. Some geniuses were involved with that change. Satoshi was not one
of them.
The goal isn’t to minimize UTXOs themselves or discourage people from
self-custodying coins on L1. We should minimize *non-monetary UTXOs* (those
created only for file storage, dusting-type tracking, etc.), not those used
for transferring or holding value, as intended.
Minimizing *unnecessary resource usage* that doesn’t serve the
“peer-to-peer money” purpose helps everyone that has that purpose as a goal:
– Everyday users who want to store their savings securely.
– Active users who want to spend and settle efficiently.
– Node runners who keep the network decentralized and verifiable.
Bitcoin should take all steps to avoid unnecessary bloat (which has led to
an extra ~30 GB/yr of immutable data storage since Feb 2023 and completely
sidetracked any REAL improvements / security upgrades that should've been
leading discussions). Then perhaps in just a few years, every new phone
could comfortably run its own fully-verifying node with no trusted servers
or light clients... enabling efficient, permissionless monetary use on both
L1 and L2. Self-hosted LN nodes cannot compete with those who want file
storage that outlasts the user.
Since you brought it up... I have a proposal to limit "bulk dust" (defined
as a Tx with >= ~20 "Tiny" outputs && >= ~70% of its outputs being "Tiny",
where the "Tiny" threshold starts at 4096 sats and halves every epoch,
beginning at block 1,260,000).
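For concreteness, a minimal sketch of what such a check might look like
(illustrative only: IsBulkDust and TinyThreshold are hypothetical names, not
existing Bitcoin Core code; CTransaction/CTxOut are Core's usual transaction
types; and "epoch" is assumed here to mean the 210,000-block subsidy epoch):

    #include <algorithm>
    #include <cstdint>
    // CTransaction / CTxOut as defined in Bitcoin Core's primitives/transaction.h

    static const int     BULK_DUST_START_HEIGHT = 1260000; // proposed activation height
    static const int64_t INITIAL_TINY_THRESHOLD = 4096;    // sats; halves every epoch
    static const size_t  MIN_TINY_OUTPUTS       = 20;      // >= ~20 tiny outputs
    static const double  MIN_TINY_RATIO         = 0.70;    // >= ~70% of all outputs

    int64_t TinyThreshold(int height)
    {
        int epochs = std::max(0, (height - BULK_DUST_START_HEIGHT) / 210000);
        return INITIAL_TINY_THRESHOLD >> std::min(epochs, 12);
    }

    bool IsBulkDust(const CTransaction& tx, int height)
    {
        const int64_t threshold = TinyThreshold(height);
        size_t tiny = 0;
        for (const CTxOut& out : tx.vout) {
            if (out.nValue < threshold) ++tiny;
        }
        return tiny >= MIN_TINY_OUTPUTS &&
               tiny >= MIN_TINY_RATIO * tx.vout.size();
    }

A relay filter would simply refuse to accept a transaction for which
IsBulkDust() returns true; the consensus variant would reject any block
containing one.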
It should be difficult to use Bitcoin for data storage and to send tiny Txs
for that purpose, both of which harm not just the UTXO set but also the
network... which is precisely what IsDust() and IsStandard() were designed
to discourage. Changing consensus rules is less than ideal, but for
everyone who says "filters don't work", consensus changes would appear to
be the only "default solutions" they'll accept that actually identify
certain Txs or patterns as invalid.
See: Limit "Bulk Dust" with a default filter or
consensus... https://groups.google.com/g/bitcoindev/c/mW_zR01joiY
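(For context on where today's policy line sits: Core's IsDust() /
GetDustThreshold() in src/policy/policy.cpp treat an output as dust when its
value is below roughly 3 sat/vB times the size of the output plus the size of
the input that would later spend it. With the default dustrelayfee of
3000 sat/kvB that works out to about (34 + 148) * 3 = 546 sats for P2PKH and
(31 + 67) * 3 = 294 sats for P2WPKH -- figures from my reading of current
defaults, so treat them as approximate. Either way, the proposed 4096-sat
"Tiny" threshold sits well above anything IsDust() currently rejects.)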
On Thursday, October 30, 2025 at 12:37:03 PM UTC-5 Tom Harding wrote:
> We should reflect on the goal of minimizing UTXO set size. Would we as
> easily say we should minimize the number of people/entities who hold L1
> coins, or the number of ways each person/entity can hold them?
>
> The dire concern with UTXO set size was born with the optimization of the
> core bitcoin software for mining, rather than for holding and transfers, in
> 2012. Some geniuses were involved with that change. Satoshi was not one
> of them.
>
>
> On Wednesday, October 29, 2025 at 7:32:10 PM UTC-7 Greg Maxwell wrote:
>
> "A few bytes" might be on the order of forever 10% increase in the UTXO
> set size, plus a full from-network resync of all pruned nodes and a full
> (e.g. most of day outage) reindex of all unpruned nodes. Not
> insignificant but also not nothing. Such a portion of the existing utxo
> size is not from outputs over 520 bytes in size, so as a scheme for utxo
> set size reduction the addition of MHT tracking would probably make it a
> failure.
>
> Also some risk of creating some new scarce asset class, txouts consisting
> of primordial coins that aren't subject to the new rules... sounds like the
> sort of thing that NFT degens would absolutely love. That might not be an
> issue *generally* for some change with confiscation risk, but for a change
> that is specifically intended to lobotomize bitcoin to make it less useful
> to NFT degens, maybe not such a great idea. :P
>
> I mentioned it at all because I thought it could potentially be of some
> use; I'm just more skeptical of it for the current context. Also, luke-jr
> and crew have moved on to actually propose even more invasive changes than
> just limiting the script size, which I anticipated, and those have much more
> significant issues. Just size-limiting outputs likely doesn't harm any
> interests or usages-- and so probably could be viable if the confiscation
> issue was addressed, but it also doesn't stick it to people transacting in
> ways the priests of ocean mining dislike.
>
> > I believe you're pointing out the idea of non economically-rational
> spammers?
>
> I think it's a mistake to conclude the spammers are economically
> irrational-- they're often just responding to different economics which may
> be less legible to your analysis. In particular, NFT degens prefer the
> high cost of transactions as a thing that makes their tokens scarce and
> gives them value-- otherwise they wouldn't be swapping one less
> efficient encoding for another, they'd just be using another blockchain
> (perhaps their own) entirely.
>
>
>
>
> On Thu, Oct 30, 2025 at 1:16 AM Michael Tidwell <mtidw...@gmail.com>
> wrote:
>
> > MRH tracking might make that acceptable, but comes at a high cost which
> I think would clearly not be justified.
>
> Greg, I want to ask/challenge how bad this is; it seems like a generally
> reusable primitive that could make other upgrades more feasible that also
> have the same strict confiscation risk profile.
> IIUC, the major pain is 1 big reindex cost + a few bytes per utxo?
>
> Poelstra,
>
> > I don't think this is a great idea -- it would be technically hard to
> implement and slow deployment indefinitely.
>
> I would like to know how much of a deal breaker this is in your opinion.
> Is MRH tracking off the table? In terms of the hypothetical presigned
> transactions that may exist using P2MS, is this a hard enough reason to
> require a MRH idea?
>
> Greg,
>
> > So, paradoxically this limit might increase the amount of non-prunable
> data
>
> > I believe you're pointing out the idea of non-economically-rational
> spammers? We already see actors ignoring cheaper witness inscription
> methods. If spam shifts to many sub-520 fake pubkey outputs (which I
> believe is less harmful than stamps), that imo is a separate UTXO cost
> discussion (like a SF to add weight to outputs). Anywho, this point alone
> doesn't seem sufficient to count as a clear negative for someone
> opposed to the proposal.
>
> Thanks,
> Tidwell
> On Wednesday, October 22, 2025 at 5:55:58 AM UTC-4 moonsettler wrote:
>
> > Confiscation is a problem because of presigned transactions
>
> Allow 10000 bytes of total scriptPubKey size in each block counting only
> those outputs that are larger than x (520 as proposed).
> The code change is pretty minimal from the most obvious implementation of
> the original rule.
>
> That makes it technically non-confiscatory. Still non-standard, but if
> anyone out there so obnoxiously foot-gunned themselves, they can't claim
> they were rugged by the devs.
>
> BR,
> moonsettler
>
> On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <ad...@qrsnap.io>
> wrote:
>
> > Hey,
> >
> > First, thank you to everyone who responded, and please continue to do
> so. There were many thought-provoking responses, and this did shift my
> perspective quite a bit from the original post, which in and of itself was
> the goal to a degree.
> >
> > I am currently only going to respond to the current concerns. Acks,
> though I like them, will be ignored unless new discoveries are
> included.
> >
> > Tl;dr (Portland's Perspective)
> > - Confiscation is a problem because of presigned transactions
> > - DoS mitigation could also occur through marking UTXOs as unspendable
> if > 520 bytes; this would preserve the proof of publication.
> > - Timeout / Sunset logic is compelling
> > - The (n) value of acceptable needed bytes is contentious, with the lowest
> suggested limit being 67
> > - Congestion control is worth a look?
> >
> > Next Steps:
> > - Deeper discussion at the individual level: Antoine Poinsot and GCC
> overlap?
> > - Write an implementation.
> > - Decide to pursue BIP
> >
> > Responses
> >
> > Andrew Poelstra:
> > > There is a risk of confiscation of coins which have pre-signed but
> > > unpublished transactions spending them to new outputs with large
> > > scriptPubKeys. Due to long-standing standardness rules, and the
> presence
> > > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> > > such transactions exist.
> >
> > PortlandHODL: This is a risk that can be incurred and is likely not
> possible to mitigate, as there could be chains of presigned transactions;
> even when recursively iterating over a chain there is a chance that a
> presigned transaction breaks this rule. Every idea I have had, from block
> redemption limits on prevouts onward, seems to just be a coverage issue:
> you can make the confiscation less likely but not completely mitigated.
> >
> > Second, there are already TXs that have effectively been confiscated at
> the policy level (P2SH Cleanstack violation), where the user cannot find
> any miner with a policy that accepts these into their mempool. (3 years)
> >
> > /dev /fd0
> > > so it would be great if this was restricted to OP_RETURN
> >
> > PortlandHODL: I reject this completely, as it would remove the UTXO-set
> omission for the scriptPubkey and encourage miners to subvert the OP_RETURN
> restriction by simply using another op_code; it also does not address
> some of the most important factors, such as DoS mitigation and legacy script
> attack surface reduction.
> >
> > Peter Todd
> > > NACK ...
> >
> > PortlandHODL: You NACK'd for the same reasons that I stated in my OP,
> without including any additional context or reasoning.
> >
> > jeremy
> > > I think that this type of rule is OK if we do it as a "sunsetting"
> restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2
> years, 5 years, 10 years).
> >
> > If action is taken, this is the most reasonable approach. Alleviating
> confiscatory concerns through deferral.
> >
> > > You can argue against this example probably, but it is worth
> considering that absence of evidence of use is not evidence of absence of
> use and I myself feel that overall our understanding of Bitcoin transaction
> programming possibilities is still early. If you don't like this example, I
> can give you others (probably).
> >
> > Agreed, and this also feeds into the reasoning for deciding to utilize
> point 1 in your response. My thoughts on this would be along the lines of
> proof of publication: this change only has the effect of stripping away
> the executable portion of a script between 521 and 10_000 bytes, or the
> published data portion if > 10_000 bytes, and the same data could likely
> be published in chunked segments using outpoints.
> >
> > Andrew Poelstra:
> > > Aside from proof-of-publication (i.e. data storage directly in the
> UTXO
> > > set) there is no usage of script which can't be equally (or better)
> > > accomplished by using a Segwit v0 or Taproot script.
> >
> > This sums up the majority of the future-usecase concern.
> >
> > Anthony Towns:
> > > (If you restricted the change to only applying to scripts that used
> > non-push operators, that would probably still provide upgrade
> flexibility
> > while also preventing potential script abuses. But it wouldn't do
> anything
> > to prevent publishing data)
> >
> > Could this not be done as segments in multiple outpoints using a
> coordination outpoint? I fail to see why publication proof must be in a
> single chunk. This does, however, bring another alternative to mind:
> just making these outpoints unspendable without invalidating the block
> that includes them...
> >
> > > As far as the "but contiguous data will be regulated more strictly"
> > argument goes; I don't think "your honour, my offensive content has
> > strings of 4d0802 every 520 bytes
> >
> > Correct, this was never meant to resolve this issue.
> >
> > Luke Dashjr:
> > > If we're going this route, we should just close all the gaps for the
> immediate future:
> >
> > To put it nicely, this is completely beyond the scope of what is being
> proposed.
> >
> > Guus Ellenkamp:
> > > If there are really so few OP_RETURN outputs more than 144 bytes, then
> > why increase the limit if that change is so controversial? It seems
> > people who want to use a larger OP_RETURN size do it anyway, even with
> > the current default limits.
> >
> > Completely off-topic and irrelevant.
> >
> > Greg Tonoski:
> > > Limiting the maximum size of the scriptPubKey of a transaction to 67
> bytes.
> >
> > This leaves no room to deal with broken hashing algorithms and very
> little future upgradability for hooks. The rest of these points should be
> merged with Luke's response and either take over this thread or start a new
> one with the increased scope; any approach I take will only be related to
> the ScriptPubkey.
> >
> > Keagan McClelland:
> > > Hard NACK on capping the witness size as that would effectively ban
> large scripts even in the P2SH wrapper which undermines Bitcoin's ability
> to be an effectively programmable money.
> >
> > This has nothing to do with the witness size or even the P2SH wrapper
> >
> > Casey Rodarmor:
> > > I think that "Bitcoin could need it in the future?" might be a good
> enough
> > reason not to do this.
> >
> > > Script pubkeys are the only variable-length transaction fields which
> can be
> > covered by input signatures, which might make them useful for future
> soft
> > forks. I can imagine confidential asset schemes or post-quantum coin
> recovery
> > schemes requiring large proofs in the outputs, where the validity of the
> proof
> > determined whether or not the transaction is valid, and thus require the
> > proofs to be in the outputs, and not just a hash commitment.
> >
> > Would the ability to publish the data alone be enough? For example, make
> the output unspendable but allow for the existence of the bytes to be
> covered by the signature?
> >
> >
> > Antoine Poinsot:
> > > Limiting the size of created scriptPubKeys is not a sufficient
> mitigation on its own
> > I fail to see how this would not be sufficient. To DoS you need 2 things:
> inputs with ScriptPubkey redemptions + heavy op_codes that require unique
> checks. For example, DUPing a stack element again and again doesn't work.
> This then leads to the next part: with the current proposed (n) limit, only
> so many unique complex operations can be included per input.
> >
> > One of the goals of BIP54 is to address objections to Matt's earlier
> proposal, notably the (in my
> > opinion reasonable) confiscation concerns voiced by Russell O'Connor.
> Limiting the size of
> > scriptPubKeys would in this regard be moving in the opposite direction.
> >
> > Some notes: I would actually go as far as to say the confiscation risk
> is higher with the TX limit proposed in BIP54, as we actually have proof of
> redemption of TXs that break that rule, and the input set to do this already
> exists on-chain; no need to even wonder about presigned transactions. See
> bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
> >
> > Please let me know if I am incorrect on any of this.
> >
> > > Furthermore, it's always possible to get the biggest bang for our buck
> in a first step
> >
> > Agreed on bang for the buck regarding DoS.
> >
> > My final point here would be that I would like to discuss this more; this
> response is from an initial reading of yours and could be
> incomplete or incorrect. This is just my in-the-moment response.
> >
> > Antoine Riard:
> > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
> favor of prioritizing
> > a timewarp fix and limiting dosy spends by old redeem scripts
> >
> > The idea of congestion control is interesting, but this solution should
> significantly reduce the total DoS severity of known vectors.
> >
> > On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
> >
> > > Limits on block construction that cross transactions make it harder to
> accurately estimate fees and greatly complicate optimal block
> construction-- the latter being important because smarter and more
> computer-powered mining code generating higher profits is a
> pro-centralization factor.
> > >
> > > In terms of effectiveness the "spam" will just make itself
> indistinguishable from the most common transaction traffic from the
> perspective of such metrics-- and might well drive up "spam" levels because
> the higher embedding cost may make some of them use more transactions. The
> competition for these buckets by other traffic could make it effectively a
> block size reduction even against very boring ordinary transactions. ...
> which is probably not what most people want.
> > >
> > > I think it's important to keep in mind that bitcoin fee levels even at
> 0.1s/vb are far beyond what other hosting services and other blockchains
> cost-- so those still embedding data in bitcoin *really* want to be there
> for some reason and aren't too fee sensitive, or else they'd already be
> using something else... some are even in favor of higher costs since the
> high fees are what create the scarcity needed for their seigniorage.
> > >
> > > But yeah I think your comments on priorities are correct.
> > >
> > >
> > >
> > > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com>
> wrote:
> > >
> > > > Hi list,
> > > >
> > > > Thanks to the annex being covered by the signature, I don't see how
> the concern about limiting
> > > > the extensibility of bitcoin script with future (post-quantum)
> cryptographic schemes holds.
> > > > Previous proposals of the annex were deliberately designed with
> variable-length fields
> > > > to flexibly accommodate a wide range of things.
> > > >
> > > > I believe there is one thing that has not been proposed to limit
> unpredictable bursts
> > > > of spam on the blockchain, namely congestion control of categories
> of outputs (e.g. "fat"
> > > > scriptpubkeys). Let's say P is a block period, T a type of scriptpubkey,
> and L a limiting
> > > > threshold for the number of T occurrences during the period P. Beyond
> the L threshold, any
> > > > additional T scriptpubkey makes the block invalid. Or
> alternatively, any additional
> > > > T-generating / T-spending transaction must pay some weight penalty...
> > > >
> > > > Congestion control, which of course comes with its lot of
> shenanigans, is not a very novel
> > > > idea, as I believe it has been floated a few times in the context of
> lightning to solve mass
> > > > closures, where channels out-priced at the current feerate would have
> their safety timelocks scale
> up.
> > > >
> > > > There would no longer be any need to come to social consensus on what
> is quantitatively "spam" or not. The blockchain
> > > > would automatically throttle out block-space-spamming
> transactions. Qualitative spam is another
> > > > question: for anyone who has ever read Shannon's theory of
> communication, the only effective thing can
> > > > be to limit the size of the data payload. But probably we're quickly back
> to a non-mathematically-solvable
> > > > linguistic question again [0].
> > > >
> > > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
> favor of prioritizing
> > > > a timewarp fix and limiting dosy spends by old redeem scripts,
> rather than engaging in shooting
> > > > ourselves in the foot with ill-designed "spam" consensus
> mitigations.
> > > >
> > > > [0] If you have the soul of a logician, it would be an interesting
> demonstration to attempt:
> > > > to establish that we cannot come up with mathematical or
> cryptographic consensus means
> > > > to solve qualitative "spam", which in a very pure sense is a
> linguistic issue.
> > > >
> > > > Best,
> > > > Antoine
> > > > OTS hash:
> 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
> > > > Le vendredi 17 octobre 2025 à 19:45:44 UTC+1, Antoine Poinsot a
> écrit :
> > > >
> > > > > Hi,
> > > > >
> > > > > This approach was discussed last year when evaluating the best way
> to mitigate DoS blocks in terms
> > > > > of gains compared to confiscatory surface. Limiting the size of
> created scriptPubKeys is not a
> > > > > sufficient mitigation on its own, and has a non-trivial
> confiscatory surface.
> > > > >
> > > > > One of the goals of BIP54 is to address objections to Matt's
> earlier proposal, notably the (in my
> > > > > opinion reasonable) confiscation concerns voiced by Russell
> O'Connor. Limiting the size of
> > > > > scriptPubKeys would in this regard be moving in the opposite
> direction.
> > > > >
> > > > > Various approaches of limiting the size of spent scriptPubKeys
> were discussed, in forms that would
> > > > > mitigate the confiscatory surface, to adopt in addition to (what
> eventually became) the BIP54 sigops
> > > > > limit. However i decided against including this additional measure
> in BIP54 because:
> > > > > - of the inherent complexity of the discussed schemes, which would
> make it hard to reason about
> > > > > constructing transactions spending legacy inputs, and equally hard
> to evaluate the reduction of
> > > > > the confiscatory surface;
> > > > > - more importantly, there is steep diminishing returns to piling
> on more mitigations. The BIP54
> > > > > limit on its own prevents an externally-motivated attacker from
> *unevenly* stalling the network
> > > > > for dozens of minutes, and a revenue-maximizing miner from
> regularly stalling its competition
> > > > > for dozens of seconds, at a minimized cost in confiscatory
> surface. Additional mitigations reduce
> > > > > the worst case validation time by a smaller factor at a higher
> cost in terms of confiscatory
> > > > > surface. It "feels right" to further reduce those numbers, but
> it's less clear what the tangible
> > > > > gains would be.
> > > > >
> > > > > Furthermore, it's always possible to get the biggest bang for our
> buck in a first step and going the
> > > > > extra mile in a later, more controversial, soft fork. I previously
> floated the idea of a "cleanup
> > > > > v2" in private discussions, and i think besides a reduction of the
> maximum scriptPubKey size it
> > > > > should feature a consensus-enforced maximum transaction size for
> the reasons stated here:
> > > > >
> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
> I wouldn't hold my
> > > > > breath on such a "cleanup v2", but it may be useful to have it
> documented somewhere.
> > > > >
> > > > > I'm trying to not go into much details regarding which mitigations
> were considered in designing
> > > > > BIP54, because they are tightly related to the design of various
> DoS blocks. But i'm always happy to
> > > > > rehash the decisions made there and (re-)consider alternative
> approaches on the semi-private Delving
> > > > > thread [0] dedicated to this purpose. Feel free to ping me to get
> access if i know you.
> > > > >
> > > > > Best,
> > > > > Antoine Poinsot
> > > > >
> > > > > [0]:
> https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
> fre...@reardencode.com> wrote:
> > > > >
> > > > > >
> > > > > >
> > > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> > > > > >
> > > > > > > But also given that there are essentially no violations and no
> reason to
> > > > > > > expect any I'm not sure the proposal is worth time relative to
> fixes of
> > > > > > > actual moderately serious DOS attack issues.
> > > > > >
> > > > > >
> > > > > > I believe this limit would also stop most (all?) of
> PortlandHODL's
> > > > > > DoSblocks without having to make some of the other changes in
> GCC. I
> > > > > > think it's worthwhile to compare this approach to those proposed
> by
> > > > > > Antoine in solving these DoS vectors.
> > > > > >
> > > > > > Best,
> > > > > >
> > > > > > --Brandon
> > > > > >
> > > > > > --
> > > > > > You received this message because you are subscribed to the
> Google Groups "Bitcoin Development Mailing List" group.
> > > > > > To unsubscribe from this group and stop receiving emails from
> it, send an email to bitcoindev+...@googlegroups.com.
> > > > > > To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.
> > > >
> > > > --
> > > > You received this message because you are subscribed to the Google
> Groups "Bitcoin Development Mailing List" group.
> > > > To unsubscribe from this group and stop receiving emails from it,
> send an email to bitcoindev+...@googlegroups.com.
> > >
> > > > To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/5135a031-a94e-49b9-ab31-a1eb48875ff2n%40googlegroups.com.
>
> >
> > --
> > You received this message because you are subscribed to the Google
> Groups "Bitcoin Development Mailing List" group.
> > To unsubscribe from this group and stop receiving emails from it, send
> an email to bitcoindev+...@googlegroups.com.
> > To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/78475572-3e52-44e4-8116-8f1a917995a4n%40googlegroups.com.
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Bitcoin Development Mailing List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to bitcoindev+...@googlegroups.com.
>
> To view this discussion visit
> https://groups.google.com/d/msgid/bitcoindev/f4755fb6-b031-4c60-b304-f123ba2ff473n%40googlegroups.com
> <https://groups.google.com/d/msgid/bitcoindev/f4755fb6-b031-4c60-b304-f123ba2ff473n%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/73793fc0-29d8-4b79-b7bf-048e459c928bn%40googlegroups.com.
[-- Attachment #1.2: Type: text/html, Size: 32758 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
* [bitcoindev] Re: Policy restrictions Was: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.
2025-10-30 20:27 ` [bitcoindev] Policy restrictions Was: " 'Russell O'Connor' via Bitcoin Development Mailing List
@ 2025-10-30 22:23 ` 'Russell O'Connor' via Bitcoin Development Mailing List
0 siblings, 0 replies; 46+ messages in thread
From: 'Russell O'Connor' via Bitcoin Development Mailing List @ 2025-10-30 22:23 UTC (permalink / raw)
To: Greg Maxwell, Bitcoin Development Mailing List, Antoine Poinsot
[-- Attachment #1: Type: text/plain, Size: 2548 bytes --]
Fine, I ended up looking into it.
PR 5247 <http://github.com/bitcoin/bitcoin/pull/5247> changed the semantics
of STRICTENC policy in late 2014 so that during a CHECKMULTISIGVERIFY, if a
pubkey with an incorrect prefix is encountered, the script fails. The
previous behaviour was that if a pubkey was invalid, the check failed for
only that pubkey and processing continued on to the next pubkey.
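For readers who don't remember the old loop, a schematic sketch of the
difference (illustrative pseudocode only, not the actual interpreter.cpp code;
IsCanonicalPubkeyEncoding and VerifySig are placeholder helpers):

    #include <cstdint>
    #include <vector>

    using Bytes = std::vector<uint8_t>;

    // Placeholders standing in for the real encoding check and ECDSA verification.
    bool IsCanonicalPubkeyEncoding(const Bytes& pubkey);
    bool VerifySig(const Bytes& sig, const Bytes& pubkey);

    // Schematic CHECKMULTISIG loop. Old behaviour: a badly-encoded pubkey simply
    // fails to match and the loop moves on to the next key. Post-PR-5247
    // behaviour under STRICTENC: the whole script fails immediately.
    bool EvalMultisig(const std::vector<Bytes>& keys, const std::vector<Bytes>& sigs,
                      bool strictenc)
    {
        size_t ikey = 0, isig = 0;
        while (isig < sigs.size()) {
            if (sigs.size() - isig > keys.size() - ikey) return false; // too few keys left
            if (!IsCanonicalPubkeyEncoding(keys[ikey])) {
                if (strictenc) return false; // new: invalid encoding fails the script
                ++ikey;                      // old: skip just this key
                continue;
            }
            if (VerifySig(sigs[isig], keys[ikey])) ++isig;
            ++ikey;
        }
        return true;
    }

The practical effect described above: a bare multisig containing a mis-encoded
key could still be spent with the remaining keys under the old loop, while
under the new STRICTENC behaviour any spend whose evaluation reaches the bad
key fails policy checks, which is how funds could end up "soft confiscated".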
I still have to go through my IRC logs, but my recollection is that there is at
least one person who had their funds "soft confiscated" in the sense that
they were now, by policy only, unable to spend their UTXOs and would
need to bypass policy to retrieve their funds.
People who have better databases than I do are welcome to search through bare
multisig UTXOs to see if any of them have a strict subset of malformed
pubkeys.
So one minor correction to my story: it wasn't a matter of the pubkey being
off-curve, but rather having an invalid prefix / invalid encoding.
On Thu, Oct 30, 2025 at 4:27 PM Russell O'Connor <roconnor@blockstream.com>
wrote:
> On Thu, Oct 30, 2025 at 2:40 AM Greg Maxwell <gmaxwell@gmail.com> wrote:
>
>> I don't even think bitcoin has ever policy restricted something that was
>> in active use, much less softforked out something like that.
>>
>
> I invite the Bitcoin lore experts to correct me here, but I recall someone
> many years ago finding that their bare multisig funds (likely related to
> the Counterparty nonsense) were stuck by policy due to some new policy
> being enacted to mandate that pubkeys in bare multisigs must now all be
> on-curve points ... or something like that. I do hope that they managed to
> get their funds recovered by now with direct miner intervention.
>
> I really ought to vet my claim above by going through my IRC logs and
> Bitcoin development history ... but a quicker way is to post a claim
> publicly on the internet and wait for someone else to call it out as being
> wrong.
>
> Also, I think this type of policy change is quite harmful and shouldn't be
> replicated, and ideally reverted, assuming my story is correct.
>
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/CAMZUoKmPfbwJApAeYkXs6U9Syuj4KjbcsH3aFJ7desFnHxyyTw%40mail.gmail.com.
[-- Attachment #2: Type: text/html, Size: 3583 bytes --]
^ permalink raw reply [flat|nested] 46+ messages in thread
end of thread, other threads:[~2025-10-30 22:29 UTC | newest]
Thread overview: 46+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2025-10-02 20:42 [bitcoindev] [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus PortlandHODL
2025-10-02 22:19 ` Andrew Poelstra
2025-10-02 22:46 ` Andrew Poelstra
2025-10-02 22:47 ` 'moonsettler' via Bitcoin Development Mailing List
2025-10-03 7:11 ` Garlo Nicon
2025-10-02 22:27 ` Brandon Black
2025-10-03 1:21 ` [bitcoindev] " /dev /fd0
2025-10-03 10:46 ` 'moonsettler' via Bitcoin Development Mailing List
2025-10-03 11:26 ` /dev /fd0
2025-10-03 13:35 ` jeremy
2025-10-03 13:59 ` Andrew Poelstra
2025-10-03 14:18 ` /dev /fd0
2025-10-03 14:59 ` Andrew Poelstra
2025-10-03 16:15 ` Anthony Towns
2025-10-05 9:59 ` Guus Ellenkamp
2025-10-03 13:21 ` [bitcoindev] " Peter Todd
2025-10-03 16:52 ` 'moonsettler' via Bitcoin Development Mailing List
2025-10-03 15:42 ` Anthony Towns
2025-10-03 20:02 ` Luke Dashjr
2025-10-03 20:52 ` /dev /fd0
2025-10-04 23:12 ` jeremy
2025-10-05 10:59 ` Luke Dashjr
2025-10-08 15:03 ` Greg Tonoski
2025-10-08 18:15 ` Keagan McClelland
2025-10-15 20:04 ` [bitcoindev] " Casey Rodarmor
2025-10-16 0:06 ` Greg Maxwell
2025-10-17 17:07 ` Brandon Black
2025-10-17 18:05 ` 'Antoine Poinsot' via Bitcoin Development Mailing List
2025-10-18 1:01 ` Antoine Riard
2025-10-18 4:03 ` Greg Maxwell
2025-10-18 12:06 ` PortlandHODL
2025-10-18 16:44 ` Greg Tonoski
2025-10-18 16:54 ` /dev /fd0
2025-10-22 8:07 ` 'moonsettler' via Bitcoin Development Mailing List
2025-10-27 23:44 ` Michael Tidwell
2025-10-30 2:26 ` Greg Maxwell
2025-10-30 3:36 ` Michael Tidwell
2025-10-30 6:15 ` Greg Maxwell
2025-10-30 8:55 ` Bitcoin Error Log
2025-10-30 17:40 ` Greg Maxwell
2025-10-30 20:27 ` [bitcoindev] Policy restrictions Was: " 'Russell O'Connor' via Bitcoin Development Mailing List
2025-10-30 22:23 ` [bitcoindev] " 'Russell O'Connor' via Bitcoin Development Mailing List
2025-10-30 16:10 ` [bitcoindev] " Tom Harding
2025-10-30 22:15 ` Doctor Buzz
2025-10-20 15:22 ` Greg Maxwell
2025-10-21 19:05 ` Garlo Nicon
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox