Hi PortlandHODL,

> PortlandHODL: I reject this completely, as it would remove the UTXO set omission for the scriptPubKey

Your proposed solution would still affect the UTXO set negatively if someone is really motivated to use the scriptPubKey for arbitrary data: they will simply use multiple outputs, as people already do with [DNS records][0].
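
A minimal sketch of that chunking approach, assuming a hypothetical per-output size cap (the cap and all names are illustrative, not an existing tool):

    # Hypothetical sketch: splitting an arbitrary payload across several
    # scriptPubKeys, each kept under an assumed per-output size cap. Every
    # such output then sits in the UTXO set.

    MAX_SPK_BYTES = 520  # assumed cap per scriptPubKey

    def chunk_payload(payload: bytes, cap: int = MAX_SPK_BYTES) -> list[bytes]:
        """Split a payload into cap-sized chunks, one per output."""
        return [payload[i:i + cap] for i in range(0, len(payload), cap)]

    data = b"\x00" * 5_000            # 5,000 bytes of arbitrary data
    print(len(chunk_payload(data)))   # -> 10 outputs, all bloating the UTXO set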

> and encourage miners to subvert the OP_RETURN restriction by simply using another opcode

What would motivate users to follow this approach, considering that storing data in the witness is cheaper?
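
As a rough comparison under BIP141 weight accounting (non-witness bytes count 4 weight units, witness bytes 1; fixed transaction overhead ignored):

    # Rough cost comparison under BIP141 weight accounting. Figures are
    # illustrative and ignore fixed transaction overhead.

    def vbytes(non_witness_bytes: int, witness_bytes: int) -> float:
        weight = 4 * non_witness_bytes + 1 * witness_bytes
        return weight / 4

    payload = 10_000                # bytes of arbitrary data
    print(vbytes(payload, 0))       # 10000.0 vB in a scriptPubKey
    print(vbytes(0, payload))       # 2500.0 vB in the witness: 4x cheaper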

[0]: https://asherfalcon.com/blog/posts/2
[1]: https://docs.ordinals.com/guides/batch-inscribing.html

/dev/fd0
floppy disk guy


On Sat, Oct 18, 2025 at 6:45 PM PortlandHODL <admin@qrsnap.io> wrote:
Hey,

First, thank you to everyone who responded, and please continue to do so. There were many thought-provoking responses, and this shifted my perspective quite a bit from the original post, which in and of itself was the goal to a degree.

I am currently only going to respond to the current concerns. ACKs, though I like them, will be ignored unless they include new discoveries.

Tl;dr (Portland's perspective)
 - Confiscation is a problem because of presigned transactions
 - DoS mitigation could also occur by marking UTXOs with scriptPubKeys > 520 bytes as unspendable, which would preserve the proof of publication (see the sketch after this list).
 - Timeout / Sunset logic is compelling
 - The acceptable byte limit (n) is contentious, with the lowest suggested value being 67.
 - Congestion control is worth a look?
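
A minimal sketch of the unspendable-above-520-bytes idea from the list above (the threshold and predicate are illustrative, not actual Bitcoin Core structures):

    # Hypothetical rule: rather than rejecting blocks that contain large
    # scriptPubKeys, mark such outputs as permanently unspendable. The bytes
    # still land on-chain, preserving proof of publication.

    UNSPENDABLE_THRESHOLD = 520  # bytes, per the suggestion above

    def is_spendable(script_pubkey: bytes) -> bool:
        # The output can be created (and its data published), but any later
        # attempt to spend it would fail validation.
        return len(script_pubkey) <= UNSPENDABLE_THRESHOLD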

Next Step:
 - Deeper discussion at the individual level: Antoine Poinsot and GCC overlap?
 - Write an implementation.
 - Decide whether to pursue a BIP

Responses

Andrew Poelstra:
> There is a risk of confiscation of coins which have pre-signed but
> unpublished transactions spending them to new outputs with large
> scriptPubKeys. Due to long-standing standardness rules, and the presence
> of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> such transactions exist.

PortlandHODL: This is a risk that can be incurred and is likely not possible to fully mitigate, as there could be chains of presigned transactions; even when recursively iterating over a chain, there is a chance that a presigned transaction breaks this rule. Every idea I have had, from block redemption limits on prevouts onward, amounts to a coverage issue: you can make confiscation less likely but not completely mitigated.

Second, there are already TXs that have effectively been confiscated at the policy level (P2SH cleanstack violation), where the user cannot find any miner whose policy accepts them into its mempool (for three years now).

/dev/fd0:
>  so it would be great if this was restricted to OP_RETURN

PortlandHODL: I reject this completely, as it would remove the UTXO set omission for the scriptPubKey and encourage miners to subvert the OP_RETURN restriction by simply using another opcode. It also does not address some of the most important factors, such as DoS mitigation and legacy script attack-surface reduction.

Peter Todd:
> NACK ...

PortlandHODL: You NACK'd for the same reasons that I stated in my OP, without including any additional context or reasoning.

Jeremy:
> I think that this type of rule is OK if we do it as a "sunsetting" restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2 years, 5 years, 10 years).

If action is taken, this is the most reasonable approach: alleviating confiscatory concerns through deferral.

> You can argue against this example probably, but it is worth considering that absence of evidence of use is not evidence of absence of use and I myself feel that overall our understanding of Bitcoin transaction programming possibilities is still early.  If you don't like this example, I can give you others (probably).

Agreed, and this also falls into the reasoning for deciding to utilize point 1 in your response. My thoughts here run along the lines of proof of publication: this change only strips away the executable portion of a script between 521 and 10,000 bytes, or the published data portion if > 10,000 bytes, and the same data could likely be published in chunked segments using outpoints.

Andrew Poelstra:
> Aside from proof-of-publication (i.e. data storage directly in the UTXO
> set) there is no usage of script which can't be equally (or better)
> accomplished by using a Segwit v0 or Taproot script.

This sums up the majority of the future use-case concern.

Anthony Towns:
> (If you restricted the change to only applying to scripts that used
> non-push operators, that would probably still provide upgrade flexibility
> while also preventing potential script abuses. But it wouldn't do anything
> to prevent publishing data)

Could this not be done as segments in multiple outpoints using a coordination outpoint? I fail to see why a publication proof must be in a single chunk. This does, however, bring another alternative to mind: just making these outpoints unspendable rather than invalidating the block on inclusion...
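
A sketch of what such multi-outpoint publication could look like, with a hypothetical coordination output listing the chunk locations (all structures illustrative):

    # Hypothetical multi-outpoint publication: each chunk lives in its own
    # output, and a "coordination" output lists the chunk locations in order.
    # Reassembly is plain concatenation.

    from typing import NamedTuple

    class OutPoint(NamedTuple):
        txid: str
        vout: int

    def reassemble(coordination: list[OutPoint],
                   chunk_data: dict[OutPoint, bytes]) -> bytes:
        # Concatenate chunks in the order the coordination output lists them.
        return b"".join(chunk_data[op] for op in coordination)

    coord = [OutPoint("aa" * 32, 0), OutPoint("bb" * 32, 1)]
    chunks = {coord[0]: b"hello ", coord[1]: b"world"}
    print(reassemble(coord, chunks))  # b'hello world'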

> As far as the "but contiguous data will be regulated more strictly"
> argument goes; I don't think "your honour, my offensive content has
> strings of 4d0802 every 520 bytes

Correct, this was never meant to resolve this issue.
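
For context: 4d 08 02 is OP_PUSHDATA2 (opcode 0x4d) followed by the little-endian length 0x0208 = 520, i.e. the prefix that recurs before each 520-byte push. A minimal sketch of how trivially the contiguous data comes back out, assuming a script made purely of such pushes:

    # "4d0802" = OP_PUSHDATA2 (0x4d) + little-endian length 0x0208 = 520.
    # Recovering the contiguous data is trivial; sketch ignores shorter pushes.

    OP_PUSHDATA2 = 0x4D

    def strip_pushdata2(script: bytes) -> bytes:
        data, i = b"", 0
        while i < len(script):
            assert script[i] == OP_PUSHDATA2
            length = int.from_bytes(script[i + 1:i + 3], "little")
            data += script[i + 3:i + 3 + length]
            i += 3 + length
        return data

    chunk = b"\x41" * 520
    script = b"\x4d\x08\x02" + chunk + b"\x4d\x08\x02" + chunk
    assert strip_pushdata2(script) == chunk + chunk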

Luke Dashjr:
> If we're going this route, we should just close all the gaps for the immediate future:

To put it nicely, this is completely beyond the scope of what is being proposed.

Guus Ellenkamp:
> If there are really so few OP_RETURN outputs more than 144 bytes, then
> why increase the limit if that change is so controversial? It seems
> people who want to use a larger OP_RETURN size do it anyway, even with
> the current default limits.

Completely off-topic and irrelevant.

Greg Tonoski:
> Limiting the maximum size of the scriptPubKey of a transaction to 67 bytes.

This leaves no room to deal with broken hashing algorithms and very little future upgradability for hooks. The rest of these points should be merged with Luke's response; either hijack my thread or start a new one with the increased scope. Any approach I take will relate only to the scriptPubKey.
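
For reference, 67 bytes is exactly a bare P2PK output with an uncompressed key (a 1-byte push opcode, the 65-byte key, OP_CHECKSIG), which is presumably where that number comes from. Comparing today's standard scriptPubKey template sizes against such a cap:

    # scriptPubKey sizes of today's standard templates versus a 67-byte cap.
    # 67 = bare P2PK with an uncompressed key: push(65) + key + OP_CHECKSIG.

    STANDARD_SPK_SIZES = {
        "P2PK (uncompressed)": 67,
        "P2PKH": 25,
        "P2SH": 23,
        "P2WPKH": 22,
        "P2WSH": 34,
        "P2TR": 34,
    }
    CAP = 67
    for name, size in STANDARD_SPK_SIZES.items():
        print(f"{name}: {size} bytes, headroom {CAP - size}")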

Keagan McClelland:
> Hard NACK on capping the witness size as that would effectively ban large scripts even in the P2SH wrapper which undermines Bitcoin's ability to be an effectively programmable money.

This has nothing to do with the witness size or even the P2SH wrapper.

Casey Rodarmor:
> I think that "Bitcoin could need it in the future?" might be a good enough
> reason not to do this.

> Script pubkeys are the only variable-length transaction fields which can be
> covered by input signatures, which might make them useful for future soft
> forks. I can imagine confidential asset schemes or post-quantum coin recovery
> schemes requiring large proofs in the outputs, where the validity of the proof
> determined whether or not the transaction is valid, and thus require the
> proofs to be in the outputs, and not just a hash commitment.

Would the ability to publish the data alone be enough? For example, make the output unspendable but allow the existence of the bytes to be covered by the signature?


Antoine Poinsot:
> Limiting the size of created scriptPubKeys is not a sufficient mitigation on its own

I fail to see how this would not be sufficient. To DoS, you need two things: inputs with scriptPubKey redemptions plus heavy opcodes that require unique checks. For example, DUPing a stack element again and again doesn't work. This leads to the next point: the number of unique, complex operations you can fit per input is bounded by the current (n) limit.
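
A rough, hypothetical cost model of that point (the function and numbers are illustrative), assuming the well-known legacy-sighash behaviour where each signature check rehashes roughly the whole transaction:

    # Back-of-the-envelope DoS cost model. Under legacy sighash, each
    # OP_CHECKSIG rehashes roughly the whole transaction, so total hashing
    # scales with inputs * sigops * tx size (the classic quadratic blow-up).
    # Stack copies like OP_DUP are cheap and bounded, so they don't amplify.

    def hashed_bytes(tx_size: int, sigops_per_input: int, n_inputs: int) -> int:
        return n_inputs * sigops_per_input * tx_size

    print(hashed_bytes(tx_size=100_000, sigops_per_input=200, n_inputs=50))
    # -> 1,000,000,000 bytes hashed to validate a single transaction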

> One of the goals of BIP54 is to address objections to Matt's earlier proposal, notably the (in my
> opinion reasonable) confiscation concerns voiced by Russell O'Connor. Limiting the size of
> scriptPubKeys would in this regard be moving in the opposite direction.

One note: I would actually go as far as to say the confiscation risk is higher with the TX limit proposed in BIP54, as we have actual proof of redemption of TXs that break that rule, and the input set to do this already exists on-chain; no need to even wonder about presigned transactions. bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08

Please let me know if I am incorrect on any of this.

> Furthermore, it's always possible to get the biggest bang for our buck in a first step

Agreed on bang for the buck regarding DoS.

My final point here is that I would like to discuss this further. This response is based on my initial reading of yours and could be incomplete or incorrect; it is just my in-the-moment reaction.

Antoine Riard:
> Anyway, in the sleeping pond of consensus-fix fishes, I'm more in favor of prioritizing
> a timewarp fix and limiting DoSy spends by old redeem scripts

The idea of congestion control is interesting, but this solution should significantly reduce the total DoS severity of known vectors. 

On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
Limits on block construction that cross transactions make it harder to accurately estimate fees and greatly complicate optimal block construction -- the latter being important because smarter and more compute-powered mining code generating higher profits is a pro-centralization factor.

In terms of effectiveness, the "spam" will just make itself indistinguishable from the most common transaction traffic from the perspective of such metrics -- and might well drive up "spam" levels, because the higher embedding cost may push some of it into more transactions. The competition for these buckets by other traffic could make this effectively a block size reduction even for very boring, ordinary transactions... which is probably not what most people want.
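
A toy sketch of that complication (categories, quotas, and feerates are all hypothetical): once a per-category bucket fills, higher-feerate transactions in that category must be skipped, so greedy-by-feerate is no longer optimal and fee estimation starts to depend on bucket occupancy:

    # Toy block template builder with a hypothetical per-category quota.

    def build_template(txs, quota):
        # txs: list of (feerate, category); quota: {category: max_count}
        used, template = {}, []
        for feerate, category in sorted(txs, reverse=True):
            if used.get(category, 0) < quota.get(category, float("inf")):
                template.append((feerate, category))
                used[category] = used.get(category, 0) + 1
        return template

    txs = [(50, "fat_spk"), (40, "fat_spk"), (10, "normal")]
    print(build_template(txs, {"fat_spk": 1}))
    # -> [(50, 'fat_spk'), (10, 'normal')]: the 40 sat/vB tx is pushed out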

I think it's important to keep in mind that bitcoin fee levels, even at 0.1 sat/vB, are far beyond what other hosting services and other blockchains cost -- so anyone still embedding data in bitcoin *really* wants to be there for some reason and isn't too fee-sensitive, or else they'd already be using something else... some are even in favor of higher costs, since the high fees are what create the scarcity needed for their seigniorage.
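
Rough numbers behind that comparison, using the BIP141 witness discount (illustrative, overhead ignored):

    # Cost of embedding ~1 MB as witness data at 0.1 sat/vB.

    payload_bytes = 1_000_000
    vb = payload_bytes / 4      # witness bytes weigh 1 WU; divide by 4 for vB
    fee_sats = vb * 0.1         # at 0.1 sat/vB
    print(fee_sats)             # -> 25000.0 sats per MB, excluding overhead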

But yeah I think your comments on priorities are correct.


On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com> wrote:
Hi list,

Thanks to the annex being covered by the signature, I don't see how the concern about limiting
the extensibility of bitcoin script with future (post-quantum) cryptographic schemes holds.
Previous proposals of the annex were deliberately designed with variable-length fields
to flexibly accommodate a wide range of things.

I believe there is one thing that has not been proposed to limit unpredictable outbursts
of spam on the blockchain, namely congestion control of categories of outputs (e.g. "fat"
scriptPubKeys). Let P be a block period, T a type of scriptPubKey, and L a limiting
threshold for the number of T occurrences during the period P. Beyond the threshold L, any
additional T scriptPubKey makes the block invalid. Or alternatively, any transaction
creating or spending an additional T must pay some weight penalty...
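
A minimal sketch of that rule, with hypothetical values for P, T, and L:

    # Sketch of the P/T/L congestion rule: count outputs of type T created
    # across the last P blocks and fail validation past threshold L. The
    # predicate and data structures are illustrative, not an implementation.

    P = 144       # block period (about one day of blocks)
    L = 1_000     # max "fat" scriptPubKeys allowed per period

    def is_type_T(script_pubkey: bytes) -> bool:
        return len(script_pubkey) > 520   # assumed definition of "fat"

    def block_valid(recent_blocks) -> bool:
        # recent_blocks: the last P blocks (including the candidate block),
        # each given as the list of scriptPubKeys it creates.
        count = sum(is_type_T(spk) for blk in recent_blocks for spk in blk)
        return count <= L

    blocks = [[b"\x00" * 600] * 10 for _ in range(P)]  # 10 fat outputs/block
    print(block_valid(blocks))  # -> False: 1,440 > L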

Congestion control, which of course comes with its lot of shenanigans, is not a very novel
idea, as I believe it has been floated a few times in the context of Lightning to solve mass
closure, where channels out-priced at the current feerate would have their safety timelocks
scaled up.

No need anymore to come to social consensus on what is quantitatively "spam" or not: the
blockchain would automatically throttle the block-space-spamming transactions. Qualitative spam
is another question; for anyone who has ever read Shannon's theory of communication, the only
effective measure can be to limit the size of the data payload. But then we're probably quickly
back to a non-mathematically-solvable linguistic question again [0].

Anyway, in the sleeping pond of consensus-fix fishes, I'm more in favor of prioritizing
a timewarp fix and limiting DoSy spends by old redeem scripts, rather than shooting
ourselves in the foot with ill-designed "spam" consensus mitigations.

[0] If you have the soul of a logician, it would be an interesting demonstration to
establish that we cannot come up with mathematical or cryptographic consensus means
to solve qualitative "spam", which in a very pure sense is a linguistic issue.

Best,
Antoine
OTS hash: 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
On Friday, October 17, 2025 at 19:45:44 UTC+1, Antoine Poinsot wrote:
Hi,

This approach was discussed last year when evaluating the best way to mitigate DoS blocks in terms
of gains compared to confiscatory surface. Limiting the size of created scriptPubKeys is not a
sufficient mitigation on its own, and has a non-trivial confiscatory surface.

One of the goals of BIP54 is to address objections to Matt's earlier proposal, notably the (in my
opinion reasonable) confiscation concerns voiced by Russell O'Connor. Limiting the size of
scriptPubKeys would in this regard be moving in the opposite direction.

Various approaches to limiting the size of spent scriptPubKeys were discussed, in forms that would
mitigate the confiscatory surface, to adopt in addition to (what eventually became) the BIP54 sigops
limit. However, I decided against including this additional measure in BIP54 because:
- of the inherent complexity of the discussed schemes, which would make it hard to reason about
constructing transactions spending legacy inputs, and equally hard to evaluate the reduction of
the confiscatory surface;
- more importantly, there are steep diminishing returns to piling on more mitigations. The BIP54
limit on its own prevents an externally-motivated attacker from *unevenly* stalling the network
for dozens of minutes, and a revenue-maximizing miner from regularly stalling its competition
for dozens of seconds, at a minimized cost in confiscatory surface. Additional mitigations reduce
the worst case validation time by a smaller factor at a higher cost in terms of confiscatory
surface. It "feels right" to further reduce those numbers, but it's less clear what the tangible
gains would be.

Furthermore, it's always possible to get the biggest bang for our buck in a first step and go the
extra mile in a later, more controversial, soft fork. I previously floated the idea of a "cleanup
v2" in private discussions, and I think besides a reduction of the maximum scriptPubKey size it
should feature a consensus-enforced maximum transaction size for the reasons stated here:
https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8. I wouldn't hold my
breath on such a "cleanup v2", but it may be useful to have it documented somewhere.

I'm trying not to go into much detail regarding which mitigations were considered in designing
BIP54, because they are tightly related to the design of various DoS blocks. But I'm always happy to
rehash the decisions made there and (re-)consider alternative approaches on the semi-private Delving
thread [0] dedicated to this purpose. Feel free to ping me to get access if I know you.

Best,
Antoine Poinsot

[0]: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711




On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <fre...@reardencode.com> wrote:

>
>
> On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
>
> > But also given that there are essentially no violations and no reason to
> > expect any I'm not sure the proposal is worth time relative to fixes of
> > actual moderately serious DOS attack issues.
>
>
> I believe this limit would also stop most (all?) of PortlandHODL's
> DoSblocks without having to make some of the other changes in GCC. I
> think it's worthwhile to compare this approach to those proposed by
> Antoine in solving these DoS vectors.
>
> Best,
>
> --Brandon
>