From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sat, 18 Oct 2025 05:06:04 -0700 (PDT)
From: PortlandHODL
To: Bitcoin Development Mailing List
Message-Id: <78475572-3e52-44e4-8116-8f1a917995a4n@googlegroups.com>
References: <6f6b570f-7f9d-40c0-a771-378eb2c0c701n@googlegroups.com>
 <961e3c3a-a627-4a07-ae81-eb01f7a375a1n@googlegroups.com>
 <5135a031-a94e-49b9-ab31-a1eb48875ff2n@googlegroups.com>
Subject: Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.

Hey,

First, thank you to everyone who responded, and please continue to do so.
There were many thought-provoking responses, and they shifted my perspective
quite a bit from the original post, which in and of itself was the goal to a
degree.

I am only going to respond to the current concerns. ACKs, though I appreciate
them, will be ignored unless they include new discoveries.

Tl;dr (Portland's perspective)
 - Confiscation is a problem because of presigned transactions.
 - DoS mitigation could also be achieved by marking UTXOs as unspendable if
   > 520 bytes; this would preserve the proof of publication.
 - Timeout / sunset logic is compelling.
 - The (n) value of acceptable bytes is contentious, with the lowest
   suggested limit being 67.
 - Congestion control is worth a look?

Next steps:
 - Deeper discussion at the individual level: Antoine Poinsot and GCC
   overlap?
 - Write an implementation.
 - Decide whether to pursue a BIP.

Responses

Andrew Poelstra:
> There is a risk of confiscation of coins which have pre-signed but
> unpublished transactions spending them to new outputs with large
> scriptPubKeys. Due to long-standing standardness rules, and the presence
> of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> such transactions exist.

PortlandHODL: This is a risk that can be incurred and is likely not possible
to mitigate, since there could be chains of transactions; even when
recursively iterating over a chain there is a chance that a presigned
transaction breaks this rule. Every idea I have had, from block redemption
limits on prevouts onward, ends up being a coverage issue where confiscation
can be made less likely but never completely mitigated.

Second, there are already TXs that have effectively been confiscated at the
policy level (P2SH cleanstack violation), where the user cannot find any
miner whose policy accepts them into its mempool (3 years).

/dev /fd0:
> so it would be great if this was restricted to OP_RETURN

PortlandHODL: I reject this completely, as it would remove the UTXO-set
omission for the scriptPubkey and encourage miners to subvert the OP_RETURN
restriction by simply using another opcode. It also does not address some of
the most important factors, such as DoS mitigation and legacy-script attack
surface reduction.

Peter Todd:
> NACK ...

PortlandHODL: You NACK'd for the same reasons that I stated in my OP, without
including any additional context or reasoning.

jeremy:
> I think that this type of rule is OK if we do it as a "sunsetting"
> restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2
> years, 5 years, 10 years).

If action is taken, this is the most reasonable approach, alleviating
confiscatory concerns through deferral.
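To make the sunsetting idea concrete, here is a minimal sketch of what a
height-bounded version of the rule could look like. This is illustrative
only; the constants, types, and helper names are placeholders I made up, not
part of any proposed implementation:

    // Hypothetical sketch: enforce the >520-byte scriptPubKey restriction only
    // between an activation height and a sunset height (activation + N blocks).
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    constexpr std::size_t MAX_SCRIPTPUBKEY_SIZE = 520;     // limit under discussion
    constexpr int32_t ACTIVATION_HEIGHT         = 1000000; // placeholder
    constexpr int32_t SUNSET_BLOCKS             = 105120;  // ~2 years, placeholder

    struct TxOut {
        int64_t value;
        std::vector<unsigned char> scriptPubKey;
    };

    // Returns false only if the rule is active at `height` and some created
    // output exceeds the size limit; outside the window the rule is a no-op.
    bool CheckOutputScriptSizes(const std::vector<TxOut>& outputs, int32_t height)
    {
        const bool active = height >= ACTIVATION_HEIGHT &&
                            height <  ACTIVATION_HEIGHT + SUNSET_BLOCKS;
        if (!active) return true;
        for (const TxOut& out : outputs) {
            if (out.scriptPubKey.size() > MAX_SCRIPTPUBKEY_SIZE) return false;
        }
        return true;
    }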
> You can argue against this example probably, but it is worth considering
> that absence of evidence of use is not evidence of absence of use and I
> myself feel that overall our understanding of Bitcoin transaction
> programming possibilities is still early. If you don't like this example,
> I can give you others (probably).

Agreed, and this also feeds into the reasoning for adopting point 1 of your
response. My thoughts here run along the lines of proof of publication: this
change only strips away the executable portion of a script between 521 and
10_000 bytes, or the published data portion if > 10_000 bytes, and the same
data could likely still be published in chunked segments using outpoints.

Andrew Poelstra:
> Aside from proof-of-publication (i.e. data storage directly in the UTXO
> set) there is no usage of script which can't be equally (or better)
> accomplished by using a Segwit v0 or Taproot script.

This sums up the majority of the future-use-case concern.

Anthony Towns:
> (If you restricted the change to only applying to scripts that used
> non-push operators, that would probably still provide upgrade flexibility
> while also preventing potential script abuses. But it wouldn't do anything
> to prevent publishing data)

Could this not be done as segments in multiple outpoints using a coordination
outpoint? I fail to see why publication proof must be in a single chunk. This
does, however, bring another alternative to mind: just making these outpoints
unspendable rather than invalidating the block that includes them (sketched
further below).
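As a rough illustration of what I mean by chunked segments, something like
the following could split a payload across several outputs plus a
coordination record. The framing and the coordination format are
hypothetical, not a defined protocol:

    // Illustrative only: split a payload into <=520-byte pieces, each of which
    // would become the data portion of one output's scriptPubKey, plus a toy
    // "coordination" record carrying the chunk count (a real scheme would also
    // commit to ordering, e.g. via a hash of the full payload).
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    using Bytes = std::vector<unsigned char>;

    constexpr std::size_t CHUNK_SIZE = 520; // matches the limit under discussion

    std::vector<Bytes> ChunkPayload(const Bytes& payload)
    {
        std::vector<Bytes> chunks;
        for (std::size_t off = 0; off < payload.size(); off += CHUNK_SIZE) {
            const std::size_t len = std::min(CHUNK_SIZE, payload.size() - off);
            chunks.emplace_back(payload.begin() + off, payload.begin() + off + len);
        }
        return chunks;
    }

    // Minimal coordination record: just the chunk count, little-endian, 2 bytes.
    Bytes CoordinationRecord(std::size_t chunk_count)
    {
        return Bytes{static_cast<unsigned char>(chunk_count & 0xff),
                     static_cast<unsigned char>((chunk_count >> 8) & 0xff)};
    }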
> As far as the "but contiguous data will be regulated more strictly"
> argument goes; I don't think "your honour, my offensive content has
> strings of 4d0802 every 520 bytes

Correct, this was never meant to resolve that issue.

Luke Dashjr:
> If we're going this route, we should just close all the gaps for the
> immediate future:

To put it nicely, this is completely beyond the scope of what is being
proposed.

Guus Ellenkamp:
> If there are really so few OP_RETURN outputs more than 144 bytes, then
> why increase the limit if that change is so controversial? It seems
> people who want to use a larger OP_RETURN size do it anyway, even with
> the current default limits.

Completely off topic and irrelevant.

Greg Tonoski:
> Limiting the maximum size of the scriptPubKey of a transaction to 67
> bytes.

This leaves no room to deal with broken hashing algorithms and very little
future upgradability for hooks. The rest of these points should be merged
with Luke's response and either hijack my thread or start a new one with the
increased scope; any approach I take will only relate to the scriptPubkey.

Keagan McClelland:
> Hard NACK on capping the witness size as that would effectively ban large
> scripts even in the P2SH wrapper which undermines Bitcoin's ability to be
> an effectively programmable money.

This has nothing to do with the witness size or even the P2SH wrapper.

Casey Rodarmor:
> I think that "Bitcoin could need it in the future?" might be a good enough
> reason not to do this.
>
> Script pubkeys are the only variable-length transaction fields which can
> be covered by input signatures, which might make them useful for future
> soft forks. I can imagine confidential asset schemes or post-quantum coin
> recovery schemes requiring large proofs in the outputs, where the validity
> of the proof determined whether or not the transaction is valid, and thus
> require the proofs to be in the outputs, and not just a hash commitment.

Would the ability to publish the data alone be enough? For example, make the
output unspendable but allow the existence of the bytes to be covered by the
signature?
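Here is a minimal sketch of that "mark it unspendable instead of rejecting
the block" alternative, assuming a made-up coin representation (this is not
Bitcoin Core's actual UTXO code): the oversized output still enters the
chain, preserving proof of publication and signature coverage, but the coin
is flagged so it can never be spent.

    // Hypothetical sketch: types and field names are invented for illustration.
    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    constexpr std::size_t MAX_SCRIPTPUBKEY_SIZE = 520;

    struct CoinEntry {
        int64_t value;
        std::vector<unsigned char> scriptPubKey;
        bool unspendable = false; // set when the output exceeds the size limit
    };

    // Called when connecting a block's outputs into the UTXO set.
    CoinEntry MakeCoin(int64_t value, std::vector<unsigned char> spk)
    {
        CoinEntry coin{value, std::move(spk)};
        coin.unspendable = coin.scriptPubKey.size() > MAX_SCRIPTPUBKEY_SIZE;
        return coin;
    }

    // Called when validating an input that references `coin`; spending a
    // flagged coin fails, rather than the creating block having been invalid.
    bool CanSpend(const CoinEntry& coin)
    {
        return !coin.unspendable;
    }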
Antoine Poinsot:
> Limiting the size of created scriptPubKeys is not a sufficient mitigation
> on its own

I fail to see how this would not be sufficient. To DoS you need two things:
inputs with scriptPubkey redemptions, plus heavy opcodes that require unique
checks (for example, DUPing a stack element again and again doesn't work).
With the proposed (n) limit in place, that bounds how many unique, complex
operations can be packed into each input.

> One of the goal of BIP54 is to address objections to Matt's earlier
> proposal, notably the (in my opinion reasonable) confiscation concerns
> voiced by Russell O'Connor. Limiting the size of scriptPubKeys would in
> this regard be moving in the opposite direction.

I would actually go as far as to say the confiscation risk is higher with
the TX limit proposed in BIP54, as we already have proof of redemption of
TXs that break that rule, and the input set to do this already exists
on-chain; no need to even wonder about presigned transactions.
bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08

Please let me know if I am incorrect on any of this.

> Furthermore, it's always possible to get the biggest bang for our buck in
> a first step

Agreed on bang for the buck regarding DoS.

My final point here would be that I would like to discuss more; this response
is from my initial reading of yours and could be incomplete or incorrect. It
is just my in-the-moment response.

Antoine Riard:
> Anyway, in the sleeping pond of consensus fixes fishes, I'm more in favor
> of prioritizing a timewarp fix and limiting dosy spends by old redeem
> scripts

The idea of congestion control is interesting, but this solution should
significantly reduce the total DoS severity of known vectors.
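For reference, here is a rough sketch of the congestion-control idea Antoine
describes in his mail quoted below (a block period P, a scriptPubKey
category T, and a threshold L). The classification, window accounting, and
constants are placeholders of my own, not a worked-out proposal:

    // Illustrative only: over the trailing PERIOD_P blocks, count outputs of
    // category T (here, "fat" scriptPubKeys over 520 bytes) and reject any
    // block that would push the count past THRESHOLD_L.
    #include <cstddef>
    #include <deque>
    #include <vector>

    constexpr std::size_t PERIOD_P    = 144;  // block period, placeholder
    constexpr std::size_t THRESHOLD_L = 1000; // max category-T outputs per period, placeholder

    // Category T: any scriptPubKey larger than 520 bytes counts as "fat".
    bool IsFatScript(const std::vector<unsigned char>& spk)
    {
        return spk.size() > 520;
    }

    struct CongestionWindow {
        std::deque<std::size_t> perBlockCounts; // one entry per recent block

        std::size_t Total() const
        {
            std::size_t total = 0;
            for (std::size_t c : perBlockCounts) total += c;
            return total;
        }

        // Returns false if a block creating `newCount` category-T outputs
        // would exceed THRESHOLD_L over the trailing PERIOD_P blocks.
        bool AcceptBlock(std::size_t newCount)
        {
            while (perBlockCounts.size() >= PERIOD_P) perBlockCounts.pop_front();
            if (Total() + newCount > THRESHOLD_L) return false;
            perBlockCounts.push_back(newCount);
            return true;
        }
    };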
On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:

> Limits on block construction that cross transactions make it harder to
> accurately estimate fees and greatly complicate optimal block
> construction-- the latter being important because smarter and more
> computer powered mining code generating higher profits is a pro
> centralization factor.
>
> In terms of effectiveness the "spam" will just make itself
> indistinguishable from the most common transaction traffic from the
> perspective of such metrics-- and might well drive up "spam" levels
> because the higher embedding cost may make some of them use more
> transactions. The competition for these buckets by other traffic could
> make it effectively a block size reduction even against very boring
> ordinary transactions. ... which is probably not what most people want.
>
> I think it's important to keep in mind that bitcoin fee levels even at
> 0.1s/vb are far beyond what other hosting services and other blockchains
> cost-- so anyone still embedding data in bitcoin *really* want to be there
> for some reason and aren't too fee sensitive or else they'd already be
> using something else... some are even in favor of higher costs since the
> high fees are what create the scarcity needed for their seigniorage.
>
> But yeah I think your comments on priorities are correct.
>
>
> On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard wrote:
>
>> Hi list,
>>
>> Thanks to the annex covered by the signature, I don't see how the concern
>> about limiting the extensibility of bitcoin script with future
>> (post-quantum) cryptographic schemes. Previous proposal of the annex were
>> deliberately designed with variable-length fields to flexibly accomodate
>> a wide range of things.
>>
>> I believe there is one thing that has not been proposed to limit
>> unpredictable utterance of spams on the blockchain, namely congestion
>> control of categories of outputs (e.g "fat" scriptpubkeys). Let's say P a
>> block period, T a type of scriptpubkey and L a limiting threshold for the
>> number of T occurences during the period P. Beyond the L threshold, any
>> additional T scriptpubkey is making the block invalid. Or alternatively,
>> any additional T generating / spending transaction must pay some weight
>> penalty...
>>
>> Congestion control, which of course comes with its lot of shenanigans, is
>> not very a novel idea as I believe it has been floated few times in the
>> context of lightning to solve mass closure, where channels out-priced at
>> current feerate would have their safety timelocks scale ups.
>>
>> No need anymore to come to social consensus on what is quantitative
>> "spam" or not. The blockchain would automatically throttle out the block
>> space spamming transaction. Qualitative spam it's another question, for
>> anyone who has ever read shannon's theory of communication only effective
>> thing can be to limit the size of data payload. But probably we're kickly
>> back to a non-mathematically solvable linguistical question again [0].
>>
>> Anyway, in the sleeping pond of consensus fixes fishes, I'm more in favor
>> of prioritizing a timewarp fix and limiting dosy spends by old redeem
>> scripts, rather than engaging in shooting ourselves in the foot with
>> ill-designed "spam" consensus mitigations.
>>
>> [0] If you have a soul of logician, it would be an interesting
>> demonstration to come with to establish that we cannot come up with
>> mathematically or cryptographically consensus means to solve qualitative
>> "spam", which in a very pure sense is a linguistical issue.
>>
>> Best,
>> Antoine
>> OTS hash: 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
>> On Friday, October 17, 2025 at 19:45:44 UTC+1, Antoine Poinsot wrote:
>>
>>> Hi,
>>>
>>> This approach was discussed last year when evaluating the best way to
>>> mitigate DoS blocks in terms of gains compared to confiscatory surface.
>>> Limiting the size of created scriptPubKeys is not a sufficient
>>> mitigation on its own, and has a non-trivial confiscatory surface.
>>>
>>> One of the goal of BIP54 is to address objections to Matt's earlier
>>> proposal, notably the (in my opinion reasonable) confiscation concerns
>>> voiced by Russell O'Connor. Limiting the size of scriptPubKeys would in
>>> this regard be moving in the opposite direction.
>>>
>>> Various approaches of limiting the size of spent scriptPubKeys were
>>> discussed, in forms that would mitigate the confiscatory surface, to
>>> adopt in addition to (what eventually became) the BIP54 sigops limit.
>>> However i decided against including this additional measure in BIP54
>>> because:
>>> - of the inherent complexity of the discussed schemes, which would make
>>> it hard to reason about constructing transactions spending legacy
>>> inputs, and equally hard to evaluate the reduction of the confiscatory
>>> surface;
>>> - more importantly, there is steep diminishing returns to piling on more
>>> mitigations. The BIP54 limit on its own prevents an externally-motivated
>>> attacker from *unevenly* stalling the network for dozens of minutes, and
>>> a revenue-maximizing miner from regularly stalling its competitions for
>>> dozens of seconds, at a minimized cost in confiscatory surface.
>>> Additional mitigations reduce the worst case validation time by a
>>> smaller factor at a higher cost in terms of confiscatory surface. It
>>> "feels right" to further reduce those numbers, but it's less clear what
>>> the tangible gains would be.
>>>
>>> Furthermore, it's always possible to get the biggest bang for our buck
>>> in a first step and going the extra mile in a later, more controversial,
>>> soft fork. I previously floated the idea of a "cleanup v2" in private
>>> discussions, and i think besides a reduction of the maximum scriptPubKey
>>> size it should feature a consensus-enforced maximum transaction size for
>>> the reasons stated here:
>>> https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
>>> I wouldn't hold my breath on such a "cleanup v2", but it may be useful
>>> to have it documented somewhere.
>>>
>>> I'm trying to not go into much details regarding which mitigations were
>>> considered in designing BIP54, because they are tightly related to the
>>> design of various DoS blocks. But i'm always happy to rehash the
>>> decisions made there and (re-)consider alternative approaches on the
>>> semi-private Delving thread [0] dedicated to this purpose. Feel free to
>>> ping me to get access if i know you.
>>>
>>> Best,
>>> Antoine Poinsot
>>>
>>> [0]:
>>> https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
>>>
>>>
>>> On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <
>>> fre...@reardencode.com> wrote:
>>>
>>> >
>>> > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
>>> >
>>> > > But also given that there are essentially no violations and no
>>> > > reason to expect any I'm not sure the proposal is worth time
>>> > > relative to fixes of actual moderately serious DOS attack issues.
>>> >
>>> > I believe this limit would also stop most (all?) of PortlandHODL's
>>> > DoSblocks without having to make some of the other changes in GCC. I
>>> > think it's worthwhile to compare this approach to those proposed by
>>> > Antoine in solving these DoS vectors.
>>> >
>>> > Best,
>>> >
>>> > --Brandon

-- 
You received this message because you are subscribed to the Google Groups
"Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an
email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit
https://groups.google.com/d/msgid/bitcoindev/78475572-3e52-44e4-8116-8f1a917995a4n%40googlegroups.com.