From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 30 Oct 2025 01:55:20 -0700 (PDT)
From: Bitcoin Error Log <bitcoinerrorlog@gmail.com>
To: Bitcoin Development Mailing List <bitcoindev@googlegroups.com>
Message-Id: <09d0aa74-1305-45bd-8da9-03d1506f5784n@googlegroups.com>
References: <6f6b570f-7f9d-40c0-a771-378eb2c0c701n@googlegroups.com>
 <961e3c3a-a627-4a07-ae81-eb01f7a375a1n@googlegroups.com>
 <5135a031-a94e-49b9-ab31-a1eb48875ff2n@googlegroups.com>
 <78475572-3e52-44e4-8116-8f1a917995a4n@googlegroups.com>
Subject: Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.

Greg,

One correction. Bitcoin has significantly restricted a proven use case with policy in the past. Maybe you won't think this qualifies, but it happened while you were away, so I am curious about your assessment.

During the change to mempoolfullrbf policy, I tried -- with support from the original author of the change, from multiple Core devs, and from multiple businesses providing data on how they provided zero-conf as a service to users via risk management -- to stop Bitcoin from killing first-seen policy, which had been stable for all of the history of Bitcoin. The change was at least clearly demonstrated to be controversial and to lack real consensus.

I'm happy to admit that no policy is enforceable, and that zero-conf was "never safe", but we had a system that worked and made Bitcoin more useful to people who used it that way. The businesses simply monitored for double-spends, imposed exposure limits per block, and gated actual delivery separately from checkout UX. It worked, and now it does not, and the only reason was a policy change.

The problem with claiming that policy is not a means of change is that you must then also admit there is no need for any RBF flags at all, or for arguing about data spam relay, or for any wide policy to be a concern of Bitcoin Core at all (particularly when speculatively / subjectively applied).

Thank you, and sorry for the side topic.

~John

On Thursday, October 30, 2025 at 6:40:10 AM UTC Greg Maxwell wrote:
Prior softforks have stuck to using the more explicit "forward compatibility" mechanisms, so -- e.g. if you use OP_NOP3 or a higher transaction version number or whatever that had no purpose (and would literally do nothing), saw ~no use, and was non-standard, or scripts that just anyone could have immediately taken at any time (e.g. funds free for the collecting rather than something secure)... then in that case I think people have felt that the long discussion leading up to a softfork was enough to acceptably mitigate the risk. Tapscript was specifically designed to make upgrades even safer and easier by making it so that the mere presence of any forward compat opcode (OP_SUCCESSn) makes the whole script insecure until that opcode is in use.

The proposal to limit scriptpubkey size is worse because longer scripts had purposes and use (e.g. larger multisigs), and unlike some NOP3 or txversions, where you could be argued to deserve issues if you did something so weird and abused a forward compat mechanism, people running into a 520 limit could have been pretty boring (and I see my own watching wallets have some scriptpubkeys beyond that size (big multisigs), in fact -- though I don't *think* any are still in use, but even I'm not absolutely sure that such a restriction wouldn't confiscate some of my own funds -- and it's a pain in the rear to check, having to bring offline stuff online, etc).

Confiscation isn't just limited to timelocks, since the victims of it may just not know about the consensus change, and while they could move their coins, they don't. One of the big advantages many people see in Bitcoin is that you can put your keys in a time capsule in the foundation of your home and trust that they're still going to be there and you'll be able to use your coins a decade later... that you don't have to watch out for banks drilling your safe deposit boxes or people putting public notices in classified ads laying claim to your property.

I don't even think bitcoin has ever policy-restricted something that was in active use, much less softforked out something like that. I wouldn't say it was impossible, but I think on the balance it would favor a notice period so that any reasonable person could have taken notice, taken action, or at least spoken up. But since there is no requirement to monitor, and that's part of bitcoin's value prop, the amount of time to consider reasonable ought to be quite long. Which is also at odds with the emergency measures position being taken by proponents of such changes.

(Which also, I think, is just entirely unjustified: even if you accept the worst version of their narrative, with the historical chain being made _illegal_, one could simply produce node software that starts from a well known embedded utxo snapshot and doesn't process historical blocks. Such a thing would in principle be a reduction in the security model, but balanced against the practical and realistic impact of potentially confiscating coins I think it looks pretty fine by comparison. It would also be fully consensus compatible, assuming no reorg below that point, and can be done right now by anyone who cares in a totally permissionless and coercion-free manner.)

On Thu, Oct 30, 2025 at 5:13 AM Michael Tidwell <mtidw...@gmail.com> wrote:

Greg,

> Also some risk of creating a new scarce asset class.

Well, Casey Rodarmor is in the thread, so lol maybe.
Anyway, point taken. I want to be 100% sure I understand the hypotheticals: there could be an off-chain, presigned transaction that needs more than 520 bytes for the scriptPubKey and, as Poelstra said, could even form a chain of presigned transactions under some complex, previously unknown scheme that only becomes public after this change is made. Can you confirm?

Would it also be a worry that a chain of transactions using said utxo could commit to some bizarre scheme, for instance a taproot transaction utxo that is later presigned to commit back to P2MS larger than 520 bytes? If so, I think I get it: you're saying that to essentially guarantee no confiscation we'd never be able to upgrade old UTXOs, and we'd need to track them forever to prevent unlikely edge cases?

Does the presigned chain at least stop needing to be tracked once the given UTXO co-mingles with a post-update coinbase utxo?

If so, this is indeed complex! This seems pretty insane both for the complexity of implementing and the unlikely edge cases. Has Core ever made a decision of (acceptable risk) to upgrade with protection of onchain utxos but not hypothetical unpublished ones?

Aren't we going to run into the same situation if we do an op code cleanup in the future, if we had people presign/commit to op codes that are no longer consensus valid?

Tidwell

On Wednesday, October 29, 2025 at 10:32:10 PM UTC-4 Greg Maxwell wrote:

"A few bytes" might be on the order of a forever 10% increase in the UTXO set size, plus a full from-network resync of all pruned nodes and a full (e.g. most-of-a-day outage) reindex of all unpruned nodes. Not insignificant but also not nothing. Such a portion of the existing utxo size is not from outputs over 520 bytes in size, so as a scheme for utxo set size reduction the addition of MHT tracking would probably make it a failure.

Also some risk of creating some new scarce asset class: txouts consisting of primordial coins that aren't subject to the new rules... sounds like the sort of thing that NFT degens would absolutely love. That might not be an issue *generally* for some change with confiscation risk, but for a change that is specifically intended to lobotomize bitcoin to make it less useful to NFT degens, maybe not such a great idea. :P

I mentioned it at all because I thought it could potentially be of some use; I'm just more skeptical of it for the current context. Also luke-jr and crew have moved on to actually propose even more invasive changes than just limiting the script size, which I anticipated, and which have much more significant issues. Just size-limiting outputs likely doesn't harm any interests or usages -- and so probably could be viable if the confiscation issue was addressed, but it also doesn't stick it to people transacting in ways the priests of ocean mining dislike.

> I believe you're pointing out the idea of non economically-rational spammers?

I think it's a mistake to conclude the spammers are economically irrational -- they're often just responding to different economics which may be less legible to your analysis. In particular, NFT degens prefer the high cost of transactions as a thing that makes their tokens scarce and gives them value -- otherwise they wouldn't be swapping one less efficient encoding for another, they'd just be using another blockchain (perhaps their own) entirely.
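(For a rough sense of the sizes at issue here, a minimal sketch -- not code from this thread -- assuming 33-byte compressed or 65-byte uncompressed keys behind one-byte pushes, plus OP_k, OP_n and OP_CHECKMULTISIG; the helper name is illustrative only:)

    # Rough size of a bare k-of-n CHECKMULTISIG (P2MS) scriptPubKey:
    #   OP_k <push><key> ... <push><key> OP_n OP_CHECKMULTISIG
    # Assumes k <= 16 (one-byte OP_k) and direct one-byte pushes, which
    # holds for 33-byte compressed and 65-byte uncompressed keys.
    def p2ms_spk_size(n_keys: int, key_len: int = 33) -> int:
        op_k = 1
        op_n = 1 if n_keys <= 16 else 2   # OP_1..OP_16, else a small push
        op_checkmultisig = 1
        keys = n_keys * (1 + key_len)     # length byte + key bytes
        return op_k + op_n + op_checkmultisig + keys

    for n in (3, 8, 15, 16, 20):
        print(n, p2ms_spk_size(n, 33), p2ms_spk_size(n, 65))
    # 16 compressed keys -> 547 bytes and 8 uncompressed keys -> 531 bytes,
    # both already over a 520-byte scriptPubKey cap; consensus currently
    # allows up to 20 keys (684 bytes compressed).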
On Thu, Oct 30, 2025 at 1:16 AM Michael Tidwell <mtidw...@gmail.com> wrote:

> MRH tracking might make that acceptable, but comes at a high cost which I think would clearly not be justified.

Greg, I want to ask/challenge how bad this is; this seems like a generally reusable primitive that could make other upgrades more feasible that also have the same strict confiscation risk profile.

IIUC, the major pain is 1 big reindex cost + a few bytes per utxo?

Poelstra,

> I don't think this is a great idea -- it would be technically hard to implement and slow deployment indefinitely.

I would like to know how much of a deal breaker this is in your opinion. Is MRH tracking off the table? In terms of the hypothetical presigned transactions that may exist using P2MS, is this a hard enough reason to require an MRH idea?

Greg,

> So, paradoxically this limit might increase the amount of non-prunable data

I believe you're pointing out the idea of non economically-rational spammers? We already see actors ignoring cheaper witness inscription methods. If spam shifts to many sub-520 fake-pubkey outputs (which I believe is less harmful than stamps), that imo is a separate UTXO cost discussion (like a SF to add weight to outputs). Anywho, this point alone doesn't seem sufficient to add as a clear negative reason for someone opposed to the proposal.

Thanks,
Tidwell

On Wednesday, October 22, 2025 at 5:55:58 AM UTC-4 moonsettler wrote:

> Confiscation is a problem because of presigned transactions

Allow 10000 bytes of total scriptPubKey size in each block, counting only those outputs that are larger than x (520 as proposed).

The code change is pretty minimal from the most obvious implementation of the original rule.

That makes it technically non-confiscatory. Still non-standard, but if anyone out there so obnoxiously foot-gunned themselves, they can't claim they were rugged by the devs.

BR,
moonsettler

On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <ad...@qrsnap.io> wrote:

> Hey,
>
> First, thank you to everyone who responded, and please continue to do so. There were many thought-provoking responses and this did shift my perspective quite a bit from the original post, which in and of itself was the goal to a degree.
>
> I am currently only going to respond to all of the current concerns. Acks, though I like them, will be ignored unless new discoveries are included.
>
> Tl;dr (Portland's Perspective)
> - Confiscation is a problem because of presigned transactions
> - DoS mitigation could also occur through marking UTXOs as unspendable if > 520 bytes; this would preserve the proof of publication.
> - Timeout / Sunset logic is compelling
> - The (n) value of acceptable needed bytes is contentious, with the lower suggested limit being 67
> - Congestion control is worth a look?
>
> Next Steps:
> - Deeper discussion at the individual level: Antoine Poinsot and GCC overlap?
> - Write an implementation.
> - Decide to pursue a BIP
>
> Responses
>
> Andrew Poelstra:
> > There is a risk of confiscation of coins which have pre-signed but
> > unpublished transactions spending them to new outputs with large
> > scriptPubKeys. Due to long-standing standardness rules, and the presence
> > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> > such transactions exist.
>
> PortlandHODL: This is a risk that can be incurred and is likely not possible to mitigate, as there could be chains of transactions, so even when recursively iterating over a chain there is a chance that a presigned transaction breaks this rule. Every idea I have had, from block redemption limits on prevouts, seems to just be a coverage issue where you can make the confiscation less likely but not completely mitigate it.
>
> Second, there are already TXs that effectively have been confiscated at the policy level (P2SH Cleanstack violation), where the user can not find any miner with a policy to accept these into their mempool. (3 years)
>
> /dev /fd0
> > so it would be great if this was restricted to OP_RETURN
>
> PortlandHODL: I reject this completely, as this would remove the UTXO-set omission for the scriptPubkey and encourage miners to subvert the OP_RETURN restriction and instead just use another op_code. This also does not hit on some of the most important factors such as DoS mitigation and legacy script attack surface reduction.
>
> Peter Todd
> > NACK ...
>
> PortlandHODL: You NACK'd for the same reasons that I stated in my OP, without including any additional context or reasoning.
>
> jeremy
> > I think that this type of rule is OK if we do it as a "sunsetting" restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2 years, 5 years, 10 years).
>
> If action is taken, this is the most reasonable approach. Alleviating confiscatory concerns through deferral.
>
> > You can argue against this example probably, but it is worth considering that absence of evidence of use is not evidence of absence of use, and I myself feel that overall our understanding of Bitcoin transaction programming possibilities is still early. If you don't like this example, I can give you others (probably).
>
> Agreed, and this also falls into the reasoning for deciding to utilize point 1 in your response. My thoughts on this would be along the lines of proof of publication, as this change only has the effect of stripping away the executable portion of a script between 521 and 10_000 bytes, or the published data portion if > 10_000 bytes, where the same data could likely be published in chunked segments using outpoints.
>
> Andrew Poelstra:
> > Aside from proof-of-publication (i.e. data storage directly in the UTXO
> > set) there is no usage of script which can't be equally (or better)
> > accomplished by using a Segwit v0 or Taproot script.
>
> This sums up the majority of the future use-case concern.
>
> Anthony Towns:
> > (If you restricted the change to only applying to scripts that used
> > non-push operators, that would probably still provide upgrade flexibility
> > while also preventing potential script abuses. But it wouldn't do anything
> > to prevent publishing data)
> Could this not be done as segments in multiple outpoints using a coordination outpoint? I fail to see why publication proof must be in a single chunk. This does, however, bring another alternative to mind: just making these outpoints unspendable but not invalidating the block through inclusion...
>
> > As far as the "but contiguous data will be regulated more strictly"
> > argument goes; I don't think "your honour, my offensive content has
> > strings of 4d0802 every 520 bytes
>
> Correct, this was never meant to resolve this issue.
>
> Luke Dashjr:
> > If we're going this route, we should just close all the gaps for the immediate future:
>
> To put it nicely, this is completely beyond the scope of what is being proposed.
>
> Guus Ellenkamp:
> > If there are really so few OP_RETURN outputs more than 144 bytes, then
> > why increase the limit if that change is so controversial? It seems
> > people who want to use a larger OP_RETURN size do it anyway, even with
> > the current default limits.
>
> Completely off topic and irrelevant.
>
> Greg Tonoski:
> > Limiting the maximum size of the scriptPubKey of a transaction to 67 bytes.
>
> This leaves no room to deal with broken hashing algorithms and very little future upgradability for hooks. The rest of these points should be merged with Luke's response and either hijack my thread or start a new one with the increased scope; any approach I take will only be related to the ScriptPubkey.
>
> Keagan McClelland:
> > Hard NACK on capping the witness size as that would effectively ban large scripts even in the P2SH wrapper which undermines Bitcoin's ability to be an effectively programmable money.
>
> This has nothing to do with the witness size or even the P2SH wrapper.
>
> Casey Rodarmor:
> > I think that "Bitcoin could need it in the future?" might be a good enough
> > reason not to do this.
>
> > Script pubkeys are the only variable-length transaction fields which can be
> > covered by input signatures, which might make them useful for future soft
> > forks. I can imagine confidential asset schemes or post-quantum coin recovery
> > schemes requiring large proofs in the outputs, where the validity of the proof
> > determined whether or not the transaction is valid, and thus require the
> > proofs to be in the outputs, and not just a hash commitment.
>
> Would the ability to publish the data alone be enough? Example: make the output unspendable but allow for the existence of the bytes to be covered through the signature?
>
>
> Antoine Poinsot:
> > Limiting the size of created scriptPubKeys is not a sufficient mitigation on its own
> I fail to see how this would not be sufficient. To DoS you need 2 things: inputs with ScriptPubkey redemptions + heavy op_codes that require unique checks. Example: DUPing a stack element again and again doesn't work. This then leads to the next part: you could get up to unique complex operations with the current (n) limit included per input.
>
> > One of the goals of BIP54 is to address objections to Matt's earlier proposal, notably the (in my
> > opinion reasonable) confiscation concerns voiced by Russell O'Connor. Limiting the size of
> > scriptPubKeys would in this regard be moving in the opposite direction.
>
> Some notes: I would actually go as far as to say the confiscation risk is higher with the TX limit proposed in BIP54, as we actually have proof of redemption of TXs that break that rule, and the input set to do this already exists on-chain -- no need to even wonder about the whole presigned case. bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
>
> Please let me know if I am incorrect on any of this.
>
> > Furthermore, it's always possible to get the biggest bang for our buck in a first step
>
> Agreed on bang for the buck regarding DoS.
>
> My final point here would be that I would like to discuss more; this response is from my initial view of your response and could be incomplete or incorrect. This is just my in-the-moment response.
>
> Antoine Riard:
> > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in favor of prioritizing
> > a timewarp fix and limiting dosy spends by old redeem scripts
>
> The idea of congestion control is interesting, but this solution should significantly reduce the total DoS severity of known vectors.
>
> On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
>
> > Limits on block construction that cross transactions make it harder to accurately estimate fees and greatly complicate optimal block construction -- the latter being important because smarter and more compute-powered mining code generating higher profits is a pro-centralization factor.
> >
> > In terms of effectiveness, the "spam" will just make itself indistinguishable from the most common transaction traffic from the perspective of such metrics -- and might well drive up "spam" levels because the higher embedding cost may make some of them use more transactions. The competition for these buckets by other traffic could make it effectively a block size reduction even against very boring ordinary transactions... which is probably not what most people want.
> > I think it's important to keep in mind that bitcoin fee levels even at 0.1 s/vb are far beyond what other hosting services and other blockchains cost -- so anyone still embedding data in bitcoin *really* wants to be there for some reason and isn't too fee sensitive, or else they'd already be using something else... some are even in favor of higher costs since the high fees are what create the scarcity needed for their seigniorage.
> >
> > But yeah I think your comments on priorities are correct.
> >
> > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com> wrote:
> >
> > > Hi list,
> > >
> > > Thanks to the annex being covered by the signature, I don't see the concern about limiting
> > > the extensibility of bitcoin script with future (post-quantum) cryptographic schemes.
> > > Previous proposals of the annex were deliberately designed with variable-length fields
> > > to flexibly accommodate a wide range of things.
> > >
> > > I believe there is one thing that has not been proposed to limit unpredictable utterances
> > > of spam on the blockchain, namely congestion control of categories of outputs (e.g. "fat"
> > > scriptpubkeys). Let's say P is a block period, T a type of scriptpubkey and L a limiting
> > > threshold for the number of T occurrences during the period P. Beyond the L threshold, any
> > > additional T scriptpubkey makes the block invalid. Or alternatively, any additional
> > > T-generating / spending transaction must pay some weight penalty...
> > >
> > > Congestion control, which of course comes with its lot of shenanigans, is not a very novel
> > > idea, as I believe it has been floated a few times in the context of lightning to solve mass
> > > closure, where channels out-priced at the current feerate would have their safety timelocks
> > > scaled up.
> > >
> > > No need anymore to come to social consensus on what is quantitative "spam" or not. The blockchain
> > > would automatically throttle out the block-space-spamming transactions. Qualitative spam is another
> > > question; for anyone who has ever read Shannon's theory of communication, the only effective thing
> > > can be to limit the size of the data payload. But probably we're quickly back to a
> > > non-mathematically-solvable linguistic question again [0].
> > >
> > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in favor of prioritizing
> > > a timewarp fix and limiting dosy spends by old redeem scripts, rather than engaging in shooting
> > > ourselves in the foot with ill-designed "spam" consensus mitigations.
> > >
> > > [0] If you have the soul of a logician, it would be an interesting demonstration to come up with,
> > > to establish that we cannot come up with mathematical or cryptographic consensus means
> > > to solve qualitative "spam", which in a very pure sense is a linguistic issue.
> > >
> > > Best,
> > > Antoine
> > > OTS hash: 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
> > > On Friday, 17 October 2025 at 19:45:44 UTC+1, Antoine Poinsot wrote:
> > >
> > > > Hi,
> > > >
> > > > This approach was discussed last year when evaluating the best way to mitigate DoS blocks in terms
> > > > of gains compared to confiscatory surface. Limiting the size of created scriptPubKeys is not a
> > > > sufficient mitigation on its own, and has a non-trivial confiscatory surface.
> > > > One of the goals of BIP54 is to address objections to Matt's earlier proposal, notably the (in my
> > > > opinion reasonable) confiscation concerns voiced by Russell O'Connor. Limiting the size of
> > > > scriptPubKeys would in this regard be moving in the opposite direction.
> > > >
> > > > Various approaches to limiting the size of spent scriptPubKeys were discussed, in forms that would
> > > > mitigate the confiscatory surface, to adopt in addition to (what eventually became) the BIP54 sigops
> > > > limit. However, I decided against including this additional measure in BIP54 because:
> > > > - of the inherent complexity of the discussed schemes, which would make it hard to reason about
> > > > constructing transactions spending legacy inputs, and equally hard to evaluate the reduction of
> > > > the confiscatory surface;
> > > > - more importantly, there are steep diminishing returns to piling on more mitigations. The BIP54
> > > > limit on its own prevents an externally-motivated attacker from *unevenly* stalling the network
> > > > for dozens of minutes, and a revenue-maximizing miner from regularly stalling its competition
> > > > for dozens of seconds, at a minimized cost in confiscatory surface. Additional mitigations reduce
> > > > the worst-case validation time by a smaller factor at a higher cost in terms of confiscatory
> > > > surface. It "feels right" to further reduce those numbers, but it's less clear what the tangible
> > > > gains would be.
> > > >
> > > > Furthermore, it's always possible to get the biggest bang for our buck in a first step and go the
> > > > extra mile in a later, more controversial, soft fork. I previously floated the idea of a "cleanup
> > > > v2" in private discussions, and I think besides a reduction of the maximum scriptPubKey size it
> > > > should feature a consensus-enforced maximum transaction size for the reasons stated here:
> > > > https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8. I wouldn't hold my
> > > > breath on such a "cleanup v2", but it may be useful to have it documented somewhere.
> > > >
> > > > I'm trying not to go into much detail regarding which mitigations were considered in designing
> > > > BIP54, because they are tightly related to the design of various DoS blocks. But I'm always happy to
> > > > rehash the decisions made there and (re-)consider alternative approaches on the semi-private Delving
> > > > thread [0] dedicated to this purpose. Feel free to ping me to get access if I know you.
> > > >
> > > > Best,
> > > > Antoine Poinsot
> > > >
> > > > [0]: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
> > > >
> > > >
> > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <fre...@reardencode.com> wrote:
> > > >
> > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> > > > >
> > > > > > But also given that there are essentially no violations and no reason to
> > > > > > expect any, I'm not sure the proposal is worth time relative to fixes of
> > > > > > actual moderately serious DOS attack issues.
> > > > >
> > > > > I believe this limit would also stop most (all?) of PortlandHODL's
> > > > > DoSblocks without having to make some of the other changes in GCC. I
> > > > > think it's worthwhile to compare this approach to those proposed by
> > > > > Antoine in solving these DoS vectors.
> > > > >
> > > > > Best,
> > > > >
> > > > > --Brandon

--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/09d0aa74-1305-45bd-8da9-03d1506f5784n%40googlegroups.com.
One correction. Bitcoin has significantly restri= cted a proven use case with policy in the past. Maybe you won't think this = qualifies, but it happened while you were away so I am curious about your a= ssessment.=C2=A0

During the change to mempoolful= lrbf policy, I, with support from the original author of the change, and wi= th support of multiple Core devs, and with the support of multiple business= es providing data on how they provided zero-conf as a service to users via = risk management, tried to stop Bitcoin from killing first-seen policy, whic= h had been stable for all of the history of Bitcoin. The change was at leas= t clearly demonstrated as controversial, and lacking real consensus.=C2=A0<= /div>

I'm happy to admit that no policy is enforceable= , and that zero-conf was "never safe", but we had a system that worked and = made Bitcoin more useful to people that used it that way. The businesses si= mply monitored for doublespends, imposed exposure limits per block, and gat= ed actual delivery separately from checkout UX. It worked and now it does n= ot and the only reason was a policy change.

The = problem with claiming that policy is not a means of change, is that you mus= t also admit the lack of need for any RBF flags at all, or for arguing abou= t data spam relay, or for any wide policy to be a concern of Bitcoin Core a= t all. (particularly when speculatively / subjectively applied) .

Thank you, and sorry for the side topic.

=
~John


On Thursday, October 30, 2025 at 6:40:10=E2=80=AFAM UTC Greg Maxwell wr= ote:
P= rior softforks have stuck to using the more explicit "forward compatibility= " mechanisms, so -- e.g. if you use OP_NOP3 or a higher transaction version= number or whatever that had no purpose (and would literally=C2=A0do nothin= g), saw ~no use, and was non-standard, or scripts that just anyone could ha= ve immediately taken at any time (e.g. funds free for the collecting rather= than something secure)... then in that case I think people have felt that = the long discussion leading up to a softfork=C2=A0was enough to acceptably = mitigate the risk.=C2=A0 Tapscript was specifically designed to make upgrad= es even safer and easier by making it so that the mere presence of any forw= ard compat opcode (OP_SUCCESSn) makes the whole script insecure until that = opcode is in use.=C2=A0

The proposal to limit sc= riptpubkey size is worse because longer scripts had purposes and use (e.g. = larger multisigs) and unlike some NOP3 or txversions where you could be arg= ued to deserve issues if you did something so weird and abused a forward co= mpat mechanism, people running into a 520 limit could have been pretty bori= ng (and I see my own watching wallets have some scriptpubkeys beyond that s= ize (big multisigs), in fact-- though I don't *think* any are still in use,= but even I'm not absolutely sure that such a restriction wouldn't confisca= te some of my own funds--- and it's a pain in the rear to check, having to = bring offline stuff online, etc).

Confiscation i= sn't just limited to timelocks, since the victims of it may just not know a= bout the consensus change and while they could move their coins they don't.= =C2=A0 One of the big advantages many people see in Bitcoin is that you can= put your keys in a time capsule in the foundation of your home and trust t= hat they're still going to be there and you'll be able to use your coins a = decade later. ... that you don't have to watch out for banks drilling your = safe deposit boxes or people putting public notices in classified ads layin= g claim to your property.

I don't even think bit= coin has ever policy restricted something that was in active use, much less= softforked=C2=A0out something like that.=C2=A0 I wouldn't say it was impos= sible but I think on the balance it would favor a notice period so that any= reasonable person could have taken notice, taken action, or at least spoke= up.=C2=A0 But since there is no requirement to monitor and that's part of = bitcoin's value prop the amount of time to consider reasonable ought to be = quite long.=C2=A0 Which also is at odds with the emergency measures positio= n being taken by proponents of such changes.

(wh= ich also, I think are just entirely unjustified, even if you accept the wor= st version of their narrative with the historical chain being made _illegal= _, one could simply produce node software that starts from a well known emb= edded utxo snapshot and doesn't process historical blocks.=C2=A0 =C2=A0Such= a thing would be in principle a reduction in the security model, but balan= ces against the practical and realistic impact of potentially confiscating = coins I think it looks pretty fine by comparison.=C2=A0 It would also be fu= lly consensus compatible, assuming no reorg below that point, and can be do= ne right now by anyone who cares in a totally permissionless and coercion f= ree manner)



<= div>
On Thu, Oct 30, 2025 at 5:13=E2=80=AFAM Michael Tidwel= l <mtidw...@gmail.com> wrote:
=

Greg,

&g= t; Also some risk of creating a new scarce asset class.

Well, Casey Rodarmor is in the thread, so lol maybe.

Anyway, point taken. I want to be 100% sure I understand the hypothetica= ls: there could be an off-chain, presigned, transactions that needs more th= an 520 bytes for the scriptPubKey and, as Poelstra said, could even form a = chain of presigned transactions under some complex, previously unknown, sch= eme that only becomes public after this change is made. Can you confirm?

Would it also be a=C2=A0worry that a chain of transactions using said ut= xo could commit to some bizarre scheme, for instance a taproot transaction = utxo that later is presigned committed back to P2MS larger than 520 bytes? = If so, I think I get it, you're saying to essentially guarantee no confisca= tion we'd never be able to upgrade old UTXOs and we'd need to track them fo= rever to prevent unlikely edge cases?
Does the presigned chain at lea= st stop needing to be tracked once the given UTXO co-mingles with a post-up= date coinbase utxo?

If so, this is indeed complex! This seems pretty insane both for the com= plexity of implementing and the unlikely edge cases. Has Core ever made a d= ecision of (acceptable risk) to upgrade with protection of onchain utxos bu= t not hypothetical unpublished ones?
Aren't we going to run into the = same situation if we do an op code clean up in the future if we had people = presign/commit to op codes that are no longer consensus valid?

Tidwell


On Wednesday, October 29, 2025 a= t 10:32:10=E2=80=AFPM UTC-4 Greg Maxwell wrote:
"A few bytes" might be on the ord= er of forever 10% increase in the UTXO set size, plus a full from-network r= esync of all pruned nodes and a full (e.g. most of day outage) reindex of a= ll unpruned nodes.=C2=A0 Not insignificant=C2=A0but also not nothing.=C2=A0= Such a portion of the=C2=A0existing utxo size is not from outputs over 520= bytes in size, so as a scheme for utxo set size reduction the addition of = MHT tracking would probably make it a failure.

A= lso some risk of creating some new scarce asset class, txouts consisting of= primordial=C2=A0coins that aren't subject to the new rules... sounds like = the sort of thing that NFT degens would absolutely love.=C2=A0 That might n= ot be an issue *generally* for some change with confiscation=C2=A0risk, but= for a change that is specifically intended to lobotomize bitcoin to make i= t less useful to NFT degens, maybe not such a great idea. :P

I mentioned it at all because I thought it could potentially b= e of some use, I'm just more skeptical of it for the current context.=C2=A0= Also luke-jr and crew has moved on to actually propose even more invasive = changes than just limiting the script size, which I anticipated, and has mu= ch more significant=C2=A0issues.=C2=A0 Just size limiting outputs likely do= esn't harm any interests or usages-- and so probably could be viable if the= confiscation issue was addressed, but it also doesn't stick it to people t= ransacting in ways the priests of ocean mining dislike.=C2=A0

>=C2=A0I believe you're pointing ou= t the idea of non economically-rational spammers?

I think it's a mistake to conclude the spammers ar= e economically irrational-- they're often just responding to different econ= omics which may be less legible to your analysis.=C2=A0 In particular, NFT = degens prefer the high cost of transactions as a thing that makes their tok= ens scarce and gives them value.=C2=A0 -- otherwise they wouldn't be swappi= ng for one less efficient encoding for another, they're just be using anoth= er blockchain (perhaps their own) entirely.




On Th= u, Oct 30, 2025 at 1:16=E2=80=AFAM Michael Tidwell <= mtidw...@gmail.com> wrote:
> MRH tracking might make that acceptable, but comes = at a high cost which I think would clearly not be justified.

Gre= g, I want to ask/challenge how bad this is, this seems like a generally reu= sable primitive that could make other upgrades more feasible that also have= the same strict confiscation risk profile.
IIUC, the major pain is, 1= big reindex cost + a few bytes per utxo?

Poelstra,

&= gt; I don't think this is a great idea -- it would be technically hard toimplement and slow deployment indefinitely.

I would like to = know how much of a deal breaker this is in your opinion. Is MRH tracking of= f the table? In terms of the hypothetical presigned transactions that may e= xist using P2MS, is this a hard enough reason to require a MRH idea?
<= br />Greg,

> So, paradoxically this limit might increase the = amount of non-prunable data

I believe you're pointing out the id= ea of non economically-rational spammers? We already see actors ignoring ch= eaper witness inscription methods. If spam shifts to many sub-520 fake pubk= ey outputs (which I believe is less harmful than stamps), that imo is a sep= arate UTXO cost discussion. (like a SF to add weight to outputs). Anywho, t= his point alone doesn't seem sufficient to add as a clear negative reason f= or someone opposed to the proposal.

Thanks,
Tidwell
On Wednesday, October 22, 2025 at 5:55:58=E2=80=AFAM UTC-4 m= oonsettler wrote:
> Confi= scation is a problem because of presigned transactions

Allow 10000 bytes of total scriptPubKey size in each block counting o= nly those outputs that are larger than x (520 as proposed).
The code change is pretty minimal from the most obvious implementatio= n of the original rule.

That makes it technically non-confiscatory. Still non-standard, but i= f anyone out there so obnoxiously foot-gunned themselves, they can't claim = they were rugged by the devs.

BR,
moonsettler

On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <ad...@qrsnap.io> wrote:

> Hey,
>=20
> First, thank you to everyone who responded, and please continue = to do so. There were many thought provoking responses and this did shift my= perspective quite a bit from the original post, which in of itself was the= goal to a degree.
>=20
> I am currently only going to respond to all of the current conce= rns. Acks; though I like them will be ignored unless new discoveries are in= cluded.
>=20
> Tl;dr (Portlands Perspective)
> - Confiscation is a problem because of presigned transactions
> - DoS mitigation could also occur through marking UTXOs as unspe= ndable if > 520 bytes, this would preserve the proof of publication.
> - Timeout / Sunset logic is compelling
> - The (n) value of acceptable needed bytes is contentious with t= he lower suggested limit being 67
> - Congestion control is worth a look?
>=20
> Next Step:
> - Deeper discussion at the individual level: Antoine Poinsot and= GCC overlap?
> - Write an implementation.
> - Decide to pursue BIP
>=20
> Responses
>=20
> Andrew Poelstra:
> > There is a risk of confiscation of coins which have pre-sig= ned but
> > unpublished transactions spending them to new outputs with = large
> > scriptPubKeys. Due to long-standing standardness rules, and= the presence
> > of P2SH (and now P2WSH) for well over a decade, I'm skeptic= al that any
> > such transactions exist.
>=20
> PortlandHODL: This is a risk that can be incurred and likely not= possible to mitigate as there could be possible chains of transactions so = even when recursively iterating over a chain there is a chance that a presi= gned breaks this rule. Every idea I have had from block redemption limits o= n prevouts seems to just be a coverage issue where you can make the confisc= ation less likely but not completely mitigated.
>=20
> Second, there are already TXs that effectively have been confisc= ated at the policy level (P2SH Cleanstack violation) where the user can not= find any miner with a policy to accept these into their mempool. (3 years)
>=20
> /dev /fd0
> > so it would be great if this was restricted to OP_RETURN
>=20
> PortlandHODL: I reject this completely as this would remove the = UTXOset omission for the scriptPubkey and encourage miners to subvert the O= P_RETURN restriction and instead just use another op_code, this also do not= hit on some of the most important factors such as DoS mitigation and legac= y script attack surface reduction.
>=20
> Peter Todd
> > NACK ...
>=20
> PortlandHODL: You NACK'd for the same reasons that I stated in m= y OP, without including any additional context or reasoning.
>=20
> jeremy
> > I think that this type of rule is OK if we do it as a "suns= etting" restriction -- e.g. a soft fork active for the next N blocks (N =3D= e.g. 2 years, 5 years, 10 years).
>=20
> If action is taken, this is the most reasonable approach. Allevi= ating confiscatory concerns through deferral.
>=20
> > You can argue against this example probably, but it is wort= h considering that absence of evidence of use is not evidence of absence of= use and I myself feel that overall our understanding of Bitcoin transactio= n programming possibilities is still early. If you don't like this example,= I can give you others (probably).
>=20
> Agreed and this also falls into the reasoning for deciding to ut= ilize point 1 in your response. My thoughts on this would be along the line= s of proof of publication as this change only has the effect of stripping a= way the executable portion of a script between 521 and 10_000 bytes or the = published data portion if > 10_000 bytes which the same data could likel= y be published in chunked segments using outpoints.
>=20
> Andrew Poelstra:
> > Aside from proof-of-publication (i.e. data storage directly= in the UTXO
> > set) there is no usage of script which can't be equally (or= better)
> > accomplished by using a Segwit v0 or Taproot script.
>=20
> This sums up the majority of future usecase concern
>=20
> Anthony Towns:
> > (If you restricted the change to only applying to scripts t= hat used
> non-push operators, that would probably still provide upgrade fl= exibility
> while also preventing potential script abuses. But it wouldn't d= o anything
> to prevent publishing data)
>=20
> Could this not be done as segments in multiple outpoints using a= coordination outpoint? I fail to see why publication proof must be in a si= ngle chunk. This does though however bring another alternative to mind, jus= t making these outpoints unspendable but not invalidate the block through i= nclusion...
>=20
> > As far as the "but contiguous data will be regulated more s= trictly"
> argument goes; I don't think "your honour, my offensive content = has
> strings of 4d0802 every 520 bytes
>=20
> Correct, this was never meant to resolve this issue.
>=20
> Luke Dashjr:
> > If we're going this route, we should just close all the gap= s for the immediate future:
>=20
> To put it nicely, this is completely beyond the scope of what is= being proposed.
>=20
> Guus Ellenkamp:
> > If there are really so few OP_RETURN outputs more than 144 = bytes, then
> why increase the limit if that change is so controversial? It se= ems
> people who want to use a larger OP_RETURN size do it anyway, eve= n with
> the current default limits.
>=20
> Completely off topic and irrelevant
>=20
> Greg Tonoski:
> > Limiting the maximum size of the scriptPubKey of a transact= ion to 67 bytes.
>=20
> This leave no room to deal with broken hashing algorithms and ve= ry little future upgradability for hooks. The rest of these points should b= e merged with Lukes response and either hijack my thread or start a new one= with the increased scope, any approach I take will only be related to the = ScriptPubkey
>=20
> Keagan McClelland:
> > Hard NACK on capping the witness size as that would effecti= vely ban large scripts even in the P2SH wrapper which undermines Bitcoin's = ability to be an effectively programmable money.
>=20
> This has nothing to do with the witness size or even the P2SH wr= apper
>=20
> Casey Rodarmor:
> > I think that "Bitcoin could need it in the future?" might b= e a good enough
> reason not to do this.
>=20
> > Script pubkeys are the only variable-length transaction fie= lds which can be
> covered by input signatures, which might make them useful for fu= ture soft
> forks. I can imagine confidential asset schemes or post-quantum = coin recovery
> schemes requiring large proofs in the outputs, where the validit= y of the proof
> determined whether or not the transaction is valid, and thus req= uire the
> proofs to be in the outputs, and not just a hash commitment.
>=20
> Would the ability to publish the data alone be enough? Example m= ake the output unspendable but allow for the existence of the bytes to be c= overed through the signature?
>=20
>=20
> Antoine Poinsot:
> > Limiting the size of created scriptPubKeys is not a suffici= ent mitigation on its own
> I fail to see how this would not be sufficient? To DoS you need = 2 things inputs with ScriptPubkey redemptions + heavy op_codes that require= unique checks. Example DUPing stack element again and again doesn't work. = This then leads to the next part is you could get up to unique complex oper= ations with the current (n) limit included per input.
>=20
> > One of the goal of BIP54 is to address objections to Matt's= earlier proposal, notably the (in my
> opinion reasonable) confiscation concerns voiced by Russell O'Co= nnor. Limiting the size of
> scriptPubKeys would in this regard be moving in the opposite dir= ection.
>=20
> Some notes is I would actually go as far as to say the confiscat= ion risk is higher with the TX limit proposed in BIP54 as we actually have = proof of redemption of TXs that break that rule and the input set to do thi= s already exists on-chain no need to even wonder about the whole presigned.= bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
>=20
> Please let me know if I am incorrect on any of this.
>=20
> > Furthermore, it's always possible to get the biggest bang f= or our buck in a first step
>=20
> Agreed on bang for the buck regarding DoS.
>=20
> My final point here would be that I would like to discuss more, = and this is response is from the initial view of your response and could be= incomplete or incorrect, This is just my in the moment response.
>=20
> Antoine Riard:
> > Anyway, in the sleeping pond of consensus fixes fishes, I'm= more in favor of prioritizing
> a timewarp fix and limiting dosy spends by old redeem scripts
>=20
> The idea of congestion control is interesting, but this solution= should significantly reduce the total DoS severity of known vectors.
>=20
> On Saturday, October 18, 2025 at 2:25:18=E2=80=AFAM UTC-7 Greg M= axwell wrote:
>=20
> > Limits on block construction that cross transactions make i= t harder to accurately estimate fees and greatly complicate optimal block c= onstruction-- the latter being important because smarter and more computer = powered mining code generating higher profits is a pro centralization facto= r.
> >=20
> > In terms of effectiveness the "spam" will just make itself = indistinguishable from the most common transaction traffic from the perspec= tive of such metrics-- and might well drive up "spam" levels because the hi= gher embedding cost may make some of them use more transactions. The compet= ition for these buckets by other traffic could make it effectively a block = size reduction even against very boring ordinary transactions. ... which is= probably not what most people want.
> >=20
> > I think it's important to keep in mind that bitcoin fee lev= els even at 0.1s/vb are far beyond what other hosting services and other bl= ockchains cost-- so anyone still embedding data in bitcoin *really* want to= be there for some reason and aren't too fee sensitive or else they'd alrea= dy be using something else... some are even in favor of higher costs since = the high fees are what create the scarcity needed for their seigniorage.
> >=20
> > But yeah I think your comments on priorities are correct.
> >
> > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com> wrote:
> >
> > > Hi list,
> > >
> > > Thanks to the annex being covered by the signature, I don't see how the concern about limiting
> > > the extensibility of bitcoin script with future (post-quantum) cryptographic schemes applies.
> > > Previous proposals of the annex were deliberately designed with variable-length fields
> > > to flexibly accommodate a wide range of things.
> > >
> > > I believe there is one thing that has not been proposed to limit unpredictable bursts
> > > of spam on the blockchain, namely congestion control of categories of outputs (e.g. "fat"
> > > scriptPubKeys). Let P be a block period, T a type of scriptPubKey, and L a limiting
> > > threshold on the number of T occurrences during the period P. Beyond the threshold L, any
> > > additional T scriptPubKey makes the block invalid. Or alternatively, any additional
> > > transaction creating or spending a T must pay some weight penalty...
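> > >
> > > A minimal sketch of the first variant, the block-invalidity rule (P, L and the
> > > category predicate below are placeholders, not concrete proposed values):
> > >
> > > P = 144    # block period, e.g. roughly one day of blocks (placeholder)
> > > L = 1000   # max number of T outputs allowed per period (placeholder)
> > >
> > > def is_fat_spk(script_pubkey: bytes) -> bool:
> > >     # Placeholder category predicate for "fat" scriptPubKeys.
> > >     return len(script_pubkey) > 520
> > >
> > > def block_within_budget(recent_counts, block_spks) -> bool:
> > >     # recent_counts: per-block T counts for the previous P - 1 blocks.
> > >     # A block that pushes the rolling count past L would be invalid;
> > >     # the alternative variant would instead add a weight penalty.
> > >     new_count = sum(is_fat_spk(spk) for spk in block_spks)
> > >     return sum(recent_counts[-(P - 1):]) + new_count <= L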
> > >
> > > Congestion control, which of course comes with its lot of shenanigans, is not a very novel
> > > idea, as I believe it has been floated a few times in the context of Lightning to solve mass
> > > closure, where channels out-priced at the current feerate would have their safety timelocks
> > > scale up.
> > >
> > > There would be no more need to come to social consensus on what quantitatively counts as "spam".
> > > The blockchain would automatically throttle out the block-space-spamming transactions. Qualitative
> > > spam is another question; for anyone who has ever read Shannon's theory of communication, the only
> > > effective measure can be to limit the size of the data payload. But then we're probably quickly
> > > back to a non-mathematically-solvable linguistic question again [0].
> > >
> > > Anyway, in the sleeping pond of consensus-fix fishes, I'm more in favor of prioritizing
> > > a timewarp fix and limiting DoSy spends of old redeem scripts, rather than engaging in shooting
> > > ourselves in the foot with ill-designed "spam" consensus mitigations.
> > >
> > > [0] If you have the soul of a logician, it would be an interesting demonstration to establish
> > > that we cannot come up with mathematical or cryptographic consensus means to solve
> > > qualitative "spam", which in a very pure sense is a linguistic issue.
> > >
> > > Best,
> > > Antoine
> > > OTS hash: 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
> > > On Friday, October 17, 2025 at 19:45:44 UTC+1, Antoine Poinsot wrote:
> > >
> > > > Hi,
> > > >
> > > > This approach was discussed last year when evaluating the best way to mitigate DoS blocks in terms
> > > > of gains compared to confiscatory surface. Limiting the size of created scriptPubKeys is not a
> > > > sufficient mitigation on its own, and has a non-trivial confiscatory surface.
> > > >
> > > > One of the goals of BIP54 is to address objections to Matt's earlier proposal, notably the (in my
> > > > opinion reasonable) confiscation concerns voiced by Russell O'Connor. Limiting the size of
> > > > scriptPubKeys would in this regard be moving in the opposite direction.
> > > >
> > > > Various approaches to limiting the size of spent scriptPubKeys were discussed, in forms that would
> > > > mitigate the confiscatory surface, to adopt in addition to (what eventually became) the BIP54 sigops
> > > > limit. However, I decided against including this additional measure in BIP54 because:
> > > > - of the inherent complexity of the discussed schemes, which would make it hard to reason about
> > > > constructing transactions spending legacy inputs, and equally hard to evaluate the reduction of
> > > > the confiscatory surface;
> > > > - more importantly, there are steep diminishing returns to piling on more mitigations. The BIP54
> > > > limit on its own prevents an externally-motivated attacker from *unevenly* stalling the network
> > > > for dozens of minutes, and a revenue-maximizing miner from regularly stalling its competitors
> > > > for dozens of seconds, at a minimized cost in confiscatory surface. Additional mitigations reduce
> > > > the worst-case validation time by a smaller factor at a higher cost in terms of confiscatory
> > > > surface. It "feels right" to further reduce those numbers, but it's less clear what the tangible
> > > > gains would be.
> > > >
> > > > Furthermore, it's always possible to get the biggest bang for our buck in a first step and go the
> > > > extra mile in a later, more controversial, soft fork. I previously floated the idea of a "cleanup
> > > > v2" in private discussions, and I think besides a reduction of the maximum scriptPubKey size it
> > > > should feature a consensus-enforced maximum transaction size for the reasons stated here:
> > > > https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8. I wouldn't hold my
> > > > breath on such a "cleanup v2", but it may be useful to have it documented somewhere.
> > > >
> > > > I'm trying not to go into much detail regarding which mitigations were considered in designing
> > > > BIP54, because they are tightly related to the design of various DoS blocks. But I'm always happy to
> > > > rehash the decisions made there and (re-)consider alternative approaches on the semi-private Delving
> > > > thread [0] dedicated to this purpose. Feel free to ping me to get access if I know you.
> > > >
> > > > Best,
> > > > Antoine Poinsot
> > > >
> > > > [0]: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
> > > >
> > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <fre...@reardencode.com> wrote:
> > > >
> > > > >
> > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> > > > >
> > > > > > But also, given that there are essentially no violations and no reason to
> > > > > > expect any, I'm not sure the proposal is worth time relative to fixes of
> > > > > > actual moderately serious DoS attack issues.
> > > > >
> > > > > I believe this limit would also stop most (all?) of PortlandHODL's
> > > > > DoS blocks without having to make some of the other changes in GCC. I
> > > > > think it's worthwhile to compare this approach to those proposed by
> > > > > Antoine in solving these DoS vectors.
> > > > >
> > > > > Best,
> > > > >
> > > > > --Brandon
> > > > >
--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/09d0aa74-1305-45bd-8da9-03d1506f5784n%40googlegroups.com.