Date: Thu, 30 Oct 2025 09:10:55 -0700 (PDT)
From: Tom Harding <tomh@thinlink.com>
To: Bitcoin Development Mailing List <bitcoindev@googlegroups.com>
Subject: Re: [bitcoindev] Re: [BIP Proposal] Limit ScriptPubkey Size >= 520 Bytes Consensus.

We should reflect on the goal of minimizing UTXO set size. Would we as
easily say we should minimize the number of people/entities who hold L1
coins, or the number of ways each person/entity can hold them?

The dire concern with UTXO set size was born with the optimization of the
core bitcoin software for mining, rather than for holding and transfers, in
2012. Some geniuses were involved with that change.
Satoshi was not one of them.

On Wednesday, October 29, 2025 at 7:32:10 PM UTC-7 Greg Maxwell wrote:

"A few bytes" might be on the order of a permanent 10% increase in the UTXO
set size, plus a full from-network resync of all pruned nodes and a full
(e.g. most-of-a-day outage) reindex of all unpruned nodes. Not
insignificant, but also not nothing. No such portion of the existing UTXO
set size comes from outputs over 520 bytes, so as a scheme for UTXO set
size reduction the addition of MHT tracking would probably make it a
failure.

Also, some risk of creating a new scarce asset class: txouts consisting of
primordial coins that aren't subject to the new rules... sounds like the
sort of thing that NFT degens would absolutely love. That might not be an
issue *generally* for some change with confiscation risk, but for a change
that is specifically intended to lobotomize bitcoin to make it less useful
to NFT degens, maybe not such a great idea. :P

I mentioned it at all because I thought it could potentially be of some
use; I'm just more skeptical of it in the current context. Also, luke-jr
and crew have moved on to actually propose even more invasive changes than
just limiting the script size, which I anticipated, and which have much
more significant issues. Just size-limiting outputs likely doesn't harm any
interests or usages -- and so probably could be viable if the confiscation
issue were addressed -- but it also doesn't stick it to people transacting
in ways the priests of ocean mining dislike.

> I believe you're pointing out the idea of non economically-rational
> spammers?

I think it's a mistake to conclude the spammers are economically
irrational -- they're often just responding to different economics which
may be less legible to your analysis.
In particular, NFT degens prefer the high cost of transactions as a thing
that makes their tokens scarce and gives them value -- otherwise they
wouldn't be swapping one less efficient encoding for another; they'd just
be using another blockchain (perhaps their own) entirely.

On Thu, Oct 30, 2025 at 1:16 AM Michael Tidwell wrote:

> MHT tracking might make that acceptable, but comes at a high cost which I
> think would clearly not be justified.

Greg, I want to ask/challenge how bad this is. It seems like a generally
reusable primitive that could make other upgrades more feasible that also
have the same strict confiscation risk profile.
IIUC, the major pain is one big reindex cost plus a few bytes per UTXO?

Poelstra,

> I don't think this is a great idea -- it would be technically hard to
> implement and slow deployment indefinitely.

I would like to know how much of a deal breaker this is in your opinion. Is
MHT tracking off the table? In terms of the hypothetical presigned
transactions that may exist using P2MS, is this a hard enough reason to
require an MHT idea?

Greg,

> So, paradoxically this limit might increase the amount of non-prunable
> data

I believe you're pointing out the idea of non economically-rational
spammers? We already see actors ignoring cheaper witness inscription
methods. If spam shifts to many sub-520 fake pubkey outputs (which I
believe is less harmful than stamps), that imo is a separate UTXO cost
discussion (like a soft fork to add weight to outputs). Anywho, this point
alone doesn't seem sufficient to count as a clear negative for someone
opposed to the proposal.
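The "soft fork to add weight to outputs" aside could be sketched roughly as
follows. This is purely illustrative: the function name and the surcharge
factor are hypothetical and not part of any proposal on this thread.

```python
def tx_weight_with_output_surcharge(base_weight, output_spk_sizes,
                                    surcharge_per_spk_byte=4):
    """Hypothetical soft-fork accounting: each scriptPubKey byte a
    transaction adds to the UTXO set costs extra weight units, so UTXO
    growth (not just raw transaction size) becomes expensive."""
    surcharge = surcharge_per_spk_byte * sum(output_spk_sizes)
    return base_weight + surcharge

# A tx of base weight 800 with two outputs whose scriptPubKeys
# are 34 and 520 bytes: 800 + 4 * (34 + 520) = 3016
print(tx_weight_with_output_surcharge(800, [34, 520]))
```

Under such accounting, creating many small fake-pubkey outputs would pay
for the UTXO set burden through the fee market rather than being banned
outright.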
Thanks,
Tidwell

On Wednesday, October 22, 2025 at 5:55:58 AM UTC-4 moonsettler wrote:

> Confiscation is a problem because of presigned transactions

Allow 10000 bytes of total scriptPubKey size in each block, counting only
those outputs that are larger than x (520 as proposed).

The code change is pretty minimal from the most obvious implementation of
the original rule.

That makes it technically non-confiscatory. Still non-standard, but if
anyone out there so obnoxiously foot-gunned themselves, they can't claim
they were rugged by the devs.

BR,
moonsettler

On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <ad...@qrsnap.io> wrote:

> Hey,
>
> First, thank you to everyone who responded, and please continue to do so.
> There were many thought-provoking responses, and this did shift my
> perspective quite a bit from the original post, which in and of itself
> was the goal to a degree.
>
> I am currently only going to respond to all of the current concerns.
> Acks, though I like them, will be ignored unless new discoveries are
> included.
>
> Tl;dr (Portland's Perspective)
> - Confiscation is a problem because of presigned transactions
> - DoS mitigation could also occur through marking UTXOs as unspendable if > 520 bytes; this would preserve the proof of publication.
> - Timeout / sunset logic is compelling
> - The (n) value of acceptable needed bytes is contentious, with the lower
> suggested limit being 67
> - Congestion control is worth a look?
>
> Next steps:
> - Deeper discussion at the individual level: Antoine Poinsot and GCC
> overlap?
> - Write an implementation.
> - Decide whether to pursue a BIP.
>
> Responses
>
> Andrew Poelstra:
> > There is a risk of confiscation of coins which have pre-signed but
> > unpublished transactions spending them to new outputs with large
> > scriptPubKeys.
> > Due to long-standing standardness rules, and the presence
> > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> > such transactions exist.
>
> PortlandHODL: This is a risk that can be incurred and is likely not
> possible to mitigate, as there could be chains of transactions, so even
> when recursively iterating over a chain there is a chance that a
> presigned transaction breaks this rule. Every idea I have had, from block
> redemption limits on prevouts onward, seems to be just a coverage issue
> where you can make the confiscation less likely but not completely
> mitigate it.
>
> Second, there are already TXs that have effectively been confiscated at
> the policy level (P2SH cleanstack violation), where the user cannot find
> any miner with a policy to accept them into their mempool. (3 years)
>
> /dev /fd0:
> > so it would be great if this was restricted to OP_RETURN
>
> PortlandHODL: I reject this completely, as it would remove the UTXO set
> omission for the scriptPubKey and encourage miners to subvert the
> OP_RETURN restriction and instead just use another opcode. This also does
> not hit on some of the most important factors, such as DoS mitigation and
> legacy script attack surface reduction.
>
> Peter Todd:
> > NACK ...
>
> PortlandHODL: You NACK'd for the same reasons that I stated in my OP,
> without including any additional context or reasoning.
>
> jeremy:
> > I think that this type of rule is OK if we do it as a "sunsetting"
> > restriction -- e.g. a soft fork active for the next N blocks (N = e.g.
> > 2 years, 5 years, 10 years).
>
> If action is taken, this is the most reasonable approach.
> Alleviating confiscatory concerns through deferral.
>
> > You can argue against this example probably, but it is worth
> > considering that absence of evidence of use is not evidence of absence
> > of use, and I myself feel that overall our understanding of Bitcoin
> > transaction programming possibilities is still early. If you don't like
> > this example, I can give you others (probably).
>
> Agreed, and this also falls into the reasoning for deciding to utilize
> point 1 in your response. My thoughts on this would be along the lines of
> proof of publication, as this change only has the effect of stripping
> away the executable portion of a script between 521 and 10_000 bytes, or
> the published data portion if > 10_000 bytes, and the same data could
> likely be published in chunked segments using outpoints.
>
> Andrew Poelstra:
> > Aside from proof-of-publication (i.e. data storage directly in the UTXO
> > set) there is no usage of script which can't be equally (or better)
> > accomplished by using a Segwit v0 or Taproot script.
>
> This sums up the majority of the future use-case concern.
>
> Anthony Towns:
> > (If you restricted the change to only applying to scripts that used
> > non-push operators, that would probably still provide upgrade
> > flexibility while also preventing potential script abuses. But it
> > wouldn't do anything to prevent publishing data)
>
> Could this not be done as segments in multiple outpoints using a
> coordination outpoint? I fail to see why publication proof must be in a
> single chunk.
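The chunked publication just described can be sketched as follows; the
function name and sizes are hypothetical, shown only to illustrate that a
per-output size cap alone does not prevent publishing large payloads:

```python
def chunk_for_outputs(data: bytes, max_spk_size: int = 520) -> list[bytes]:
    """Split an arbitrary payload into segments that each fit within a
    per-output scriptPubKey size cap; publication then spans several
    outpoints instead of one large scriptPubKey."""
    return [data[i:i + max_spk_size]
            for i in range(0, len(data), max_spk_size)]

payload = bytes(1300)           # 1300 bytes of data to publish
chunks = chunk_for_outputs(payload)
print(len(chunks))              # 3 segments: 520 + 520 + 260 bytes
```

A coordination outpoint (as suggested above) would only need to commit to
the ordering of the segments for a reader to reassemble them.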
> This does, however, bring another alternative to mind: just making these
> outpoints unspendable but not invalidating the block through inclusion...
>
> > As far as the "but contiguous data will be regulated more strictly"
> > argument goes; I don't think "your honour, my offensive content has
> > strings of 4d0802 every 520 bytes
>
> Correct, this was never meant to resolve this issue.
>
> Luke Dashjr:
> > If we're going this route, we should just close all the gaps for the
> > immediate future:
>
> To put it nicely, this is completely beyond the scope of what is being
> proposed.
>
> Guus Ellenkamp:
> > If there are really so few OP_RETURN outputs more than 144 bytes, then
> > why increase the limit if that change is so controversial? It seems
> > people who want to use a larger OP_RETURN size do it anyway, even with
> > the current default limits.
>
> Completely off topic and irrelevant.
>
> Greg Tonoski:
> > Limiting the maximum size of the scriptPubKey of a transaction to 67
> > bytes.
>
> This leaves no room to deal with broken hashing algorithms and very
> little future upgradability for hooks. The rest of these points should be
> merged with Luke's response and either hijack my thread or start a new
> one with the increased scope; any approach I take will only be related to
> the scriptPubkey.
>
> Keagan McClelland:
> > Hard NACK on capping the witness size as that would effectively ban
> > large scripts even in the P2SH wrapper which undermines Bitcoin's
> > ability to be an effectively programmable money.
>
> This has nothing to do with the witness size or even the P2SH wrapper.
>
> Casey Rodarmor:
> > I think that "Bitcoin could need it in the future?"
> > might be a good enough
> > reason not to do this.
>
> > Script pubkeys are the only variable-length transaction fields which
> > can be covered by input signatures, which might make them useful for
> > future soft forks. I can imagine confidential asset schemes or
> > post-quantum coin recovery schemes requiring large proofs in the
> > outputs, where the validity of the proof determined whether or not the
> > transaction is valid, and thus require the proofs to be in the outputs,
> > and not just a hash commitment.
>
> Would the ability to publish the data alone be enough? For example, make
> the output unspendable but allow the existence of the bytes to be covered
> by the signature?
>
> Antoine Poinsot:
> > Limiting the size of created scriptPubKeys is not a sufficient
> > mitigation on its own
>
> I fail to see how this would not be sufficient. To DoS you need two
> things: inputs with scriptPubkey redemptions plus heavy opcodes that
> require unique checks. For example, DUPing a stack element again and
> again doesn't work.
> This then leads to the next part: you could get up to (n) unique complex
> operations per input, with the current limit included.
>
> > One of the goals of BIP54 is to address objections to Matt's earlier
> > proposal, notably the (in my opinion reasonable) confiscation concerns
> > voiced by Russell O'Connor. Limiting the size of scriptPubKeys would in
> > this regard be moving in the opposite direction.
>
> Some notes: I would actually go as far as to say the confiscation risk is
> higher with the TX limit proposed in BIP54, as we actually have proof of
> redemption of TXs that break that rule, and the input set to do this
> already exists on-chain -- no need to even wonder about the whole
> presigned question.
> bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
>
> Please let me know if I am incorrect on any of this.
>
> > Furthermore, it's always possible to get the biggest bang for our buck
> > in a first step
>
> Agreed on bang for the buck regarding DoS.
>
> My final point here would be that I would like to discuss more; this
> response is from an initial reading of yours and could be incomplete or
> incorrect. This is just my in-the-moment response.
>
> Antoine Riard:
> > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in
> > favor of prioritizing a timewarp fix and limiting DoSy spends by old
> > redeem scripts
>
> The idea of congestion control is interesting, but this solution should
> significantly reduce the total DoS severity of known vectors.
>
> On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
>
> > Limits on block construction that cross transactions make it harder to
> > accurately estimate fees and greatly complicate optimal block
> > construction -- the latter being important because smarter and more
> > computer-powered mining code generating higher profits is a
> > pro-centralization factor.
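A minimal sketch of why such cross-transaction limits complicate block
template construction, assuming a hypothetical per-block budget for "fat"
output bytes (all names and numbers here are illustrative, not from any
proposal): once a shared budget exists, a plain feerate-greedy pass can
skip a high-feerate transaction and the result is no longer a simple
feerate cutoff, which is what makes fee estimation harder.

```python
def select_greedy(txs, max_weight):
    """Plain feerate-greedy block template construction."""
    chosen, weight = [], 0
    for tx in sorted(txs, key=lambda t: t["fee"] / t["weight"], reverse=True):
        if weight + tx["weight"] <= max_weight:
            chosen.append(tx)
            weight += tx["weight"]
    return chosen

def select_with_bucket(txs, max_weight, fat_budget):
    """Same greedy pass, but with a cross-transaction budget on 'fat'
    output bytes. A tx can now be excluded even though block weight
    remains, so feerate ordering no longer predicts inclusion."""
    chosen, weight, fat = [], 0, 0
    for tx in sorted(txs, key=lambda t: t["fee"] / t["weight"], reverse=True):
        if (weight + tx["weight"] <= max_weight
                and fat + tx["fat_bytes"] <= fat_budget):
            chosen.append(tx)
            weight += tx["weight"]
            fat += tx["fat_bytes"]
    return chosen

txs = [
    {"id": "a", "fee": 5000, "weight": 1000, "fat_bytes": 600},
    {"id": "b", "fee": 4000, "weight": 1000, "fat_bytes": 600},
    {"id": "c", "fee": 1000, "weight": 1000, "fat_bytes": 0},
]
print([t["id"] for t in select_greedy(txs, 2000)])             # ['a', 'b']
print([t["id"] for t in select_with_bucket(txs, 2000, 1000)])  # ['a', 'c']
```

Note that greedy selection under the extra constraint is not even optimal
in general, so miners with smarter (knapsack-style) solvers earn more --
the centralization pressure described above.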
> >
> > In terms of effectiveness, the "spam" will just make itself
> > indistinguishable from the most common transaction traffic from the
> > perspective of such metrics -- and might well drive up "spam" levels
> > because the higher embedding cost may make some of them use more
> > transactions. The competition for these buckets by other traffic could
> > make it effectively a block size reduction even against very boring
> > ordinary transactions... which is probably not what most people want.
> >
> > I think it's important to keep in mind that bitcoin fee levels even at
> > 0.1 sat/vB are far beyond what other hosting services and other
> > blockchains cost -- so anyone still embedding data in bitcoin *really*
> > wants to be there for some reason and isn't too fee sensitive, or else
> > they'd already be using something else... some are even in favor of
> > higher costs, since the high fees are what create the scarcity needed
> > for their seigniorage.
> >
> > But yeah, I think your comments on priorities are correct.
> >
> > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard wrote:
> >
> > > Hi list,
> > >
> > > Thanks to the annex covered by the signature, I don't see how the
> > > concern about limiting the extensibility of bitcoin script with
> > > future (post-quantum) cryptographic schemes holds. Previous proposals
> > > of the annex were deliberately designed with variable-length fields
> > > to flexibly accommodate a wide range of things.
> > >
> > > I believe there is one thing that has not been proposed to limit
> > > unpredictable utterances of spam on the blockchain, namely congestion
> > > control of categories of outputs (e.g. "fat" scriptPubkeys). Let's
> > > say P is a block period, T a type of scriptPubkey, and L a limiting
> > > threshold for the number of T occurrences during the period P.
> > > Beyond the L threshold, any additional T scriptPubkey makes the
> > > block invalid. Or alternatively, any additional T-generating /
> > > spending transaction must pay some weight penalty...
> > >
> > > Congestion control, which of course comes with its lot of
> > > shenanigans, is not a very novel idea, as I believe it has been
> > > floated a few times in the context of lightning to solve mass
> > > closure, where channels out-priced at the current feerate would have
> > > their safety timelocks scale up.
> > >
> > > No need anymore to come to social consensus on what is quantitatively
> > > "spam" or not. The blockchain would automatically throttle out the
> > > block-space-spamming transactions. Qualitative spam is another
> > > question; for anyone who has ever read Shannon's theory of
> > > communication, the only effective thing can be to limit the size of
> > > the data payload. But probably we're quickly back to a
> > > non-mathematically-solvable linguistic question again [0].
> > >
> > > Anyway, in the sleeping pond of consensus-fix fishes, I'm more in
> > > favor of prioritizing a timewarp fix and limiting DoSy spends by old
> > > redeem scripts, rather than engaging in shooting ourselves in the
> > > foot with ill-designed "spam" consensus mitigations.
> > >
> > > [0] If you have the soul of a logician, it would be an interesting
> > > demonstration to establish that we cannot come up with mathematical
> > > or cryptographic consensus means to solve qualitative "spam", which
> > > in a very pure sense is a linguistic issue.
> > >
> > > Best,
> > > Antoine
> > > OTS hash: 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
> > >
> > > On Friday, October 17, 2025 at 19:45:44 UTC+1, Antoine Poinsot wrote:
> > >
> > > > Hi,
> > > >
> > > > This approach was discussed last year when evaluating the best way
> > > > to mitigate DoS blocks in terms of gains compared to confiscatory
> > > > surface. Limiting the size of created scriptPubKeys is not a
> > > > sufficient mitigation on its own, and has a non-trivial
> > > > confiscatory surface.
> > > >
> > > > One of the goals of BIP54 is to address objections to Matt's
> > > > earlier proposal, notably the (in my opinion reasonable)
> > > > confiscation concerns voiced by Russell O'Connor. Limiting the size
> > > > of scriptPubKeys would in this regard be moving in the opposite
> > > > direction.
> > > >
> > > > Various approaches to limiting the size of spent scriptPubKeys were
> > > > discussed, in forms that would mitigate the confiscatory surface,
> > > > to adopt in addition to (what eventually became) the BIP54 sigops
> > > > limit. However, I decided against including this additional measure
> > > > in BIP54 because:
> > > > - of the inherent complexity of the discussed schemes, which would
> > > > make it hard to reason about constructing transactions spending
> > > > legacy inputs, and equally hard to evaluate the reduction of the
> > > > confiscatory surface;
> > > > - more importantly, there are steep diminishing returns to piling
> > > > on more mitigations. The BIP54 limit on its own prevents an
> > > > externally-motivated attacker from *unevenly* stalling the network
> > > > for dozens of minutes, and a revenue-maximizing miner from
> > > > regularly stalling its competition for dozens of seconds, at a
> > > > minimized cost in confiscatory surface. Additional mitigations
> > > > reduce the worst-case validation time by a smaller factor at a
> > > > higher cost in terms of confiscatory surface.
> > > > It "feels right" to further reduce those numbers, but it's less
> > > > clear what the tangible gains would be.
> > > >
> > > > Furthermore, it's always possible to get the biggest bang for our
> > > > buck in a first step and go the extra mile in a later, more
> > > > controversial, soft fork. I previously floated the idea of a
> > > > "cleanup v2" in private discussions, and I think besides a
> > > > reduction of the maximum scriptPubKey size it should feature a
> > > > consensus-enforced maximum transaction size for the reasons stated
> > > > here:
> > > > https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8.
> > > > I wouldn't hold my breath on such a "cleanup v2", but it may be
> > > > useful to have it documented somewhere.
> > > >
> > > > I'm trying not to go into much detail regarding which mitigations
> > > > were considered in designing BIP54, because they are tightly
> > > > related to the design of various DoS blocks. But I'm always happy
> > > > to rehash the decisions made there and (re-)consider alternative
> > > > approaches on the semi-private Delving thread [0] dedicated to this
> > > > purpose.
> > > > Feel free to ping me to get access if I know you.
> > > >
> > > > Best,
> > > > Antoine Poinsot
> > > >
> > > > [0]: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
> > > >
> > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <fre...@reardencode.com> wrote:
> > > >
> > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> > > > >
> > > > > > But also given that there are essentially no violations and no
> > > > > > reason to expect any, I'm not sure the proposal is worth time
> > > > > > relative to fixes of actual moderately serious DoS attack
> > > > > > issues.
> > > > >
> > > > > I believe this limit would also stop most (all?) of
> > > > > PortlandHODL's DoS blocks without having to make some of the
> > > > > other changes in GCC. I think it's worthwhile to compare this
> > > > > approach to those proposed by Antoine in solving these DoS
> > > > > vectors.
> > > > >
> > > > > Best,
> > > > >
> > > > > --Brandon

--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/793073a7-84b2-4b42-a531-e03e30f89ddcn%40googlegroups.com.
We should reflect on the goal of minimizing UTXO set size.=C2=A0 Would= we as easily say we should minimize the number of people/entities who hold= L1 coins, or the number of ways each person/entity can hold them?

The dire concern with UTXO set size was born with the op= timization of the core bitcoin software for mining, rather than for holding= and transfers, in 2012.=C2=A0 Some geniuses were involved with that change= .=C2=A0 Satoshi was not one of them.=C2=A0


On Wednesday, October 29, 2025 at 7:32:10=E2= =80=AFPM UTC-7 Greg Maxwell wrote:
"A few bytes" might be on the order of forever= 10% increase in the UTXO set size, plus a full from-network resync of all = pruned nodes and a full (e.g. most of day outage) reindex of all unpruned n= odes.=C2=A0 Not insignificant=C2=A0but also not nothing.=C2=A0 Such a porti= on of the=C2=A0existing utxo size is not from outputs over 520 bytes in siz= e, so as a scheme for utxo set size reduction the addition of MHT tracking = would probably make it a failure.

Also some risk= of creating some new scarce asset class, txouts consisting of primordial= =C2=A0coins that aren't subject to the new rules... sounds like the sort of= thing that NFT degens would absolutely love.=C2=A0 That might not be an is= sue *generally* for some change with confiscation=C2=A0risk, but for a chan= ge that is specifically intended to lobotomize bitcoin to make it less usef= ul to NFT degens, maybe not such a great idea. :P

I mentioned it at all because I thought it could potentially be of some u= se, I'm just more skeptical of it for the current context.=C2=A0 Also luke-= jr and crew has moved on to actually propose even more invasive changes tha= n just limiting the script size, which I anticipated, and has much more sig= nificant=C2=A0issues.=C2=A0 Just size limiting outputs likely doesn't harm = any interests or usages-- and so probably could be viable if the confiscati= on issue was addressed, but it also doesn't stick it to people transacting = in ways the priests of ocean mining dislike.=C2=A0

>=C2=A0I believe you're pointing out the idea = of non economically-rational spammers?

I think it's a mistake to conclude the spammers are economica= lly irrational-- they're often just responding to different economics which= may be less legible to your analysis.=C2=A0 In particular, NFT degens pref= er the high cost of transactions as a thing that makes their tokens scarce = and gives them value.=C2=A0 -- otherwise they wouldn't be swapping for one = less efficient encoding for another, they're just be using another blockcha= in (perhaps their own) entirely.




On Thu, Oct 30, 2025 at 1:16 AM Michael Tidwell <mtidw...@gmail.com> wrote:
> MRH tracking might make that acceptable, but comes at a high cost which I think would clearly not be justified.

Greg, I want to ask/challenge how bad this is; it seems like a generally reusable primitive that could make other upgrades more feasible that also have the same strict confiscation risk profile.
IIUC, the major pain is 1 big reindex cost + a few bytes per UTXO?

Poelstra,

> I don't think this is a great idea -- it would be technically hard to
> implement and slow deployment indefinitely.

I would like to know how much of a deal breaker this is in your opinion. Is MRH tracking off the table? In terms of the hypothetical presigned transactions that may exist using P2MS, is this a hard enough reason to require an MRH idea?

Greg,

> So, paradoxically this limit might increase the amount of non-prunable data

I believe you're pointing out the idea of non economically-rational spammers? We already see actors ignoring cheaper witness inscription methods. If spam shifts to many sub-520 fake-pubkey outputs (which I believe is less harmful than stamps), that imo is a separate UTXO cost discussion (like a SF to add weight to outputs). Anywho, this point alone doesn't seem sufficient to count as a clear negative for someone opposed to the proposal.

Thanks,
Tidwell
On Wednesday, October 22, 2025 at 5:55:58 AM UTC-4 moonsettler wrote:
> Confiscation is a problem because of presigned transactions

Allow 10000 bytes of total scriptPubKey size in each block, counting only those outputs that are larger than x (520 as proposed).
The code change is pretty minimal from the most obvious implementation of the original rule.

That makes it technically non-confiscatory. Still non-standard, but if anyone out there so obnoxiously foot-gunned themselves, they can't claim they were rugged by the devs.
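For concreteness, the per-block budget rule above could be sketched like this; the function shape and accounting details are assumptions for illustration, not a spec:

```python
# Sketch of the rule: outputs with scriptPubKeys larger than the
# per-output threshold remain valid, but the total bytes of such
# oversized scriptPubKeys created in one block is capped.

MAX_SPK_SIZE = 520         # per-output threshold (as proposed)
BLOCK_SPK_BUDGET = 10_000  # per-block budget for oversized scriptPubKeys

def block_within_budget(script_pubkeys):
    """script_pubkeys: all output scriptPubKeys (bytes) created in a block.
    Returns True if the block respects the oversized-output budget."""
    oversized = sum(len(s) for s in script_pubkeys if len(s) > MAX_SPK_SIZE)
    return oversized <= BLOCK_SPK_BUDGET

# Nineteen 521-byte outputs total 9,899 bytes and fit; twenty do not.
assert block_within_budget([b"\x00" * 521] * 19)
assert not block_within_budget([b"\x00" * 521] * 20)
```

Note that outputs at or below the 520-byte threshold never count against the budget, so ordinary pre-signed spends stay valid under any block composition.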

BR,
moonsettler

On Saturday, October 18th, 2025 at 3:15 PM, PortlandHODL <ad...@qrsnap.io> wrote:

> Hey,
>
> First, thank you to everyone who responded, and please continue to do so. There were many thought-provoking responses, and this did shift my perspective quite a bit from the original post, which in itself was the goal to a degree.
>
> I am currently only going to respond to the current concerns. Acks, though I like them, will be ignored unless new discoveries are included.
>
> Tl;dr (Portland's perspective)
> - Confiscation is a problem because of presigned transactions
> - DoS mitigation could also occur through marking UTXOs as unspendable if > 520 bytes; this would preserve the proof of publication.
> - Timeout / sunset logic is compelling
> - The (n) value of acceptable needed bytes is contentious, with the lower suggested limit being 67
> - Congestion control is worth a look?
>
> Next steps:
> - Deeper discussion at the individual level: Antoine Poinsot and GCC overlap?
> - Write an implementation.
> - Decide whether to pursue a BIP.
>
> Responses
>
> Andrew Poelstra:
> > There is a risk of confiscation of coins which have pre-signed but
> > unpublished transactions spending them to new outputs with large
> > scriptPubKeys. Due to long-standing standardness rules, and the presence
> > of P2SH (and now P2WSH) for well over a decade, I'm skeptical that any
> > such transactions exist.
>
> PortlandHODL: This is a risk that can be incurred and is likely not possible to mitigate, as there could be chains of transactions, so even when recursively iterating over a chain there is a chance that a presigned transaction breaks this rule. Every idea I have had, from block redemption limits on prevouts onward, seems to just be a coverage issue where you can make the confiscation less likely but not completely mitigated.
>
> Second, there are already TXs that effectively have been confiscated at the policy level (P2SH cleanstack violation), where the user cannot find any miner with a policy to accept these into their mempool. (3 years)
>
> /dev/fd0:
> > so it would be great if this was restricted to OP_RETURN
>
> PortlandHODL: I reject this completely, as this would remove the UTXO-set omission for the scriptPubKey and encourage miners to subvert the OP_RETURN restriction and instead just use another opcode. This also does not hit on some of the most important factors such as DoS mitigation and legacy script attack surface reduction.
>
> Peter Todd:
> > NACK ...
>
> PortlandHODL: You NACK'd for the same reasons that I stated in my OP, without including any additional context or reasoning.
>
> jeremy:
> > I think that this type of rule is OK if we do it as a "sunsetting" restriction -- e.g. a soft fork active for the next N blocks (N = e.g. 2 years, 5 years, 10 years).
>
> If action is taken, this is the most reasonable approach, alleviating confiscatory concerns through deferral.
>
> > You can argue against this example probably, but it is worth considering that absence of evidence of use is not evidence of absence of use, and I myself feel that overall our understanding of Bitcoin transaction programming possibilities is still early. If you don't like this example, I can give you others (probably).
>
> Agreed, and this also falls into the reasoning for deciding to utilize point 1 in your response. My thoughts on this would be along the lines of proof of publication, as this change only has the effect of stripping away the executable portion of a script between 521 and 10,000 bytes, or the published data portion if > 10,000 bytes; the same data could likely be published in chunked segments using outpoints.
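As a rough sketch of the chunked-publication idea above (the 520-byte figure mirrors the proposed threshold; everything else here is illustrative, and how the pieces would be ordered and linked on-chain is left unspecified):

```python
# Illustrative only: split a payload into pieces small enough that each
# could sit in a separate sub-520-byte output.

CHUNK_LIMIT = 520  # assumed per-piece size, matching the proposed threshold

def chunk_payload(data: bytes, limit: int = CHUNK_LIMIT):
    """Return the payload as a list of chunks of at most `limit` bytes."""
    return [data[i:i + limit] for i in range(0, len(data), limit)]

pieces = chunk_payload(b"\xab" * 1300)
assert [len(p) for p in pieces] == [520, 520, 260]
assert b"".join(pieces) == b"\xab" * 1300  # lossless reassembly
```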
>
> Andrew Poelstra:
> > Aside from proof-of-publication (i.e. data storage directly in the UTXO
> > set) there is no usage of script which can't be equally (or better)
> > accomplished by using a Segwit v0 or Taproot script.
>
> This sums up the majority of the future-usecase concern.
>
> Anthony Towns:
> > (If you restricted the change to only applying to scripts that used
> > non-push operators, that would probably still provide upgrade flexibility
> > while also preventing potential script abuses. But it wouldn't do anything
> > to prevent publishing data)
>
> Could this not be done as segments in multiple outpoints using a coordination outpoint? I fail to see why publication proof must be in a single chunk. This does, however, bring another alternative to mind: just making these outpoints unspendable but not invalidating the block through inclusion...
>
> > As far as the "but contiguous data will be regulated more strictly"
> > argument goes; I don't think "your honour, my offensive content has
> > strings of 4d0802 every 520 bytes"
>
> Correct, this was never meant to resolve this issue.
>
> Luke Dashjr:
> > If we're going this route, we should just close all the gaps for the immediate future:
>
> To put it nicely, this is completely beyond the scope of what is being proposed.
>
> Guus Ellenkamp:
> > If there are really so few OP_RETURN outputs more than 144 bytes, then
> > why increase the limit if that change is so controversial? It seems
> > people who want to use a larger OP_RETURN size do it anyway, even with
> > the current default limits.
>
> Completely off-topic and irrelevant.
>
> Greg Tonoski:
> > Limiting the maximum size of the scriptPubKey of a transaction to 67 bytes.
>
> This leaves no room to deal with broken hashing algorithms and very little future upgradability for hooks. The rest of these points should be merged with Luke's response and either hijack my thread or start a new one with the increased scope; any approach I take will only be related to the scriptPubKey.
>
> Keagan McClelland:
> > Hard NACK on capping the witness size as that would effectively ban large scripts even in the P2SH wrapper, which undermines Bitcoin's ability to be an effectively programmable money.
>
> This has nothing to do with the witness size or even the P2SH wrapper.
>
> Casey Rodarmor:
> > I think that "Bitcoin could need it in the future?" might be a good enough
> > reason not to do this.
>
> > Script pubkeys are the only variable-length transaction fields which can be
> > covered by input signatures, which might make them useful for future soft
> > forks. I can imagine confidential asset schemes or post-quantum coin recovery
> > schemes requiring large proofs in the outputs, where the validity of the proof
> > determined whether or not the transaction is valid, and thus require the
> > proofs to be in the outputs, and not just a hash commitment.
>
> Would the ability to publish the data alone be enough? For example, make the output unspendable but allow for the existence of the bytes to be covered through the signature?
>
> Antoine Poinsot:
> > Limiting the size of created scriptPubKeys is not a sufficient mitigation on its own
>
> I fail to see how this would not be sufficient. To DoS you need two things: inputs with scriptPubKey redemptions plus heavy opcodes that require unique checks. For example, DUPing a stack element again and again doesn't work. This then leads to the next part: with the current (n) limit included, you can only get up to so many unique complex operations per input.
>
> > One of the goals of BIP54 is to address objections to Matt's earlier proposal, notably the (in my
> > opinion reasonable) confiscation concerns voiced by Russell O'Connor. Limiting the size of
> > scriptPubKeys would in this regard be moving in the opposite direction.
>
> Some notes: I would actually go as far as to say the confiscation risk is higher with the TX limit proposed in BIP54, as we actually have proof of redemption of TXs that break that rule, and the input set to do this already exists on-chain; no need to even wonder about the whole presigned question. bb41a757f405890fb0f5856228e23b715702d714d59bf2b1feb70d8b2b4e3e08
>
> Please let me know if I am incorrect on any of this.
>
> > Furthermore, it's always possible to get the biggest bang for our buck in a first step
>
> Agreed on bang for the buck regarding DoS.
>
> My final point here would be that I would like to discuss more; this is a response from an initial view of your response and could be incomplete or incorrect. This is just my in-the-moment response.
>
> Antoine Riard:
> > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in favor of prioritizing
> > a timewarp fix and limiting DoSy spends by old redeem scripts
>
> The idea of congestion control is interesting, but this solution should significantly reduce the total DoS severity of known vectors.
>
> On Saturday, October 18, 2025 at 2:25:18 AM UTC-7 Greg Maxwell wrote:
>
> > Limits on block construction that cross transactions make it harder to accurately estimate fees and greatly complicate optimal block construction -- the latter being important because smarter and more computer-powered mining code generating higher profits is a pro-centralization factor.
> >
> > In terms of effectiveness, the "spam" will just make itself indistinguishable from the most common transaction traffic from the perspective of such metrics -- and might well drive up "spam" levels because the higher embedding cost may make some of them use more transactions. The competition for these buckets by other traffic could make it effectively a block size reduction even against very boring ordinary transactions... which is probably not what most people want.
> >
> > I think it's important to keep in mind that bitcoin fee levels even at 0.1 sat/vb are far beyond what other hosting services and other blockchains cost -- so anyone still embedding data in bitcoin *really* wants to be there for some reason and isn't too fee sensitive, or else they'd already be using something else... some are even in favor of higher costs since the high fees are what create the scarcity needed for their seigniorage.
> >
> > But yeah, I think your comments on priorities are correct.
> >
> > On Sat, Oct 18, 2025 at 1:20 AM Antoine Riard <antoin...@gmail.com> wrote:
> >
> > > Hi list,
> > >
> > > Thanks to the annex covered by the signature, I don't see how the concern about limiting
> > > the extensibility of bitcoin script with future (post-quantum) cryptographic schemes holds.
> > > Previous proposals of the annex were deliberately designed with variable-length fields
> > > to flexibly accommodate a wide range of things.
> > >
> > > I believe there is one thing that has not been proposed to limit unpredictable utterance
> > > of spam on the blockchain, namely congestion control of categories of outputs (e.g. "fat"
> > > scriptpubkeys). Let's say P is a block period, T a type of scriptpubkey, and L a limiting
> > > threshold for the number of T occurrences during the period P. Beyond the L threshold, any
> > > additional T scriptpubkey makes the block invalid. Or alternatively, any additional
> > > T generating / spending transaction must pay some weight penalty...
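The P/T/L throttle described above could be sketched as follows; all constants and function names here are assumed for illustration only:

```python
# Sketch of congestion control on "fat" scriptPubKeys: during a period
# of P blocks, at most L outputs of type T are allowed at no extra cost.
# The alternative variant charges a weight penalty beyond the threshold.

P_BLOCKS = 144          # assumed period length, roughly a day of blocks
L_THRESHOLD = 1000      # assumed cap on type-T outputs per period
FAT_SPK_SIZE = 520      # assumed definition of type T

def is_type_t(script_pubkey: bytes) -> bool:
    """Classify an output as type T ("fat" scriptPubKey)."""
    return len(script_pubkey) > FAT_SPK_SIZE

def block_valid_strict(t_seen_in_period: int, new_t_outputs: int) -> bool:
    """Strict variant: a block is invalid if it pushes the period's
    type-T count past the threshold."""
    return t_seen_in_period + new_t_outputs <= L_THRESHOLD

def extra_weight(t_seen_in_period: int, new_t_outputs: int,
                 penalty_per_output: int = 400) -> int:
    """Penalty variant: weight units charged for type-T outputs
    beyond the per-period threshold."""
    over = max(0, t_seen_in_period + new_t_outputs - L_THRESHOLD)
    return over * penalty_per_output
```

Either variant needs consensus on how the rolling count is carried across blocks within a period, which this sketch deliberately omits.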
> > >
> > > Congestion control, which of course comes with its lot of shenanigans, is not a very novel
> > > idea, as I believe it has been floated a few times in the context of Lightning to solve mass
> > > closure, where channels out-priced at the current feerate would have their safety timelocks
> > > scaled up.
> > >
> > > No need anymore to come to social consensus on what is quantitative "spam" or not. The blockchain
> > > would automatically throttle out the block-space-spamming transactions. Qualitative spam is another
> > > question; for anyone who has ever read Shannon's theory of communication, the only effective thing can
> > > be to limit the size of the data payload. But probably we're quickly back to a non-mathematically-solvable
> > > linguistic question again [0].
> > >
> > > Anyway, in the sleeping pond of consensus fixes fishes, I'm more in favor of prioritizing
> > > a timewarp fix and limiting DoSy spends by old redeem scripts, rather than engaging in shooting
> > > ourselves in the foot with ill-designed "spam" consensus mitigations.
> > >
> > > [0] If you have the soul of a logician, it would be an interesting demonstration to come up with,
> > > to establish that we cannot come up with mathematical or cryptographic consensus means
> > > to solve qualitative "spam", which in a very pure sense is a linguistic issue.
> > >
> > > Best,
> > > Antoine
> > > OTS hash: 6cb50fe36ca0ec5cb9a88517dd4ce9bb50dd6ad1d2d6a640dd4a31d72f0e4999
> > > On Friday, October 17, 2025 at 19:45:44 UTC+1, Antoine Poinsot wrote:
> > >
> > > > Hi,
> > > >
> > > > This approach was discussed last year when evaluating the best way to mitigate DoS blocks in terms
> > > > of gains compared to confiscatory surface. Limiting the size of created scriptPubKeys is not a
> > > > sufficient mitigation on its own, and has a non-trivial confiscatory surface.
> > > >
> > > > One of the goals of BIP54 is to address objections to Matt's earlier proposal, notably the (in my
> > > > opinion reasonable) confiscation concerns voiced by Russell O'Connor. Limiting the size of
> > > > scriptPubKeys would in this regard be moving in the opposite direction.
> > > >
> > > > Various approaches to limiting the size of spent scriptPubKeys were discussed, in forms that would
> > > > mitigate the confiscatory surface, to adopt in addition to (what eventually became) the BIP54 sigops
> > > > limit. However, I decided against including this additional measure in BIP54 because:
> > > > - of the inherent complexity of the discussed schemes, which would make it hard to reason about
> > > > constructing transactions spending legacy inputs, and equally hard to evaluate the reduction of
> > > > the confiscatory surface;
> > > > - more importantly, there are steeply diminishing returns to piling on more mitigations. The BIP54
> > > > limit on its own prevents an externally-motivated attacker from *unevenly* stalling the network
> > > > for dozens of minutes, and a revenue-maximizing miner from regularly stalling its competitors
> > > > for dozens of seconds, at a minimized cost in confiscatory surface. Additional mitigations reduce
> > > > the worst-case validation time by a smaller factor at a higher cost in terms of confiscatory
> > > > surface. It "feels right" to further reduce those numbers, but it's less clear what the tangible
> > > > gains would be.
> > > >
> > > > Furthermore, it's always possible to get the biggest bang for our buck in a first step and go the
> > > > extra mile in a later, more controversial, soft fork. I previously floated the idea of a "cleanup
> > > > v2" in private discussions, and I think besides a reduction of the maximum scriptPubKey size it
> > > > should feature a consensus-enforced maximum transaction size for the reasons stated here:
> > > > https://delvingbitcoin.org/t/non-confiscatory-transaction-weight-limit/1732/8. I wouldn't hold my
> > > > breath on such a "cleanup v2", but it may be useful to have it documented somewhere.
> > > >
> > > > I'm trying not to go into much detail regarding which mitigations were considered in designing
> > > > BIP54, because they are tightly related to the design of various DoS blocks. But I'm always happy to
> > > > rehash the decisions made there and (re-)consider alternative approaches on the semi-private Delving
> > > > thread [0] dedicated to this purpose. Feel free to ping me to get access if I know you.
> > > >
> > > > Best,
> > > > Antoine Poinsot
> > > >
> > > > [0]: https://delvingbitcoin.org/t/worst-block-validation-time-inquiry/711
> > > >
> > > > On Friday, October 17th, 2025 at 1:12 PM, Brandon Black <fre...@reardencode.com> wrote:
> > > >
> > > > > On 2025-10-16 (Thu) at 00:06:41 +0000, Greg Maxwell wrote:
> > > > >
> > > > > > But also given that there are essentially no violations and no reason to
> > > > > > expect any, I'm not sure the proposal is worth time relative to fixes of
> > > > > > actual moderately serious DoS attack issues.
> > > > >
> > > > > I believe this limit would also stop most (all?) of PortlandHODL's
> > > > > DoS blocks without having to make some of the other changes in GCC. I
> > > > > think it's worthwhile to compare this approach to those proposed by
> > > > > Antoine in solving these DoS vectors.
> > > > >
> > > > > Best,
> > > > >
> > > > > --Brandon
> > > > >
> > > > > --
> > > > > You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
> > > > > To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+...@googlegroups.com.
> > > > > To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/aPJ3w6bEoaye3WJ6%40console.


--
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/793073a7-84b2-4b42-a531-e03e30f89ddcn%40googlegroups.com.