Date: Sun, 14 Dec 2025 18:10:14 -0800 (PST)
From: Antoine Riard <antoine.riard@gmail.com>
To: Bitcoin Development Mailing List <bitcoindev@googlegroups.com>
Message-Id: <7cceae55-0885-4a66-9e1f-55e1537e2e17n@googlegroups.com>
Subject: [bitcoindev] Re: Splitting more block, addr and tx classes of network traffic

Hi Defenwycke,

I'm already working on a native multi-process architecture where the traffic
classes are isolated on different runtimes and the "old" block store is
shared.

All the points you made about explicit signaling and its drawbacks are
valid; the last time the idea of adding a signaling bit for full-rbf peers
came up, privacy concerns were raised.

The drawback of the multi-process, multi-socket design approach is that it
multiplies the number of inbound sockets consumed by a peer, though in the
case of a "cold block" archive process it's the inbound peer initiating the
connection.

Bandwidth-consumption wise, adopting messages like BIP 338 is still an
outbound bandwidth win for your full-node peers, and more generally so is
any ingress filtering at the network level.
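To make that ingress-filtering win concrete, here is a minimal sketch — a
hypothetical relay loop, not bitcoin core's actual code — in which a peer
that signaled BIP 338's `disabletx` during the handshake is simply skipped
when transactions are announced, saving outbound bandwidth on that link:

```python
# Hypothetical relay loop illustrating BIP 338 `disabletx` filtering.
# The Peer class and announce_tx helper are illustrative only.

class Peer:
    def __init__(self, peer_id, disabletx=False):
        self.peer_id = peer_id
        self.disabletx = disabletx  # True if peer sent `disabletx` pre-VERACK
        self.sent = []              # messages queued for this peer

def announce_tx(peers, txid):
    """Announce a tx only on links where tx-relay is enabled."""
    for peer in peers:
        if peer.disabletx:
            continue  # peer opted out of tx traffic; send nothing
        peer.sent.append(("inv", txid))

peers = [Peer(1), Peer(2, disabletx=True), Peer(3)]
announce_tx(peers, "ff" * 32)
# Only peers 1 and 3 receive the announcement; peer 2's queue stays empty.
```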
Best,
Antoine

OTS hash: e1b51b6a80bc77a1cd9e65b1fb74e9b5f52b93473d9e1f1390015eae70674b4c

On Wednesday, December 10, 2025 at 18:12:32 UTC, defenwycke wrote:

> Hello Antoine,
>
> This is an interesting problem, and introducing finer-grained traffic
> classes certainly makes sense. The three areas that stand out to me are
> peer declaration, topology inference and system bottlenecks.
>
> Peer declaration:
>
> Explicit signalling of specialised roles (example: "I only relay
> hot-blocks") to peers increases the fingerprint/profile. We already see
> topology-inference attacks via relay behaviour; adding public role
> declarations may expand that surface. Nodes can already drop or
> deprioritise whatever they wish locally, so explicit signalling may not
> be necessary.
>
> Topology inference:
>
> Since topology inference can be drawn from tx-relay timing and relay
> behaviour, an internal class-based model also allows the node to randomise
> acceptance, forwarding, and scheduling behaviour per class. Even small
> amounts of deliberate jitter or probabilistic message handling make it far
> harder for an observer to infer which peers are responsible for block-relay
> versus tx-relay traffic. This further reduces the value of explicit
> capability signalling, since role exposure can be disguised at the
> behavioural level instead.
>
> System bottlenecks:
>
> Introducing multiple traffic classes as separate processes and sockets
> increases resource consumption, as you've noted. A single connection can
> already multiplex all P2P message types.
>
> A cleaner approach might be to integrate the class separation internally,
> without advertising anything to peers. Incoming messages can be classified
> (examples: hot blocks, cold blocks, tx, address gossip, etc.) and
> per-class policies applied locally.
> Since a node doesn't need to receive or
> forward traffic it doesn't want to handle, it seems unnecessary to
> declare toggle roles at handshake time.
>
> An additional benefit of integrating classes internally is that it gives a
> natural place for per-class bandwidth accounting and load shedding. Under
> congestion, hot-block traffic could be prioritised while less critical
> classes (example: cold-block serving) are throttled, without multiplying
> sockets or increasing signalling surfaces.
>
> Externally the peer sees a normal connection; internally the node routes
> each message into class-specific queues and rate-limit buckets. This
> preserves flexibility (isolated CPU paths, independent scheduling) while
> avoiding multiple processes, multiple VERSION messages, and multiple
> physical sockets.
>
> Conceptually:
>
> ```
> enum TrafficClass { HOT_BLOCK, COLD_BLOCK, TX_RELAY, ADDR_GOSSIP, META_HEAVY }
>
> function classify(msg):
>     if msg.type == BLOCK and is_recent(msg): return HOT_BLOCK
>     if msg.type == BLOCK and is_old(msg):    return COLD_BLOCK
>     if msg.type in {TX, INV_TX}:             return TX_RELAY
>     if msg.type in {ADDR, ADDRV2}:           return ADDR_GOSSIP
>     return META_HEAVY
>
> function allow(class):
>     if class == TX_RELAY   and !config.tx_relay: return false
>     if class == COLD_BLOCK and !config.archive:  return false
>     return true
> ```
>
> Obviously the exact implementation details would differ, but the idea is
> to show that a single-process, single-socket design can still support
> multiple internal relay lanes.
>
> Do you see any drawbacks with internalising the traffic classes rather
> than exposing multiple processes, sockets or service bits? Curious if I'm
> missing a constraint.
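The conceptual lanes above can be rendered as runnable Python. This is a
sketch only: the message shapes, the config dictionary, and the 144-block
"recent" window are all illustrative choices, not bitcoin core's code.

```python
# Illustrative rendering of the classify/allow pseudocode from the thread.
from enum import Enum, auto

class TrafficClass(Enum):
    HOT_BLOCK = auto()
    COLD_BLOCK = auto()
    TX_RELAY = auto()
    ADDR_GOSSIP = auto()
    META_HEAVY = auto()

RECENT_WINDOW = 144  # blocks; "hot" = within roughly the last day (assumption)

def classify(msg, tip_height):
    """Map an incoming message (a dict here, for illustration) to a lane."""
    if msg["type"] == "BLOCK":
        is_recent = tip_height - msg["height"] <= RECENT_WINDOW
        return TrafficClass.HOT_BLOCK if is_recent else TrafficClass.COLD_BLOCK
    if msg["type"] in {"TX", "INV_TX"}:
        return TrafficClass.TX_RELAY
    if msg["type"] in {"ADDR", "ADDRV2"}:
        return TrafficClass.ADDR_GOSSIP
    return TrafficClass.META_HEAVY

def allow(cls, config):
    """Apply per-class local policy; unset options default to enabled."""
    if cls is TrafficClass.TX_RELAY and not config.get("tx_relay", True):
        return False
    if cls is TrafficClass.COLD_BLOCK and not config.get("archive", True):
        return False
    return True

config = {"tx_relay": True, "archive": False}
msg = {"type": "BLOCK", "height": 100}
cls = classify(msg, tip_height=900_000)
# A deep-history block classifies as COLD_BLOCK, which this non-archive
# config drops locally -- no handshake-time role declaration needed.
```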
>
> Kind regards,
>
> Defenwycke
>
> On Thursday, December 4, 2025 at 11:16:52 PM UTC Antoine Riard wrote:
>
>> Hi list,
>>
>> Surfacing an old idea concerning the network level and the current
>> mingling of block, tx and addr message traffic, generally all over one
>> network link. Historically, for example, if you consider bitcoin core,
>> by default connections are going to be FULL_RELAY.
>> Over the last years, there have been a few improvements to separate
>> network links by type, e.g. with the introduction of dedicated outbound
>> BLOCK-RELAY connections [1], without segregation at the network level
>> between the classes of traffic really being pursued, or at least more
>> flexibility in network mechanisms to signal to a node's peers what
>> categories of messages will be processed on a given link.
>>
>> Previously it has been shown that leveraging tx-relay's orphan mechanism
>> can allow mapping a peer's network topology [2] (sadly, one trick among
>> others). Being able to infer a peer's "likely" network topology from tx
>> traffic, one can guess the peers used to carry block-relay traffic. From
>> the PoV of an economical node, concealing the block-relay traffic is
>> very valuable to minimize the risks of escalation attacks based on
>> network topology (e.g. for lightning nodes [3]).
>>
>> Segregating more network traffic by class of messages seems to suppose
>> 1) being able to signal among the {ADDR, ADDRV2} service bits whether
>> block, addr or tx relay is supported on a link to be opened for a pair
>> of a (net_addr, port), or alternatively 2) if network links are opened
>> blindly with peers, being able to signal in the VERSION message or with
>> a dedicated message what class of message is supported.
>> There is already a signaling mechanism in the VERSION message to disable
>> tx-relay (i.e. `fRelay`), however there is no signaling to disable
>> block-relay over a link. Alternatively, it has been proposed in the past
>> to add a new early message among all the other handshake messages in the
>> VERSION / VERACK flow, but it has never been implemented [4].
>>
>> For bitcoin backbone, I started to natively isolate each class of
>> traffic in its own process, and to strictly signal only what is needed
>> in the VERSION message. Though, I'm starting to reach the limits of the
>> current network mechanisms, e.g. I have an `archive_relayd` process to
>> service "cold" blocks, dissociated from the process doing full
>> block-relay traffic, and this process is emitting VERSION messages with
>> the NODE_NETWORK bit set, while the other processes would have
>> NODE_NETWORK_LIMITED. If you're asking why dissociate "cold" from "hot"
>> block-relay servicing: it avoids wasting CPU cycles on a busy code path.
>>
>> Anyway, for now I think I can come up with good hacks with the service
>> field and experimental service bits. One drawback is that just one
>> "logical" node might start to occupy multiple "physical" sockets of its
>> peers (one for tx-relay, one for block-relay), so network-wide this
>> might not be the most resource-preserving approach, and I'm wondering if
>> better mechanisms are worth musing about.
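A sketch of that service-field setup: the archive process advertises
NODE_NETWORK while the hot-path processes advertise NODE_NETWORK_LIMITED.
The bit values below follow the P2P protocol and BIP 159; the helper
function and variable names are illustrative only.

```python
# Service-flag bits as defined by the Bitcoin P2P protocol / BIP 159.
NODE_NETWORK = 1 << 0           # node serves the full historical chain
NODE_NETWORK_LIMITED = 1 << 10  # node serves only recent blocks (BIP 159)

def serves_cold_blocks(peer_services: int) -> bool:
    """A peer fetching deep-history blocks needs a NODE_NETWORK node."""
    return bool(peer_services & NODE_NETWORK)

# Per-process service fields, per the split described above (illustrative).
archive_relayd_services = NODE_NETWORK
hot_block_services = NODE_NETWORK_LIMITED

# Only the archive process advertises itself for deep-history requests,
# keeping cold-block serving off the busy hot-relay code path.
```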
>>
>> Cheers,
>> Antoine
>>
>> OTS hash: 22f8cfbd2b1fd093f6bb8737f3ddcdb956f8dadb1b9436dab3c8491e4b5583fd
>>
>> [0] https://github.com/bitcoin/bitcoin/blob/master/src/node/connection_types.h
>> [1] https://github.com/bitcoin/bitcoin/pull/15759
>> [2] https://discovery.ucl.ac.uk/id/eprint/10063352/1/txprobe_final.pdf
>> [3] https://arxiv.org/pdf/2006.01418
>> [4] https://github.com/bitcoin/bips/blob/master/bip-0338.mediawiki
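As a footnote on the per-class jitter idea raised up-thread: drawing a
small random forwarding delay per traffic class can blur the timing
signals an observer uses to tell block-relay links from tx-relay links.
A minimal sketch — the delay ranges are invented for illustration:

```python
# Sketch of per-class forwarding jitter; ranges are illustrative only.
import random

JITTER_MS = {
    "TX_RELAY": (0, 500),      # tx announcements tolerate some delay
    "ADDR_GOSSIP": (0, 2000),  # addr gossip is latency-insensitive
    "HOT_BLOCK": (0, 0),       # never delay fresh blocks
}

def forwarding_delay_ms(traffic_class, rng=random):
    """Pick a random delay in the class's range; unknown classes get a
    small default window."""
    lo, hi = JITTER_MS.get(traffic_class, (0, 100))
    return rng.uniform(lo, hi)
```

The point is only that hot-block propagation stays undelayed while the
classes that leak topology get probabilistic scheduling for free once the
internal lanes exist.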