Bitcoin Development Mailing List
From: defenwycke <cal.defenwycke@gmail.com>
To: Bitcoin Development Mailing List <bitcoindev@googlegroups.com>
Subject: [bitcoindev] Re: Splitting more block, addr and tx classes of network traffic
Date: Tue, 9 Dec 2025 15:13:05 -0800 (PST)	[thread overview]
Message-ID: <de958fe0-5897-4fb0-9b4c-b41f5c63296bn@googlegroups.com> (raw)
In-Reply-To: <CALZpt+Hx9vFwNQd6qGSFMWXU=A6j82m6ZjJg3JaHK26WW0UQZw@mail.gmail.com>



Hello Antoine,

This is an interesting problem, and introducing finer-grained traffic 
classes certainly makes sense. The three areas that stand out to me are 
peer declaration, topology inference and system bottlenecks.

Peer declaration:

Explicitly signalling specialised roles to peers (e.g. "I only relay 
hot blocks") enlarges a node's fingerprinting surface. We already see 
topology inference attacks via relay behaviour; adding public role 
declarations may expand that surface further. Nodes can already drop or 
deprioritise whatever they wish locally, so explicit signalling may not be 
necessary.

Topology inference:

Since topology inference can be drawn from tx-relay timing and relay 
behaviour, an internal class-based model also allows the node to randomise 
acceptance, forwarding, and scheduling behaviour per class. Even small 
amounts of deliberate jitter or probabilistic message handling make it far 
harder for an observer to infer which peers are responsible for block-relay 
versus tx-relay traffic. This further reduces the value of explicit 
capability signalling, since role exposure can be disguised at the 
behavioural level instead.
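
To make the jitter idea concrete, here is a minimal sketch of per-class forwarding delays. All names and windows are hypothetical placeholders, not anything from bitcoin core; real values would need tuning against measured relay timings:

```
import random

# Hypothetical per-class jitter windows (milliseconds). Hot blocks are
# never delayed; addr gossip tolerates much larger randomisation.
JITTER_MS = {
    "HOT_BLOCK": (0, 0),
    "TX_RELAY": (0, 500),
    "ADDR_GOSSIP": (0, 5000),
}

def forwarding_delay_ms(traffic_class, rng=random.random):
    """Return a randomised delay before relaying a message of this class."""
    lo, hi = JITTER_MS.get(traffic_class, (0, 1000))
    return lo + (hi - lo) * rng()
```

Because the delay distribution is chosen per class rather than per peer, an observer correlating timings across links learns less about which peers carry which class.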

System bottlenecks: 

Introducing multiple traffic classes as separate processes and sockets 
increases resource consumption, as you’ve noted. A single connection can 
already multiplex all P2P message types.

A cleaner approach might be to integrate the class separation internally, 
without advertising anything to peers. Incoming messages can be classified 
(e.g. hot blocks, cold blocks, transactions, address gossip) and per-class 
policies applied locally. Since a node doesn’t need to receive or forward 
traffic it doesn’t want to handle, it seems unnecessary to declare role 
toggles at handshake time.

An additional benefit of integrating classes internally is that it gives a 
natural place for per-class bandwidth accounting and load shedding. Under 
congestion, hot-block traffic could be prioritised while less critical 
classes (e.g. cold-block serving) are throttled, without multiplying 
sockets or increasing signalling surfaces.
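
As a sketch of what that accounting could look like, a token bucket per class is one option. The class names, rates and bursts below are hypothetical, purely to illustrate the shape of the policy:

```
import time

class TokenBucket:
    """Refill at `rate` bytes/sec up to a maximum of `burst` bytes."""
    def __init__(self, rate, burst, now=time.monotonic):
        self.rate, self.burst, self.now = rate, burst, now
        self.tokens = burst
        self.last = now()

    def try_consume(self, nbytes):
        t = self.now()
        self.tokens = min(self.burst, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# Hypothetical policy: hot blocks get a generous bucket, cold-block
# serving is throttled hard inside the same process.
buckets = {
    "HOT_BLOCK": TokenBucket(rate=4_000_000, burst=8_000_000),
    "COLD_BLOCK": TokenBucket(rate=100_000, burst=500_000),
}

def admit(traffic_class, nbytes):
    """Shed load per class; classes without a bucket are always admitted."""
    bucket = buckets.get(traffic_class)
    return True if bucket is None else bucket.try_consume(nbytes)
```

Under congestion, dropping or deferring whatever `admit()` rejects gives the prioritisation described above without any extra sockets.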

Externally the peer sees a normal connection; internally the node routes 
each message into class-specific queues and rate-limit buckets. This 
preserves flexibility (isolated CPU paths, independent scheduling) while 
avoiding multiple processes, multiple VERSION messages, and multiple 
physical sockets.

Conceptually:

```
from enum import Enum, auto

class TrafficClass(Enum):
    HOT_BLOCK = auto()
    COLD_BLOCK = auto()
    TX_RELAY = auto()
    ADDR_GOSSIP = auto()
    META_HEAVY = auto()

def classify(msg):
    # is_recent() stands in for a tip-proximity check on the block.
    if msg.type == "BLOCK":
        return TrafficClass.HOT_BLOCK if is_recent(msg) else TrafficClass.COLD_BLOCK
    if msg.type in {"TX", "INV_TX"}:
        return TrafficClass.TX_RELAY
    if msg.type in {"ADDR", "ADDRV2"}:
        return TrafficClass.ADDR_GOSSIP
    return TrafficClass.META_HEAVY

def allow(traffic_class, config):
    if traffic_class is TrafficClass.TX_RELAY and not config.tx_relay:
        return False
    if traffic_class is TrafficClass.COLD_BLOCK and not config.archive:
        return False
    return True
```
Obviously the exact implementation details would differ, but the idea is to 
show that a single-process, single-socket design can still support multiple 
internal relay lanes.

Do you see any drawbacks with internalising the traffic classes rather than 
exposing multiple processes, sockets or service bits? Curious if I’m 
missing a constraint.

Kind regards,

Defenwycke

On Thursday, December 4, 2025 at 11:16:52 PM UTC Antoine Riard wrote:

> Hi list,
>
> Surfacing an old idea concerning the network level and the current 
> mingling of block, tx and addr message traffic over a single network 
> link. Historically, for example, bitcoin core connections are by 
> default FULL_RELAY.
> Over the last years, there have been a few improvements to separate 
> network links by type, e.g. with the introduction of dedicated outbound 
> BLOCK-RELAY connections [1], without segregation at the network level 
> between the classes of traffic really being pursued, or at least without 
> more flexibility in network mechanisms to signal to a node's peers what 
> categories of messages will be processed on a given link.
>
> Previously it has been shown that leveraging tx-relay's orphan mechanism 
> can be used to map a peer's network topology [2] (sadly, one trick among 
> others). Being able to infer a peer's "likely" network topology from tx 
> traffic, one can guess the peers used to carry block-relay traffic. From 
> the PoV of an economical node, dissimulating the block-relay traffic is 
> very valuable to minimize the risks of escalation attacks based on 
> network topology (e.g. for lightning nodes [3]).
>
> Segregating more network traffic by class of messages seems to suppose 
> 1) being able to signal in the {ADDR, ADDRV2} service bits whether 
> block, addr or tx relay is supported on a link to be opened for a given 
> (net_addr, port) pair, or alternatively 2) if network links are opened 
> blindly with peers, being able to signal in the VERSION message, or with 
> a dedicated message, what class of messages is supported. There is 
> already a signaling mechanism in the VERSION message to disable tx-relay 
> (i.e. `fRelay`), however there is no signaling to disable block-relay 
> over a link. Alternatively, it has been proposed in the past to add a 
> new early message among the other handshake messages in the VERSION / 
> VERACK flow, but it has never been implemented [4].
>
> For bitcoin backbone, I started to natively isolate each class of 
> traffic in its own process, and to signal only strictly what is needed 
> in the VERSION message. Though, I'm starting to reach the limits of the 
> current network mechanisms, e.g. I have an `archive_relayd` process to 
> service "cold" blocks, dissociated from the process doing full 
> block-relay traffic, and this process emits VERSION messages with the 
> NODE_NETWORK bit set while the other processes would have 
> NODE_NETWORK_LIMITED. If you're asking why dissociate "cold" from "hot" 
> block-relay servicing: it avoids wasting CPU cycles on a busy code path.
>
> Anyway, for now I think I can come up with good hacks with the service 
> field and experimental service bits. One drawback is that a single 
> "logical" node may start to occupy multiple "physical" sockets on its 
> peers (one for tx-relay, one for block-relay), and network-wide this 
> might not be the most resource-preserving approach, so I'm wondering if 
> better mechanisms are worth musing about.
>
> Cheers,
> Antoine
> OTS hash: 22f8cfbd2b1fd093f6bb8737f3ddcdb956f8dadb1b9436dab3c8491e4b5583fd
>
> [0] 
> https://github.com/bitcoin/bitcoin/blob/master/src/node/connection_types.h
> [1] https://github.com/bitcoin/bitcoin/pull/15759
> [2] https://discovery.ucl.ac.uk/id/eprint/10063352/1/txprobe_final.pdf
> [3] https://arxiv.org/pdf/2006.01418
> [4] https://github.com/bitcoin/bips/blob/master/bip-0338.mediawiki
>

-- 
You received this message because you are subscribed to the Google Groups "Bitcoin Development Mailing List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to bitcoindev+unsubscribe@googlegroups.com.
To view this discussion visit https://groups.google.com/d/msgid/bitcoindev/de958fe0-5897-4fb0-9b4c-b41f5c63296bn%40googlegroups.com.


2025-12-04 22:33 [bitcoindev] " Antoine Riard
2025-12-09 23:13 ` defenwycke [this message]
2025-12-15  2:10   ` [bitcoindev] " Antoine Riard
