other non-observable ones (netgroup sort, you could imagine peer nonce sort).
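To make the idea of a non-observable criterion concrete, here is a minimal sketch, not Bitcoin Core’s actual eviction code: the ordering is keyed by a per-node secret salt, so it is deterministic locally but an outside observer can’t predict which peers it protects. The names (`PeerInfo`, `ProtectByNetgroup`) are made up for illustration, and `std::hash` is only a stand-in for a proper keyed hash such as SipHash.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <random>
#include <string>
#include <vector>

struct PeerInfo {
    int id;
    std::string netgroup;  // e.g. "203.0.113.0/24"
};

// Per-node secret salt, drawn once at startup. Without it an attacker cannot
// reproduce the ordering below, so the protection is not directly observable.
static const uint64_t g_eviction_salt = std::random_device{}();

// Keyed ordering value for a peer's netgroup. std::hash is only a stand-in
// for a proper keyed hash (e.g. SipHash) in this sketch.
uint64_t SaltedNetgroupKey(const PeerInfo& peer) {
    return std::hash<std::string>{}(peer.netgroup) ^ g_eviction_salt;
}

// Protect the n candidates whose salted netgroup key sorts highest; everyone
// else remains eligible for eviction.
std::vector<PeerInfo> ProtectByNetgroup(std::vector<PeerInfo> candidates, size_t n) {
    std::sort(candidates.begin(), candidates.end(),
              [](const PeerInfo& a, const PeerInfo& b) {
                  return SaltedNetgroupKey(a) > SaltedNetgroupKey(b);
              });
    if (candidates.size() > n) candidates.resize(n);
    return candidates;
}
```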
Not sure if it was clear, but the motivation for netgroup diverse peers was also along the lines of trying to restore topology diversity.
Many criteria– latency, block relay, and tx relay– all somewhat favor peers that are geographically close. So you don’t want a situation where most nodes are just incestuously linked within the same geography, resulting in a small min-cut to other continents. It’s easy to imagine the network drifting that way over time once you factor in that longer links crossing more networks are a little less reliable than shorter ones.
I fear the good-behavior ones are on the low end of the hardness scale, because once an attacker has their infrastructure deployed it sounds like only a marginal cost to fake good behavior as well.
Sure, though if they stop being good, they’ll quickly lose the advantage it gave them. … assuming they haven’t already successfully eclipsed you entirely. If they keep being good then hurrah, you won.
, once deployed in this netgroup, can exhaust the quota towards every listening peer
Yes, though the preference isn’t directly observable.
presume that’s already an effect of our eviction heuristics, but I would be cautious about going further and, in doing so, breaking some of the network-wide diversity of peers
I really disagree. It’s a reason not to be overly specific about what constitutes helpful, but there are some pretty obvious behaviours (like relaying us blocks) which, if a peer doesn’t do them, mean it isn’t helping us stay non-partitioned from the honest network. A peer is free not to do helpful stuff, and it wouldn’t be disconnected directly for that– but for criteria that are expressly intended to protect the node’s connectivity, it can make sense to filter on them.
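As a sketch of what filtering on such a criterion could look like (illustrative names and parameters, not the existing eviction logic): set aside the few inbound peers that most recently delivered a novel block before choosing anyone to evict.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct InboundPeer {
    int id;
    int64_t last_novel_block_time;  // unix time of the last new block this peer gave us, 0 if never
};

// Return the peers still eligible for eviction after protecting the
// protect_n peers that most recently relayed us a novel block
// (protect_n is a hypothetical tuning parameter).
std::vector<InboundPeer> EvictionCandidates(std::vector<InboundPeer> peers, size_t protect_n)
{
    std::sort(peers.begin(), peers.end(),
              [](const InboundPeer& a, const InboundPeer& b) {
                  return a.last_novel_block_time > b.last_novel_block_time;
              });
    if (peers.size() <= protect_n) return {};  // everyone is protected
    peers.erase(peers.begin(), peers.begin() + protect_n);
    return peers;
}
```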
I don’t even think it would be completely absurd to reserve some connection capacity for peers which claim to be exactly the same protocol & subversion as you. Sure, a malicious party could spoof it, always matching it– but there have been several incidents in the past where we were concerned about fork software accidentally engaging in a partitioning attack.

Say Mallory releases a fork of the software that on some date will change its consensus rules to require every block to send half its coinbase to her. Mallory isn’t malicious, just stupid. She suckers a fair number of people into running this fork. When the time comes, your node is, by chance, totally surrounded by BitcoinMallory nodes. No one mines any Mallory blocks (or, if they do, the Mallory node graph may itself be partitioned and not get them to any of your neighbours) and the Mallory peers block the ‘invalid’ valid blocks from reaching you. Without any blocks showing up you can’t tell that the Mallory nodes’ consensus rules differ. The stale tip detection is a last-ditch Hail Mary to recover from this state, but it may not work well when you need it (e.g. due to honest nodes being full or attacked, or it just taking a long time to find one that’s up and not a Mallory node). Wouldn’t it have been better if a couple of your connections were running the same software as you– so that if any of them were connected to the graph of healthy nodes, you would be too?
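A rough sketch of how reserving that capacity might look; the slot counts and names (`OutboundSlotPlan`, `reserved_same_subversion`) are assumptions for illustration, not an existing policy:

```cpp
#include <string>

struct OutboundSlotPlan {
    int total_outbound = 8;            // illustrative outbound budget
    int reserved_same_subversion = 2;  // slots held back for peers whose
                                       // subversion matches ours exactly
};

// Decide whether a candidate may take one of the remaining general-purpose
// slots, or must match our subversion to use a reserved one.
bool MayUseSlot(const OutboundSlotPlan& plan,
                int general_slots_in_use,
                const std::string& our_subversion,
                const std::string& peer_subversion)
{
    const int general_slots = plan.total_outbound - plan.reserved_same_subversion;
    if (general_slots_in_use < general_slots) return true;  // a normal slot is free
    return peer_subversion == our_subversion;               // only same-software peers beyond that
}
```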
It’s critical to keep in mind that broken software radically outnumbers attackers at almost all times, especially because if countermeasures against attackers are generally effective then there won’t be attackers. Also, if merely being a “broken” peer is enough to harm the network, then we’d be in trouble constantly even without any attackers at all.
Note, we do addr-relay for blocks-only peers,
It’s the peer’s choice whether they want to announce themselves through us. You make a great point that a peer might be wise to suppress its announcements to and from its outbound block-only connections, because those connections exist in part to help obscure the node connectivity graph. I thought they already did this.
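A minimal sketch of that suppression, with a hypothetical `ConnectionType` enum and helper names rather than the real implementation: the node simply declines to send or process addr traffic over block-relay-only links.

```cpp
enum class ConnectionType { OUTBOUND_FULL_RELAY, BLOCK_RELAY_ONLY, INBOUND };

// Don't advertise our own address over links that exist partly to keep the
// connectivity graph obscure.
bool ShouldSelfAnnounce(ConnectionType conn_type, bool listening)
{
    if (!listening) return false;  // nothing useful to advertise anyway
    return conn_type != ConnectionType::BLOCK_RELAY_ONLY;
}

// Symmetrically, ignore address gossip arriving on those links so they leak
// as little topology information as possible.
bool ShouldProcessAddrFrom(ConnectionType conn_type)
{
    return conn_type != ConnectionType::BLOCK_RELAY_ONLY;
}
```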