> > CConnman cannot access the chain state, the sync progress, in-flight block requests
> I don’t see how the connection opening logic needs access to any of those things.
I believe that we are not in sync regarding the distinction between (1) the logic for deciding whether to connect to a specific address and (2) the thread/process responsible for opening such a connection. The former requires access to that contextual information; its usage is gated behind the `extra_block_relay` and `try_another_outbound_peer` bool flags, and also behind the existing `HasAllDesirableServiceFlags()` call.
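For concreteness, here is a much-simplified sketch of (1), the per-address decision logic. Every name, flag value, and behavior below is an illustrative stand-in, not the real `ThreadOpenConnections` code:

```cpp
#include <cstdint>

using ServiceFlags = uint64_t;                  // stand-in for the real enum
constexpr ServiceFlags NODE_NETWORK = 1 << 0;
constexpr ServiceFlags NODE_WITNESS = 1 << 3;
constexpr ServiceFlags DESIRABLE = NODE_NETWORK | NODE_WITNESS;

// Stand-in for the existing HasAllDesirableServiceFlags() check.
bool HasAllDesirableServiceFlags(ServiceFlags s) { return (s & DESIRABLE) == DESIRABLE; }

// The decision: whether a candidate address is acceptable, gated behind the
// bool flags mentioned above (the exact policy here is illustrative only).
bool AcceptCandidate(ServiceFlags addr_services, bool extra_block_relay)
{
    if (extra_block_relay) return (addr_services & NODE_NETWORK) != 0;
    return HasAllDesirableServiceFlags(addr_services);
}
```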
What I intended to say in that sentence was that a node could have different connection strategies in the future, even though it doesn’t have them now. These strategies would need access to the overall node state. That is why I’m saying that the current approach of adding fields inside `CConnman` with each new connection decision logic change makes the code harder to work with, rather than simplifying it.
Still, adding a callback as I do here isn’t the holy grail either. It is slightly better for the reasons I state below, but in the future, with the introduction of more connection strategies, I think we should consider moving part of the `ThreadOpenConnections` logic to `PeerManager`.
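A rough sketch of the callback shape I mean, so `CConnman` consults the node-state owner at decision time instead of caching fields (hypothetical names, not this PR’s exact diff):

```cpp
#include <cstdint>
#include <functional>

using ServiceFlags = uint64_t; // stand-in for the real enum

struct ConnmanSketch {
    // Injected by PeerManager (or whichever component owns the node state);
    // returns the service flags currently considered desirable.
    std::function<ServiceFlags()> m_desirable_services;

    bool IsCandidateDesirable(ServiceFlags peer_services) const
    {
        // Computed on demand: the answer reflects node state (sync progress,
        // configuration, ...) at the moment the decision is actually made.
        const ServiceFlags wanted = m_desirable_services();
        return (peer_services & wanted) == wanted;
    }
};
```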
> > number of compact block filters
> Bitcoin Core doesn’t download compact block filters.
It was just an example of a modification we could add to the connection decision-making process. We can replace it with a more general “reserve X outbound slots for peers supporting Y services,” which could also depend on certain contextual factors. This is something either the user or we might want in the future. And like this one, many other modifications could be added.
But, in case it helps, I do have an experimental branch implementing compact block filter download and the entire BIP157 client mode.
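To illustrate what such a slot-reservation rule could look like (purely hypothetical policy, names, and numbers; `NODE_COMPACT_FILTERS` is the real BIP157 service bit, everything else is made up):

```cpp
#include <cstdint>

using ServiceFlags = uint64_t;
constexpr ServiceFlags NODE_COMPACT_FILTERS = 1 << 6; // BIP157/158 service bit

struct OutboundSlotPolicy {
    int m_reserved_filter_slots{2}; // keep 2 slots for filter-serving peers
    int m_filter_peers{0};          // currently connected filter-serving peers

    // Reject non-filter peers once the remaining free slots are all reserved.
    bool ShouldConnect(ServiceFlags peer_services, int free_slots) const
    {
        const bool serves_filters = (peer_services & NODE_COMPACT_FILTERS) != 0;
        const int still_needed = m_reserved_filter_slots - m_filter_peers;
        if (!serves_filters && free_slots <= still_needed) return false;
        return true;
    }
};
```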
> > v2 encrypted connections, etc.
> Connman is doing this right now using our service flags and the service flags of the potential new peer. It has all the data it needs.
You are talking about the way we connect to potential v2 peers and fall back to v1 based on their response, right? My point was about preferring certain services over others, for a limited number of outbound connection slots, before opening the socket. This prioritization could change depending on the node’s context, e.g. taking into consideration the chain sync progress and/or whether the user has configured the node differently.
> > Essentially, in my view, the approach is resource-wasteful and could lead to race conditions.
> Your approach has the exact same “flaw” w.r.t. connecting to desired service flags, e.g.:
>
> - connman calls `HasAllDesirableServiceFlags()` for peer A, which returns true
> - connman proceeds with opening the connection to peer A
> - some event occurs (e.g. messages from other peers are processed) that would make `HasAllDesirableServiceFlags()` return false for peer A
> - the connection to peer A is established
>
> Peer A is now connected even though `HasAllDesirableServiceFlags()` would return false. These races can’t really be avoided, so it’s best to stick with the interface design we’ve already got imo.
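Agreed that this check-then-act window exists in both designs. A minimal stand-alone sketch of it (all names are stand-ins; the concurrent event is simulated inline):

```cpp
#include <atomic>
#include <cstdio>

std::atomic<bool> g_peer_a_desirable{true}; // in reality, mutated by other threads

bool HasDesirableServices() { return g_peer_a_desirable.load(); }
void OpenConnection() { std::puts("connected to peer A"); }

int main()
{
    if (HasDesirableServices()) {   // (1) the check returns true
        g_peer_a_desirable = false; // (2) a concurrent event flips the answer
        OpenConnection();           // (3) the connection proceeds regardless
    }
}
```

The window between (1) and (3) can’t be closed; the question is only how stale the input to (1) is allowed to be.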
The difference between your proposed design and mine is essentially polling to set the value vs calculating the value on-demand.
In your design, the code needs to check data at regular intervals to update the flag in another component. If this polling runs too frequently, it can harm performance; if it runs too infrequently, the cached flag goes stale and the race window grows.
In the design I’m proposing, because the value is calculated only when it is needed, the race window is shortened to the time between the check and the connection being established.
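A side-by-side sketch of the two shapes (hypothetical types, not actual code from either branch):

```cpp
#include <functional>

// Polling: a timer thread refreshes a cached flag every N seconds; the
// connection decision may read a value that is up to N seconds stale.
struct PolledFlag {
    bool m_cached{false};
    void Refresh(bool current) { m_cached = current; } // runs periodically
    bool Get() const { return m_cached; }              // possibly stale
};

// On-demand: the value is computed inside the decision itself, so staleness
// is bounded by the time between this call and the socket actually opening.
struct OnDemandFlag {
    std::function<bool()> m_compute;
    bool Get() const { return m_compute(); }
};
```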
Are we in agreement on that?
Still, following what I wrote in my previous comment, I would like to re-think `ThreadOpenConnections`, because otherwise we will keep adding fields to `CConnman` that are unrelated to the component itself.