I came across a paper published earlier this month. I haven’t found any discussion of it in the repo, so I’ll summarize the relevant parts here and share my thoughts.
Say we have a node that operates over both IPv4 and Tor. We don’t want an observer to be able to link these two addresses to the same node; for example, ADDR caching (#18991) was implemented for exactly this reason.
The paper suggests the following attack:
1. Fill all (115?) of the victim’s inbound slots in `networkA`.
2. Make sure these connections are good candidates for eviction (higher latency, etc.).
3. Open a connection to the presumed victim’s node in `networkB`.
4. Observe whether the `networkA` connections from (1) get dropped as you add connections in `networkB` (a toy simulation of this inference is sketched below).
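To make the inference step concrete, here is a minimal, self-contained sketch. This is not Bitcoin Core code; `ToyNode`, `kInboundSlots` and the eviction rule are made-up placeholders. It only models the core assumption: inbound connections from all networks compete for one shared pool, so an eviction of an attacker-controlled `networkA` peer right after the `networkB` probe connects is the observable signal.

```cpp
// Toy model of the shared-eviction side channel. Not Bitcoin Core code;
// ToyNode, kInboundSlots, etc. are hypothetical names for illustration only.
#include <cstdio>
#include <deque>
#include <string>

constexpr int kInboundSlots = 115;  // assumed inbound capacity

struct ToyNode {
    // Inbound connections from all networks compete for the same slots.
    std::deque<std::string> inbound;

    // Returns the tag of the evicted connection, or "" if a slot was free.
    std::string Accept(const std::string& tag) {
        std::string evicted;
        if ((int)inbound.size() >= kInboundSlots) {
            evicted = inbound.front();  // evict the "worst" peer; here simply the oldest
            inbound.pop_front();
        }
        inbound.push_back(tag);
        return evicted;
    }
};

int main() {
    ToyNode victim;

    // Steps 1-2: attacker fills every inbound slot over networkA with
    // deliberately eviction-prone connections.
    for (int i = 0; i < kInboundSlots; ++i) {
        victim.Accept("attacker-netA-" + std::to_string(i));
    }

    // Step 3: attacker connects to the candidate address over networkB.
    std::string evicted = victim.Accept("attacker-netB-probe");

    // Step 4: if one of the attacker's networkA connections gets dropped,
    // the two addresses very likely belong to the same node.
    if (evicted.rfind("attacker-netA-", 0) == 0) {
        std::printf("networkA peer '%s' evicted -> addresses likely linked\n",
                    evicted.c_str());
    } else {
        std::printf("no attacker-controlled eviction observed -> no link inferred\n");
    }
    return 0;
}
```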
I haven’t verified the experiments, but just understanding the attack convinces me the problem is real. The authors claim to reach high precision at very low cost (optimized by inspecting `VERSION` data and block-relay data).
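My reading of that cost optimization (this is my assumption about the approach, not a quote from the paper) is that cheap-to-observe handshake metadata lets the attacker discard most candidate address pairs before running the expensive slot-filling probe. Something along these lines, where `VersionInfo` and its fields are purely illustrative:

```cpp
// Hypothetical pre-filter: compare cheap-to-observe metadata from two
// addresses before attempting the expensive eviction probe.
// VersionInfo and WorthProbing are illustrative, not actual P2P structures.
#include <cstdint>
#include <cstdlib>
#include <string>

struct VersionInfo {
    std::string user_agent;   // e.g. "/Satoshi:27.0.0/"
    uint64_t    services;     // advertised service bits
    int32_t     best_height;  // block height claimed at handshake time
};

// Two addresses can only belong to the same node if their advertised
// metadata is consistent; heights may differ slightly due to timing.
bool WorthProbing(const VersionInfo& a, const VersionInfo& b) {
    if (a.user_agent != b.user_agent) return false;
    if (a.services != b.services) return false;
    if (std::abs(a.best_height - b.best_height) > 1) return false;
    return true;  // still a candidate; run the eviction probe on this pair
}

int main() {
    VersionInfo ipv4{"/Satoshi:27.0.0/", 0x409, 850'000};
    VersionInfo tor {"/Satoshi:27.0.0/", 0x409, 850'000};
    return WorthProbing(ipv4, tor) ? 0 : 1;
}
```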
They suggest the following countermeasures (a rough sketch of both follows the list):
- Keep separate connection pools (for the purposes of eviction) for each network (say, Tor and IPv4).
- Make eviction unpredictable (e.g. perform it only after a random delay).
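A minimal sketch of what these two ideas could look like, assuming a hypothetical `InboundPools` class (this is not how Bitcoin Core’s `net.cpp` is structured, just an illustration of the two countermeasures):

```cpp
// Illustrative only: per-network eviction pools plus a randomized eviction
// delay. Network, Peer, and the pool layout are hypothetical.
#include <algorithm>
#include <chrono>
#include <map>
#include <optional>
#include <random>
#include <vector>

enum class Network { IPV4, TOR };

struct Peer {
    int id;
    std::chrono::milliseconds ping;  // toy eviction criterion
};

class InboundPools {
public:
    explicit InboundPools(size_t slots_per_network) : m_slots(slots_per_network) {}

    // Countermeasure 1: each network has its own pool, so filling the IPv4
    // slots can never force an eviction among Tor peers (and vice versa).
    std::optional<Peer> Accept(Network net, Peer peer) {
        auto& pool = m_pools[net];
        std::optional<Peer> evicted;
        if (pool.size() >= m_slots) {
            // Evict the worst peer *within the same network* only.
            auto worst = std::max_element(pool.begin(), pool.end(),
                [](const Peer& a, const Peer& b) { return a.ping < b.ping; });
            evicted = *worst;
            pool.erase(worst);
        }
        pool.push_back(peer);
        return evicted;
    }

    // Countermeasure 2: even when an eviction is needed, delay it by a
    // random amount so the attacker cannot cleanly correlate it with the
    // connection they just opened on the other network.
    std::chrono::milliseconds RandomEvictionDelay() {
        std::uniform_int_distribution<int> dist(0, 30'000);
        return std::chrono::milliseconds(dist(m_rng));
    }

private:
    size_t m_slots;
    std::map<Network, std::vector<Peer>> m_pools;
    std::mt19937 m_rng{std::random_device{}()};
};

int main() {
    InboundPools pools(/*slots_per_network=*/115);
    pools.Accept(Network::IPV4, Peer{1, std::chrono::milliseconds(80)});
    // Any eviction would be carried out only after pools.RandomEvictionDelay().
    return 0;
}
```

Note that the two ideas are complementary: separate pools remove the cross-network signal entirely, while the random delay only makes it noisier, so the first seems like the stronger fix on its own.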