OK, updated stats based on 257k RequestedTx calls with wtxidrelay disabled (added a `false &&` to the ifs in net_processing).
I’m seeing 98.04% of requests being trivial cases – there are no alternatives that have hit reqtime, and this is the first request of the tx. Despite having lots of (up to ~50?) inbounds, it’s still choosing outbounds for almost all txs.
Preferredness only mattered in 0.67% of cases, and only in 0.59% of cases did it matter for the first request.
“First” marker only mattered in 0.77% of cases, and only ever for choosing between two non-preferred peers; though it was at least (almost) always for the first actual request for the txid.
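To make the tie-break order those percentages refer to concrete, here’s a toy model (purely illustrative, not the actual txrequest code – `pick_candidate`, the dict fields, and the exact ordering are my assumptions about the scheme being measured): among announcements whose reqtime has passed, prefer preferred (outbound) peers, then the “first” announcer, then earliest arrival.

```python
# Toy model of the announcement tie-break (NOT the real implementation):
# among candidates past their reqtime, pick by preferredness, then the
# "first announcer" marker, then arrival order.
def pick_candidate(candidates):
    # each candidate: {"peer": ..., "preferred": bool, "first": bool, "seq": int}
    return min(candidates,
               key=lambda c: (not c["preferred"], not c["first"], c["seq"]))

anns = [
    {"peer": "inbound-a",  "preferred": False, "first": True,  "seq": 0},
    {"peer": "outbound-b", "preferred": True,  "first": False, "seq": 1},
]
print(pick_candidate(anns)["peer"])  # outbound-b: preferredness beats "first"
```

In the stats above, the ~98% “trivial” cases are the ones where this function is handed a single candidate, so none of the tie-break fields ever matter.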
FWIW, ignoring the first hour, I got 520 “accepted orphan tx” messages. I’m running this node with mempool expiry set to 24h so that might be a bit high.
 0 86.59% 222556 ABCD  requested  preferred=1  first=1  candidates=[-,-]  completed=[-,-]
 1 11.45%  29429 ABCD  requested  preferred=0  first=1  candidates=[-,-]  completed=[-,-]
 2 0.18%    455 ABCD  requested  preferred=1  first=0  candidates=[-,-]  completed=[Y,-]
 3 0.09%    222 ABCD  requested  preferred=0  first=1  candidates=[-,-]  completed=[Y,-]
 4 0.08%    201 ABCD  requested  preferred=0  first=0  candidates=[-,-]  completed=[-,-]
 5 0.05%    127 ABCD  requested  preferred=0  first=0  candidates=[-,-]  completed=[Y,Y]
 6 0.04%    111 ABCD  requested  preferred=1  first=0  candidates=[-,-]  completed=[Y,Y]
 7 0.03%     79 ABCD  requested  preferred=0  first=0  candidates=[-,-]  completed=[Y,-]
 8 0.01%     17 ABCD  requested  preferred=1  first=0  candidates=[-,-]  completed=[-,Y]
 9 0.01%     15 ABCD  requested  preferred=0  first=0  candidates=[-,-]  completed=[-,Y]
10
11 0.59%   1524 ABCD  requested  preferred=1  first=1  candidates=[-,Y]  completed=[-,-]
12 0.02%     64 ABCD  requested  preferred=1  first=0  candidates=[Y,Y]  completed=[-,Y]
13 0.02%     51 ABCD  requested  preferred=1  first=0  candidates=[Y,Y]  completed=[Y,Y]
14 0.02%     50 ABCD  requested  preferred=1  first=0  candidates=[-,Y]  completed=[Y,-]
15 0.01%     38 ABCD  requested  preferred=1  first=0  candidates=[-,Y]  completed=[Y,Y]
16 0.01%     32 ABCD  requested  preferred=1  first=0  candidates=[Y,-]  completed=[Y,-]
17 0.01%     24 ABCD  requested  preferred=1  first=0  candidates=[Y,-]  completed=[Y,Y]
18 0.00%     10 ABCD  requested  preferred=1  first=0  candidates=[Y,Y]  completed=[Y,-]
19 0.00%      1 ABCD  requested  preferred=1  first=0  candidates=[Y,-]  completed=[-,Y]
20 0.00%      1 ABCD  requested  preferred=1  first=0  candidates=[-,Y]  completed=[-,Y]
21
22 0.77%   1978 ABCD  requested  preferred=0  first=1  candidates=[-,Y]  completed=[-,-]
23 0.01%     20 ABCD  requested  preferred=0  first=0  candidates=[-,Y]  completed=[Y,Y]
24 0.00%      8 ABCD  requested  preferred=0  first=1  candidates=[-,Y]  completed=[Y,-]
25 0.00%      2 ABCD  requested  preferred=0  first=0  candidates=[-,Y]  completed=[Y,-]
26 0.00%      1 ABCD  requested  preferred=0  first=0  candidates=[-,Y]  completed=[-,Y]
Just considering the first request for a tx (i.e., the candidates=[-,-] completed=[-,-] cases), the number of announcements for the tx in CANDIDATE_DELAYED was distributed something like:
 0  48839 ABCD requested preferred=1 delayed=0
 1  37457 ABCD requested preferred=1 delayed=1
 2  32913 ABCD requested preferred=1 delayed=2
 3  27587 ABCD requested preferred=1 delayed=3
 4  22798 ABCD requested preferred=1 delayed=4
 5  17525 ABCD requested preferred=1 delayed=5
 6  12564 ABCD requested preferred=1 delayed=6
 7   8071 ABCD requested preferred=1 delayed=7
 8   5148 ABCD requested preferred=1 delayed=8
 9   2965 ABCD requested preferred=1 delayed=9
10   1822 ABCD requested preferred=1 delayed=10
11
12   4437 ABCD requested preferred=0 delayed=0
13   2025 ABCD requested preferred=0 delayed=5
14   1981 ABCD requested preferred=0 delayed=6
15   1780 ABCD requested preferred=0 delayed=4
which looks pretty reasonable, I think. (I didn’t break the delayed count into preferred/non-preferred, so the delayed= figure should include both inbounds and any overloaded outbounds)
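For context on why announcements end up in CANDIDATE_DELAYED at all, here’s a rough model of how an announcement’s reqtime gets a delay added (the constant names and the 2s/100 values are what I recall from the PR’s net_processing changes – treat them as assumptions, not a quote of the code): non-preferred peers get a delay, and so do overloaded peers with too many requests in flight.

```python
# Rough model of reqtime computation (values are assumptions, not quoted code):
# inbound (non-preferred) peers and overloaded peers each add a 2s delay,
# so their announcements sit in CANDIDATE_DELAYED until the delay expires.
NONPREF_PEER_TX_DELAY = 2.0      # seconds, for non-preferred (inbound) peers
OVERLOADED_PEER_TX_DELAY = 2.0   # seconds, when too many requests in flight
MAX_PEER_TX_IN_FLIGHT = 100      # in-flight threshold for "overloaded"

def reqtime(now, preferred, in_flight):
    delay = 0.0
    if not preferred:
        delay += NONPREF_PEER_TX_DELAY
    if in_flight >= MAX_PEER_TX_IN_FLIGHT:
        delay += OVERLOADED_PEER_TX_DELAY
    return now + delay

print(reqtime(0.0, preferred=True, in_flight=3))    # 0.0: requestable immediately
print(reqtime(0.0, preferred=False, in_flight=3))   # 2.0: delayed
```

That’s why the delayed= figure above lumps inbounds and overloaded outbounds together: both kinds of announcement have a nonzero delay pending when the first request goes out.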
I wasn’t quite as lucky with my outbounds staying around the entire time, by the looks of it, but got a much more even-looking distribution of requests between the different preferences:
 0   1048 ABCD requested preferred=0 peer=2013
 1   1294 ABCD requested preferred=0 peer=4076
 2   1359 ABCD requested preferred=0 peer=4096
 3   1725 ABCD requested preferred=0 peer=235
 4   2193 ABCD requested preferred=0 peer=3370
 5   2630 ABCD requested preferred=0 peer=1950
 6   2694 ABCD requested preferred=0 peer=2304
 7   2843 ABCD requested preferred=0 peer=2095
 8   3618 ABCD requested preferred=0 peer=1853
 9
10      9 ABCD requested preferred=1 peer=6529
11     21 ABCD requested preferred=1 peer=5853
12     25 ABCD requested preferred=1 peer=1737
13     39 ABCD requested preferred=1 peer=1624
14     44 ABCD requested preferred=1 peer=30
15     57 ABCD requested preferred=1 peer=644
16    853 ABCD requested preferred=1 peer=22
17   2312 ABCD requested preferred=1 peer=11
18   5470 ABCD requested preferred=1 peer=202
19  13974 ABCD requested preferred=1 peer=0
20  14062 ABCD requested preferred=1 peer=2238
21  14427 ABCD requested preferred=1 peer=14
22  16520 ABCD requested preferred=1 peer=16
23  23714 ABCD requested preferred=1 peer=3385
24  37178 ABCD requested preferred=1 peer=24
25  46702 ABCD requested preferred=1 peer=29
26  49527 ABCD requested preferred=1 peer=44
(ignoring non-preferred peers that didn’t have >1000 requests)
EDIT: I’m guessing the lack of “used the first marker to tie-break between two preferred peers” cases means that none of my preferred peers were overloaded for any length of time, so they always had reqtime=min(), and thus the ThreadMessageHandler loop would see the INV and immediately request the tx in a single iteration, with no chance for any other peer to be considered, no matter how simultaneous the announcements were.
The previous 1.1% figure was due to the wtxidrelay delay giving all my preferred peers a 2s delay, so if two outbounds announced within 100ms of each other there’d be a tie-break: they’d both get added to the index, both their reqtimes would pass while the thread was sleeping, and both would progress to READY/BEST in a single GetRequestable call.
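The race described above can be sketched as a toy model (purely illustrative; `requestable` here is a stand-in for the set GetRequestable would return, not the real implementation):

```python
# Toy illustration of the timing race: with zero delay, a preferred peer's
# INV is requested in the same message-handler iteration, so no other
# announcement can compete; with a shared 2s delay (the old wtxidrelay case),
# two announcements arriving within the sleep window both mature while the
# thread sleeps, and a tie-break is needed.
def requestable(anns, now):
    # announcements whose reqtime has passed (stand-in for GetRequestable)
    return [a for a in anns if a["reqtime"] <= now]

# Case 1: zero delay -- the first INV is requested before any rival arrives.
a1 = {"peer": 1, "reqtime": 0.0}
print(len(requestable([a1], now=0.0)))  # 1: requested immediately, alone

# Case 2: both peers delayed 2s; the second INV arrives 0.1s later. By the
# time the thread wakes at t=2.1, both are past reqtime, so two candidates
# are visible and the tie-break logic gets exercised.
b1 = {"peer": 1, "reqtime": 2.0}
b2 = {"peer": 2, "reqtime": 2.1}
print(len(requestable([b1, b2], now=2.1)))  # 2: tie-break applies
```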