I’m not sure it’s a good idea to link mockable time to network delays in general – if you bump the mocktime forward by hours/days, that shouldn’t risk triggering “hey, this peer hasn’t said anything for ages, let’s disconnect” conditions; and equally, if you backdate the mocktime for some reason, that shouldn’t cause network tasks to say “oh, we decided we don’t need to do anything until time t, which is apparently years away” and effectively hang. Having to tweak tests feels a bit like a canary…
My intuition for this was we have two sorts of delays – short ones (<30s) that just prevent us busy looping when there’s nothing to actually do and don’t generally need to be precisely controlled, and long ones (minutes/hours/days) that are human visible and that we’ll obviously need mocktime to test because minutes/hours/days is effectively forever.
But maybe we’ve really got two different sorts of mocking that we want to do – “set the time to X” and it stays there until we’re ready to set it to some different time “Y” (what we’ve got now), and “everything that was going to happen in the next 5 minutes, get it done now” (something more like what the mockscheduler rpc does)? You could implement the latter by bumping the system clock (rather than replacing it), and only allowing the bump amount to increase.
I suspect at a minimum we’d want to switch to the std::chrono::time_point abstraction and use different clocks so that we could tell these cases apart by type, but that doesn’t seem like a good thing to dive into immediately before feature freeze. (I had a sketch I was working on a while back at https://github.com/ajtowns/bitcoin/commits/201908-systime fwiw)
And mocktime is only relevant for regtest, so the downsides of this approach are pretty limited (presumably to just these test case changes).