I don’t see this discussion as a blocker.
I just want to get to the bottom of it. I checked the source code of http.client.HTTPConnection():
it does not open any connection, it just sets some internal member variables of the HTTPConnection
class. So starting the timer before or after http.client.HTTPConnection()
shouldn’t make a difference.
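To illustrate the point, here is a minimal sketch (the unreachable TEST-NET address 192.0.2.1 is just a placeholder host): the constructor stores the host and port but opens no socket, which you can see because conn.sock is still None afterwards.

```python
import http.client

# Constructing HTTPConnection only stores host/port and other settings;
# no socket is opened until connect() (or the first request) is called.
conn = http.client.HTTPConnection("192.0.2.1", 80, timeout=0.1)

print(conn.sock)  # -> None: nothing has connected yet
```

Only conn.connect() would actually try to reach the host (and, with that address, fail).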
conn.connect()
is what actually makes the connection, and the timer should be started before it. Why wouldn’t the following test the right thing?
- start timer in the test (A)
- connect, server timer starts here (B)
- send
- try to recv, connection will be closed by the server due to server timeout (C)
- stop the timer (D)
The time between B and C will (must!) always be less than the time between A and D, because the interval B–C lies entirely inside A–D. If it is not, then I would like to know why; maybe we are doing something completely wrong.
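The A–D measurement above can be sketched with a plain socket server standing in for the real test server; SERVER_TIMEOUT here is a stand-in for the 2 s server timeout, shortened so the sketch runs quickly.

```python
import socket
import threading
import time

SERVER_TIMEOUT = 0.5  # stand-in for the real 2 s server-side timeout


def server(ready, port_holder):
    srv = socket.create_server(("127.0.0.1", 0))
    port_holder.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    conn.settimeout(SERVER_TIMEOUT)   # server-side inactivity timeout (B starts here)
    try:
        while conn.recv(1024):        # read until timeout or client close
            pass
    except socket.timeout:
        pass                          # (C) timeout expired -> server closes the connection
    conn.close()
    srv.close()


ready = threading.Event()
port_holder = []
t = threading.Thread(target=server, args=(ready, port_holder))
t.start()
ready.wait()

start = time.monotonic()                                      # (A) start test timer
c = socket.create_connection(("127.0.0.1", port_holder[0]))  # (B) connect
c.sendall(b"ping")                                            # send
data = c.recv(1024)                   # blocks until the server closes; returns b""
elapsed = time.monotonic() - start                            # (D) stop test timer
c.close()
t.join()

print(data == b"", elapsed >= SERVER_TIMEOUT)  # -> True True
```

Since A happens before B and D happens after C, elapsed (A–D) can only be longer than the server-side interval (B–C), which is the invariant the test relies on.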
My worry is that CI machines are sometimes veeery slow, so any “reasonable” time we set can turn out to be a surprise and cause intermittent test failures. Now, suppose the above can somehow result in the time between A and D being less than 2 sec (= the server timeout, B to C), we don’t know why, and we relax the check to >1 sec instead. Then, because we do not know the cause, how can we be sure that the test timer (A to D) will not sometimes be e.g. 0.99 sec, causing the test to fail anyway?