“doesn’t really seem right” – yeah, more along the lines of “morally right” vs “mathematically correct” :)
In general, using doubles where they’re not needed just seems like a bad idea to me – there are way more possible special cases and pitfalls with them, and the code already works fine without them, so why change it? Changing behaviour in potentially subtle ways during refactors bothers me in general – to me that seems like the easiest way for a potential attacker to slip vulnerabilities into the codebase.
For example, in this line, I think behind the scenes chrono is doing the following (rough sketch after the list):
- converting the `chrono::seconds` version of `nPowTargetSpacing` to `duration<double, ratio<1,1>>` when multiplying by a double
- converting both `m_downloading_since` and the result of the above to `duration<double, ratio<1,1000000>>` (converting one to a double, multiplying the other by 1M)
- converting `current_time` to `duration<double, ratio<1,1000000>>` as well (ie converting it to a double) and then doing a floating point comparison between the two
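
Here’s a minimal, self-contained sketch of those conversions, for the record. It’s not the actual code from this PR – the names and values are just placeholders mirroring the discussion, and I’m treating `m_downloading_since`/`current_time` as plain `microseconds` durations rather than time points to keep it short:

```c++
#include <chrono>
#include <iostream>
#include <ratio>
#include <type_traits>

int main()
{
    // Placeholder values, roughly mirroring the names in the discussion.
    std::chrono::seconds nPowTargetSpacing{600};                   // integer seconds
    std::chrono::microseconds m_downloading_since{1'000'000'000};  // integer microseconds
    std::chrono::microseconds current_time{1'000'200'000};

    // Step 1: double * seconds -> duration<double, ratio<1,1>>
    auto half_spacing = 0.5 * nPowTargetSpacing;
    static_assert(std::is_same_v<decltype(half_spacing),
                                 std::chrono::duration<double>>);

    // Step 2: microseconds + duration<double> -> duration<double, ratio<1,1000000>>
    // (m_downloading_since's rep becomes a double, half_spacing gets scaled by 1M)
    auto deadline = m_downloading_since + half_spacing;
    static_assert(std::is_same_v<decltype(deadline),
                                 std::chrono::duration<double, std::micro>>);

    // Step 3: the comparison converts current_time to duration<double, std::micro>
    // too, so what actually runs is a floating point comparison.
    std::cout << std::boolalpha << (current_time < deadline) << '\n';
}
```

The static_asserts encode the three bullet points above – at least as far as I understand chrono’s `common_type` rules – and the point is exactly that you have to work through those rules to know what the comparison actually compiles to.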
Are those all fine? Yeah, I think so – but why add the burden of reviewing all that just to use “0.5” instead of “500000”? Figuring out what the code actually does is the tricky part, so making the intent easy to see at the cost of making it harder to see what the compiler is going to do seems backwards to me in general. After all, if it’s not clear from the code what the plan was, you can always add a comment.