I introduced a bug in #22677 (sorry! :sweat_smile:)
Mempool entries cache `LockPoints`, containing the first height/blockhash/`CBlockIndex*` at which the transaction becomes valid. During a reorg, we re-check timelocks on all mempool entries using `CheckSequenceLocks(useExistingLockPoints=false)` and remove any now-invalid entries. `CheckSequenceLocks()` also mutates the `LockPoints` passed in, and we update valid entries’ `LockPoints` using `update_lock_points`. Thus, `update_lock_points(lp)` needs to be called right after `CheckSequenceLocks(lp)`; otherwise we lose the data in `lp`. I incorrectly assumed they could be called in separate loops.
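To illustrate, here is a rough sketch of the intended per-entry pattern in `removeForReorg()` (simplified and abbreviated, not the actual code; parameter lists and the surrounding setup are elided):

```cpp
// Sketch only: signatures are abbreviated and surrounding setup (chain, tip,
// view_mempool, flags, to_remove) is omitted.
for (auto it = mapTx.begin(); it != mapTx.end(); ++it) {
    LockPoints lp = it->GetLockPoints();  // local copy of the cached lockpoints
    const bool valid_lp = TestLockPointValidity(chain, lp);

    // With useExistingLockPoints=false, CheckSequenceLocks() recalculates the
    // lockpoints and writes them back into lp.
    if (!CheckSequenceLocks(tip, view_mempool, it->GetTx(), &lp,
                            /*useExistingLockPoints=*/valid_lp)) {
        to_remove.insert(it);  // timelock no longer satisfiable: evict
    } else if (!valid_lp) {
        // Must happen in the same iteration: lp is a local copy, so deferring
        // this to a separate loop discards the recalculated lockpoints and the
        // entry keeps its stale cached values.
        mapTx.modify(it, update_lock_points(lp));
    }
}
```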
The incorrect behavior introduced is: if we have a reorg in which a timelocked mempool transaction is still valid but becomes valid at a different block, the cached `LockPoints` will be incorrect.
This PR fixes the bug, adds a test, and adds an assertion at the end of `removeForReorg()` to check that all mempool entries’ lockpoints are valid. You can reproduce the bug by running the test added in the [test] commit on the code before the [bugfix] commit.
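For reference, the new consistency check is conceptually along these lines (simplified sketch; the exact form in `removeForReorg()` differs):

```cpp
// After the reorg handling, every surviving entry's cached lockpoints should
// still be valid for the current chain.
for (const auto& entry : mapTx) {
    assert(TestLockPointValidity(chain, entry.GetLockPoints()));
}
```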