I think this type of policy change would benefit from a more binding definition of “incentive compatibility”. If we speak about a world where the network mempools' backlog always exceeds MAX_BLOCK_WEIGHT, optimizing mining score by feerate sounds like the most obvious choice. If we add the dimension of mempool congestion variance, where emptiness is a possible outcome, I think a miner could adopt a “pure-replace-by-fee” mempool acceptance policy until reaching MAX_BLOCK_WEIGHT.
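To illustrate, a minimal sketch of what such a congestion-aware acceptance rule could look like (all names, signatures, and values here are illustrative, not from any actual implementation):

```python
# Hypothetical sketch of a "pure-replace-by-fee" acceptance policy:
# while the mempool holds less than one block's worth of weight,
# accept everything; past that, only accept transactions that evict
# a lower-feerate conflict. Illustrative only.

MAX_BLOCK_WEIGHT = 4_000_000  # consensus block weight limit, in weight units

def accept(mempool_weight, new_feerate, conflict_feerate=None):
    """Return True if the incoming transaction should enter the mempool.

    mempool_weight   -- current total weight of the mempool
    new_feerate      -- feerate (sat/vb) of the incoming transaction
    conflict_feerate -- feerate of the conflicting in-mempool tx, if any
    """
    if mempool_weight < MAX_BLOCK_WEIGHT:
        # Below one block of backlog: any standard transaction is
        # better than empty block space, so take it.
        return True
    # At or above one block of backlog: only replacements that strictly
    # improve feerate are worth the eviction.
    return conflict_feerate is not None and new_feerate > conflict_feerate

print(accept(1_000_000, 1.0))       # low congestion, no conflict needed -> True
print(accept(5_000_000, 2.0))       # congested, no conflict -> False
print(accept(5_000_000, 4.0, 2.0))  # congested, better replacement -> True
```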
Another layer of complexity in devising “incentive compatibility” could be to realize that the best feerate mempool plausible is a combination of the replacement policy and an order of events. E.g., if you evict 2 sat/vb A for 4 sat/vb A’, but after 1 min you receive 4 sat/vb B, a child of A, your mempool is at a loss. This scenario sounds far from implausible to me in a shared-utxo future (e.g. 2-of-2 LN channels), where spenders are building chains of transactions blind to every co-spender. A miner mempool optimization could be to run a cache of all the replacements.
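To make the loss concrete with illustrative numbers (assuming, purely for the sake of arithmetic, that all three transactions weigh 100 vbytes):

```python
# Worked example of the replacement-ordering loss above, with
# illustrative sizes: A (2 sat/vb), its replacement A' (4 sat/vb),
# and B (4 sat/vb), a child of A that arrives one minute later.

VSIZE = 100  # assume each transaction is 100 vbytes, for illustration

def total_fees(feerates, vsize=VSIZE):
    """Total fees (sats) collected from a list of feerates."""
    return sum(rate * vsize for rate in feerates)

# Path 1: evict A for A'. B now spends an output of the evicted A,
# so it is an orphan and cannot be mined.
replace_path = total_fees([4])    # only A' -> 400 sats

# Path 2: keep A; when B arrives, both can be mined together.
keep_path = total_fees([2, 4])    # A + B -> 600 sats

print(replace_path, keep_path)    # 400 600: the replacing mempool is at a loss
```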
On this subject, see this thread from earlier this year on the ML: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2022-February/019921.html
Of course, we could abstract away the order-of-events complexity by assuming all received transactions follow templates (e.g. nversion=3) whose design ensures that only the most efficient package/chain of transactions propagates. However, I’m not sure how representative our network transaction-relay “transaction patterns” (current and future) are of potential miner mempools running with DEFAULT_ANCESTOR_LIMIT=$HUGE and out-of-band transaction-relay. If our design leaves margin for non-marginal fee asymmetries, then in a future where fees contribute significantly to the block reward, you should expect some miners to put in the effort of capturing them.
These are all thoughts for future work; I think this is a good improvement for now.