This PR introduces a faster input deduplication algorithm.
In the first commit, I introduce a worst-case duplicate-input benchmark. The existing code achieves the following performance on it (columns: benchmark name, evals, iterations per eval, total time, then min/max/median time per iteration, in seconds):
DuplicateInputs, 5, 10, 0.710982, 0.0140756, 0.0143986, 0.0141852
In the next commit, I introduce a probabilistic checker which is much faster. Its analysis and use are documented extensively in the code comments. It achieves the following performance on the same benchmark:
DuplicateInputs, 5, 10, 0.166576, 0.00329314, 0.0034048, 0.0033221
This is roughly a 4.3x speedup (0.711s vs 0.167s total).
Future work could improve this further by backgrounding the checks and hashing onto another thread.
This does introduce one behavior change: for transactions that contain both duplicate inputs and null inputs, the DoS error reported is whichever error is found first.