Currently, large vectors of `std::byte` are (un)serialized byte-by-byte, which is slow. Fix this by enabling the already existing optimization for them. On my system this gives a 10x speedup for `./src/bench/bench_bitcoin --filter=PrevectorDeserializeTrivial` when `std::byte` is used:
```diff
diff --git a/src/bench/prevector.cpp b/src/bench/prevector.cpp
index 2524e215e4..76b16bc34e 100644
--- a/src/bench/prevector.cpp
+++ b/src/bench/prevector.cpp
@@ -17,7 +17,7 @@ struct nontrivial_t {
 static_assert(!std::is_trivially_default_constructible<nontrivial_t>::value,
               "expected nontrivial_t to not be trivially constructible");
 
-typedef unsigned char trivial_t;
+typedef std::byte trivial_t;
 static_assert(std::is_trivially_default_constructible<trivial_t>::value,
               "expected trivial_t to be trivially constructible");
 
```
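The optimization being enabled is the bulk read/write path for vectors whose element type is effectively a raw byte. Below is a minimal sketch of that idea, assuming hypothetical names (`is_byte_like_v`, `SimpleStream`, `SerializeVector`); it is not the actual Bitcoin Core serialization code.

```cpp
#include <cstddef>
#include <type_traits>
#include <vector>

// Hypothetical trait: true for element types that can be (un)serialized
// as raw bytes instead of element-by-element.
template <typename T>
inline constexpr bool is_byte_like_v =
    std::is_same_v<T, unsigned char> ||
    std::is_same_v<T, signed char> ||
    std::is_same_v<T, std::byte>;

// Hypothetical minimal output stream with a bulk write primitive.
struct SimpleStream {
    std::vector<unsigned char> data;
    void write(const void* src, std::size_t n)
    {
        const auto* p = static_cast<const unsigned char*>(src);
        data.insert(data.end(), p, p + n);
    }
};

// Sketch of the dispatch: byte-like vectors take the bulk path, everything
// else falls back to per-element serialization.
template <typename Stream, typename T>
void SerializeVector(Stream& s, const std::vector<T>& v)
{
    if constexpr (is_byte_like_v<T>) {
        // Fast path: one bulk write covering the whole contiguous buffer.
        if (!v.empty()) s.write(v.data(), v.size() * sizeof(T));
    } else {
        // Slow path: handle each element on its own (real code would
        // recurse into the element's serializer; this is a placeholder).
        for (const T& elem : v) {
            s.write(&elem, sizeof(elem));
        }
    }
}
```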
However, the optimization does not cover `signed char`. Fix that as well.
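As a brief illustration of why `signed char` needs explicit handling: it is a distinct type from both `unsigned char` and `std::byte`, so a type check that only matches those two will miss it, even though it is equally a 1-byte trivially copyable type. The snippet below is illustrative only, not part of the change.

```cpp
#include <cstddef>
#include <type_traits>

// unsigned char, signed char, and char are three distinct types in C++,
// so a check against unsigned char (or std::byte) alone does not match
// signed char, even though it is also a 1-byte trivially copyable type.
static_assert(!std::is_same_v<signed char, unsigned char>);
static_assert(!std::is_same_v<signed char, char>);
static_assert(sizeof(signed char) == 1 && sizeof(std::byte) == 1);
static_assert(std::is_trivially_copyable_v<signed char>);
```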