This is an attempt to implement #25675.
I was able to reduce the serialized utxo set from 5GB to 4.1GB on mainnet.
Closes #25675.
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.
For detailed information about the code coverage, see the test coverage report.
See the guideline for information on the review process.
| Type | Reviewers |
|---|---|
| Concept ACK | jamesob, pablomartin4btc, jaonoctus, Sjors |
| Stale ACK | TheCharlatan |
If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.
Reviewers, this pull request conflicts with the following ones:
If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
I don't have the skill set to approve this code but I can test and verify. I ran this on testnet and can confirm a smaller output file.
without:
{
"coins_written": 27908372,
"base_hash": "0000000000000008f07ed39d6d03c19ee7346bc15b6a516cdda8402b6244b828",
"base_height": 2345886,
"path": "/Users/{$User}/Library/Application Support/Bitcoin/testnet3/./txoutset.txt",
"txoutset_hash": "026617308d218e57fb43f02baa644134f5000594a1eea06b02cc9d02959d4d9b",
"nchaintx": 63601314
}
Size: 3 473 536 bytes
With this optimization:
{
"coins_written": 27908372,
"base_hash": "0000000000000008f07ed39d6d03c19ee7346bc15b6a516cdda8402b6244b828",
"base_height": 2345886,
"path": "/Users/${User}/Library/Application Support/Bitcoin/testnet3/txoutset.optimized.txt",
"txoutset_hash": "026617308d218e57fb43f02baa644134f5000594a1eea06b02cc9d02959d4d9b",
"nchaintx": 63601314
}
Size: 2 457 728 bytes
Looks like a win to me.
Not sure it's worth it to save 20%. Presumably generic compressors could do better anyway?
> Not sure it's worth it to save 20%. Presumably generic compressors could do better anyway?
Yes I agree but at least with this implementation the utxo set is still readable as is and doesn't need decompression.
The main downside of this implementation is that the entire UTXO set must be loaded into RAM (in mapCoins) before being written to disk, so running dumptxoutset will consume a lot of RAM on mainnet, where the UTXO set is large. Not sure how to improve this.
The keys in leveldb are guaranteed to be lexicographically sorted. You just need to cache the last txid and flush the txid and outpoint tuples once the next one is reached in the database iterator. I pushed a dirty proof of concept here and it produced the same output file as your current implementation.
Otherwise Concept ACK. For a file that might be lugged over a network or other data carrier this space improvement is nice. Users can still apply their own compression on top of it. I also like that this provides additional structure to the data.
Yeah, as I specified in the issue and as @TheCharlatan wrote, more RAM shouldn't be needed because txids are iterated in sorted order from leveldb.
I was also wondering why the bitcoin-cli savetxoutset API (same goes for savemempool) requires specifying a filename instead of writing to stdout, which would offer better composability while keeping the option to write to a file with `>utxo.bin`.
```diff
     if (iter % 5000 == 0) node.rpc_interruption_point();
     ++iter;
     if (pcursor->GetKey(key) && pcursor->GetValue(coin)) {
-        afile << key;
-        afile << coin;
+        std::cout << key.ToString() << "\n";
```
This needs to be dropped.
```diff
     if (pcursor->GetKey(key) && pcursor->GetValue(coin)) {
-        afile << key;
-        afile << coin;
+        std::cout << key.ToString() << "\n";
+        if (key.hash == last_hash) {
+            coins.push_back(std::make_pair(key.n, coin));
```
Using explicit make_pair can be avoided by directly calling emplace_back instead (same applies further below).
> I was also wondering why the bitcoin-cli savetxoutset API (same goes for savemempool) requires specifying a filename instead of writing to stdout, which would offer better composability while keeping the option to write to a file with `>utxo.bin`.
Writing to stdout would mean that the data would have to be carried over the rpc connection, right?
```diff
-        afile << coin;
+        std::cout << key.ToString() << "\n";
+        if (key.hash == last_hash) {
+            coins.push_back(std::make_pair(key.n, coin));
+        } else {
+            afile << last_hash;
```
The file writing would be more DRY if it were declared as a lambda and then called here and below, e.g. like:
```cpp
auto write_coins_to_file = [&](AutoFile& afile, const uint256& last_hash, const std::vector<std::pair<uint32_t, Coin>>& coins) {
    afile << last_hash;
    afile << static_cast<uint16_t>(coins.size());
    for (auto [vout, coin] : coins) {
        afile << vout;
        afile << coin;
    }
};
```
Thank you for picking this up again.
These are just patches fixing up my initial code; I also put the entire diff here. The rest looks good to me, I tested dumptxoutset extensively on mainnet. The runtime performance of this branch is very similar to the base.
The following can be ignored in the context of this PR, but I still want to take note:
During review I noticed that the execution time of the command depends heavily on getting the UTXO stats: collecting the stats and writing the file each take about 450s on my M.2 machine. I don't follow why these stats need to be collected by iterating through the entire set again; is there no better way to prepend the metadata? A count of the written entries, as well as a hash of them, could be produced by the same iteration loop that writes the file.

Further, the `m_coins_count` in the `SnapshotMetadata` is populated with `tip->nChainTx`, which I am not sure always corresponds to the actual number of coins written, but it is used to read that exact number of coins from the file when loading the snapshot.
> Writing to stdout would mean that the data would have to be carried over the rpc connection, right?
Yes, would that be a problem?
There is no way we can currently send gigabytes of data as an RPC response; both the server and client likely buffer the result and would OOM.
Thanks for the patch @TheCharlatan, I applied it.
I ran some more numbers, dumping a mainnet snapshot on height 787535:
| Filename | Size (bytes) |
|---|---|
| this_branch.dat | 4'697'836'541 |
| this_branch.tar.xz | 3'899'452'268 |
| master.dat | 5'718'308'597 |
| master.tar.xz | 3'925'608'176 |
So even in the compressed form the encoding saves some megabytes.
ACK 7acfc2a221237877aa3522266cac6c326ae7fe8e
Since we are changing the format of dumptxoutset here anyway, in a non-backwards-compatible fashion, I'd like to suggest moving the metadata to the end of the file. This would take care of the double iteration described in #26045#pullrequestreview-1403015467. In my eyes, this does not significantly hurt the integrity of the file: if an exception occurs during writing, only the temporary file remains. Other common file formats also embed metadata at the end.
Rebased
Concept ACK
I cherry-picked 4250824efbe721a8b36f1b05c4280d144318872c on master @ 6619d6a8dca5a4d8e664a76526ac6bef3eb17831 to generate snapshots for signet and mainnet. The txoutset_hash is unchanged.
I then cherry-picked 4250824efbe721a8b36f1b05c4280d144318872c on #28516 (which leads to a small conflict) to load the snapshots.
Signet:
{
"coins_written": 1278002,
"base_hash": "0000003ca3c99aff040f2563c2ad8f8ec88bd0fd6b8f0895cfaf1ef90353a62c",
"base_height": 160000,
"txoutset_hash": "5225141cb62dee63ab3be95f9b03d60801f264010b1816d4bd00618b2736e7be",
"nchaintx": 2289496
}
The snapshot is actually slightly bigger:
91425233 utxo-signet-160000.dat
92892387 utxo-signet-160000-smaller.dat
Mainnet:
{
"coins_written": 111535121,
"base_hash": "00000000000000000002a7c4c1e48d76c5a37902165a270156b7a8d72728a054",
"base_height": 800000,
"txoutset_hash": "7d69b87512db3d13b9758ea32b93ce468d18cf7456fb5d250c9e1bed9339e4d2",
"nchaintx": 868965226
}
The snapshot made by this PR is a gigabyte smaller.
7323553927 utxo-800000.dat
6234254939 utxo-800000-smaller.dat
I succeeded in loading the signet snapshot and finishing the background sync IBD. I also managed to load the mainnet snapshot and waited for it to reach the tip. I didn't wait for the full background sync.
Haven't reviewed the code yet.
Given the slight conflict with the main assume-utxo PR - and that we don't want to make @jamesob's life even harder - I suggest rebasing on top of it and marking this as a draft until that's merged.
As an aside (this just tests the main PR), loading the block 800,000 snapshot and saving its chainstate to disk takes 10 minutes on my machine (AMD Ryzen 9 7950X, SSD drive for the main datadir, -dbcache set to 16GiB). During that time the background IBD reaches November 2011.
Once it starts using the snapshot, it reaches the almost-tip in 12 minutes (-blocksdir points to a spinning disk, downloaded over 1 Gbit internet, cache peaks at 3584 MiB). It then saves the new chainstate and resizes the caches, which triggers a flush, but less than 2 minutes later we're at the real tip! And then the background sync takes over.
Cool stuff.
```diff
+        } else {
+            write_coins_to_file(afile, last_hash, coins);
+            last_hash = key.hash;
+            coins.clear();
+            coins.emplace_back(key.n, coin);
+        }
```
```cpp
if (key.hash != last_hash) {
    write_coins_to_file(afile, last_hash, coins);
    last_hash = key.hash;
    coins.clear();
}
coins.emplace_back(key.n, coin);
```
Fixed, thanks
Concept ACK
Concept ACK
Tested on testnet and regtest, works as expected
@aureleoules wen rebase? :-)
> @aureleoules wen rebase? :-)
🫡
Rebased
I had to slightly change the tests in feature_assumeutxo.py because I changed the encoding format of the dump. I added 2 bytes to the offsets to account for the new 2-byte size field.
Did some testing with f5d2014e96240aab9540aaba0a792710aa7725e7.
Using contrib/devtools/utxo_snapshot.sh on signet I get the same txoutset_hash. The resulting file is also identical to what I generated earlier, see #26045 (comment). I was then able to load the snapshot and sync the chain (to the tip in about a minute, wonderful) and complete the background sync.
I also generated a mainnet snapshot which was also identical to what I found in the last test, so I have not tried loading again.
Will review the code later.
Co-authored-by: TheCharlatan <seb.kung@gmail.com>
Rebased. I had to change the offset again in feature_assumeutxo.py.
```diff
@@ -76,6 +76,7 @@ def test_invalid_snapshot_scenarios(self, valid_snapshot_path):
     bad_snapshot_path = valid_snapshot_path + '.mod'

     def expected_error(log_msg="", rpc_details=""):
+        print(log_msg)
```
Don't forget to drop this.
```diff
-    unsigned int iter{0};
+    std::vector<std::pair<uint32_t, Coin>> coins;
+
+    auto write_coins_to_file = [&](AutoFile& afile, const uint256& last_hash, const std::vector<std::pair<uint32_t, Coin>>& coins) {
+        afile << last_hash;
+        afile << static_cast<uint16_t>(coins.size());
```
4e194640a74ff784212e615c8e88c375cdfed039: In primitives/transaction.h we serialize `std::vector<CTxOut> vout` with a simple `s << tx.vout;`, without any reference to the size of the vector. Not sure if any magic is happening there under the hood that I missed.
Also, uint16_t implies a limit of 65,536 outputs per transaction. I don't think that's a consensus rule?
Yeah, I think the magic happens here: https://github.com/bitcoin/bitcoin/blob/master/src/serialize.h#L674
However, we can't use that because we are not looking at a full transaction but rather the outpoints that are still left in the UTXO set. But we basically mimic that behavior here.
On the 65,536: I guess the block size solves this for us for now, but I think it makes sense to use VARINT/CompactSize here.
A transaction with 65,537 OP_RETURN outputs should fit in a block.
If I start with P2TR outputs with this calculator, that's 2,818,159 vbyte. https://bitcoinops.org/en/tools/calc-size/
And then subtract 32 bytes per output: 2,818,159 - 65537 * 32 = 720,975 vbyte
cc @murchandamus can you add OP_RETURN to the dropdown? :-)
In any case it seems unsafe to rely on the block size here.
Hm, but OP_RETURNs are not included in the UTXO set, and we are serializing the UTXO set here, so I don't think this exact case can happen. But you are right that there are non-standard cases imaginable that make this possible, like just sending to OP_TRUE for example. So we should still make this robust.
Anyway, I am using CompactSize now in #29612 :)
```diff
+        coins_file >> txid;
+        uint16_t size{0};
+        coins_file >> size;
+
+        if(size > coins_left) {
+            LogPrintf("[snapshot] mismatch in coins count in snapshot metadata and actual snapshot data\n");
```
In the context of my remark above about maybe not serializing size: in that case this check would have to happen after the batch of coins is loaded, which seems fine.
Nothing to do here, since we do have serialize size, as you explained: #26045 (review)
```diff
+        for (int i = 0; i < size; i++) {
+            COutPoint outpoint;
+            Coin coin;
+            coins_file >> outpoint.n;
+            coins_file >> coin;
+            outpoint.hash = txid;
```
4e194640a74ff784212e615c8e88c375cdfed039 nit: maybe move this above where you set outpoint.n
```diff
+                FlushSnapshotToDisk(coins_cache, /*snapshot_loaded=*/false);
+            }
+        }
     }
+    } catch (const std::ios_base::failure&) {
+        LogPrintf("[snapshot] bad snapshot format or truncated snapshot after deserializing %d coins\n",
```
Unrelated, but why doesn't this use coins_processed?
```diff
@@ -5468,6 +5480,7 @@ bool ChainstateManager::PopulateAndValidateSnapshot(

     bool out_of_coins{false};
     try {
+        COutPoint outpoint;
```
I think this should be:
```cpp
Txid txid;
coins_file >> txid;
```
The current code might accidentally work because a COutPoint is a Txid followed by a single number (albeit uint32_t instead of uint16_t).
Concept ACK. Found a few issues during code review, see inline. @TheCharlatan wrote:
> The keys in leveldb are guaranteed to be lexicographically sorted. You just need to cache the last txid and flush the txid and outpoint tuples once the next one is reached in the database iterator.
It would be good to document in the code that we rely on this behavior (maybe in the header file, so someone who wants to replace leveldb is reminded). @luke-jr & @aureleoules wrote:
> Not sure it's worth it to save 20%. Presumably generic compressors could do better anyway?

> Yes I agree but at least with this implementation the utxo set is still readable as is and doesn't need decompression.
We might also at some point want to xor the file - for the same reason as #28207. That would make compression with external tools impossible.
```diff
@@ -2668,21 +2668,42 @@ UniValue CreateUTXOSnapshot(

     afile << metadata;

+    std::map<uint256, std::vector<std::pair<uint32_t, Coin>>> mapCoins;
+    unsigned int iter{0};
     COutPoint key;
+    uint256 last_hash;
     Coin coin;
-    unsigned int iter{0};
```
This move doesn't seem necessary.
```diff
+    std::vector<std::pair<uint32_t, Coin>> coins;
+
+    auto write_coins_to_file = [&](AutoFile& afile, const uint256& last_hash, const std::vector<std::pair<uint32_t, Coin>>& coins) {
+        afile << last_hash;
+        afile << static_cast<uint16_t>(coins.size());
+        for (auto [vout, coin] : coins) {
```
I think you should call vout here either n or vout_index, otherwise it might be confusing
Thanks for the reviews but I don't have time to address the comments so please mark this pull as Up for grabs.
Thanks for the great start @aureleoules! @fjahr can you take this one? Otherwise I can, but not sure how soon.
> Thanks for the great start @aureleoules!

> @fjahr can you take this one? Otherwise I can, but not sure how soon.
Yeah, I will reopen this shortly.