This adds a fuzz target for the ScriptPubKeyMan migration. It creates a LegacyDataSPKM which can have some keys/scripts/etc., and then migrates it to descriptors. I tried to keep it as compatible as possible with the future legacy wallet removal.
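At a high level the target looks roughly like this. This is a condensed sketch, not the actual diff: `wallet` is assumed to come from the standard fuzz wallet setup (omitted here), and only the calls quoted in the review comments below are taken from the real code.

```cpp
// Condensed sketch of the target's flow; `wallet` is assumed to come
// from the standard fuzz wallet setup (omitted here).
FUZZ_TARGET(spkm_migration)
{
    FuzzedDataProvider fuzzed_data_provider{buffer.data(), buffer.size()};

    // Generate a handful of keys to seed the legacy store.
    std::vector<CKey> keys;
    LIMITED_WHILE(fuzzed_data_provider.ConsumeBool(), 8) {
        CKey key;
        key.MakeNewKey(/*fCompressed=*/fuzzed_data_provider.ConsumeBool());
        keys.push_back(key);
    }
    if (keys.empty()) return;

    // Load the keys into the legacy spkm...
    auto& legacy_data{*wallet.GetOrCreateLegacyScriptPubKeyMan()};
    for (const CKey& key : keys) {
        legacy_data.LoadKey(key, key.GetPubKey());
    }

    // ...then migrate it to descriptors: every loaded key should end up
    // covered by at least one descriptor spkm.
    std::optional<MigrationData> res{legacy_data.MigrateToDescriptor()};
    if (res.has_value()) {
        assert(static_cast<int>(res->desc_spkms.size()) >= static_cast<int>(keys.size()));
    }
}
```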
223+ keys.push_back(key);
224+ }
225+
226+ if (keys.empty()) return;
227+
228+ auto& legacy_spkm{*wallet.GetOrCreateLegacyScriptPubKeyMan()};
You might need to add a mechanism to mock the bdb reader class just so you can feed the migration process with a hand-crafted LegacyDataSPKM. This will let you avoid crafting the bdb file manually once bdb is no longer compiled in.
You’re right, I’ll move this PR to draft to work on it.
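For illustration, feeding the migration a hand-crafted legacy store, with no bdb file involved, could look roughly like the sketch below. `legacy_data` is used as in the diff quoted further down; `seed_id` and `LoadHDChain` are assumptions based on the current codebase.

```cpp
// Sketch: populate the legacy spkm purely in memory instead of reading
// a bdb file. HaveKey/LoadKey/CHDChain appear in the actual diff;
// seed_id and LoadHDChain are assumptions.
CKey key;
key.MakeNewKey(/*fCompressed=*/true);
if (!legacy_data.HaveKey(key.GetPubKey().GetID())) {
    const bool ok{legacy_data.LoadKey(key, key.GetPubKey())};
    assert(ok);
}

// Optionally attach an HD chain, as the target does.
CHDChain hd_chain;
hd_chain.nVersion = CHDChain::VERSION_HD_CHAIN_SPLIT;
hd_chain.seed_id = key.GetPubKey().GetID();
legacy_data.LoadHDChain(hd_chain);
```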
The target now works directly on a hand-crafted LegacyDataSPKM. Note that this target is about the spkm migration only, not the whole process.
243+ if (legacy_data.HaveKey(keys[key_index].GetPubKey().GetID())) return;
244+ if (legacy_data.LoadKey(keys[key_index], keys[key_index].GetPubKey())) {
245+ load_key_count++;
246+ if (fuzzed_data_provider.ConsumeBool()) {
247+ CHDChain hd_chain;
248+ hd_chain.nVersion = CHDChain::VERSION_HD_CHAIN_SPLIT;
284+ }
285+
286+ std::optional<MigrationData> res{legacy_data.MigrateToDescriptor()};
287+ if (res.has_value()) {
288+ assert(static_cast<int>(res->desc_spkms.size()) >= load_key_count);
289+ if (!res->solvable_descs.empty()) assert(script_count > 0);
Would it make sense for script_count to really be a watchonly script count, and have it increment only when a watchonly script is added?
In fact, I tried to track watchonly and spendable scripts, but I don't think it's simple, since we load keys, an hd chain, diverse scripts, etc. I ended up writing code similar to, or copied from, wallet/scriptpubkeyman.cpp, which I don't think is a valuable approach here.
nevermind! will address it next push.
283+ );
284+ }
285+
286+ std::optional<MigrationData> res{legacy_data.MigrateToDescriptor()};
287+ if (res.has_value()) {
288+ assert(static_cast<int>(res->desc_spkms.size()) >= load_key_count);
desc_spkms should be load_key_count plus the number of spendable scripts.
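If that bookkeeping were added, the assertion could be tightened along these lines; `spendable_script_count` is a hypothetical counter, incremented only when a spendable script is loaded:

```cpp
// Hypothetical tightened invariant: each loaded key and each spendable
// script should produce its own descriptor spkm.
assert(static_cast<int>(res->desc_spkms.size()) >= load_key_count + spendable_script_count);
```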
263+ },
264+ [&] {
265+ CScript script{ConsumeScript(fuzzed_data_provider)};
266+ const auto key_index{fuzzed_data_provider.ConsumeIntegralInRange<int>(0, keys.size() - 1)};
267+ LIMITED_WHILE(fuzzed_data_provider.ConsumeBool(), 2) {
268+ CallOneOf(
Force-pushed, addressing multisig and hd chain coverage:
🚧 At least one of the CI tasks failed. Debug: https://github.com/bitcoin/bitcoin/runs/33561912794
Try to run the tests locally, according to the documentation. However, a CI failure may still happen due to a number of reasons, for example:
Possibly due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.
A sanitizer issue, which can only be found by compiling with the sanitizer and running the affected test.
An intermittent issue.
Leave a comment here, if you need help tracking down a confusing failure.
🚧 At least one of the CI tasks failed. Debug: https://github.com/bitcoin/bitcoin/runs/33784139981
Force-pushed, changing the fuzz target to exercise the ApplyMigrationData function. Now, after migrating the spkm to descriptors, it creates some transactions and applies the migrated data.
Note that the CI failure is expected (see #31374):
```
[09:11:48.112] Run spkm_migration with args ['/ci_container_base/ci/scratch/build-x86_64-pc-linux-gnu/src/test/fuzz/fuzz', '-max_total_time=60']
[09:11:48.112] INFO: Running with entropic power schedule (0xFF, 100).
[09:11:48.112] INFO: Seed: 2346481497
[09:11:48.112] INFO: Loaded 1 modules (629856 inline 8-bit counters): 629856 [0x55a29859f118, 0x55a298638d78),
[09:11:48.112] INFO: Loaded 1 PC tables (629856 PCs): 629856 [0x55a298638d78,0x55a298fd5378),
[09:11:48.112] INFO: -max_len is not provided; libFuzzer will not generate inputs larger than 4096 bytes
[09:11:48.112] wallet/wallet.cpp:4088 ApplyMigrationData: Assertion `!m_cached_spks.empty()' failed.
```
326+
327+ WalletBatch batch{wallet.GetDatabase()};
328+ LOCK(wallet.cs_wallet);
329+ CBlockLocator loc{chainstate.m_chain.GetLocator()};
330+ assert(batch.WriteBestBlock(loc));
331+ (void)wallet.ApplyMigrationData(batch, *res);
This calls ApplyMigrationData without executing the entire DoMigration step. That's just incorrect. The code skips the creation of the watch-only and solvable wallets from the MigrationData result, which causes a crash inside ApplyMigrationData because it tries to access the watch-only wallet.
DoMigration can't be called directly here.
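To make the ordering issue concrete, a check along these lines would surface the problem before the crash. The `watchonly_wallet`/`solvable_wallet` field names are taken from the current codebase, so treat them as assumptions:

```cpp
// DoMigration creates the watch-only/solvable wallets and stores them in
// the MigrationData before applying it; skipping that step leaves null
// wallet pointers for ApplyMigrationData to dereference.
std::optional<MigrationData> res{legacy_data.MigrateToDescriptor()};
if (res.has_value() && !res->solvable_descs.empty()) {
    assert(res->watchonly_wallet || res->solvable_wallet); // fails: never created
}
```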
Force-pushed: now the fuzz target runs the entire migration process (MigrateLegacyToDescriptor). Also, I added a new commit that allows migrating in-memory wallets, to speed up the target and avoid timeouts.
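For reference, the full entry point exercised now looks roughly like this; the signature is paraphrased from wallet/wallet.h, so treat the parameter list as an assumption, and `context` is an assumed WalletContext from the fuzz setup:

```cpp
// Paraphrased sketch: run the whole migration (including watch-only and
// solvable wallet creation) instead of calling ApplyMigrationData directly.
util::Result<MigrationResult> result{
    MigrateLegacyToDescriptor(/*wallet_name=*/"fuzzed_wallet", /*passphrase=*/"", context)};
if (!result) {
    // Migration may legitimately fail on fuzzed input; the target only
    // requires that it does not crash.
}
```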
For in-memory SQLite databases, exclusive locking is generally not
necessary because in-memory dbs are private to the connection that
created them.
By using in-memory databases, we have no db files to delete
and restore.
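As background, plain SQLite illustrates why: each connection to ":memory:" gets its own private database, so there is no shared file to lock, delete, or restore. This is a standalone example, not Bitcoin Core code:

```cpp
// Standalone SQLite demo: two ":memory:" connections are fully
// independent databases; no on-disk file ever exists, so exclusive
// file locking has nothing to protect.
#include <sqlite3.h>
#include <cassert>

int main()
{
    sqlite3* db1{nullptr};
    sqlite3* db2{nullptr};
    assert(sqlite3_open(":memory:", &db1) == SQLITE_OK);
    assert(sqlite3_open(":memory:", &db2) == SQLITE_OK);

    // A table created on the first connection is invisible to the second.
    assert(sqlite3_exec(db1, "CREATE TABLE t(x)", nullptr, nullptr, nullptr) == SQLITE_OK);
    assert(sqlite3_exec(db2, "SELECT * FROM t", nullptr, nullptr, nullptr) != SQLITE_OK);

    sqlite3_close(db1);
    sqlite3_close(db2);
    return 0;
}
```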