fuzz: compact block harness #33300

Crypt-iQ wants to merge 7 commits into bitcoin:master from Crypt-iQ:cmpctblock-fuzz-0807-fs, changing 11 files (+507 −19)
  1. Crypt-iQ commented at 10:33 pm on September 3, 2025: contributor

    Posting this up to get feedback; there are some design flaws with the approach in this PR. Coverage is here (look in src/blockencodings.cpp, relevant compact block bits in src/net_processing.cpp).

    This harness can make (in)valid blocks, reconstruct blocks with in-mempool txns, and mark peers as HB; it has high stability in AFL++ (~98-99%).

    The main downside is that there are filesystem operations. In the .init function initialize_cmpctblock, a chain of 200 blocks is created. Each fuzzing iteration then copies this statically named, “cached” data directory to a temporary directory that gets deleted at the end of the iteration. If each fuzzing iteration instead mines its own chain, execs/s slows to a crawl (~0.5/s or less), which would also make CI runs really slow.
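
    The per-iteration copy can be pictured with plain std::filesystem (an illustrative sketch with made-up names, not the harness code):

    #include <filesystem>
    #include <string>

    namespace fs = std::filesystem;

    // One fuzz iteration: copy the cached datadir into a fresh working
    // directory, exercise the target against it, then delete the directory.
    void RunOneIteration(const fs::path& cached_dir, const std::string& unique_suffix)
    {
        const fs::path work_dir{fs::temp_directory_path() / ("cmpctblock_" + unique_suffix)};
        fs::create_directories(work_dir);
        fs::copy(cached_dir, work_dir, fs::copy_options::recursive);
        // ... feed the fuzz input to a node running on work_dir ...
        fs::remove_all(work_dir);
    }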

  2. DrahtBot added the label Tests on Sep 3, 2025
  3. DrahtBot commented at 10:33 pm on September 3, 2025: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage & Benchmarks

    For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33300.

    Reviews

    See the guideline for information on the review process.

    Type Reviewers
    Concept ACK l0rinc

    If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #33740 (RFC: bench: Add multi thread benchmarking by fjahr)
    • #33637 (refactor: optimize block index comparisons (1.4-6.8x faster) by l0rinc)
    • #29415 (Broadcast own transactions only via short-lived Tor or I2P connections by vasild)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  4. Crypt-iQ force-pushed on Sep 3, 2025
  5. Crypt-iQ force-pushed on Sep 3, 2025
  6. maflcko commented at 6:45 am on September 4, 2025: member

    There does not seem to be a way without signal handlers to delete the static, cached datadir when fuzzing is complete. Another issue is that multi-core fuzzing via AFL++ does not work because each worker will try to wipe the cached datadir (created via -testdatadir which wipes if the directory exists) which will cause other workers to crash if they are trying to copy it. I could not figure out a way for each worker to have their own cached datadir.

    Is my understanding correct that AFL will run init (create the static dir), then fork into several fuzz processes, which run for N iterations and then shut down and delete the static dir? (On the next re-fork, it will fail to find the static dir.)

    If yes, a solution could be to lazy-init the static dir after init, after the fork. Also, you may have to use the pid (or os-rand) to name the static folder to avoid collisions.

    As for deleting the dirs, maybe it is possible to spin up several testing setups at the same time:

    • One dummy testing setup to create a root dir, which is cleaned up
    • One testing setup living inside that root dir for the static folder (in a subfolder)
    • One testing setup living inside the root dir with a copy of the static folder

    (Not sure if this is possible, though)

  7. dergoegge commented at 8:59 am on September 4, 2025: member

    Is my understanding correct that AFL will run init (create the static dir), then fork into several fuzz processes, which run for N iterations and then shut down and delete the static dir? (On the next re-fork, it will fail to find the static dir.)

    We removed the __AFL_INIT call, so it will actually fork prior to init; I think all that’s needed is to randomize the static dir name, as you also suggested.

  8. Crypt-iQ commented at 1:58 pm on September 4, 2025: contributor

    Is my understanding correct that AFL will run init (create the static dir), then fork into several fuzz processes, which run for N iterations and then shut down and delete the static dir? (On the next re-fork, it will fail to find the static dir.)

    My description was a bit vague. Since the harness uses -testdatadir in init to create the static datadir, it does not get deleted in the TestingSetup destructor (by design of -testdatadir). Any time init is called, it will wipe the static datadir if it already exists. This can happen after N iterations (100,000 currently) or with a newly spawned worker after forking, since the fork point is prior to init like @dergoegge mentioned. If the datadir is wiped while a fuzz iteration is trying to copy it, the copy of the now non-existent directory will crash.

    a solution could be to lazy-init the static dir after init, after the fork

    I considered this, but I think it introduces some (slight) non-determinism, as the directory may or may not exist?

    Also, you may have to use the pid (or os-rand) to name the static folder to avoid collisions.

    Yup, I agree. I think this randomization can only be done if the static datadir can be deleted at the end of fuzzing; otherwise there will be lots of these lying around?

    As for deleting the dirs, maybe it is possible to spin up several testing setups at the same time:

    Why is the dummy testing setup needed in this example? Could it instead just be two testing setups (one for the static datadir, one for the copied datadir)? I considered something similar to this as well, where there is a static TestingSetup that gets destructed at the end of fuzzing (and wipes the static datadir) and one for each fuzz iteration (that gets destructed at the end of the iteration), but I was worried that the static TestingSetup would introduce more non-determinism. For example, what if this static TestingSetup has scheduling going on or is using the static datadir while we try to copy from it?

  9. Crypt-iQ force-pushed on Sep 5, 2025
  10. Crypt-iQ commented at 9:41 pm on September 5, 2025: contributor
    I was able to get AFL++ multi-core fuzzing to work by adding a static FuzzedDirectoryWrapper instance that deletes the passed path in its destructor, as well as adding a random element to each statically named datadir. Each instance is in the 98-99% stability range and does not crash after hitting 100,000 iterations (yay). I was not able to use multiple TestingSetups at the same time without causing several panics due to assertions about globals. I will fix the lint and tidy jobs and also see if the deterministic-fuzz-coverage issues still pop up.
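
    For illustration, the RAII idea might look roughly like this (a hedged sketch; the actual FuzzedDirectoryWrapper in this PR may differ):

    #include <filesystem>
    #include <system_error>
    #include <utility>

    namespace fs = std::filesystem;

    // A single static instance is destroyed at process exit, which is the
    // only point where the static, cached datadir can safely be removed.
    class FuzzedDirectoryWrapper
    {
        fs::path m_path;

    public:
        explicit FuzzedDirectoryWrapper(fs::path path) : m_path{std::move(path)} {}
        ~FuzzedDirectoryWrapper()
        {
            std::error_code ec; // best-effort: ignore errors during teardown
            fs::remove_all(m_path, ec);
        }
    };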
  11. Crypt-iQ force-pushed on Sep 9, 2025
  12. Crypt-iQ commented at 4:39 pm on September 9, 2025: contributor

    There is non-determinism here because m_dirty_blockindex is a std::set<CBlockIndex*> and sorts based on the pointer value, which changes between runs. This can be fixed by adding a function object that compares the block hashes. However, I did not know if this should be added just for fuzz code, and I have not run benchmarks.
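
    A minimal sketch of such a function object (the later commit in this PR names it CBlockIndexBlockHashComparator; exact code may differ):

    #include <chain.h> // for CBlockIndex

    // Order CBlockIndex* by block hash instead of pointer value, so the
    // set's iteration order is stable across runs of the same input.
    struct CBlockIndexBlockHashComparator {
        bool operator()(const CBlockIndex* a, const CBlockIndex* b) const
        {
            return a->GetBlockHash() < b->GetBlockHash();
        }
    };

    // m_dirty_blockindex then becomes:
    // std::set<CBlockIndex*, CBlockIndexBlockHashComparator> m_dirty_blockindex;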

    After patching the above non-determinism locally, there is also non-determinism in the scheduler because this fuzz test uses the scheduler. I am guessing this is due to the RegisterValidationInterface call in the fuzz test notifying when transactions are being added or removed, and blocks being mined.

  13. Crypt-iQ marked this as ready for review on Sep 9, 2025
  14. in src/test/fuzz/cmpctblock.cpp:40 in 0de9143aa0 outdated
    35+using namespace util::hex_literals;
    36+
    37+namespace {
    38+
    39+//! Fee each created tx will pay.
    40+const CAmount amount_fee = 1000;
    


    dergoegge commented at 3:35 pm on September 11, 2025:

    nit

    0const CAmount AMOUNT_FEE{1000};
    

    Crypt-iQ commented at 3:38 pm on September 18, 2025:
    Thanks, done.
  15. in src/test/fuzz/cmpctblock.cpp:359 in 0de9143aa0 outdated
    354+                    return;
    355+                }
    356+
    357+                // Choose an existing block and send a HEADERS message for it.
    358+                size_t index = fuzzed_data_provider.ConsumeIntegralInRange<size_t>(0, num_blocks - 1);
    359+                CBlock block = *info[index].block;
    


    dergoegge commented at 3:37 pm on September 11, 2025:
    0                CBlock block = *info[index].block;
    1                block.vtx.clear();
    

    The headers message is a vector of blocks without transactions, so I think you should clear the copy of the block here? Although, technically this doesn’t matter because there’s only one header in the vector being sent, so the following txs are just ignored.


    Crypt-iQ commented at 3:38 pm on September 18, 2025:
    Ah nice catch, I completely missed this. Done.
  16. in src/test/fuzz/cmpctblock.cpp:320 in 0de9143aa0 outdated
    315+                    // Remove from shorttxids since we've prefilled. Subtract however many txs have been prefilled.
    316+                    cmpctBlock.EraseShortTxIDs(i - 1 - num_erased);
    317+                    ++num_erased;
    318+                }
    319+
    320+                CBlockHeaderAndShortTxIDs baseCmpctBlock = cmpctBlock;
    


    dergoegge commented at 3:44 pm on September 11, 2025:
    0                CBlockHeaderAndShortTxIDs baseCmpctBlock = cmpctBlock;
    

    nit: there are a few instances of camel case usage here. Our convention is to use snake case (see dev notes).


    Crypt-iQ commented at 3:39 pm on September 18, 2025:
    Thanks, I believe I’ve fixed all of the camel case.
  17. dergoegge changes_requested
  18. dergoegge commented at 3:29 pm on September 12, 2025: member
    Did a first pass, overall approach looks good to me
  19. Crypt-iQ force-pushed on Sep 18, 2025
  20. Crypt-iQ force-pushed on Sep 18, 2025
  21. Crypt-iQ force-pushed on Sep 18, 2025
  22. Crypt-iQ force-pushed on Sep 18, 2025
  23. Crypt-iQ commented at 4:18 pm on September 18, 2025: contributor

    The latest push adds two commits:

    • a5619ea631bd8b93b4ef02a20abb8c1c0705d8e4 “test: add setup_validation_interface_no_scheduler to TestOpts”
      • Disables the scheduler completely if set. This is needed because this harness creates a TestingSetup inside FUZZ_TARGET and scheduling a promise can be non-deterministic as the scheduler’s serviceQueue may start with a non-empty taskQueue. This is not an issue in the other fuzz tests because their TestingSetup is created in .init and ResetCoverageCounters is called after.
    • e2f921458913bcbbe74115cdb2174b0ab31784f2 “node: sort m_dirty_blockindex by block hash”
      • I am ok with this commit being removed and want to know what others think. This sorts by block hash rather than memory address and does introduce a slow-down in production code for no production benefit (I think because memory addresses are generally going to be increasing while inserting into m_dirty_blockindex, whereas block hashes won’t be). I added it to show the change needed to make this harness fully deterministic. It is also possible to add an #ifdef so that it doesn’t negatively affect production code (see the sketch after this list), but I also don’t want to set a precedent and litter the codebase with fuzz-specific macros. The sorting is needed as otherwise leveldb’s MemTable will be non-deterministic across runs. I tried some alternatives, but this is the best I could come up with that made the deterministic-fuzz-coverage script happy.
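
    A hypothetical shape of the #ifdef alternative mentioned above (the macro name is illustrative; this is not what the PR does):

    #ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
    // Deterministic iteration order for fuzzing: sort by block hash.
    std::set<CBlockIndex*, CBlockIndexBlockHashComparator> m_dirty_blockindex;
    #else
    // Production: cheaper pointer ordering.
    std::set<CBlockIndex*> m_dirty_blockindex;
    #endif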
  24. in src/test/fuzz/cmpctblock.cpp:261 in a473ec6509 outdated
    256+            }
    257+        }
    258+
    259+        CBlockIndex* pindexPrev{WITH_LOCK(::cs_main, return setup->m_node.chainman->m_blockman.LookupBlockIndex(prev))};
    260+        setup->m_node.chainman->GenerateCoinbaseCommitment(*block, pindexPrev);
    261+        FinalizeHeader(header, *setup->m_node.chainman);
    


    marcofleon commented at 11:26 pm on October 1, 2025:
    Shouldn’t *block be passed in here instead of header? Right now, I think the block just ends up with the random nonce from when the header was created. Although this still works half the time with the simplified PoW check. Also, calling GetHash() on the header includes the merkle root, so finalizing the header should happen after we calculate the merkle root below.

    Crypt-iQ commented at 11:24 am on October 2, 2025:
    Ah yes, nice catch. I was wondering why the PoW check was still failing so many times in the coverage.

    Crypt-iQ commented at 4:41 pm on October 2, 2025:
    Fixed in ba89cdd93ee39e884c924ae12737dd06639cee8f; verified that the CheckBlockHeader call in validation.cpp no longer fails.
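
    For reference, the fix amounts to reordering along these lines (a sketch assuming BlockMerkleRoot from consensus/merkle.h; the actual commit may differ):

    // Compute the merkle root first, since the header hash commits to it...
    block->hashMerkleRoot = BlockMerkleRoot(*block);
    // ...then grind the nonce on the block itself (CBlock derives from
    // CBlockHeader), rather than on the stale standalone header copy.
    FinalizeHeader(*block, *setup->m_node.chainman);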
  25. Crypt-iQ force-pushed on Oct 2, 2025
  26. Crypt-iQ commented at 4:43 pm on October 2, 2025: contributor

    e2f921458913bcbbe74115cdb2174b0ab31784f2 -> ed813c48f826d083becf93c741b483774c850c86:

    • Fixes an issue pointed out by @marcofleon with the PoW checks failing
    • Adds more descriptive commit messages
  27. in src/test/util/setup_common.cpp:214 in ed813c48f8
    211+
    212+        m_path_root = rand_path();
    213+        TryCreateDirectories(m_path_root);
    214+
    215+        // Copy the cached directory into the newly created temporary directory.
    216+        fs::copy(cached_dir, m_path_root, fs::copy_options::recursive);
    


    stringintech commented at 2:29 pm on October 7, 2025:

    When fuzzing on macOS (Apple Silicon), I ran into lots of crashes when copying the cached dir into the destination:

    0libc++abi: terminating due to uncaught exception of type std::__1::__fs::filesystem::filesystem_error: filesystem error: in copy_file: File exists ["/var/folders/wc/3l6l_0b92zzg54v6rmvqqn300000gn/T/test_common bitcoin/cmpctblock/6c5e38c6dcfbc2607321/regtest/blocks/xor.dat"] ["/var/folders/wc/3l6l_0b92zzg54v6rmvqqn300000gn/T/cmpctblock_cachedca12974312651650d6b0/datadir/regtest/blocks/xor.dat"]
    

    The issue is that the destination dir may already exist before TryCreateDirectories(), and it happens when the AFL++ fork server is enabled (it does not happen when fuzzing with AFL_NO_FORKSRV=1). It appears that forked processes are running into collisions when generating the random path (rand_path()) for creating the destination dir, and when we switch to strong randomness (like how the static dir name is generated), the issue seems to be resolved:

     0diff --git a/src/test/util/setup_common.cpp b/src/test/util/setup_common.cpp
     1index e9768d4c5c..ac477caa62 100644
     2--- a/src/test/util/setup_common.cpp
     3+++ b/src/test/util/setup_common.cpp
     4@@ -164,8 +164,9 @@ BasicTestingSetup::BasicTestingSetup(const ChainType chainType, TestOpts opts)
     5         // tests, such as the fuzz tests to run in several processes at the
     6         // same time, add a random element to the path. Keep it small enough to
     7         // avoid a MAX_PATH violation on Windows.
     8-        const auto rand{HexStr(g_rng_temp_path.randbytes(10))};
     9-        return fs::temp_directory_path() / TEST_DIR_PATH_ELEMENT / test_name / rand;
    10+        std::vector<unsigned char> random_path_suffix(10);
    11+        GetStrongRandBytes(random_path_suffix);
    12+        return fs::temp_directory_path() / TEST_DIR_PATH_ELEMENT / test_name / HexStr(random_path_suffix);
    13     };
    14 
    15     if (m_node.args->IsArgSet("-testdatadir")) {
    

    Crypt-iQ commented at 5:29 pm on October 7, 2025:
    Are you fuzzing with persistent mode?

    stringintech commented at 6:43 pm on October 7, 2025:
    Yes. I am compiling using afl-clang-fast++ (it seems that other options are either unavailable or obsolete on macOS). However, I just added a #undef __AFL_LOOP at the top of fuzz.cpp to disable persistent mode (I made sure I no longer see the [+] Persistent mode binary detected log when running) and reverted the above change to rand_path(), but the same crash still occurs.

    Crypt-iQ commented at 11:27 am on October 8, 2025:
    Think this is because the child processes have the same g_rng_temp_path, so they create the same directory. Will need to see if GetStrongRandBytes introduces non-determinism. I’m confused why fuzzing on Debian didn’t bring this up.

    Crypt-iQ commented at 4:17 pm on October 8, 2025:
    Thanks for pointing this out; GetStrongRandBytes does not introduce non-determinism. I’ll change this the next time I push by making rand_path accept a bool strong, so that the original behavior when -testdatadir is set is unchanged.
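
    A sketch of what that could look like (the parameter name and exact shape are hypothetical):

    const auto rand_path{[&](bool strong) {
        std::vector<unsigned char> rand_bytes(10);
        if (strong) {
            GetStrongRandBytes(rand_bytes); // unique even across forked workers
        } else {
            rand_bytes = g_rng_temp_path.randbytes(10); // original, reproducible stream
        }
        return fs::temp_directory_path() / TEST_DIR_PATH_ELEMENT / test_name / HexStr(rand_bytes);
    }};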

    stringintech commented at 3:54 pm on October 9, 2025:

    Since the crash I explained wasn’t observed by others, I doubted whether it was actually coming from multiple child processes running simultaneously or from missed data dir cleanup. So I added a bunch of logs (including pid and parent pid in each) to investigate this further and found this after reading the logs for multiple runs:

    I observed that AFL++ ran one worker process at a time, not multiple workers in parallel. Each worker created a static cached directory once, then ran multiple fuzz iterations sequentially, with each iteration creating and destroying a temporary data directory. This worked fine until a worker died unexpectedly (not yet sure why), leaving behind both its last temp data directory and its static cached directory without cleanup.

    Each new worker had the same g_rng_temp_path as the previous worker, so it generated the same temp directory paths. When the next worker started, it faced the copy crash at exactly the same iteration where the previous worker died. For example, if process A died before completing iteration N, it left behind the temp data directory at that iteration. Process B then faced the fs::copy() crash at exactly iteration N, since the generated sequence of temp data directory paths was identical. This worker also died, leaving behind another orphaned static cached directory. This flow would repeat, eventually filling the disk with orphaned directories.


    stringintech commented at 4:06 pm on October 9, 2025:

    I’m confused why fuzzing on debian didn’t bring this up.

    Yeah, I am also confused by this. While my case wasn’t exactly a multi-process execution issue, it seems a true multi-process execution should observe the same crash because of the same g_rng_temp_path across processes.


    Crypt-iQ commented at 4:30 pm on October 9, 2025:
    Some signals are not specifically handled by the fuzzer, so if it creates a datadir (e.g. in process_messages) and you Ctrl-C, it will leave it around instead of deleting it in ~BasicTestingSetup. In any case, not using g_rng_temp_path seems like the right choice.
  28. in src/test/fuzz/cmpctblock.cpp:147 in ba89cdd93e outdated
    142+
    143+    LOCK(NetEventsInterface::g_msgproc_mutex);
    144+
    145+    std::vector<CNode*> peers;
    146+    auto& connman = *static_cast<ConnmanTestMsg*>(setup->m_node.connman.get());
    147+    for (int i = 0; i < 3; ++i) {
    


    marcofleon commented at 4:40 pm on October 8, 2025:
    As we discussed a bit offline, increasing the number of peers here to greater than 3 should be enough to hit some of the missing “eviction” logic in MaybeSetPeerAsAnnouncingHeaderAndIDs. Maybe increasing to 8 peers would be fine? Although not sure if that would test anything more (somewhere else) than if it were 4.

    Crypt-iQ commented at 7:00 pm on October 8, 2025:
    Will compare the coverage from increasing to 4 vs increasing to 8.

    Crypt-iQ commented at 4:03 am on October 24, 2025:
    Neither 4 nor 8 helped hit the case, will look into it more.

    marcofleon commented at 3:11 pm on October 30, 2025:

    Non-blocking I’d say, so could be a followup once it’s figured out.

    Strange though, now I’m curious.


    Crypt-iQ commented at 3:48 pm on October 30, 2025:
    I think outbound connections are getting disconnected for some reason; latest coverage is here. On a local branch, I modified the harness to return early if any of the test nodes did not complete the version-verack handshake, but coverage is still missing.
  29. in src/node/blockstorage.cpp:496 in ed813c48f8 outdated
    491@@ -487,7 +492,7 @@ bool BlockManager::WriteBlockIndexDB()
    492     }
    493     std::vector<const CBlockIndex*> vBlocks;
    494     vBlocks.reserve(m_dirty_blockindex.size());
    495-    for (std::set<CBlockIndex*>::iterator it = m_dirty_blockindex.begin(); it != m_dirty_blockindex.end();) {
    496+    for (std::set<CBlockIndex*, CBlockIndexBlockHashComparator>::iterator it = m_dirty_blockindex.begin(); it != m_dirty_blockindex.end();) {
    


    marcofleon commented at 4:53 pm on October 8, 2025:

    I feel like I might be overlooking something here, but could we keep m_dirty_blockindex the same and then if EnableFuzzDeterminism() just sort vBlocks by hash before calling WriteBatchSync()?

    I guess I’m asking: is there a reason why m_dirty_blockindex needs to be sorted by hash from the beginning vs right before the write?


    Crypt-iQ commented at 6:49 pm on October 8, 2025:

    Answered in the review club, also posting here:

    Without this commit, leveldb’s MemTable is non-deterministic since it depends on insert order. If we sort vBlocks right before calling WriteBatchSync, then the leveldb non-determinism is solved, but the number of times we call the comparison function varies since the initial ordering before sorting varies across runs of the same input. That means we need to sort when we insert into m_dirty_blockindex as the order of CBlockIndex* we insert here is deterministic (i.e. we’ll always insert A then B, even if it may sort as {A, B} or {B, A}). This is definitely confusing, I used pencil and paper to work this out.

    I did a simple benchmark and it showed ~O(n log n) slowdown. I was not sure how to bench this in a production scenario or what a realistic load here would look like. I guess it’s possible to bench two different IBDs, but there may be other non-IBD scenarios?


    Crypt-iQ commented at 2:07 pm on November 10, 2025:

    Ran this patch against master with -reindex (which puts lots of entries in vBlocks) and noticed no slowdown.

    cc @l0rinc, do you have any opinions about this? We need this for deterministic fuzzing, we can also wrap this in a fuzz-specific macro.


    l0rinc commented at 3:16 pm on November 11, 2025:

    Is my understanding correct that we want to remove useless jitter that doesn’t actually increase code coverage but confuses the fuzzer, such as the internal Random in leveldb’s skiplist? It seems to me that should be deterministic though, as long as we’re actually giving it the same work on the same thread: https://github.com/bitcoin/bitcoin/blob/a7e80676104b5c90c5b5e3bfab815d55a9061052/src/leveldb/db/skiplist.h#L330

    And (as I think you also mentioned) the write and iteration order should be deterministic, regardless of the inputs:

     0diff --git a/src/test/dbwrapper_tests.cpp b/src/test/dbwrapper_tests.cpp
     1--- a/src/test/dbwrapper_tests.cpp	(revision c44048159359f8a7335a00b768548b351b4181a5)
     2+++ b/src/test/dbwrapper_tests.cpp	(date 1762874342278)
     3@@ -422,5 +422,29 @@
     4     BOOST_CHECK(fs::exists(lockPath));
     5 }
     6 
     7+BOOST_AUTO_TEST_CASE(leveldb_memtable_rand256_sorted)
     8+{
     9+    const fs::path path{m_args.GetDataDirBase() / "dbwrapper_memtable_sorted"};
    10+    CDBWrapper dbw{{.path = path, .cache_bytes = 1_MiB, .memory_only = true, .wipe_data = true}};
    11+
    12+    // Write a batch of random keys.
    13+    CDBBatch batch{dbw};
    14+    for (int i{0}; i < 10; ++i) {
    15+        auto key = m_rng.rand256().ToString().substr(0, 10);
    16+        BOOST_TEST_MESSAGE("Adding: " + key);
    17+        batch.Write(key, /*value=*/0);
    18+    }
    19+    dbw.WriteBatch(batch);
    20+
    21+    // Get iterator and collect all keys as they are stored.
    22+    std::unique_ptr<CDBIterator> it{dbw.NewIterator()};
    23+    for (std::string prev{""}; it->Valid(); it->Next()) {
    24+        std::string key;
    25+        BOOST_REQUIRE(it->GetKey(key));
    26+        BOOST_TEST_MESSAGE("Got: " + key);
    27+        BOOST_CHECK_LE(prev, key);
    28+        prev = key;
    29+    }
    30+}
    31 
    32 BOOST_AUTO_TEST_SUITE_END()
    

    These identity sets would indeed cause cross-run nondeterminism. We’ve probably done it since we don’t expect to have duplicate objects, we just care about pointer-equality for otherwise volatile instances and we don’t really care about order, just set properties. But having sorted sets of pointers does introduce needless jitter that I agree we should stabilize - and we seem to have quite a few:

     0git grep -E 'std::set<[^>]+\*>'
     1src/node/blockstorage.cpp:    for (std::set<CBlockIndex*>::iterator it = m_dirty_blockindex.begin(); it != m_dirty_blockindex.end();) {
     2src/node/blockstorage.h:    std::set<CBlockIndex*> m_dirty_blockindex;
     3src/rpc/blockchain.cpp:    std::set<const CBlockIndex*> setOrphans;
     4src/rpc/blockchain.cpp:    std::set<const CBlockIndex*> setPrevs;
     5src/rpc/blockchain.cpp:    for (std::set<const CBlockIndex*>::iterator it = setOrphans.begin(); it != setOrphans.end(); ++it) {
     6src/test/fuzz/txgraph.cpp:        std::set<std::vector<TxGraph::Ref*>> clusters;
     7src/txgraph.cpp:    std::set<const Cluster*> expected_clusters[MAX_LEVELS];
     8src/txgraph.cpp:        std::set<const Cluster*> actual_clusters;
     9src/wallet/rpc/wallet.cpp:            std::set<ScriptPubKeyMan*> spkms;
    10src/wallet/wallet.cpp:std::set<ScriptPubKeyMan*> CWallet::GetActiveScriptPubKeyMans() const
    11src/wallet/wallet.cpp:    std::set<ScriptPubKeyMan*> spk_mans;
    12src/wallet/wallet.cpp:std::set<ScriptPubKeyMan*> CWallet::GetAllScriptPubKeyMans() const
    13src/wallet/wallet.cpp:    std::set<ScriptPubKeyMan*> spk_mans;
    14src/wallet/wallet.cpp:std::set<ScriptPubKeyMan*> CWallet::GetScriptPubKeyMans(const CScript& script) const
    15src/wallet/wallet.cpp:    std::set<ScriptPubKeyMan*> spk_mans;
    16src/wallet/wallet.h:    std::set<ScriptPubKeyMan*> GetActiveScriptPubKeyMans() const;
    17src/wallet/wallet.h:    std::set<ScriptPubKeyMan*> GetAllScriptPubKeyMans() const;
    18src/wallet/wallet.h:    std::set<ScriptPubKeyMan*> GetScriptPubKeyMans(const CScript& script) const;
    

    I think we should tackle these in a separate PR. In other cases we either have dedicated sorting methods for the red-black trees (e.g. #33637) or hash them and use a hashset (e.g. #30442). In this particular case sorting could help in seeding the MemTable since we’d be sorting already sorted content - as long as iteration is not a lot slower we should keep an std::set, but I would definitely add a dedicated sorter to avoid jitter (but I would prefer doing it in a separate PR and for all other cases as well. I will try something like that)


    Crypt-iQ commented at 3:33 pm on November 13, 2025:

    Is my understanding correct that we want to remove useless jitter that doesn’t actually increase code coverage but confuses the fuzzer, such as the internal Random in leveldb’s skiplist?

    It depends on the specific fuzz engine as far as exactly what happens (e.g. how it prioritizes or drops executed inputs). Speaking generally, if the fuzzer sees increased coverage (due to non-determinism) it could prioritize a no-coverage-gain input and if it sees decreased coverage it could de-prioritize an input where it shouldn’t.

    To clarify if I was unclear above, this non-determinism doesn’t show up in what hits the disk (as it’s a k-v store), only in the MemTable which depends on the order that we write to it.

    I think we should tackle these in a separate PR.

    This sounds fine to me and I can drop the commit here. #33469 is another case where a set of pointers caused non-determinism during fuzzing.

      0diff --git a/Users/eugenesiegel/btc/bitcoin/build_fuzzcov/fuzz_det_cov.show.t1.a.txt b/Users/eugenesiegel/btc/bitcoin/build_fuzzcov/fuzz_det_cov.show.t1.b.txt
      1index 8389ec5527..68708cc9fc 100644
      2--- a/Users/eugenesiegel/btc/bitcoin/build_fuzzcov/fuzz_det_cov.show.t1.a.txt
      3+++ b/Users/eugenesiegel/btc/bitcoin/build_fuzzcov/fuzz_det_cov.show.t1.b.txt
      4@@ -61402,15 +61402,15 @@
      5    45|      0|  return "leveldb.InternalKeyComparator";
      6    46|      0|}
      7    47|       |
      8-   48|  6.11k|int InternalKeyComparator::Compare(const Slice& akey, const Slice& bkey) const {
      9+   48|  6.01k|int InternalKeyComparator::Compare(const Slice& akey, const Slice& bkey) const {
     10    49|       |  // Order by:
     11    50|       |  //    increasing user key (according to user-supplied comparator)
     12    51|       |  //    decreasing sequence number
     13    52|       |  //    decreasing type (though sequence# should be enough to disambiguate)
     14-   53|  6.11k|  int r = user_comparator_->Compare(ExtractUserKey(akey), ExtractUserKey(bkey));
     15-   54|  6.11k|  if (r == 0) {
     16+   53|  6.01k|  int r = user_comparator_->Compare(ExtractUserKey(akey), ExtractUserKey(bkey));
     17+   54|  6.01k|  if (r == 0) {
     18   ------------------
     19-  |  Branch (54:7): [True: 61, False: 6.05k]
     20+  |  Branch (54:7): [True: 61, False: 5.95k]
     21   ------------------
     22    55|     61|    const uint64_t anum = DecodeFixed64(akey.data() + akey.size() - 8);
     23    56|     61|    const uint64_t bnum = DecodeFixed64(bkey.data() + bkey.size() - 8);
     24@@ -61426,8 +61426,8 @@
     25    60|     51|      r = +1;
     26    61|     51|    }
     27    62|     61|  }
     28-   63|  6.11k|  return r;
     29-   64|  6.11k|}
     30+   63|  6.01k|  return r;
     31+   64|  6.01k|}
     32    65|       |
     33    66|       |void InternalKeyComparator::FindShortestSeparator(std::string* start,
     34    67|     10|                                                  const Slice& limit) const {
     35@@ -61620,10 +61620,10 @@
     36    92|       |bool ParseInternalKey(const Slice& internal_key, ParsedInternalKey* result);
     37    93|       |
     38    94|       |// Returns the user key portion of an internal key.
     39-   95|  13.1k|inline Slice ExtractUserKey(const Slice& internal_key) {
     40-   96|  13.1k|  assert(internal_key.size() >= 8);
     41-   97|  13.1k|  return Slice(internal_key.data(), internal_key.size() - 8);
     42-   98|  13.1k|}
     43+   95|  12.9k|inline Slice ExtractUserKey(const Slice& internal_key) {
     44+   96|  12.9k|  assert(internal_key.size() >= 8);
     45+   97|  12.9k|  return Slice(internal_key.data(), internal_key.size() - 8);
     46+   98|  12.9k|}
     47    99|       |
     48   100|       |// A comparator for internal keys that uses a specified comparator for
     49   101|       |// the user key portion and breaks ties by decreasing sequence number.
     50@@ -62498,12 +62498,12 @@
     51    11|       |
     52    12|       |namespace leveldb {
     53    13|       |
     54-   14|  11.1k|static Slice GetLengthPrefixedSlice(const char* data) {
     55-   15|  11.1k|  uint32_t len;
     56-   16|  11.1k|  const char* p = data;
     57-   17|  11.1k|  p = GetVarint32Ptr(p, p + 5, &len);  // +5: we assume "p" is not corrupted
     58-   18|  11.1k|  return Slice(p, len);
     59-   19|  11.1k|}
     60+   14|  10.9k|static Slice GetLengthPrefixedSlice(const char* data) {
     61+   15|  10.9k|  uint32_t len;
     62+   16|  10.9k|  const char* p = data;
     63+   17|  10.9k|  p = GetVarint32Ptr(p, p + 5, &len);  // +5: we assume "p" is not corrupted
     64+   18|  10.9k|  return Slice(p, len);
     65+   19|  10.9k|}
     66    20|       |
     67    21|       |MemTable::MemTable(const InternalKeyComparator& comparator)
     68    22|      4|    : comparator_(comparator), refs_(0), table_(comparator_, &arena_) {}
     69@@ -62513,12 +62513,12 @@
     70    26|      5|size_t MemTable::ApproximateMemoryUsage() { return arena_.MemoryUsage(); }
     71    27|       |
     72    28|       |int MemTable::KeyComparator::operator()(const char* aptr,
     73-   29|  4.93k|                                        const char* bptr) const {
     74+   29|  4.83k|                                        const char* bptr) const {
     75    30|       |  // Internal keys are encoded as length-prefixed strings.
     76-   31|  4.93k|  Slice a = GetLengthPrefixedSlice(aptr);
     77-   32|  4.93k|  Slice b = GetLengthPrefixedSlice(bptr);
     78-   33|  4.93k|  return comparator.Compare(a, b);
     79-   34|  4.93k|}
     80+   31|  4.83k|  Slice a = GetLengthPrefixedSlice(aptr);
     81+   32|  4.83k|  Slice b = GetLengthPrefixedSlice(bptr);
     82+   33|  4.83k|  return comparator.Compare(a, b);
     83+   34|  4.83k|}
     84    35|       |
     85    36|       |// Encode a suitable internal key target for "target" and return it.
     86    37|       |// Uses *scratch as scratch space, and the returned pointer will point
     87@@ -62883,12 +62883,12 @@
     88   150|       |
     89   151|       |  // Accessors/mutators for links.  Wrapped in methods so we can
     90   152|       |  // add the appropriate barriers as necessary.
     91-  153|  5.55k|  Node* Next(int n) {
     92-  154|  5.55k|    assert(n >= 0);
     93+  153|  5.24k|  Node* Next(int n) {
     94+  154|  5.24k|    assert(n >= 0);
     95   155|       |    // Use an 'acquire load' so that we observe a fully initialized
     96   156|       |    // version of the returned Node.
     97-  157|  5.55k|    return next_[n].load(std::memory_order_acquire);
     98-  158|  5.55k|  }
     99+  157|  5.24k|    return next_[n].load(std::memory_order_acquire);
    100+  158|  5.24k|  }
    101   159|    587|  void SetNext(int n, Node* x) {
    102   160|    587|    assert(n >= 0);
    103   161|       |    // Use a 'release store' so that anybody who reads through this
    104@@ -62995,15 +62995,15 @@
    105   252|    414|}
    106   253|       |
    107   254|       |template <typename Key, class Comparator>
    108-  255|  5.13k|bool SkipList<Key, Comparator>::KeyIsAfterNode(const Key& key, Node* n) const {
    109+  255|  4.82k|bool SkipList<Key, Comparator>::KeyIsAfterNode(const Key& key, Node* n) const {
    110   256|       |  // null n is considered infinite
    111-  257|  5.13k|  return (n != nullptr) && (compare_(n->key, key) < 0);
    112-                                         ^4.52k
    113+  257|  4.82k|  return (n != nullptr) && (compare_(n->key, key) < 0);
    114+                                         ^4.42k
    115   ------------------
    116-  |  Branch (257:10): [True: 4.52k, False: 613]
    117-  |  Branch (257:28): [True: 3.25k, False: 1.26k]
    118+  |  Branch (257:10): [True: 4.42k, False: 403]
    119+  |  Branch (257:28): [True: 2.94k, False: 1.47k]
    120   ------------------
    121-  258|  5.13k|}
    122+  258|  4.82k|}
    123   259|       |
    124   260|       |template <typename Key, class Comparator>
    125   261|       |typename SkipList<Key, Comparator>::Node*
    126@@ -63011,18 +63011,18 @@
    127   263|    443|                                              Node** prev) const {
    128   264|    443|  Node* x = head_;
    129   265|    443|  int level = GetMaxHeight() - 1;
    130-  266|  5.13k|  while (true) {
    131+  266|  4.82k|  while (true) {
    132   ------------------
    133   |  Branch (266:10): [Folded - Ignored]
    134   ------------------
    135-  267|  5.13k|    Node* next = x->Next(level);
    136-  268|  5.13k|    if (KeyIsAfterNode(key, next)) {
    137+  267|  4.82k|    Node* next = x->Next(level);
    138+  268|  4.82k|    if (KeyIsAfterNode(key, next)) {
    139   ------------------
    140-  |  Branch (268:9): [True: 3.25k, False: 1.87k]
    141+  |  Branch (268:9): [True: 2.94k, False: 1.87k]
    142   ------------------
    143   269|       |      // Keep searching in this list
    144-  270|  3.25k|      x = next;
    145-  271|  3.25k|    } else {
    146+  270|  2.94k|      x = next;
    147+  271|  2.94k|    } else {
    148   272|  1.87k|      if (prev != nullptr) prev[level] = x;
    149                                          ^1.85k
    150   ------------------
    151@@ -63038,7 +63038,7 @@
    152   277|  1.43k|        level--;
    153   278|  1.43k|      }
    154   279|  1.87k|    }
    155-  280|  5.13k|  }
    156+  280|  4.82k|  }
    157   281|    443|}
    158   282|       |
    159   283|       |template <typename Key, class Comparator>
    160@@ -68240,7 +68240,7 @@
    161    31|    945|  Slice() : data_(""), size_(0) {}
    162    32|       |
    163    33|       |  // Create a slice that refers to d[0,n-1].
    164-   34|  27.6k|  Slice(const char* d, size_t n) : data_(d), size_(n) {}
    165+   34|  27.2k|  Slice(const char* d, size_t n) : data_(d), size_(n) {}
    166    35|       |
    167    36|       |  // Create a slice that refers to the contents of "s"
    168    37|  2.16k|  Slice(const std::string& s) : data_(s.data()), size_(s.size()) {}
    169@@ -68253,10 +68253,10 @@
    170    44|       |  Slice& operator=(const Slice&) = default;
    171    45|       |
    172    46|       |  // Return a pointer to the beginning of the referenced data
    173-   47|  22.1k|  const char* data() const { return data_; }
    174+   47|  21.9k|  const char* data() const { return data_; }
    175    48|       |
    176    49|       |  // Return the length (in bytes) of the referenced data
    177-   50|  40.5k|  size_t size() const { return size_; }
    178+   50|  40.1k|  size_t size() const { return size_; }
    179    51|       |
    180    52|       |  // Return true iff the length of the referenced data is zero
    181    53|    433|  bool empty() const { return size_ == 0; }
    182@@ -68322,16 +68322,16 @@
    183   101|       |
    184   102|     26|inline bool operator!=(const Slice& x, const Slice& y) { return !(x == y); }
    185   103|       |
    186-  104|  6.41k|inline int Slice::compare(const Slice& b) const {
    187-  105|  6.41k|  const size_t min_len = (size_ < b.size_) ? size_ : b.size_;
    188-                                                           ^227    ^6.19k
    189+  104|  6.31k|inline int Slice::compare(const Slice& b) const {
    190+  105|  6.31k|  const size_t min_len = (size_ < b.size_) ? size_ : b.size_;
    191+                                                           ^227    ^6.09k
    192   ------------------
    193-  |  Branch (105:26): [True: 227, False: 6.19k]
    194+  |  Branch (105:26): [True: 227, False: 6.09k]
    195   ------------------
    196-  106|  6.41k|  int r = memcmp(data_, b.data_, min_len);
    197-  107|  6.41k|  if (r == 0) {
    198+  106|  6.31k|  int r = memcmp(data_, b.data_, min_len);
    199+  107|  6.31k|  if (r == 0) {
    200   ------------------
    201-  |  Branch (107:7): [True: 89, False: 6.32k]
    202+  |  Branch (107:7): [True: 89, False: 6.22k]
    203   ------------------
    204   108|     89|    if (size_ < b.size_)
    205   ------------------
    206@@ -68344,8 +68344,8 @@
    207   ------------------
    208   111|      0|      r = +1;
    209   112|     89|  }
    210-  113|  6.41k|  return r;
    211-  114|  6.41k|}
    212+  113|  6.31k|  return r;
    213+  114|  6.31k|}
    214   115|       |
    215   116|       |}  // namespace leveldb
    216   117|       |
    217@@ -72351,22 +72351,22 @@
    218   106|       |const char* GetVarint32PtrFallback(const char* p, const char* limit,
    219   107|       |                                   uint32_t* value);
    220   108|       |inline const char* GetVarint32Ptr(const char* p, const char* limit,
    221-  109|  11.9k|                                  uint32_t* value) {
    222-  110|  11.9k|  if (p < limit) {
    223+  109|  11.7k|                                  uint32_t* value) {
    224+  110|  11.7k|  if (p < limit) {
    225   ------------------
    226-  |  Branch (110:7): [True: 11.9k, False: 4]
    227+  |  Branch (110:7): [True: 11.7k, False: 4]
    228   ------------------
    229-  111|  11.9k|    uint32_t result = *(reinterpret_cast<const uint8_t*>(p));
    230-  112|  11.9k|    if ((result & 128) == 0) {
    231+  111|  11.7k|    uint32_t result = *(reinterpret_cast<const uint8_t*>(p));
     232+  112|  11.7k|    if ((result & 128) == 0) {
     233   ------------------
     234-  |  Branch (112:9): [True: 11.9k, False: 0]
     235+  |  Branch (112:9): [True: 11.7k, False: 0]
     236   ------------------
     237-  113|  11.9k|      *value = result;
     238-  114|  11.9k|      return p + 1;
     239-  115|  11.9k|    }
     240-  116|  11.9k|  }
     241+  113|  11.7k|      *value = result;
     242+  114|  11.7k|      return p + 1;
     243+  115|  11.7k|    }
     244+  116|  11.7k|  }
     245   117|      4|  return GetVarint32PtrFallback(p, limit, value);
     246-  118|  11.9k|}
     247+  118|  11.7k|}
     248   119|       |
     249   120|       |}  // namespace leveldb
     250   121|       |
     251@@ -72399,9 +72399,9 @@
     252    24|       |
     253    25|      4|  const char* Name() const override { return "leveldb.BytewiseComparator"; }
     254    26|       |
     255-   27|  6.41k|  int Compare(const Slice& a, const Slice& b) const override {
     256-   28|  6.41k|    return a.compare(b);
     257-   29|  6.41k|  }
     258+   27|  6.31k|  int Compare(const Slice& a, const Slice& b) const override {
     259+   28|  6.31k|    return a.compare(b);
     260+   29|  6.31k|  }
     261    30|       |
     262    31|       |  void FindShortestSeparator(std::string* start,
     263    32|     10|                             const Slice& limit) const override {
     264diff --git a/Users/eugenesiegel/btc/bitcoin/build_fuzzcov/fuzz_det_cov.show.t0.a.txt b/Users/eugenesiegel/btc/bitcoin/build_fuzzcov/fuzz_det_cov.show.t0.b.txt
     265index 209490c47e..67c0258d57 100644
     266--- a/Users/eugenesiegel/btc/bitcoin/build_fuzzcov/fuzz_det_cov.show.t0.a.txt
     267+++ b/Users/eugenesiegel/btc/bitcoin/build_fuzzcov/fuzz_det_cov.show.t0.b.txt
     268@@ -61393,15 +61393,15 @@
     269    45|      0|  return "leveldb.InternalKeyComparator";
     270    46|      0|}
     271    47|       |
     272-   48|  6.06k|int InternalKeyComparator::Compare(const Slice& akey, const Slice& bkey) const {
     273+   48|  6.01k|int InternalKeyComparator::Compare(const Slice& akey, const Slice& bkey) const {
     274    49|       |  // Order by:
     275    50|       |  //    increasing user key (according to user-supplied comparator)
     276    51|       |  //    decreasing sequence number
     277    52|       |  //    decreasing type (though sequence# should be enough to disambiguate)
     278-   53|  6.06k|  int r = user_comparator_->Compare(ExtractUserKey(akey), ExtractUserKey(bkey));
     279-   54|  6.06k|  if (r == 0) {
     280+   53|  6.01k|  int r = user_comparator_->Compare(ExtractUserKey(akey), ExtractUserKey(bkey));
     281+   54|  6.01k|  if (r == 0) {
     282   ------------------
     283-  |  Branch (54:7): [True: 66, False: 5.99k]
     284+  |  Branch (54:7): [True: 66, False: 5.94k]
     285   ------------------
     286    55|     66|    const uint64_t anum = DecodeFixed64(akey.data() + akey.size() - 8);
     287    56|     66|    const uint64_t bnum = DecodeFixed64(bkey.data() + bkey.size() - 8);
    288@@ -61417,8 +61417,8 @@
    289    60|     56|      r = +1;
    290    61|     56|    }
    291    62|     66|  }
    292-   63|  6.06k|  return r;
    293-   64|  6.06k|}
    294+   63|  6.01k|  return r;
    295+   64|  6.01k|}
    296    65|       |
    297    66|       |void InternalKeyComparator::FindShortestSeparator(std::string* start,
    298    67|     10|                                                  const Slice& limit) const {
    299@@ -61611,10 +61611,10 @@
    300    92|       |bool ParseInternalKey(const Slice& internal_key, ParsedInternalKey* result);
    301    93|       |
    302    94|       |// Returns the user key portion of an internal key.
    303-   95|  13.0k|inline Slice ExtractUserKey(const Slice& internal_key) {
    304-   96|  13.0k|  assert(internal_key.size() >= 8);
    305-   97|  13.0k|  return Slice(internal_key.data(), internal_key.size() - 8);
    306-   98|  13.0k|}
    307+   95|  12.9k|inline Slice ExtractUserKey(const Slice& internal_key) {
    308+   96|  12.9k|  assert(internal_key.size() >= 8);
    309+   97|  12.9k|  return Slice(internal_key.data(), internal_key.size() - 8);
    310+   98|  12.9k|}
    311    99|       |
    312   100|       |// A comparator for internal keys that uses a specified comparator for
    313   101|       |// the user key portion and breaks ties by decreasing sequence number.
    314@@ -62489,12 +62489,12 @@
    315    11|       |
    316    12|       |namespace leveldb {
    317    13|       |
    318-   14|  10.9k|static Slice GetLengthPrefixedSlice(const char* data) {
    319-   15|  10.9k|  uint32_t len;
    320-   16|  10.9k|  const char* p = data;
    321-   17|  10.9k|  p = GetVarint32Ptr(p, p + 5, &len);  // +5: we assume "p" is not corrupted
    322-   18|  10.9k|  return Slice(p, len);
    323-   19|  10.9k|}
    324+   14|  10.8k|static Slice GetLengthPrefixedSlice(const char* data) {
    325+   15|  10.8k|  uint32_t len;
    326+   16|  10.8k|  const char* p = data;
    327+   17|  10.8k|  p = GetVarint32Ptr(p, p + 5, &len);  // +5: we assume "p" is not corrupted
    328+   18|  10.8k|  return Slice(p, len);
    329+   19|  10.8k|}
    330    20|       |
    331    21|       |MemTable::MemTable(const InternalKeyComparator& comparator)
    332    22|      4|    : comparator_(comparator), refs_(0), table_(comparator_, &arena_) {}
    333@@ -62504,12 +62504,12 @@
    334    26|      5|size_t MemTable::ApproximateMemoryUsage() { return arena_.MemoryUsage(); }
    335    27|       |
    336    28|       |int MemTable::KeyComparator::operator()(const char* aptr,
    337-   29|  4.83k|                                        const char* bptr) const {
    338+   29|  4.78k|                                        const char* bptr) const {
    339    30|       |  // Internal keys are encoded as length-prefixed strings.
    340-   31|  4.83k|  Slice a = GetLengthPrefixedSlice(aptr);
    341-   32|  4.83k|  Slice b = GetLengthPrefixedSlice(bptr);
    342-   33|  4.83k|  return comparator.Compare(a, b);
    343-   34|  4.83k|}
    344+   31|  4.78k|  Slice a = GetLengthPrefixedSlice(aptr);
    345+   32|  4.78k|  Slice b = GetLengthPrefixedSlice(bptr);
    346+   33|  4.78k|  return comparator.Compare(a, b);
    347+   34|  4.78k|}
    348    35|       |
    349    36|       |// Encode a suitable internal key target for "target" and return it.
    350    37|       |// Uses *scratch as scratch space, and the returned pointer will point
    351@@ -62874,12 +62874,12 @@
    352   150|       |
    353   151|       |  // Accessors/mutators for links.  Wrapped in methods so we can
    354   152|       |  // add the appropriate barriers as necessary.
    355-  153|  5.37k|  Node* Next(int n) {
    356-  154|  5.37k|    assert(n >= 0);
    357+  153|  5.14k|  Node* Next(int n) {
    358+  154|  5.14k|    assert(n >= 0);
    359   155|       |    // Use an 'acquire load' so that we observe a fully initialized
    360   156|       |    // version of the returned Node.
    361-  157|  5.37k|    return next_[n].load(std::memory_order_acquire);
    362-  158|  5.37k|  }
    363+  157|  5.14k|    return next_[n].load(std::memory_order_acquire);
    364+  158|  5.14k|  }
    365   159|    587|  void SetNext(int n, Node* x) {
    366   160|    587|    assert(n >= 0);
    367   161|       |    // Use a 'release store' so that anybody who reads through this
    368@@ -62986,15 +62986,15 @@
    369   252|    414|}
    370   253|       |
    371   254|       |template <typename Key, class Comparator>
    372-  255|  4.95k|bool SkipList<Key, Comparator>::KeyIsAfterNode(const Key& key, Node* n) const {
    373+  255|  4.73k|bool SkipList<Key, Comparator>::KeyIsAfterNode(const Key& key, Node* n) const {
    374   256|       |  // null n is considered infinite
    375-  257|  4.95k|  return (n != nullptr) && (compare_(n->key, key) < 0);
    376-                                         ^4.42k
    377+  257|  4.73k|  return (n != nullptr) && (compare_(n->key, key) < 0);
    378+                                         ^4.37k
    379   ------------------
    380-  |  Branch (257:10): [True: 4.42k, False: 534]
    381-  |  Branch (257:28): [True: 3.07k, False: 1.35k]
    382+  |  Branch (257:10): [True: 4.37k, False: 356]
    383+  |  Branch (257:28): [True: 2.84k, False: 1.52k]
    384   ------------------
    385-  258|  4.95k|}
    386+  258|  4.73k|}
    387   259|       |
    388   260|       |template <typename Key, class Comparator>
    389   261|       |typename SkipList<Key, Comparator>::Node*
    390@@ -63002,18 +63002,18 @@
    391   263|    448|                                              Node** prev) const {
    392   264|    448|  Node* x = head_;
    393   265|    448|  int level = GetMaxHeight() - 1;
    394-  266|  4.95k|  while (true) {
    395+  266|  4.73k|  while (true) {
    396   ------------------
    397   |  Branch (266:10): [Folded - Ignored]
    398   ------------------
    399-  267|  4.95k|    Node* next = x->Next(level);
    400-  268|  4.95k|    if (KeyIsAfterNode(key, next)) {
    401+  267|  4.73k|    Node* next = x->Next(level);
    402+  268|  4.73k|    if (KeyIsAfterNode(key, next)) {
    403   ------------------
    404-  |  Branch (268:9): [True: 3.07k, False: 1.88k]
    405+  |  Branch (268:9): [True: 2.84k, False: 1.88k]
    406   ------------------
    407   269|       |      // Keep searching in this list
    408-  270|  3.07k|      x = next;
    409-  271|  3.07k|    } else {
    410+  270|  2.84k|      x = next;
    411+  271|  2.84k|    } else {
    412   272|  1.88k|      if (prev != nullptr) prev[level] = x;
    413                                          ^1.85k
    414   ------------------
    415@@ -63029,7 +63029,7 @@
    416   277|  1.43k|        level--;
    417   278|  1.43k|      }
    418   279|  1.88k|    }
    419-  280|  4.95k|  }
    420+  280|  4.73k|  }
    421   281|    448|}
    422   282|       |
    423   283|       |template <typename Key, class Comparator>
    424@@ -68231,7 +68231,7 @@
    425    31|    985|  Slice() : data_(""), size_(0) {}
    426    32|       |
    427    33|       |  // Create a slice that refers to d[0,n-1].
    428-   34|  27.4k|  Slice(const char* d, size_t n) : data_(d), size_(n) {}
    429+   34|  27.2k|  Slice(const char* d, size_t n) : data_(d), size_(n) {}
    430    35|       |
    431    36|       |  // Create a slice that refers to the contents of "s"
    432    37|  2.20k|  Slice(const std::string& s) : data_(s.data()), size_(s.size()) {}
    433@@ -68244,10 +68244,10 @@
    434    44|       |  Slice& operator=(const Slice&) = default;
    435    45|       |
    436    46|       |  // Return a pointer to the beginning of the referenced data
    437-   47|  22.1k|  const char* data() const { return data_; }
    438+   47|  22.0k|  const char* data() const { return data_; }
    439    48|       |
    440    49|       |  // Return the length (in bytes) of the referenced data
    441-   50|  40.4k|  size_t size() const { return size_; }
    442+   50|  40.2k|  size_t size() const { return size_; }
    443    51|       |
    444    52|       |  // Return true iff the length of the referenced data is zero
    445    53|    433|  bool empty() const { return size_ == 0; }
    446@@ -68313,16 +68313,16 @@
    447   101|       |
    448   102|     31|inline bool operator!=(const Slice& x, const Slice& y) { return !(x == y); }
    449   103|       |
    450-  104|  6.37k|inline int Slice::compare(const Slice& b) const {
    451-  105|  6.37k|  const size_t min_len = (size_ < b.size_) ? size_ : b.size_;
    452-                                                           ^250    ^6.12k
    453+  104|  6.32k|inline int Slice::compare(const Slice& b) const {
    454+  105|  6.32k|  const size_t min_len = (size_ < b.size_) ? size_ : b.size_;
    455+                                                           ^239    ^6.08k
    456   ------------------
    457-  |  Branch (105:26): [True: 250, False: 6.12k]
    458+  |  Branch (105:26): [True: 239, False: 6.08k]
    459   ------------------
    460-  106|  6.37k|  int r = memcmp(data_, b.data_, min_len);
    461-  107|  6.37k|  if (r == 0) {
    462+  106|  6.32k|  int r = memcmp(data_, b.data_, min_len);
    463+  107|  6.32k|  if (r == 0) {
    464   ------------------
    465-  |  Branch (107:7): [True: 99, False: 6.27k]
    466+  |  Branch (107:7): [True: 99, False: 6.22k]
    467   ------------------
    468   108|     99|    if (size_ < b.size_)
    469   ------------------
    470@@ -68335,8 +68335,8 @@
    471   ------------------
    472   111|      0|      r = +1;
    473   112|     99|  }
    474-  113|  6.37k|  return r;
    475-  114|  6.37k|}
    476+  113|  6.32k|  return r;
    477+  114|  6.32k|}
    478   115|       |
    479   116|       |}  // namespace leveldb
    480   117|       |
    481@@ -71306,15 +71306,15 @@
    482    45|    418|  char* result;
    483    46|    418|  if (needed <= alloc_bytes_remaining_) {
    484   ------------------
    485-  |  Branch (46:7): [True: 411, False: 7]
    486+  |  Branch (46:7): [True: 412, False: 6]
    487   ------------------
    488-   47|    411|    result = alloc_ptr_ + slop;
    489-   48|    411|    alloc_ptr_ += needed;
    490-   49|    411|    alloc_bytes_remaining_ -= needed;
    491-   50|    411|  } else {
    492+   47|    412|    result = alloc_ptr_ + slop;
    493+   48|    412|    alloc_ptr_ += needed;
    494+   49|    412|    alloc_bytes_remaining_ -= needed;
    495+   50|    412|  } else {
    496    51|       |    // AllocateFallback always returned aligned memory
    497-   52|      7|    result = AllocateFallback(bytes);
    498-   53|      7|  }
    499+   52|      6|    result = AllocateFallback(bytes);
    500+   53|      6|  }
    501    54|    418|  assert((reinterpret_cast<uintptr_t>(result) & (align - 1)) == 0);
    502    55|    418|  return result;
    503    56|    418|}
    504@@ -71391,14 +71391,14 @@
    505    59|    414|  assert(bytes > 0);
    506    60|    414|  if (bytes <= alloc_bytes_remaining_) {
    507   ------------------
    508-  |  Branch (60:7): [True: 405, False: 9]
    509+  |  Branch (60:7): [True: 404, False: 10]
    510   ------------------
    511-   61|    405|    char* result = alloc_ptr_;
    512-   62|    405|    alloc_ptr_ += bytes;
    513-   63|    405|    alloc_bytes_remaining_ -= bytes;
    514-   64|    405|    return result;
    515-   65|    405|  }
    516-   66|      9|  return AllocateFallback(bytes);
    517+   61|    404|    char* result = alloc_ptr_;
    518+   62|    404|    alloc_ptr_ += bytes;
    519+   63|    404|    alloc_bytes_remaining_ -= bytes;
    520+   64|    404|    return result;
    521+   65|    404|  }
    522+   66|     10|  return AllocateFallback(bytes);
    523    67|    414|}
    524    68|       |
    525    69|       |}  // namespace leveldb
    526@@ -72342,22 +72342,22 @@
    527   106|       |const char* GetVarint32PtrFallback(const char* p, const char* limit,
    528   107|       |                                   uint32_t* value);
    529   108|       |inline const char* GetVarint32Ptr(const char* p, const char* limit,
    530-  109|  11.7k|                                  uint32_t* value) {
    531-  110|  11.7k|  if (p < limit) {
    532+  109|  11.6k|                                  uint32_t* value) {
    533+  110|  11.6k|  if (p < limit) {
    534   ------------------
    535-  |  Branch (110:7): [True: 11.7k, False: 4]
    536+  |  Branch (110:7): [True: 11.6k, False: 4]
    537   ------------------
    538-  111|  11.7k|    uint32_t result = *(reinterpret_cast<const uint8_t*>(p));
    539-  112|  11.7k|    if ((result & 128) == 0) {
    540+  111|  11.6k|    uint32_t result = *(reinterpret_cast<const uint8_t*>(p));
    541+  112|  11.6k|    if ((result & 128) == 0) {
    542   ------------------
    543-  |  Branch (112:9): [True: 11.7k, False: 0]
    544+  |  Branch (112:9): [True: 11.6k, False: 0]
    545   ------------------
    546-  113|  11.7k|      *value = result;
    547-  114|  11.7k|      return p + 1;
    548-  115|  11.7k|    }
    549-  116|  11.7k|  }
    550+  113|  11.6k|      *value = result;
    551+  114|  11.6k|      return p + 1;
    552+  115|  11.6k|    }
    553+  116|  11.6k|  }
    554   117|      4|  return GetVarint32PtrFallback(p, limit, value);
    555-  118|  11.7k|}
    556+  118|  11.6k|}
    557   119|       |
    558   120|       |}  // namespace leveldb
    559   121|       |
    560@@ -72390,9 +72390,9 @@
    561    24|       |
    562    25|      4|  const char* Name() const override { return "leveldb.BytewiseComparator"; }
    563    26|       |
    564-   27|  6.36k|  int Compare(const Slice& a, const Slice& b) const override {
    565-   28|  6.36k|    return a.compare(b);
    566-   29|  6.36k|  }
    567+   27|  6.32k|  int Compare(const Slice& a, const Slice& b) const override {
    568+   28|  6.32k|    return a.compare(b);
    569+   29|  6.32k|  }
    570    30|       |
    571    31|       |  void FindShortestSeparator(std::string* start,
    572    32|     10|                             const Slice& limit) const override {
    573⚠️
    574
    575The coverage was not deterministic between runs.
    576The fuzz target input was /Users/eugenesiegel/btc/bitcoin/cmpctblock/cmpctblock/00095f04def2d4e203a2844032ee4bb88e33681d
    
  30. marcofleon commented at 4:57 pm on October 8, 2025: contributor
    Some possible discussion about ed813c48f826d083becf93c741b483774c850c86 for review club.
  31. fanquake added the label Fuzzing on Oct 30, 2025
  32. net: move CMPCTBLOCK_VERSION to header b58beea4b6
  33. fuzz: move FinalizeHeader from p2p_headers_presync.cpp to util.h
    This allows other fuzz tests to use the function to create valid block headers.
    a54e759d05
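    The moved helper presumably grinds the nonce until the header meets its proof-of-work target; a hedged sketch of the shape (the exact body lives in the PR; CheckProofOfWork and the ChainstateManager accessors are the assumed dependencies):

        void FinalizeHeader(CBlockHeader& header, const ChainstateManager& chainman)
        {
            // Increment the nonce until the header hash satisfies nBits.
            while (!CheckProofOfWork(header.GetHash(), header.nBits, chainman.GetParams().GetConsensus())) {
                ++header.nNonce;
            }
        }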
  34. fs: add std::filesystem::copy wrapper
    Disable implicit string conversion.
    8515594b3b
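    The wrapper presumably mirrors the existing copy_file wrapper in src/util/fs.h; a minimal sketch under that assumption (the real signature is quoted in the review thread below):

        // Wraps std::filesystem::copy; taking fs::path, whose std::string
        // conversions are deleted, avoids locale-dependent encoding on Windows.
        static inline void copy(const path& from, const path& to, copy_options options)
        {
            std::filesystem::copy(from, to, options);
        }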
  35. test: add -fuzzcopydatadir, modify -testdatadir semantics during fuzzing
    The -fuzzcopydatadir argument accepts a path to a cached data directory.
    It then copies everything in this data directory over to a new directory
    that will be deleted in ~BasicTestingSetup.
    
    The -testdatadir argument is modified to not add a random path element
    if EnableFuzzDeterminism() is true. This is necessary so the path of
    the data directory created via -testdatadir can be passed into
    -fuzzcopydatadir. Since the caller is BasicTestingSetup() is not aware of
    the random path element, finding it would be more complex than just
    not adding it.
    
    Co-Authored-By: stringintech <stringintech@gmail.com>
    5a0f0c3fac
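    A sketch of what the per-iteration copy might look like inside BasicTestingSetup, assuming the fs::copy wrapper from the previous commit (argument handling is simplified and hypothetical):

        // Hypothetical: copy the cached, pre-mined datadir into this
        // iteration's fresh directory; ~BasicTestingSetup deletes it later.
        const std::string cached_dir{gArgs.GetArg("-fuzzcopydatadir", "")};
        if (!cached_dir.empty()) {
            fs::copy(fs::PathFromString(cached_dir), m_path_root, fs::copy_options::recursive);
        }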
  36. test: add setup_validation_interface_no_scheduler to TestOpts
    This option is mutually exclusive with setup_validation_interface
    and ensures that a scheduler is not created.
    71719d172e
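    Hypothetical usage from a fuzz target's setup, assuming the new flag is set via designated initializers like the existing TestOpts fields:

        // Sketch only; the real harness's setup may differ.
        const auto testing_setup{MakeNoLogFileContext<const TestingSetup>(
            ChainType::REGTEST,
            {.setup_validation_interface_no_scheduler = true})};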
  37. Crypt-iQ force-pushed on Oct 30, 2025
  38. fuzz: add harness for the compact blocks protocol
    This harness does one of the following operations in a loop:
    - send a cmpctblock message to the test node
    - send a blocktxn message to the test node
    - send a headers message to the test node
    - send a sendcmpct message to the test node
    - send a tx message to the test node
    - mine a block
    - set mock time
    
    The initialize function creates a TestingSetup and mines 200 blocks. Each fuzz
    iteration will then create a TestingSetup and copy the cached data directory via
    -fuzzcopydatadir.
    1cfe4f0355
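    A condensed sketch of that loop, assuming the CallOneOf and LIMITED_WHILE helpers used by other harnesses in src/test/fuzz; the operation bodies are elided:

        FUZZ_TARGET(cmpctblock, .init = initialize_cmpctblock)
        {
            FuzzedDataProvider fuzzed_data_provider{buffer.data(), buffer.size()};
            LIMITED_WHILE(fuzzed_data_provider.ConsumeBool(), 100)
            {
                CallOneOf(
                    fuzzed_data_provider,
                    [&] { /* send cmpctblock */ },
                    [&] { /* send blocktxn */ },
                    [&] { /* send headers */ },
                    [&] { /* send sendcmpct */ },
                    [&] { /* send tx */ },
                    [&] { /* mine a block */ },
                    [&] { /* set mock time */ });
            }
        }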
  39. node: sort m_dirty_blockindex by block hash
    This avoids non-determinism since memory addresses change between
    runs for a single input.
    f2ce3626a6
  40. Crypt-iQ force-pushed on Oct 30, 2025
  41. Crypt-iQ commented at 6:08 pm on October 30, 2025: contributor

    Latest push ed813c48f826d083becf93c741b483774c850c86 -> f2ce362:

    • implements GetStrongRandBytes per comment
    • modifies create_tx to choose a mempool UTXO only sometimes instead of most of the time
    • modifies create_block to generate more than 2 non-coinbase transactions per feedback from review club
  42. in src/test/fuzz/p2p_headers_presync.cpp:146 in a54e759d05 outdated
    142@@ -143,13 +143,6 @@ CBlock ConsumeBlock(FuzzedDataProvider& fuzzed_data_provider, const uint256& pre
    143     return block;
    144 }
    145 
    146-void FinalizeHeader(CBlockHeader& header, const ChainstateManager& chainman)
    


    l0rinc commented at 12:04 pm on November 11, 2025:
    do we still need #include <pow.h> here after the move?

    Crypt-iQ commented at 3:42 pm on November 13, 2025:
    Nope, thanks.
  43. in src/util/fs.h:141 in 8515594b3b outdated
    135@@ -136,6 +136,13 @@ static inline bool copy_file(const path& from, const path& to, copy_options opti
    136     return std::filesystem::copy_file(from, to, options);
    137 }
    138 
    139+// Disallow implicit std::string conversion for copy to avoid locale-dependent
    140+// encoding on Windows. This is currently only used in fuzzing.
    141+static inline void copy(const path& from, const path& to, copy_options options)
    


    l0rinc commented at 12:15 pm on November 11, 2025:

    8515594b3b716d05a89b9fef36dfe7e8032d19c7

    Do we still need this after #33550?

    This is currently only used in fuzzing.

    Are we adding dead code in this commit? It’s hard to judge if a solution is accurate if we see it before the problem is presented - can we make the question obvious before we provide the answer?

    And more concretely: my IDE is confused about the usage of this even at the tip - would it be possible to invalidate the conversions you don’t want instead, maybe something like:

    static void copy(const std::string&, const path&, copy_options) = delete;
    static void copy(const path&, const std::string&, copy_options) = delete;
    static void copy(const std::string&, const std::string&, copy_options) = delete;
    

    nit: inline is implicit


    Crypt-iQ commented at 4:39 pm on November 13, 2025:

    Do we still need this after #33550?

    I think it’s still needed; I added it to get rid of the linter warnings that appear if std::filesystem::copy is called directly from the test code.

    Are we adding dead code in this commit? It’s hard to judge if a solution is accurate if we see it before the problem is presented - can we make the question obvious before we provide the answer?

    Fair point, I’ll rework the commits so it’s a bit easier to tell where things are used.

    would it be possible to invalidate the conversions you don’t want instead, maybe something like

    Yes, that also works.


    Crypt-iQ commented at 4:44 pm on November 19, 2025:

    would it be possible to invalidate the conversions you don’t want instead, maybe something like

    Currently implementing this and wondering if this is necessary; can you share what your IDE is confused about? The constructor of path that takes a std::string is deleted.


    l0rinc commented at 4:54 pm on November 19, 2025:
    I don’t have Windows, which is why I asked whether, instead of adding a new method, we could just delete the conversions, somewhat similarly to the mentioned https://github.com/bitcoin/bitcoin/pull/33550/files#diff-69423eb01bf14b3bd0d930c0b3e1fd6f4f061ffefacab579053eaa734fc22f38R65 (since the error seemed superficially similar to me).
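    For readers following along: the deleted overloads turn an accidental std::string argument into a compile-time error instead of a locale-dependent conversion. A minimal illustration, with source_path and dest_path as hypothetical fs::path variables:

        fs::copy(source_path, dest_path, fs::copy_options::recursive);    // OK: fs::path arguments
        // fs::copy(std::string{"a"}, dest_path, fs::copy_options::none); // would not compile: deleted overload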
  44. in src/test/util/setup_common.cpp:291 in 71719d172e outdated
    286@@ -287,6 +287,8 @@ ChainTestingSetup::ChainTestingSetup(const ChainType chainType, TestOpts opts)
    287             m_node.scheduler->scheduleFromNow([&promise] { promise.set_value(); }, 0ms);
    288             promise.get_future().wait();
    289         }
    290+    } else if (opts.setup_validation_interface_no_scheduler) {
    291+        m_node.validation_signals = std::make_unique<ValidationSignals>(std::make_unique<util::ImmediateTaskRunner>());
    


    l0rinc commented at 12:29 pm on November 11, 2025:
    I have the same problem in 71719d172e3a435dd983293fc9da6a08974f8b01 - we’re introducing dead code, and I don’t have a way to judge whether this is indeed the correct solution, since up to this point there isn’t any pain that it alleviates. Looking at the commit I see that setup_validation_interface_no_scheduler is always false in this commit, so the code could be deleted - and I have to check the next commits and come back here to reevaluate that. Can we add a simpler fuzz test which needs this in the same commit that introduces it - and extend the test with other functionality separately?

    Crypt-iQ commented at 4:42 pm on November 13, 2025:

    Can we add a simpler fuzz test which needs this in the same commit that introduces it - and extend the test with other functionality separately.

    Yes, I’ll do some variation of this. I think it might be cleaner to put it as one of the last commits.

  45. in src/test/fuzz/cmpctblock.cpp:422 in 1cfe4f0355 outdated
    417+            random_node.fPauseSend = false;
    418+
    419+            try {
    420+                more_work = connman.ProcessMessagesOnce(random_node);
    421+            } catch (const std::ios_base::failure&) {
    422+            }
    


    l0rinc commented at 12:33 pm on November 11, 2025:
                    // Expected for truncated/invalid fuzzed messages; swallow to continue exercising code paths.
                    // TODO validate that the error is something expected, otherwise anything can happen here
                }
    

    Crypt-iQ commented at 4:45 pm on November 13, 2025:
    This is copied from the process_message(s) harness. Unlike those, this fuzz test should always construct correctly serialized messages (even if the messages themselves are deemed “protocol invalid”), so I think the catch case can just be assert(false).

    l0rinc commented at 9:42 am on November 14, 2025:
    Isn’t it cleaner to just drop the try/catch entirely in that case?

    Crypt-iQ commented at 11:09 am on November 14, 2025:
    Ah yes, that is cleaner.
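    With the try/catch dropped per the exchange above, the call site presumably reduces to:

        random_node.fPauseSend = false;
        // The harness always constructs correctly serialized messages, so a
        // deserialization failure here would be a real bug and should propagate.
        more_work = connman.ProcessMessagesOnce(random_node);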
  46. in src/test/fuzz/cmpctblock.cpp:199 in 1cfe4f0355 outdated
    194+        in.nSequence = sequence;
    195+        in.scriptSig = script_sig;
    196+        in.scriptWitness.stack = script_wit_stack;
    197+        tx_mut.vin.push_back(in);
    198+
    199+        const CAmount amount_in = GetAmount(outpoint);
    


    l0rinc commented at 12:36 pm on November 11, 2025:

    nit: seems excessive to add a helper for a single usage here:

            const CAmount amount_in = Assert(amount_view.GetCoin(outpoint))->out.nValue;
    

    Crypt-iQ commented at 5:02 pm on November 13, 2025:
    Will do, I think this was used in multiple places in an earlier version of this code.
  47. in src/test/util/setup_common.cpp:1 in 5a0f0c3fac outdated


    l0rinc commented at 1:00 pm on November 11, 2025:

    5a0f0c3fac47053f7d0245ac0b005d4b26cacd48 nit:

    - Since the caller is BasicTestingSetup() is not aware of the random path element
    + Since the caller of BasicTestingSetup() is not aware of the random path element
    
  48. in src/test/fuzz/cmpctblock.cpp:332 in f2ce3626a6
    327+                }
    328+
    329+                CBlockHeaderAndShortTxIDs base_cmpctblock = cmpctblock;
    330+                net_msg = NetMsg::Make(NetMsgType::CMPCTBLOCK, base_cmpctblock);
    331+            },
    332+            [&]() {
    


    l0rinc commented at 1:05 pm on November 11, 2025:
    This commit is huge; it’s hard to review it meaningfully. Can we separate some of these cases into independent commits that can provide more context?

    Crypt-iQ commented at 5:03 pm on November 13, 2025:
    I’ll incrementally add the cases so it’s a bit easier to digest.
  49. in src/test/util/setup_common.cpp:105 in 5a0f0c3fac outdated
    101@@ -101,6 +102,7 @@ void SetupCommonTestArgs(ArgsManager& argsman)
    102 {
    103     argsman.AddArg("-testdatadir", strprintf("Custom data directory (default: %s<random_string>)", fs::PathToString(fs::temp_directory_path() / TEST_DIR_PATH_ELEMENT / "")),
    104                    ArgsManager::ALLOW_ANY, OptionsCategory::DEBUG_TEST);
    105+    argsman.AddArg("-fuzzcopydatadir", "Copies the passed data directory to a temporary directory to use during a single fuzz iteration", ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
    


    l0rinc commented at 1:19 pm on November 11, 2025:
    5a0f0c3fac47053f7d0245ac0b005d4b26cacd48 commit is also quite big, can we split by functionality instead? I would prefer the commits to tell a story, each self-contained as much as possible and making sense on its own as far as is reasonable. Can we split this one by features, e.g. adding rand_path to be able to specify strong randomness first, then the -fuzzcopydatadir param and its usage, EnableFuzzDeterminism last - or something similar?
  50. in src/node/blockstorage.cpp:185 in f2ce3626a6
    179@@ -180,6 +180,11 @@ bool CBlockIndexHeightOnlyComparator::operator()(const CBlockIndex* pa, const CB
    180     return pa->nHeight < pb->nHeight;
    181 }
    182 
    183+bool CBlockIndexBlockHashComparator::operator()(const CBlockIndex* pa, const CBlockIndex* pb) const
    184+{
    185+    return UintToArith256(pa->GetBlockHash()) < UintToArith256(pb->GetBlockHash());
    


    l0rinc commented at 3:31 pm on November 11, 2025:

    What was your reason for instantiating new objects just for comparison - is it just to make sure you’re sorting backwards?

        return pa->GetBlockHash().Compare(pb->GetBlockHash()) < 0;
    

    or

        return pa->GetBlockHash() < pb->GetBlockHash();
    

    I don’t think that’s necessary, the above should likely be safe as well.

    But since it’s a lot faster to compare pointers for equality (which seems like an important feature - it’s why most of these are in a set in the first place, I assume), and since that’s stable across runs as well, we can likely optimize further to:

        return pa != pb && pa->GetBlockHash() < pb->GetBlockHash();
    

    And if we inline this to the header it should be just as fast as pointers.


    Crypt-iQ commented at 5:10 pm on November 13, 2025:
    IIRC I tried comparing the uint256 block hashes similar to your second suggestion, and while it fixed the leveldb non-determinism, it introduced its own non-determinism.

    l0rinc commented at 9:43 am on November 14, 2025:
    The suggested comparator is a pure function; I don’t see how it could be non-deterministic - unless it’s broken, in which case we should definitely dig deeper. Can you try it again and report what you see?

    Crypt-iQ commented at 3:54 pm on November 19, 2025:
    My bad, I must have tested a slightly different comparator. The suggested ones are deterministic.
  51. in src/node/blockstorage.h:99 in f2ce3626a6
    94@@ -95,6 +95,10 @@ struct CBlockIndexHeightOnlyComparator {
    95     bool operator()(const CBlockIndex* pa, const CBlockIndex* pb) const;
    96 };
    97 
    98+struct CBlockIndexBlockHashComparator {
    99+    bool operator()(const CBlockIndex* pa, const CBlockIndex* pb) const;
    


    l0rinc commented at 3:38 pm on November 11, 2025:
    I’d just move this to the header for easier inlining; #33637 indicates that’s generally a lot more performant.
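    Combining the suggestions from this thread and the previous one, the header-inlined comparator would presumably look something like this (an assumption about the final shape, not code from the PR):

        struct CBlockIndexBlockHashComparator {
            bool operator()(const CBlockIndex* pa, const CBlockIndex* pb) const
            {
                // Cheap pointer inequality first; ordering by block hash is
                // stable across runs, unlike ordering by raw pointer value.
                return pa != pb && pa->GetBlockHash() < pb->GetBlockHash();
            }
        };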
  52. l0rinc changes_requested
  53. l0rinc commented at 4:23 pm on November 11, 2025: contributor

    Concept and approach ACK, thanks for tackling this.

    I like how you’ve extracted the first few commits that I could quickly get out of the way before continuing to the more difficult ones - which are a bit too difficult, though. It would help me personally if we could split the change into smaller functional chunks, so that reviewers are guided through it. The commit messages could give extra context where needed. I don’t think we should split by functions, but rather by functionalities, so that each commit could theoretically be merged if needed - we shouldn’t depend on a future commit to understand a previous one.

    I have provided some hints and some further context for the LevelDB related changes. I think we should do the set stabilization in a dedicated PR, but you can also adjust it here so that it likely performs similarly to the original.

