Fix signed integer overflow #25561

pull bvbfan wants to merge 1 commit into bitcoin:master from bvbfan:wallet/fix_cve changing 2 files +5 −5
  1. bvbfan commented at 5:27 am on July 7, 2022: contributor

    It’s a Common Vulnerabilities and Exposures (CVE) class issue; this PR mirrors https://github.com/google/leveldb/pull/1040

    The ledger is safe by default, but a node running with -dbcache=2148 or more can misbehave.

  2. Fix signed integer overflow
    Signed-off-by: Anthony Fieroni <bvbfan@abv.bg>
    32a3d9cfbe
  3. maflcko commented at 5:35 am on July 7, 2022: member
    Mind adding steps to reproduce?
  4. bvbfan commented at 7:24 am on July 7, 2022: contributor

    Mind adding steps to reproduce?

    Increase -dbcache=, then validate the blocks and coins DBs (probably easiest via a small Python script using plyvel to compare the two nodes’ DBs). In a fork I’ve worked on, the batch is flushed at every block during IBD, so every block appends to a batch; some batch records then go missing due to signed int overflow.
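    A comparison helper of the sort described above can be sketched as follows. The diff logic is plain Python; the plyvel usage in the trailing comment is an assumption (plyvel is a third-party LevelDB binding, and the data-directory paths are hypothetical).

```python
# Sketch of a DB comparison of the kind suggested above. The diff logic is
# generic; the plyvel usage (commented out) assumes the third-party plyvel
# LevelDB binding and hypothetical node data-directory paths.

def diff_kv(items_a, items_b):
    """Return the set of keys whose values differ (or that are missing)
    between two iterables of (key, value) pairs."""
    a = dict(items_a)
    b = dict(items_b)
    return {k for k in a.keys() | b.keys() if a.get(k) != b.get(k)}

# With plyvel (hypothetical paths; plyvel DBs iterate as (key, value)):
#   import plyvel
#   db1 = plyvel.DB("/path/to/node1/chainstate")
#   db2 = plyvel.DB("/path/to/node2/chainstate")
#   print(sorted(diff_kv(db1, db2)))
```

    Any key reported by the helper indicates the two nodes’ databases have diverged.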

  5. maflcko commented at 7:31 am on July 7, 2022: member

    Sorry, I don’t follow. Are you saying this is something that an end user of a Bitcoin Core release can run into or does it require local modifications or external scripts?

    Our functional tests run with the integer sanitizer, so are you saying this happens only outside of tests?

    I am happy to try to reproduce this, but it would make it a lot easier for me if there were exact and easy steps to reproduce. Ideally ones that can be copy-pasted into a terminal.

  6. maflcko commented at 7:39 am on July 7, 2022: member

    If this is security relevant (or you are unsure about it), kindly follow the security policy:

    https://github.com/bitcoin/bitcoin/security/policy

  7. bvbfan commented at 10:17 am on July 7, 2022: contributor

    DBImpl::Write is used by CDBWrapper to write to leveldb. It groups batches in DBImpl::BuildBatchGroup, then WriteBatchInternal::Append uses SetCount(dst, Count(dst) + Count(src)); more precisely, Count(dst) + Count(src) is signed int + signed int, whose overflow is UB. That makes the record count unpredictable, so records are then not found.

    It could compromise the block/tx index and/or the coins DB, which could lead to a possible fork and, in the worst-case scenario, compromise the network.
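    The failure mode described above can be illustrated with a simulation (this is not leveldb code): in C++, signed int overflow is undefined behaviour, and on typical two’s-complement platforms the 32-bit record count wraps to a negative value.

```python
# Simulation of leveldb's 32-bit batch record count overflowing. In C++,
# Count(dst) + Count(src) on two ints past INT_MAX is UB; on typical
# two's-complement hardware it wraps as modeled here.

def add_int32(a, b):
    """Add two values with 32-bit two's-complement wraparound."""
    s = (a + b) & 0xFFFFFFFF
    return s - 0x100000000 if s >= 0x80000000 else s

count_dst = 2**31 - 1   # INT_MAX: batch already counts 2147483647 records
count_src = 1           # one more appended record

# The combined count wraps to a negative, nonsensical value:
assert add_int32(count_dst, count_src) == -2**31

# Unsigned 32-bit arithmetic (as in the proposed fix) wraps predictably:
assert (count_dst + count_src) & 0xFFFFFFFF == 2**31
```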

  8. chjj commented at 11:39 am on July 7, 2022: none

    @bvbfan, BuildBatchGroup only groups batches if you’re attempting to write batches concurrently (core appears to split updates into smaller batches and write them sequentially). Even then, the batches would need to total 2^31 updates or more.

    I don’t think this is possible under any realistic circumstance. Writing the entire UTXO set in a single batch still wouldn’t come close to triggering an overflow.
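    A back-of-the-envelope check of that claim (the UTXO count below is my own rough mid-2022 estimate, not a figure from this thread):

```python
# Rough scale check: overflowing the signed 32-bit record count requires
# >= 2**31 put/delete entries in one grouped batch. The UTXO set size
# below is an approximate mid-2022 figure (my own estimate).

OVERFLOW_THRESHOLD = 2**31        # ~2.15 billion entries
UTXO_SET_ENTRIES = 85_000_000     # approx. number of UTXOs, mid-2022

# Writing the entire UTXO set in one batch is still ~25x short:
assert OVERFLOW_THRESHOLD // UTXO_SET_ENTRIES >= 25
```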

  9. bvbfan commented at 12:08 pm on July 7, 2022: contributor

    @chjj correct, it groups batches when called concurrently, but it does so even when only one batch is present; notice:

    if (result == first->batch) {
      // Switch to temporary batch instead of disturbing caller's batch
      result = tmp_batch_;
      assert(WriteBatchInternal::Count(result) == 0);
      WriteBatchInternal::Append(result, first->batch);
    }
    

    With one batch it uses the pre-allocated temporary batch, and that can overflow as well. The issue is pretty serious.

  10. chjj commented at 12:18 pm on July 7, 2022: none

    @bvbfan, the append loop won’t ever be entered with just a single writer:

    std::deque<Writer*>::iterator iter = writers_.begin();
    ++iter;  // Advance past "first"
    for (; iter != writers_.end(); ++iter) {
      ...
    }
    

    With one batch it uses the pre-allocated temporary batch, and that can overflow as well.

    I’m still having trouble understanding how there would be billions of updates in a single write.

  11. bvbfan commented at 12:22 pm on July 7, 2022: contributor

    I’m still having trouble understanding how there would be billions of updates in a single write.

    Easy: Bitcoin Core uses a cache over the backend, so when the cache is increased it will produce a big batch, as I wrote before, via -dbcache.

  12. chjj commented at 12:57 pm on July 7, 2022: none

    so when cache is increased it will produce a big batch

    I don’t think that’s true. When the cache is flushed, core splits the updates into smaller batches. These batches are processed sequentially, and aren’t grouped together by LevelDB.

  13. bvbfan commented at 1:59 pm on July 7, 2022: contributor

    so when cache is increased it will produce a big batch

    I don’t think that’s true. When the cache is flushed, core splits the updates into smaller batches. These batches are processed sequentially, and aren’t grouped together by LevelDB.

    There is nothing to “think”; just read the code. In CCoinsViewDB::BatchWrite and CBlockTreeDB::WriteBatchSync one batch is created, and put/delete just appends to it.

  14. luke-jr commented at 0:26 am on July 8, 2022: member

    While an overflow may technically be UB, in practice AFAIK it is always the same deterministic result (for the same platform) on all supported platforms. So unless the value going negative is a problem (which is not trivial to see looking at the code), I don’t see how this is exploitable or can lead to de facto issues. But if going negative is a problem, I don’t see why simply making it uint32 would fix it… it would just move the overflow from 2^31 to 2^32.

    That being said, since it’s UB, we should probably still fix it anyway. But a fix would currently go through upstream, or at least https://github.com/bitcoin-core/leveldb-subtree

  15. maflcko commented at 6:56 am on July 8, 2022: member

    That being said, since it’s UB, we should probably still fix it anyway

    Keep in mind that unsigned will “propagate” over signed and may change code logic in unrelated parts. So I don’t think we should be blindly changing int into unsigned because it can theoretically be UB, unless it is clear how the UB can be triggered in a Bitcoin Core production release. (Either with a full description, a unit test, or otherwise steps to reproduce).

    If the issue purely exists for unrelated third-party API users of the google leveldb, the issue should be fixed in google/leveldb.

  16. bvbfan commented at 7:39 am on July 8, 2022: contributor

    Keep in mind that unsigned will “propagate” over signed and may change code logic in unrelated parts

    That would be true if there were an unsigned int in the calculation chain, but there isn’t; the count is stored as unsigned, but only after the calculation. The lucky scenario is that 0 + (a negative signed value) does not trigger an actual overflow in the calculation, so it happens to be fine.

    But if going negative is a problem, I don’t see why simply making it uint32 would fix it… it would just move the overflow from 2^31 to 2^32.

    Unsigned int overflow is well defined and expected; apart from that, true.

    So I don’t think we should be blindly changing int into unsigned because it can theoretically be UB, unless it is clear how the UB can be triggered in a Bitcoin Core production release

    That logic is wrong: even if the behaviour is well defined, it is, as I wrote, only by luck; it could lead to a possible fork in the future, depending on some new implementation. There is nothing blind here.

  17. Crypt-iQ commented at 3:05 pm on July 8, 2022: contributor

    DBImpl::Write is used by CDBWrapper to write to leveldb. It groups batches in DBImpl::BuildBatchGroup, then WriteBatchInternal::Append uses SetCount(dst, Count(dst) + Count(src)); more precisely, Count(dst) + Count(src) is signed int + signed int, whose overflow is UB. That makes the record count unpredictable, so records are then not found.

    I don’t think this leads to signed integer overflow.

    BuildBatchGroup (https://github.com/google/leveldb/blob/479a1f4e9b1138d7dda4db5078aadc4e7b845a85/db/db_impl.cc#L1282-L1290) fetches the size of the initial batch and determines a maximum size a combined batch can be. It doesn’t, however, limit the size of the initial batch but I think the bitcoind code doesn’t create lots of Puts/Deletes in a single batch.

    Note that this size is not the number of entries being updated, but the overall serialized size of the batch (which is internally represented as std::string). The number of entries ref’d by SetCount or Count is the actual number of entries being updated via put/delete. The count will always lag behind the byte size of the batch since each entry being modified at least has a key that is stored in the internal std::string. Since the max_size is nowhere near the maximum int value, this check should be hit (https://github.com/google/leveldb/blob/479a1f4e9b1138d7dda4db5078aadc4e7b845a85/db/db_impl.cc#L1303-L1307) and signed integer overflow via Append -> SetCount won’t occur.
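    The count-lags-size argument above can be made concrete with rough numbers (the per-entry minimum below is my own assumption about the serialized format, not a figure from this thread):

```python
# Why the record count trails the serialized batch size: every put/delete
# contributes at least a tag byte plus a length-prefixed key, so several
# bytes per entry (the 6-byte minimum here is my own rough assumption).

MIN_BYTES_PER_ENTRY = 6     # tag + key-length varint + very short key
BATCH_HEADER_BYTES = 12     # 8-byte sequence number + 4-byte count

def max_count_for_size(size_bytes):
    """Upper bound on the entries a batch of this byte size can hold."""
    return max(0, size_bytes - BATCH_HEADER_BYTES) // MIN_BYTES_PER_ENTRY

# Even a 1 GiB batch cannot reach the 2**31 count threshold:
assert max_count_for_size(1 << 30) < 2**31

# Conversely, 2**31 entries would imply a batch of roughly 12 GiB or more:
assert 2**31 * MIN_BYTES_PER_ENTRY >= 12 * (1 << 30)
```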

  18. bvbfan commented at 2:55 pm on July 9, 2022: contributor

    Note that this size is not the number of entries being updated, but the overall serialized size of the batch (which is internally represented as std::string). The number of entries ref’d by SetCount or Count is the actual number of entries being updated via put/delete. The count will always lag behind the byte size of the batch since each entry being modified at least has a key that is stored in the internal std::string. Since the max_size is nowhere near the maximum int value, this check should be hit (https://github.com/google/leveldb/blob/479a1f4e9b1138d7dda4db5078aadc4e7b845a85/db/db_impl.cc#L1303-L1307) and signed integer overflow via Append -> SetCount won’t occur.

    Correct, it’s the internal counter of Put/Delete records; it could overflow as well, note here: https://github.com/google/leveldb/blob/479a1f4e9b1138d7dda4db5078aadc4e7b845a85/db/write_batch.cc#L99 https://github.com/google/leveldb/blob/479a1f4e9b1138d7dda4db5078aadc4e7b845a85/db/write_batch.cc#L106

    So depending on configuration it’s vulnerable. I did not expect to have to defend a patch which addresses a clear CVE.

  19. sipa commented at 4:22 pm on July 9, 2022: member

    A few comments:

    • It’s unclear to me whether this is even vulnerable in general, but Bitcoin Core splits up UTXO database writes into 16 MiB chunks, so it’s impossible to even reach this condition. This can only be changed using a debug-only option. So from this I would conclude this doesn’t pose a vulnerability for Bitcoin Core itself, and doesn’t warrant any special treatment on our end.
    • Even if the above is wrong (which I think it isn’t), and this actually poses a threat directly to Bitcoin Core, it should be fixed through updating of the https://github.com/bitcoin-core/leveldb-subtree repo, where our LevelDB fork is maintained (this is why this PR is failing CI).
    • It may be that this is a bug in the LevelDB code that doesn’t affect us, but if it is, it should be fixed upstream. I see you’ve opened a PR there, though it’s waiting on signing a license. If merged upstream we can consider updating the Bitcoin Core subtree, or just waiting for a normal update.
    • CVEs are a database for assigning unambiguous numbers to vulnerabilities and disclosures. Unless you’ve requested a CVE number, this is not a CVE. Doing so is independent from whether it’s actually a serious issue or not.
    • If you believe this was a serious issue, you should privately contact the maintainers, not open a PR. I hope that if you do find problems in the future, that’s the approach you’ll use.
  20. maflcko closed this on Jul 11, 2022

  21. bvbfan commented at 10:14 am on July 11, 2022: contributor

    A few comments:

    • It’s unclear to me whether this is even vulnerable in general, but Bitcoin Core splits up UTXO database writes into 16 MiB chunks, so it’s impossible to even reach this condition. This can only be changed using a debug-only option. So from this I would conclude this doesn’t pose a vulnerability for Bitcoin Core itself, and doesn’t warrant any special treatment on our end.

    Untrue: it’s configurable, see https://github.com/bitcoin/bitcoin/blob/master/src/txdb.cpp#L118. And it’s not only the coins DB that is affected; the block index is as well.
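    For scale, at the 16 MiB default batch size mentioned earlier in the thread (the average entry size below is my own assumption), per-batch entry counts stay orders of magnitude below 2^31:

```python
# Entries per flush batch at Bitcoin Core's default 16 MiB batch size,
# which the thread notes is configurable (the 50-byte average serialized
# entry size is my own rough assumption).

DEFAULT_BATCH_BYTES = 16 << 20    # 16 MiB default batch size
AVG_ENTRY_BYTES = 50              # assumed average serialized entry

entries_per_batch = DEFAULT_BATCH_BYTES // AVG_ENTRY_BYTES

# Hundreds of thousands of entries per batch, vs. the 2**31 threshold:
assert entries_per_batch < 2**31 // 1000
```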

    • Even if the above is wrong (which I think it isn’t), and this actually poses a threat directly to Bitcoin Core, it should be fixed through updating of the https://github.com/bitcoin-core/leveldb-subtree repo, where our LevelDB fork is maintained (this is why this PR is failing CI).

    Sure, I could open a PR there.

    • It may be that this is a bug in the LevelDB code that doesn’t affect us, but if it is, it should be fixed upstream. I see you’ve opened a PR there, though it’s waiting on signing a license. If merged upstream we can consider updating the Bitcoin Core subtree, or just waiting for a normal update.

    Sure, the PR is open upstream; I still think it’s higher priority for Bitcoin.

    • CVEs are a database for assigning unambiguous numbers to vulnerabilities and disclosures. Unless you’ve requested a CVE number, this is not a CVE. Doing so is independent from whether it’s actually a serious issue or not.

    Sure, I did not request one, but it’s clearly CVE material. I was supposed to request it, but I think it’s better to be fixed here instead.

  22. bvbfan deleted the branch on Jul 11, 2022
  23. bvbfan commented at 1:39 pm on October 26, 2022: contributor
    It’s accepted upstream.
  24. luke-jr commented at 2:36 pm on October 26, 2022: member
    Not as far as I can see…?
  25. bvbfan commented at 2:54 pm on October 26, 2022: contributor

    It’s approved google/leveldb#1040
  26. 0xB10C commented at 3:59 pm on October 26, 2022: contributor

    It’s approved google/leveldb#1040

    That some random GitHub account marks your PR with a checkmark on GitHub doesn’t mean that the PR is accepted upstream. Anyone can add this checkmark on any PR.

  27. bitcoin locked this on Oct 26, 2023

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-11-23 09:12 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me