Fix: respend relay must examine more inputs #4484

pull dgenr8 wants to merge 4 commits into bitcoin:master from dgenr8:relay_break_first changing 6 files +183 −52
  1. dgenr8 commented at 2:16 PM on July 8, 2014: contributor

    Restructure double-spend detection to fix three vulnerabilities in respend relay.

    First, when a transaction is found to be a relayable respend, stop checking its inputs for more respent prevouts. An attacker should not be able to suppress relay by including a later input that has already been double-spent.

    Second, when a respend is found NOT to be relayable due to previous relay in connection with one input, keep searching its inputs. An attacker should not be able to suppress relay by including an earlier input that has already been double-spent.

    Third, do not update the respend bloom filter until a respend is actually relayed. Otherwise, an attacker can neutralize relay by sending an invalid respend, followed by a valid one.
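    Taken together, the three fixes prescribe a specific shape for the input-scan loop. A minimal, self-contained sketch (the types and names here are hypothetical stand-ins, not the actual Bitcoin Core code):

```cpp
#include <cassert>
#include <set>
#include <vector>

// Hypothetical stand-ins for the real outpoint/tx/mempool types.
struct Outpoint {
    int id;
    bool operator<(const Outpoint& o) const { return id < o.id; }
};
struct Tx { std::vector<Outpoint> prevouts; };

struct RespendScan {
    std::set<Outpoint> mempoolSpends;     // prevouts already spent by pool txes
    std::set<Outpoint> doubleSpendFilter; // stands in for the bloom filter

    // Scan ALL inputs. Fix 1: stop as soon as a relayable respend is found.
    // Fix 2: an input whose respend was already relayed does not end the
    // search; later inputs may still justify relay.
    bool FindRelayableRespend(const Tx& tx, Outpoint& relayFor) const {
        for (const Outpoint& out : tx.prevouts) {
            if (!mempoolSpends.count(out)) continue;    // this input is not respent
            if (doubleSpendFilter.count(out)) continue; // already relayed: keep going (fix 2)
            relayFor = out;                             // relayable: stop here (fix 1)
            return true;
        }
        return false;
    }

    // Fix 3: the filter is updated only when a respend is actually relayed,
    // so an invalid respend cannot poison it.
    void OnActuallyRelayed(const Outpoint& out) { doubleSpendFilter.insert(out); }
};
```

    In this sketch, fix 2 is what defeats suppression via an earlier already-double-spent input, and fix 1 the later-input variant.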

  2. dgenr8 commented at 2:18 PM on July 8, 2014: contributor

    @gavinandresen Hope you have time to review.

  3. gavinandresen commented at 7:57 PM on July 8, 2014: contributor

    Code changes look good, I am writing a regression test to help test.

  4. gavinandresen commented at 1:31 AM on July 9, 2014: contributor

    First cut regression test that just tests the 'ordinary' double-spend case: https://github.com/gavinandresen/bitcoin-git/commits/doublespend_tests

    Simulating attack scenarios addressed by this pull as additional tests would be spiffy...

  5. dgenr8 commented at 3:04 PM on July 11, 2014: contributor

    I'm adding 3 more tests, then will commit doublespendrelay.py. The tests are: no relay for a same-input triple-spend; relay with the triple-spend before the double-spend in the input list; and relay with it after.

    No test for not adding invalid tx to bloom filter. This would require shipping a tool that transmits invalid txes.

  6. dgenr8 commented at 11:06 PM on July 11, 2014: contributor

    The util.py function stop_node, which is used repeatedly in this test, sometimes hangs due to #4502 (stuck GetExternalIP).

    A successful test can take up to 30 seconds due to propagation waits.

  7. jgarzik commented at 11:16 PM on July 11, 2014: contributor

    The changes themselves look OK at quick glance.

    However: Pattern matched :) In general, waiting a bit for tests to complete should be OK.

    However, anything longer than X seconds or minutes tends to impact programmer productivity, by slowing down the compile+test cycle, sometimes leading to the skipping of tests. The usual solution is a switch between quick tests and comprehensive tests.

    Long term, "make check" should really be running pull tester tests locally. We should not have to push to github to get comprehensive testing, IMO. My dev boxes are probably more beefy than our pull tester box.

  8. sipa commented at 11:24 PM on July 11, 2014: member

    @jgarzik Compile with --with-comparison-tool=file.jar

  9. ghost commented at 8:42 PM on July 12, 2014: none

    Looks good.

  10. SergioDemianLerner commented at 4:17 PM on July 15, 2014: contributor

    Edited: removed accusation. A single transaction with 2000 double-spent inputs will be relayed 2000 times, once for each input!

    The code should be:

        if (pool.mapNextTx.count(outpoint)) {
            respend = true;

            if (doubleSpendFilter.contains(outpoint) && !tx.IsEquivalentTo(*pool.mapNextTx[outpoint].ptx))  // <<<---- ADDED
                return false;                                                                               // <<<---- ADDED

            // Relay only one tx per respent outpoint, but not if tx is equivalent to pool member
            if (!doubleSpendFilter.contains(outpoint) && !tx.IsEquivalentTo(*pool.mapNextTx[outpoint].ptx))
            {
                relayForOutpoint = outpoint;
                break;
            }
        }
    

    Please look at my comments in #4450. Don't merge this patch, it's too difficult to make it right.

  11. dgenr8 commented at 4:43 PM on July 15, 2014: contributor

    @SergioDemianLerner No, please re-read as committed. And please do provide a concrete example of at least one bug when making such a broad accusation.

  12. SergioDemianLerner commented at 6:00 PM on July 15, 2014: contributor

    Dear @dgenr8, I'm not saying that you made a mistake. Probably I made the mistake. I'm sure you worked very hard on this issue, and I didn't. What I'm saying is that the IDEA of this patch is difficult to implement right, if not impossible.

  13. SergioDemianLerner commented at 6:17 PM on July 15, 2014: contributor

    @dgenr8 I still see the problem in the code. Suppose that a TX with 1000 double-spent prevouts is received; it will be broadcast because of the FIRST double-spend prevout. The remaining prevouts won't be added to the bloom filter. If it is received again, it will be broadcast because of the SECOND double-spend prevout. So if received 1000 times, it will be broadcast 1000 times, creating a massive DoS loop.

  14. dgenr8 commented at 7:45 PM on July 15, 2014: contributor

    @SergioDemianLerner That is still @petertodd's attack. Small first spend and huge respend. Rate limiter.

  15. petertodd commented at 7:52 PM on July 15, 2014: contributor

    Yup. @SergioDemianLerner If you think there's an attack, write up a quick script that demonstrates it. For instance here's one I did for the invalid-due-to-too-many-sigops attack: https://github.com/petertodd/tx-flood-attack

    It's really useful to have attack code available to do regression testing, as well as test new implementations.

  16. SergioDemianLerner commented at 12:43 AM on July 16, 2014: contributor

    Ok. The reason my "attack" does not work is that RelayTransaction() does not relay the same transaction twice. And this is because setInventoryKnown is unbounded. But setInventoryKnown should be bounded (and this will probably be corrected in a future version), so the attack is still possible, though it will require overflowing setInventoryKnown to force it to delete the tx hash. If elements are evicted randomly from setInventoryKnown, then there should be no problem.

    But there is also the problem of malleability: the day an attacker finds a bug involving transaction malleability that allows nodes to broadcast a modified tx, he can create an infinite number of copies of the same tx with different hashes and DoS the network using a single tx, because all of them are going to be relayed by every peer.

    So from many perspectives I see this patch dangerous.

    The Bitcoin network is a dynamic feedback system that can oscillate and broadcasting is amplification. Have you seen bridges oscillate?

  17. SergioDemianLerner commented at 12:57 AM on July 16, 2014: contributor

    For example, using SIGHASH_NONE the attacker can easily create thousands of variants of a single transaction without even computing signatures. Just 2000 double-spend inputs, and a single changing output. So the attacker consumes X bandwidth and each link of the whole network consumes X bandwidth.

    Can't we raise the bloom filter reset value to something like an average 100K hashes?

  18. SergioDemianLerner commented at 1:06 AM on July 16, 2014: contributor

    What is the limit Kb/sec of the rate limiter?

  19. sipa commented at 3:11 AM on July 16, 2014: member

    Last I checked the code before merge, it was 50 kB per 10 minutes. Seems it was increased to 1 MB per 10 minutes since...

  20. dgenr8 commented at 3:26 AM on July 16, 2014: contributor

    Yes, the RR rate limit is currently 100 thousand bytes/min = 1.6Kb/s. It was increased due to feedback from @sipa and @petertodd. All of the numerical constants are open to better-reasoned values of course. @SergioDemianLerner Was raising the bloom filter size related to your malleability point? The bloom filter contains outputs, not respend txes. When attacker controls the outputs we can just assume he has many ways to construct big respend txes.
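    For concreteness, a rate limiter in this style can be sketched as an exponentially decaying byte counter, similar to the free-transaction limiter used elsewhere in Bitcoin Core; the class name, decay constant, and burst threshold below are illustrative assumptions, not the actual respend-relay values:

```cpp
#include <cassert>
#include <cmath>

// Exponentially decaying byte counter, in the style of Bitcoin Core's
// free-transaction rate limiter. Names and constants are illustrative.
class RespendRateLimiter {
public:
    explicit RespendRateLimiter(double bytesPerMinute) : limit_(bytesPerMinute) {}

    // Returns true if 'bytes' may be relayed at time 'now' (in seconds).
    bool Allow(double now, double bytes) {
        // 10-minute exponential decay of the running count.
        count_ *= std::pow(1.0 - 1.0 / 600.0, now - last_);
        last_ = now;
        // Allow a burst of up to ten minutes' worth of the per-minute budget.
        if (count_ + bytes >= limit_ * 10.0) return false;
        count_ += bytes;
        return true;
    }

private:
    double limit_;       // budget in bytes per minute
    double count_ = 0.0; // decayed running byte count
    double last_ = 0.0;  // time of last update, seconds
};
```

    With this shape, short bursts are absorbed by the decayed counter while the sustained average stays bounded by the configured budget.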

    But you hit on something there with signature hash type. If tx1 (FIRST spend) has any hash type but SIGHASH_ALL, alerts are going to be triggered by third parties just doing what they're allowed to.

    So IsEquivalent needs to get smarter and take signature hash type into account, always ignoring all the parts a third party could change. Or, the easy way out is only alert for SIGHASH_ALL.

    If this is a bug, is it evidence of Tacoma Narrows-style instability? Or can we just call it progress, collaboration, and enough-eyeballs?

    Looking for input on hash type.

  21. SergioDemianLerner commented at 8:10 PM on July 16, 2014: contributor

    1.6Kb/s is very low, so the network will never be in danger from normal transactions/blocks, but it also means that the double-spend alert system will be easy to saturate if an attacker wants to. I have no more arguments, so I leave this to you. It was only my opinion against everyone else's, so you must know better...

  22. dgenr8 commented at 8:43 PM on July 16, 2014: contributor

    @SergioDemianLerner Targeted saturation with the goal of executing a particular double-spend in the dark is a risk. But they are all in the dark today. We would probably see this if unwise merchants accept 0-conf payments for valuable items. I completely disagree that you are isolated -- and you have contributed a ton to this. Do you still NACK it? If so, are there specific changes that would change your mind?

  23. SergioDemianLerner commented at 12:57 PM on July 17, 2014: contributor

    @dgenr8 Yes, I still think it has problems. If we keep the reset constant too low (1000 outputs), then I see that the system could be attacked in this other way:

    1. Fill the memory of a majority of the peers in the whole network bringing Bitcoin to death (not sure that I can, but I think so)
    2. Attack SPV battery-powered nodes by draining the battery or using the whole bandwidth.

    Let me first explain 1 with a targeted attack.

    I have already explained how two transactions can pass through the network by being sent iteratively, but I will explain again in case I was not clear. Suppose I know the Bitcoin address being monitored by a node X in the network. Suppose that an attacker owns 4000 outputs. He creates transactions to spend those 4000 outputs with zero fees so they have the lowest priority. Then the attacker creates two double-spend transactions, each one double-spending 2000 inputs; he sets SIGHASH_NONE for the signature of the first input (but there are other possibilities that don't use SIGHASH_NONE) and the rest are SIGHASH_SINGLE. The first output is the victim's address. The remaining outputs are the attacker's addresses.

    Now the attacker sends Tx1, which moves through the network until it reaches the victim's node X, which accepts it because it has an output for him. Now the attacker sends Tx2, which also arrives at X, and with high probability resets every double-spend bloom filter in the network. Now the attacker changes the amount of the first output (+1 satoshi) and re-sends Tx1, then does the same with Tx2, forever. All the transactions are received by node X, and as far as I can see they are stored in the wallet via g_signals.SyncTransaction(tx, NULL) -> CWallet::AddToWallet(). I thought there was prevention for this, but I don't see it now in the code. So the wallet will become huge, probably making the node core dump or maybe corrupting the wallet.

    The attack can be made in parallel against 4000 nodes by using the 4000 outputs as targets, at the same cost, which is around 400 USD (approx. the fees to create 4000 outputs). The attack can be repeated choosing another 4000 nodes. In the end, the only nodes that will survive are those unattached to any wallet. If 2000 inputs/outputs is too much for a single tx, the same attack can be made by splitting the inputs between multiple txes, which are forwarded in a loop.

    If the target node is an SPV node, the battery will also be drained by full bandwidth usage.

    My Conclusion:

    1. g_signals.SyncTransaction(tx, NULL) should not be called when relaying a double-spend.
    2. MAX_DOUBLESPEND_BLOOM seems to be too low.
    3. It is very difficult to do this patch right :)
  24. dgenr8 commented at 7:47 PM on July 17, 2014: contributor

    @SergioDemianLerner

    Now the attacker sends Tx2, which also arrives to X, and with high probability resets any double-spend bloom filter of the network

    Why did you go back to assuming that all prevouts re-spent by tx1 or tx2 are simultaneously added to the bloom filter? That won't happen until MAX_DOUBLESPEND_BLOOM respend txes are relayed (on average).

    Your idea to dispense with the bloom filter in favor of a boolean respendRelayedForMe associated with mempool prevouts could eliminate fears about the bloom filter reset though.

    A conflict tx needs to be added to the wallet, to persistently document the state of the txes it conflicts with. This approach made alerting on respends in a block work too. But the bloom filter protects the wallet as well as the network. You are right that after each bloom filter reset (or wallet restart), one more conflicting tx can get added to the wallet.
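    The alternative mentioned above can be sketched as follows; the struct and field names are hypothetical, chosen only to illustrate the idea of a per-prevout relay flag that lives with the mempool entry and therefore needs no global reset:

```cpp
#include <cassert>
#include <map>

// Hypothetical sketch of the per-prevout alternative to the bloom filter.
struct Outpoint {
    int id;
    bool operator<(const Outpoint& o) const { return id < o.id; }
};

struct MempoolEntry {
    bool respendRelayedForMe = false; // set once a conflicting tx is relayed
};

struct Mempool {
    std::map<Outpoint, MempoolEntry> spends;

    // Relay at most one respend per prevout. Unlike a bloom filter, the flag
    // has no false positives and is discarded with the mempool entry, so
    // there is no periodic reset an attacker could exploit.
    bool ShouldRelayRespend(const Outpoint& out) {
        auto it = spends.find(out);
        if (it == spends.end() || it->second.respendRelayedForMe) return false;
        it->second.respendRelayedForMe = true;
        return true;
    }
};
```

    The trade-off is extra per-entry state in the mempool instead of one fixed-size filter.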

  25. dgenr8 commented at 8:03 PM on July 17, 2014: contributor

    @SergioDemianLerner Just to note, overall this attack requires the attacker to pay his victim, and the network to accept, but fail to execute, the transaction.

    MAX_DOUBLESPEND_BLOOM increased to 100000.

  26. SergioDemianLerner commented at 8:39 PM on July 17, 2014: contributor

    @dgenr8 Yes, I did erroneously assume again the simultaneous addition of prevouts. But we agree that this is not essential to the attack: if the attack is performed with 4000 different transactions, each having a single prevout and 500 outputs (500 simultaneous attack targets), then the same attack is possible.

    The problem is still g_signals.SyncTransaction(tx, NULL). Have you carefully followed this call when a double-spend is received? I didn't.

    The thing that you should research is what nodes do when they receive multiple double-spends. If they store anything in memory, then an attack is possible. Even if the attack takes a week, it's a remote, anonymous attack. And you have to research this for every other Bitcoin implementation. For example, BitcoinJ will probably store the double-spend within the wallet, and so it can be made to crash.

    Raising MAX_DOUBLESPEND_BLOOM seems good, but you should also analyze the bloom filter size to see how many false positives it will generate when the number of added prevouts reaches MAX_DOUBLESPEND_BLOOM.
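    For reference, the expected false-positive rate of a Bloom filter with m bits, k hash functions, and n inserted elements is approximately (1 - e^(-kn/m))^k, so the requested analysis can be sketched numerically; the parameters used below are illustrative, not the filter's actual dimensions:

```cpp
#include <cassert>
#include <cmath>

// Expected Bloom filter false-positive rate: (1 - e^(-kn/m))^k
// m = bits in the filter, k = hash functions, n = elements inserted.
double FalsePositiveRate(double m, double k, double n) {
    return std::pow(1.0 - std::exp(-k * n / m), k);
}
```

    For example, keeping 100,000 inserted prevouts below a ~1% false-positive rate takes on the order of a million bits (~120 kB) with k around 7.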

    Regarding paying the victim: my understanding is that the attacker is not. Because all of the txes created by the attacker are double-spends, none of them will be included in a block. In the worst case, just one of them will (and anyway the amount could be only a few satoshis).

  27. dgenr8 commented at 11:16 PM on July 17, 2014: contributor

    @SergioDemianLerner Got it, you are concerned about a wallet resource exhaustion attack on double-spend payees. Well, those are dropping right on into the wallet. Aaaand Sergio scores ;)

    As you say, I think the only thing we can do is not allow a respend relay tx into the wallet at all, unless it conflicts with a tx already in the wallet. That latter case is needed for the user alert, and limited by the bloom filter.

    Regarding non-core wallets, your attack is already possible today, but if relay results in more double spends on the network, it will be more of a threat.

  28. Respend relay must examine more inputs
    Restructure double-spend detection to fix three vulnerabilities
    in respend relay.
    
    When a transaction is found to be a relayable respend, stop
    checking its inputs for more respent prevouts.
    
    When a respend is found NOT to be relayable due to previous relay
    in connection with one input, keep searching its inputs.
    
    Do not update the respend bloom filter until a respend is
    actually relayed.
    
    Since all the checks in RelayableRespend have been parceled out to
    other locations (except for the rate limit) rename the function.
    71361ba03e
  29. Add doublespendrelay.py regression test
    Use a 4-node network to test relay scenarios.
    f217f4449b
  30. Increase MAX_DOUBLESPEND_BLOOM to 100000 a579b4e517
  31. dgenr8 commented at 4:46 PM on July 18, 2014: contributor

    Lerner attack addressed in 389f3ee.

  32. Do not add double-spends that pay us to wallet
    Add flag fRespend to SyncTransaction interface.
    
    Do not add double-spends to the local wallet, even if they pay us,
    unless they conflict with an existing wallet transaction.
    
    This is to fix a resource exhaustion attack discovered by Sergio
    Demian Lerner.
    389f3ee455
  33. BitcoinPullTester commented at 8:00 PM on July 18, 2014: none

    Automatic sanity-testing: FAILED BUILD/TEST, see http://jenkins.bluematt.me/pull-tester/p4484_389f3ee45528dc0c88507b5a6abd78c71942eef4/ for binaries and test log.

    This could happen for one of several reasons:

    1. It changes paths in makefile.linux-mingw or otherwise changes build scripts in a way that made them incompatible with the automated testing scripts (please tweak those patches in qa/pull-tester)
    2. It adds/modifies tests which test network rules (thanks for doing that), which conflicts with a patch applied at test time
    3. It does not build on either Linux i386 or Win32 (via MinGW cross compile)
    4. The test suite fails on either Linux i386 or Win32
    5. The block test-cases failed (lookup the first bNN identifier which failed in https://github.com/TheBlueMatt/test-scripts/blob/master/FullBlockTestGenerator.java)

    If you believe this to be in error, please ping BlueMatt on freenode or TheBlueMatt here.

    This test script verifies pulls every time they are updated. It, however, dies sometimes and fails to test properly. If you are waiting on a test, please check timestamps to verify that the test.log is moving at http://jenkins.bluematt.me/pull-tester/current/ Contact BlueMatt on freenode if something looks broken.

  34. dgenr8 commented at 6:35 PM on July 21, 2014: contributor

    Closing, since this now depends on reverted changes. I'll submit a new cleaned-up pull request.

  35. dgenr8 closed this on Jul 21, 2014

  36. dgenr8 deleted the branch on Sep 19, 2018
  37. DrahtBot locked this on Sep 8, 2021

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-04-13 21:15 UTC
