Address mempool information leak and resource wasting attacks. #7093

pull gmaxwell wants to merge 1 commit into bitcoin:master from gmaxwell:mempool_infoleak changing 6 files +47 −23
  1. gmaxwell commented at 9:59 pm on November 24, 2015: contributor

    Currently an attacker can poll mempool frequently to bypass trickling logic and trace a transaction through the network. This also wastes lots of bandwidth by causing the same huge invs to be sent over and over.

    This change makes the mempool message not return results with an arrival time greater than the current time minus 16 seconds, rounded down to a multiple of 16 seconds. This is a 16 to 32 second delay.

    It also makes mempool calls return only responses which have not been INVed before (using the known inv mruset) and limits the mempool command to considering only the top 8192 entries in the mempool.

  2. gmaxwell commented at 10:05 pm on November 24, 2015: contributor

    For some time there has been some party connecting to every node on the network (sometimes multiple times) and fetching the mempool over and over again at high speed. This wastes a ton of nodes’ bandwidth. These fetches are also an information leak, as the frequent polling bypasses the trickle logic (and I believe that is their actual motivation, based on other behavior by the same IP).

    Jgarzik suggested removing the mempool command entirely, but it’s used by some lite clients to display unconfirmed transactions that arrived before they connected. This change is more polite, preserving the use case for lite wallets without making the call an attractive nuisance for third parties to waste a node’s bandwidth with various information gathering attacks.

    We should also look to see if we’ve created other trickle bypassing bugs (e.g. responding to a getdata on a tx we’ve never INVed), and consider limiting the maximum size of the mempool response. Now that we get the result back in feerate order, it would be pretty reasonable to return only the next couple blocks worth. Anyone know a reason we shouldn’t?

  3. gmaxwell added the label P2P on Nov 24, 2015
  4. in src/rpcblockchain.cpp: in f91fe537cb outdated
    217@@ -218,7 +218,7 @@ UniValue mempoolToJSON(bool fVerbose = false)
    218     else
    219     {
    220         vector<uint256> vtxid;
    221-        mempool.queryHashes(vtxid);
    222+        mempool.queryHashes(vtxid, 0, INT64_MAX);
    


    pstratem commented at 0:14 am on November 25, 2015:
    We’ve been using std::numeric_limits<int64_t>::max() elsewhere (also INT64_MAX seems to fail on arm?)
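
    For reference, the form suggested here looks roughly like this (a minimal sketch; the variable name is illustrative, not from the patch):

        // Portable C++ way to express "no upper time limit"; requires <limits> and
        // <cstdint>, and avoids the INT64_MAX macro that reportedly misbehaves on
        // some platforms.
        #include <cstdint>
        #include <limits>

        static const int64_t nNoTimeLimit = std::numeric_limits<int64_t>::max();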
  5. in src/main.cpp: in f91fe537cb outdated
    4632-                vInv.push_back(inv);
    4633-            if (vInv.size() == MAX_INV_SZ) {
    4634-                pfrom->PushMessage("inv", vInv);
    4635-                vInv.clear();
    4636+        //Put the mempool on a 16 to 32 second quantized time delay to suppress the arrival time information leak.
    4637+        int64_t rounded_time = (GetTime() - 16) & 0x7ffffffffffffff0LL;
    


    pstratem commented at 0:33 am on November 25, 2015:

    I can see this confusing someone in the future (I had to stop and think about it).

    Needs a comment like “equivalent to int64_t rounded_time = (GetTime() - 16); rounded_time = rounded_time - (rounded_time % 16) but faster”
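
    A minimal standalone sketch (a hypothetical test program, not part of the patch) of the equivalence between the bitmask and the modulo form, for non-negative times:

        #include <cassert>
        #include <cstdint>
        #include <ctime>

        int main() {
            // time(nullptr) stands in for GetTime(); both are seconds since the epoch.
            int64_t now = time(nullptr);
            // Clearing the low 4 bits rounds down to a multiple of 16, which for
            // non-negative values equals subtracting the remainder mod 16.
            int64_t masked = (now - 16) & 0x7ffffffffffffff0LL;
            int64_t modulo = (now - 16) - ((now - 16) % 16);
            assert(masked == modulo);
            return 0;
        }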

  6. gmaxwell commented at 5:17 am on November 25, 2015: contributor

    Pulltester fails because of known inadvisable stuff it does with the mempool command. I’m not sure what to do about that.

    I updated the patch so that the filter commands would reset the starting time to zero, allowing re-filtering of already filtered stuff. I did this after observing breadwallet making multiple mempool requests (after keys were added). The downside of this is that it now leaves open a DOS vector when bloom filtering is enabled, but there are so many of those that I don’t know if it’s worth worrying about.

  7. laanwj commented at 9:19 am on November 25, 2015: member

    Pulltester fails because of known inadvisable stuff it does with the mempool command. I’m not sure what to do about that.

    Well, the proper fix would be that tests that want to probe the mempool should use the privileged RPC commands instead of the P2P message. Not sure how much work that would be in practice.

  8. in src/txmempool.cpp: in 4d9f311ab6 outdated
    684 
    685     LOCK(cs);
    686     vtxid.reserve(mapTx.size());
    687-    for (indexed_transaction_set::iterator mi = mapTx.begin(); mi != mapTx.end(); ++mi)
    688-        vtxid.push_back(mi->GetTx().GetHash());
    689+    //Doesn't use the time sorted index to avoid an information leak about when a transaction entered the mempool.
    


    laanwj commented at 9:28 am on November 25, 2015:

    I’m not sure I understand the rationale behind not using the index because it would leak information. We’re still querying on mi->GetTime() so ’leaking’ the same information?

    If the order of the result is the problem, we could just sort the result by hash before returning.


    gmaxwell commented at 11:16 am on November 25, 2015:
    Right, the order is the problem. It’s true that it could sort, but that’s another expensive operation, and the result being in fee-sorted order is quite useful… e.g. if the peer wanted to grab just the top of the mempool.

    laanwj commented at 11:29 am on November 25, 2015:
    I’m not sure how a post-sort of the result would stack up against bypassing a linear scan when an index exists anyway. It depends on the input and output sizes. At the least, this compromise is more subtle than the comment makes it look.

    sipa commented at 11:32 am on November 25, 2015:
    We can also use any of the other indexes we have, like the feerate one?

    laanwj commented at 11:33 am on November 25, 2015:
    But the query is by time range - so wouldn’t the time be the only index that helps speed this up?
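
    For context, a time-ordered index would let a range query avoid the linear scan along these lines (a sketch assuming a hypothetical entry_time index tag and minTime/maxTime bounds, not the actual txmempool definitions):

        // With an index ordered by entry time, a half-open range query is two
        // logarithmic lookups instead of a full scan of mapTx.
        typedef indexed_transaction_set::index<entry_time>::type time_index;
        const time_index& byTime = mapTx.get<entry_time>();
        time_index::const_iterator it = byTime.lower_bound(minTime);
        time_index::const_iterator itEnd = byTime.upper_bound(maxTime);
        for (; it != itEnd; ++it)
            vtxid.push_back(it->GetTx().GetHash());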
  9. laanwj commented at 11:33 am on November 25, 2015: member
    Concept ACK
  10. gmaxwell commented at 11:36 am on November 25, 2015: contributor

    So I think I have two mutually exclusive approaches to suggest: one is the current PR code that has a start and quantized end time. The purpose of the start is to limit how much subsequent requests return.

    The alternative is the quantized end time and a max depth, where it returns the top X of the feerate sorted mempool (but only txn which are before the end time). This one might even work with the pull tester (if the delays are set suitably).

    I think the latter may be more resource friendly. The reason the two cannot be efficiently combined is that a tx might not be sent because it’s too deep, but then a block happens and everything moves up… and then it won’t be transmitted because it’s too early. No great harm; the duplicate removal is mostly to avoid using lots of bandwidth, and limiting would accomplish the same.

    Primary downside of the latter is that if you’re interested in some transaction deep in the mempool you won’t learn about it until it’s close to getting mined. Advantage is that it’s simpler behavior that is easier to understand and explain, and also more resource friendly.

  11. gmaxwell commented at 9:26 pm on November 25, 2015: contributor
    A third option would be to maintain a limited-size set of responses sent already, and do the only-top-N-of-the-mempool trick. That would combine both benefits (CPU benefit from considering only the top of the mempool, bandwidth benefit from not being willing to just send the same data multiple times).
  12. gmaxwell commented at 10:01 pm on November 25, 2015: contributor
    Switched to that third option, using setInventoryKnown and only considering the top 8192 entries in the mempool.
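
    Roughly, the resulting “mempool” handler behaves like the sketch below (simplified: locking, the bloom-filter path, and the exact queryHashes signature are glossed over; MAX_MEMPOOL_QUERY is a placeholder for the 8192 cap):

        static const size_t MAX_MEMPOOL_QUERY = 8192; // only consider the top of the mempool

        std::vector<uint256> vtxid;
        mempool.queryHashes(vtxid, MAX_MEMPOOL_QUERY); // hypothetical signature: fill with up to 8192 fee-sorted hashes

        std::vector<CInv> vInv;
        BOOST_FOREACH(const uint256& hash, vtxid) {
            CInv inv(MSG_TX, hash);
            // Skip anything this peer has already been INVed, so repeated
            // "mempool" requests don't resend the same data.
            if (pfrom->setInventoryKnown.count(inv))
                continue;
            vInv.push_back(inv);
            if (vInv.size() == MAX_INV_SZ) {
                pfrom->PushMessage("inv", vInv);
                vInv.clear();
            }
        }
        if (!vInv.empty())
            pfrom->PushMessage("inv", vInv);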
  13. dgenr8 commented at 1:29 am on November 26, 2015: contributor
    TIL the default size of setInventoryKnown is only 1000. It was 10000 before the default send buffer size was reduced.
  14. gmaxwell commented at 2:36 am on November 26, 2015: contributor

    @dgenr8 ouch. I don’t think it makes sense to link it to max_sendbuffer at all. I’ve unlinked it and set it to 50,000; so that we can at least take a whole inv on it.

    (I created another PR that changes it to a rolling bloomfilter; if that goes forward I’ll rebase this on that.)

  15. Suppress the mempool tx arrival time information leak.
    Currently an attacker can poll mempool frequently to bypass
     trickling logic and trace a transaction through the network.
     This also wastes lots of bandwidth by causing the same huge
     invs to be sent over and over.
    
    This change makes mempool not return results with an arrival
     time greater than the current minus 16 rounded down to a
     multiple of 16 seconds. This is a 16 to 32 second delay.
    
    It also makes mempool calls return only responses which have
     not been INVed before (using the known inv mruset) and
     limits the mempool command to considering only the top 8192
     entries in the mempool.
    
    This also introduces the constant MAX_SETINVENTORYKNOWN_SZ
     and sets it to 50,000 (large enough to hold the inventory
     for a maximum sized INV), which is a substantial increase
     from the current default of 1000.
    8441e5a92d
  16. laanwj commented at 9:50 am on November 26, 2015: member

    @dgenr8 ouch. I don’t think it makes sense to link it to max_sendbuffer at all. I’ve unlinked it and set it to 50,000; so that we can at least take a whole inv on it.

    From 1000 to 50,000 is quite an increase. How does that (approximately) affect worst-case memory usage per connected node?

  17. gmaxwell commented at 10:11 am on November 26, 2015: contributor
    @laanwj See #7100 for my preferred solution (which I think adds about 400k per peer, though I’ve not measured yet)– I just didn’t want the 1000 messing up any testing here.
  18. gmaxwell added the label Privacy on Nov 28, 2015
  19. DeftNerd commented at 8:25 pm on November 30, 2015: contributor

    Wouldn’t this pull (designed to make access to mempool information more difficult) damage future fee markets?

    I thought one of the points of RBF was to make it so we could keep small blocks and just pay higher necessary fees to have a transaction included in a block.

    If there are tens of thousands of transactions clamoring to get included in blocks, then those wallets could alter their transaction and rebroadcast it with a higher fee.

    It’s like a few water fountains in the desert that can only support 7 drinkers per second, but tens-of-thousands of people increasing their bids for access to the water before they die (I believe the current transaction time-out is 72 hours?). If they don’t have enough to bid extra, then they didn’t deserve access anyway.

    For a proper fee market, shouldn’t you be able to hear what other people are bidding for access at that moment?

    If this pull goes through, wouldn’t it make it difficult for services to determine the most up-to-date pricing information on what is a fee that is likely to get included in a block?

    You can get information from the confirmed blocks, but sometimes the 10 minute average between blocks results in spans of over an hour between blocks. Without the real-time information, it won’t be possible to adjust fees to try to be on top of the pile of transactions that’ll confirm in the next block.

  20. laanwj commented at 8:46 am on December 4, 2015: member
    Needs rebase, and to pass travis
  21. gmaxwell commented at 9:08 am on December 4, 2015: contributor
    @DeftNerd mempool messages are not suitable for that, and you’ll hear the transactions through the normal transfer mechanism. If anything this pull makes it more suitable, since it’s not sending you megabytes of IDs a dozen blocks deep in the history that you’ve already received from it. The delay from it is only a few seconds, which isn’t that large compared to the operation of the network. @laanwj K. pull tester will be fun; the blocktester misuses the mempool command for operation checking
  22. laanwj commented at 9:17 am on December 4, 2015: member
    Oh the blocktester… bleh I can’t help you with that. @theuni Do you still remember how to change that beast?
  23. laanwj commented at 9:32 am on December 4, 2015: member
    We could also deprecate the blocktester if we feel that its task has been sufficiently surpassed by the new test framework. But I’m not sure of this (in most part because I’ve never had a good grasp on what it actually does, being external software and in Java for that matter).
  24. sipa commented at 7:45 am on December 5, 2015: member
    @TheBlueMatt Would it be possible to reduce the blocktester to only do consensus checks, and not mempool checks?
  25. in src/txmempool.cpp: in 8441e5a92d
    686     vtxid.reserve(mapTx.size());
    687-    for (indexed_transaction_set::iterator mi = mapTx.begin(); mi != mapTx.end(); ++mi)
    688-        vtxid.push_back(mi->GetTx().GetHash());
    689+    size_t count = 0;
    690+    //Doesn't use the time sorted index to avoid an information leak about when a transaction entered the mempool.
    691+    for (indexed_transaction_set::iterator mi = mapTx.begin(); count < maxresults && mi != mapTx.end(); ++mi) {
    


    morcos commented at 4:41 pm on December 11, 2015:

    If your intention is to add the top sorted txs by fee rate then you need to iterate over index 3.

    We should keep in mind the other uses that index takes on (like this one, as well as utxo cache priming and fee estimation), because the intention had been to change that index to sort by ancestor fee rate packages, which isn’t necessarily what’s wanted for the other use cases.
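
    As a simplified illustration of what “iterate over index 3” means for a boost::multi_index container like mapTx (the index position is taken from this comment; maxresults and the exact typedefs are assumptions):

        // Walk the container through its fee-rate-sorted view (nth index 3),
        // stopping after maxresults entries.
        const indexed_transaction_set::nth_index<3>::type& byFee = mapTx.get<3>();
        size_t count = 0;
        for (indexed_transaction_set::nth_index<3>::type::const_iterator mi = byFee.begin();
             count < maxresults && mi != byFee.end(); ++mi, ++count)
            vtxid.push_back(mi->GetTx().GetHash());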

  26. laanwj commented at 12:08 pm on April 11, 2016: member
    To be clear: this is blocked by #4545
  27. sipa commented at 12:19 pm on April 11, 2016: member
    Related (but not a replacement): #7840
  28. gmaxwell commented at 7:58 am on May 20, 2016: contributor
    #7840 moved forward enough that I’ll close this for now.
  29. gmaxwell closed this on May 20, 2016

  30. DrahtBot locked this on Sep 8, 2021
