Erlay: bandwidth-efficient transaction relay protocol #18261

pull naumenkogs wants to merge 28 commits into bitcoin:master from naumenkogs:erlay changing 83 files +8225 −59
  1. naumenkogs commented at 8:01 pm on March 4, 2020: member

    This is an implementation of Erlay, using the primitives specified in BIP-330 (see the updated spec here). Please refer to those two documents to understand the design. My talk on the topic is here.

    Erlay uses both flooding (announcing using INV messages to all peers) and reconciliation to announce transactions. Flooding is expensive, so Erlay seeks to use it sparingly and in strategic locations - only well-connected publicly reachable nodes flood transactions to other publicly reachable nodes via outbound connections. Since every unreachable node is directly connected to several reachable nodes, this policy ensures that a transaction is quickly propagated to be within one hop from most of the nodes in the network.

    All transactions not propagated through flooding are propagated through efficient set reconciliation. To do this, every node keeps a reconciliation set for each peer, in which it places transactions that would otherwise have been announced to that peer via INV messages. Every 2 seconds, every node chooses a peer from its outbound connections, in a predetermined order, to reconcile with, resulting in both sides learning the transactions known to the other side. After every reconciliation round, the corresponding reconciliation set is cleared.
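
    Roughly, the per-peer bookkeeping looks like this (a simplified sketch with made-up names and containers, not the actual code in this PR):

        // Simplified illustration of the reconciliation-set bookkeeping described above.
        #include <cstdint>
        #include <deque>
        #include <map>
        #include <set>
        #include <uint256.h>

        std::map<int64_t, std::set<uint256>> g_recon_sets; // per-peer reconciliation sets (wtxids)
        std::deque<int64_t> g_recon_queue;                  // outbound peers, reconciled in a fixed order

        // Called where an INV would otherwise be queued for a reconciling peer.
        void AddToReconSet(int64_t peer_id, const uint256& wtxid)
        {
            g_recon_sets[peer_id].insert(wtxid);
        }

        // Called every ~2 seconds: pick the next outbound peer, reconcile, then clear its set.
        void ReconcileNextPeer()
        {
            if (g_recon_queue.empty()) return;
            const int64_t peer_id = g_recon_queue.front();
            g_recon_queue.pop_front();
            g_recon_queue.push_back(peer_id);
            // ... exchange sketches with peer_id and announce the symmetric difference ...
            g_recon_sets[peer_id].clear();
        }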

    I think both the paper and the BIP motivate the changes, but I’ll mention them briefly once again here:

    • save 40% of the bandwidth consumed by a node
    • increase network connectivity for almost no bandwidth or latency cost
    • improve privacy as a side-effect

    Obviously looking for review; let’s try to start with high-level concerns and keep nits for later.

    P.S. Please don’t be scared of 8,000 LOC added. 7,000 of them are minisketch, added as a subtree.

    P.P.S. My experiments running this code live (slightly outdated), with a script to replicate them: here1 and here2.

  2. luke-jr commented at 8:04 pm on March 4, 2020: member
    How does it react to diverse network policies?
  3. Empact commented at 8:13 pm on March 4, 2020: member
    Concept ACK
  4. DrahtBot added the label Needs rebase on Mar 4, 2020
  5. in src/net_processing.cpp:3440 in b93eb384cd outdated
    3437+
    3438+        uint64_t local_salt = connman->m_local_recon_salt;
    3439+
    3440+        std::string salt1, salt2;
    3441+        if (remote_salt < local_salt) {
    3442+            salt1 = std::to_string(remote_salt);
    


    practicalswift commented at 8:49 pm on March 4, 2020:

    I’m afraid std::to_string is locale dependent.


    For those interested in permanently killing the locale dependency bug class, consider reviewing:

    • #18124 – init: Clarify C and C++ locale assumptions. Add locale sanity checks to verify assumptions at run-time
    • #18126 – tests: Add fuzzing harness testing the locale independence of the strencodings.h functions
    • #18147 – qt: Kill the locale dependency bug class by not allowing Qt to mess with LC_NUMERIC
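
    For illustration, a locale-independent way to stringify a 64-bit salt could be something like this (a hypothetical helper, just to illustrate; the salts could equally be combined without going through strings at all):

        #include <algorithm>
        #include <cstdint>
        #include <string>

        // Hypothetical locale-independent replacement for std::to_string(uint64_t):
        // builds the decimal digits by hand, so no locale facets are involved.
        std::string ToDecimalString(uint64_t value)
        {
            if (value == 0) return "0";
            std::string s;
            while (value > 0) {
                s += static_cast<char>('0' + value % 10);
                value /= 10;
            }
            std::reverse(s.begin(), s.end());
            return s;
        }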
  6. practicalswift commented at 8:57 pm on March 4, 2020: contributor

    Concept ACK

    Thanks for doing this! Excellent work!

  7. fanquake added the label P2P on Mar 4, 2020
  8. naumenkogs force-pushed on Mar 5, 2020
  9. naumenkogs force-pushed on Mar 6, 2020
  10. naumenkogs commented at 7:23 pm on March 7, 2020: member

    @luke-jr

    When it comes to network policies, I’m using the same code originally used by regular gossip (“Determine transactions to relay” in net_processing.cpp). So nothing should be lost or sent wastefully as a result of policy discrepancies.

  11. gmaxwell commented at 10:57 pm on March 8, 2020: contributor

    @luke-jr I’m understanding your question as being inspired by earlier ‘mempool sync’ ideas that would bleed bandwidth if there was a long-lasting mempool discrepancy.

    Erlay isn’t a mempool sync. It uses a way of communicating lists of things you want to relay that only takes bandwidth proportional to the difference rather than the total size. So there is no ongoing bandwidth usage due to differences in mempool content. Bandwidth used is roughly A*ntx_relayed + B*peers*(difference_in_tx_relayed) + C*peers, for some constants A, B, C.

    If a peer has a radically different relay policy than you, it works fine and still keeps bandwidth below what usage would be without Erlay, even though the Erlay savings come largely from eliminating data that both sides would otherwise send.

    Does that answer your question?

  12. practicalswift commented at 7:57 pm on March 9, 2020: contributor

    @naumenkogs When trying out this PR I ran into two small testing issues:

    • The suffix of the functional test p2p_erlay is .p2p (p2p_erlay.p2p) instead of the expected .py (p2p_erlay.py) :)
    • It seems like make check runs the minisketch binaries test-exhaust and test-exhaust-verify. The running times of these are quite substantial - is there some way to do a quick sanity check as part of make check instead of exhaustive testing? :)
  13. sipa commented at 8:48 pm on March 9, 2020: member
    @practicalswift The unit test minisketch binaries actually run forever. I need to fix that.
  14. practicalswift commented at 10:10 pm on March 9, 2020: contributor

    @naumenkogs

    I did some robustness testing of this code by pulling in PRs #17989 (ProcessMessage(…) fuzzer) and #18288 (MemorySanitizer) and found a use of uninitialized memory (UUM) that is remotely triggerable.

    You can reproduce the issue by pulling in the commits from those PRs and simply running:

    $ src/test/fuzz/process_message

    The issue will be hit within a few seconds: libFuzzer is amazing :)

    Notice also how libFuzzer will automatically find the newly added message names (wtxidrelay, sendrecon, reqrecon, sketch, reqbisec and reconcildiff) and probe using them all! The fuzzer harness does not need to be taught about their existence :)

    Perhaps this UUM is the reason for the intermittent test failure you’re seeing?

    I encourage everybody to review (or at least Concept ACK :)) #17989 (ProcessMessage(…) fuzzer) and #18288 (MemorySanitizer): having them merged would be great for robustness/security.

  15. naumenkogs force-pushed on Mar 10, 2020
  16. naumenkogs force-pushed on Mar 11, 2020
  17. practicalswift commented at 4:45 pm on March 11, 2020: contributor
    Needs rebase :)
  18. naumenkogs force-pushed on Mar 11, 2020
  19. naumenkogs commented at 7:26 pm on March 11, 2020: member
    I made some recent changes to make sure it can be plugged into a real network. Please help with testing this by running a couple of inter-connected Erlay nodes and observing bandwidth (and ideally tx relay latency). @practicalswift I was planning to rebase once #18044 is merged.
  20. naumenkogs force-pushed on Mar 14, 2020
  21. naumenkogs commented at 5:41 pm on March 15, 2020: member

    I ran 2 non-reachable nodes for around 24 hours: one is a regular wtxid-relay node connected to legacy nodes, the other is an Erlay node connected to reconciliation-supporting nodes, both on the same host.

    Bandwidth breakdown (in megabytes):

    | Message type | Legacy (sent) | Erlay (sent) | Legacy (received) | Erlay (received) |
    | --- | --- | --- | --- | --- |
    | inv | 38 | 0.4 | 22 | 5.3 |
    | getdata | 6 | 5.7 | 1 | 0 |
    | sketch | - | - | - | 1.2 |
    | reqrecon, reqbisec, reconcildiff | - | 0.7 | - | - |
    | tx, blocktxn | 3 | 0.3 | 75 | 75 |
    | total (incl. other) | 48 | 7.1 | 103 | 84 |

    Notice the overall ~40% bandwidth saving (151 MB total for legacy vs. 91.1 MB total for Erlay). Please help by running similar experiments and sharing bandwidth results :) Here’s the script I hacked together for bandwidth analysis (run nodes with debug=1). Please sanitize your results before publishing: sometimes there’s noisy bandwidth, like extra blocks, for reasons I’m not aware of.

  22. naumenkogs commented at 2:12 am on March 18, 2020: member

    More experiments: now I ran the same 2 nodes for 24 hours, but connected to 16 instead of 8 nodes (yes, all 16 peers of the Erlay node support reconciliation).

    The legacy wtxid-relay node spent 150 megabytes on announcements, while the Erlay node spent 24 megabytes. Since these 2 days might have had different activity, it makes sense to compare the growth: legacy grew 2.23x (I guess due to more INVs in total), Erlay grew 1.8x. So, as expected, not only are the savings decent compared to legacy, announcement bandwidth also grows more slowly with connectivity.

    Based on the following facts, I also guess that there’s no point in tweaking Erlay further for better performance:

    • only 0.0016 of reconciliations failed (fully or partially), meaning we don’t underestimate much
    • only 15% of the announcement bandwidth is sketches, meaning we don’t overestimate much
    • only 15% of the announcement bandwidth is outbound invs (which are much more likely to still be a little wasteful, because those peers are better connected than us and likely don’t need those announcements)

    Finally, I measured the delay in receiving transactions. I take the earliest time a transaction was received anywhere and measure latency from that point, comparing:

    • an average receiving time at all legacy nodes: 0.92s
    • at the Erlay node: 1.47s

    I think this is expected and fine, as we discuss in the paper, but let me know if you think differently.

  23. naumenkogs force-pushed on Mar 19, 2020
  24. DrahtBot removed the label Needs rebase on Mar 20, 2020
  25. DrahtBot commented at 1:01 am on March 20, 2020: member

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #21148 (Split orphan handling from net_processing into txorphanage by ajtowns)
    • #21015 (Make all of net_processing (and some of net) use std::chrono types by dhruv)
    • #20892 (tests: Run both descriptor and legacy tests within a single test invocation by achow101)
    • #20758 (net-processing refactoring – lose globals, move implementation details from .h to .cpp by ajtowns)
    • #20726 (p2p: Add DISABLETX message for negotiating block-relay-only connections by sdaftuar)
    • #20721 (Net: Move ping data to net_processing by jnewbery)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  26. DrahtBot added the label Needs rebase on Mar 30, 2020
  27. mzumsande commented at 8:32 pm on April 19, 2020: member

    Several existing functional tests are failing for me while synchronizing mempools via Erlay. I looked at the log of wallet_backup.py, which has a rather simple setup: some nodes create txs and then all nodes are expected to synchronize:

    In my run, one node would behave strangely during reconciliations, sending GETDATA for a transaction with a wtxid 000000003f946000000000003f947ae147ae147b000000000000000100000000 even though its peer had created a tx with a different (and actually random) wtxid. As a result, NOTFOUND is sent and the synchronization never happens. I haven’t been able to dig in deep enough yet to understand the cause, but there still seems to be a bug somewhere.

  28. naumenkogs commented at 9:42 pm on April 19, 2020: member

    @mzumsande thank you for taking a look! I’ll do my best to find the cause.

    Some external eyes looking at the code would definitely help to both confirm the approach and troubleshoot issues like this :)

  29. in src/net.cpp:760 in 1b695eabc2 outdated
    749+    for (size_t i = 0; i < remote_missing_short_ids.size(); ++i) {
    750+        auto local_tx = m_recon_state->local_short_id_mapping.find(remote_missing_short_ids[i]);
    751+        if (local_tx == m_recon_state->local_short_id_mapping.end()) {
    752+            LogPrint(BCLog::NET, "Unknown transaction requested by short id=%d by peer=%d\n", remote_missing_short_ids[i], GetId());
    753+        }
    754+        remote_missing.push_back(local_tx->second);
    


    mzumsande commented at 4:19 pm on April 21, 2020:
    I think that this is UB (if no wtxid for a shortid is found, there is a LogPrint, but the item is still dereferenced). While this should explain the strange wtxids with lots of 0s, it doesn’t explain why no mapping is found to begin with.

    naumenkogs commented at 5:17 pm on April 21, 2020:
    I found an issue, will commit the fix later today! Basically, sometimes reconciliation difference finding gives a false positive: it looks like a success, but some difference elements are missing. This is just a natural property of the math in minisketch. I’ll submit a workaround.

    naumenkogs commented at 6:13 pm on April 21, 2020:
    Still not sure why it would be UB here? It just accesses the existing array element.

    sipa commented at 6:16 pm on April 21, 2020:
    You can’t dereference local_tx when it is .end().

    mzumsande commented at 6:23 pm on April 21, 2020:
    I may be off, but if local_tx == m_recon_state->local_short_id_mapping.end(), local_tx->second is still accessed after the LogPrint - isn’t that illegal?

    naumenkogs commented at 6:46 pm on April 21, 2020:
    Oh, i see you are definitely right, it’s a bug!
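
    The fix could be as simple as this sketch, reusing the names from the hunk above (skip unknown short ids instead of dereferencing the end iterator):

        for (size_t i = 0; i < remote_missing_short_ids.size(); ++i) {
            auto local_tx = m_recon_state->local_short_id_mapping.find(remote_missing_short_ids[i]);
            if (local_tx == m_recon_state->local_short_id_mapping.end()) {
                LogPrint(BCLog::NET, "Unknown transaction requested by short id=%d by peer=%d\n", remote_missing_short_ids[i], GetId());
                // Do not dereference the end iterator; just skip this short id.
                continue;
            }
            remote_missing.push_back(local_tx->second);
        }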
  30. naumenkogs force-pushed on Apr 21, 2020
  31. naumenkogs force-pushed on Apr 21, 2020
  32. DrahtBot removed the label Needs rebase on Apr 21, 2020
  33. naumenkogs force-pushed on Apr 21, 2020
  34. naumenkogs commented at 8:31 pm on April 21, 2020: member

    Alright, all the tests should be passing now, also rebased.

    There’s one last test that fails consistently: p2p_segwit.py. I’ll take a look at that, and upgrade the code for the recent minisketch (with the reconciliation capacity checksum) soon.

    One good idea for the future: test that legacy nodes talk OK to Erlay nodes (I checked it only manually).

  35. sipa commented at 8:40 pm on April 21, 2020: member
  36. in src/net_processing.cpp:3492 in 0a262c092f outdated
    3446@@ -3417,6 +3447,274 @@ bool ProcessMessage(CNode* pfrom, const std::string& msg_type, CDataStream& vRec
    3447         return true;
    3448     }
    3449 
    3450+    std::chrono::microseconds current_time = GetTime<std::chrono::microseconds>();
    3451+
    3452+    // Record an (expected) reconciliation request with parameters to respond when time comes.
    3453+    // All initial reconciliation responses will be done at the same time to prevent tx-related privacy leaks.
    3454+    if (strCommand == NetMsgType::REQRECON) {
    3455+        if (!pfrom->m_recon_state->sender) return true;
    


    mzumsande commented at 0:39 am on April 22, 2020:

    One more potential issue: If I understand it correctly, m_recon_state is null if our peer does not signal Erlay support. So we should check if m_recon_state is non-null before accessing its members here.

    Otherwise an attacker could not signal Erlay but then send Erlay-specific messages to crash our node. Same applies for the other new messages NetMsgType::SKETCH etc. below.


    naumenkogs commented at 0:50 am on April 22, 2020:
    Definitely an issue, will address! I had asserts all over, and then was replacing them with returns, but this one requires 2 checks for m_recon_state and for sender :)
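
    Something like this sketch (exact placement may differ; the same guard applies to the other new message types):

        if (strCommand == NetMsgType::REQRECON) {
            // Ignore Erlay-specific messages from peers that never negotiated reconciliation:
            // m_recon_state is only created after a valid SENDRECON exchange.
            if (pfrom->m_recon_state == nullptr) return true;
            if (!pfrom->m_recon_state->sender) return true;
            // ... handle the reconciliation request ...
        }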
  37. DrahtBot added the label Needs rebase on Apr 25, 2020
  38. naumenkogs force-pushed on Apr 27, 2020
  39. naumenkogs force-pushed on Apr 27, 2020
  40. naumenkogs force-pushed on Apr 27, 2020
  41. naumenkogs force-pushed on Apr 27, 2020
  42. DrahtBot removed the label Needs rebase on Apr 27, 2020
  43. naumenkogs force-pushed on Apr 27, 2020
  44. naumenkogs force-pushed on Apr 27, 2020
  45. naumenkogs force-pushed on Apr 28, 2020
  46. DrahtBot added the label Needs rebase on Apr 29, 2020
  47. naumenkogs commented at 2:15 pm on May 7, 2020: member

    This should be in a reviewable state now: I split the commits further, fixed all the tests, made it more stable, addressed feedback by @mzumsande, and updated with new helpers from minisketch.

    Travis failed last time because of a little issue with the minisketch release build, but it should work just fine for local testing and experiments! Will rebase again once #18044 is rebased, to not create much discrepancy there.

  48. naumenkogs referenced this in commit 311c085cab on Jun 6, 2020
  49. in src/protocol.h:410 in 5f81325bf1 outdated
    407 {
    408     UNDEFINED = 0,
    409     MSG_TX = 1,
    410     MSG_BLOCK = 2,
    411-    // The following can only occur in getdata. Invs always use TX or BLOCK.
    412+    MSG_WTX = 5, // Defined in BIP XXX
    


    ysangkok commented at 7:20 pm on June 6, 2020:
    this could say BIP 330
  50. in src/protocol.h:240 in 5f81325bf1 outdated
    233@@ -234,6 +234,46 @@ extern const char *GETBLOCKTXN;
    234  * @since protocol version 70014 as described by BIP 152
    235  */
    236 extern const char *BLOCKTXN;
    237+/**
    238+ * Indicates that a node prefers to relay transactions via wtxid, rather than
    239+ * txid.
    240+ * @since protocol version 70016 as described by BIP XXX.
    


    ysangkok commented at 7:20 pm on June 6, 2020:
    also, could say BIP 330
  51. in src/minisketch/src/test-exhaust.cpp:132 in 5f81325bf1 outdated
    125+        int num_roots = minisketch_decode(state, count + 1, roots.data());
    126+        CHECK(overfill || num_roots >= 0);
    127+        CHECK(num_roots < 1 || minisketch_decode(state, num_roots - 1, roots.data()) == -1); // Decoding with a too-low maximum should fail.
    128+        if (!overfill) {
    129+            std::sort(roots.begin(), roots.begin() + num_roots);
    130+//            fprintf(stderr, "Solut: ");
    


    ysangkok commented at 7:23 pm on June 6, 2020:
    wouldn’t it be better to have this guarded by an ifdef?

    naumenkogs commented at 8:14 pm on June 6, 2020:
    I think https://github.com/sipa/minisketch master branch is a better place for these comments :)
  52. naumenkogs force-pushed on Jul 24, 2020
  53. naumenkogs force-pushed on Jul 24, 2020
  54. naumenkogs force-pushed on Jul 24, 2020
  55. naumenkogs force-pushed on Jul 24, 2020
  56. naumenkogs force-pushed on Jul 24, 2020
  57. naumenkogs force-pushed on Jul 24, 2020
  58. naumenkogs force-pushed on Jul 24, 2020
  59. DrahtBot removed the label Needs rebase on Jul 26, 2020
  60. naumenkogs commented at 7:35 am on July 28, 2020: member

    Rebased on #18044, but we will probably get #19184 in before Erlay, so please review that first. Although this PR is in a reviewable state right now!

    This PR currently doesn’t pass Travis tests because of the minisketch build issue after the recent patch. It should work locally, and that shouldn’t stop people from reviewing this work :)

  61. DrahtBot added the label Needs rebase on Jul 30, 2020
  62. naumenkogs force-pushed on Aug 27, 2020
  63. naumenkogs force-pushed on Aug 27, 2020
  64. DrahtBot removed the label Needs rebase on Aug 27, 2020
  65. DrahtBot added the label Needs rebase on Sep 2, 2020
  66. practicalswift commented at 9:08 am on October 2, 2020: contributor

    It seems like the tests are failing and that a rebase is needed.

    I’m super excited about Erlay! :) What is the plan ahead?

  67. naumenkogs commented at 11:26 am on October 2, 2020: member
    @practicalswift we kinda agreed that #19988 is a prerequisite, I hope Erlay becomes a priority right after :)
  68. naumenkogs marked this as a draft on Oct 5, 2020
  69. naumenkogs force-pushed on Nov 16, 2020
  70. naumenkogs force-pushed on Nov 16, 2020
  71. naumenkogs force-pushed on Nov 16, 2020
  72. naumenkogs force-pushed on Nov 16, 2020
  73. naumenkogs force-pushed on Nov 16, 2020
  74. naumenkogs force-pushed on Nov 17, 2020
  75. naumenkogs force-pushed on Nov 17, 2020
  76. naumenkogs marked this as ready for review on Nov 17, 2020
  77. naumenkogs commented at 1:34 pm on November 17, 2020: member

    Rebased, added a bunch of comments and restructured commits for better review.

    Marking it as reviewable now, although my fight with Travis builds is not over yet :)

  78. DrahtBot removed the label Needs rebase on Nov 17, 2020
  79. naumenkogs force-pushed on Nov 18, 2020
  80. naumenkogs force-pushed on Nov 19, 2020
  81. DrahtBot added the label Needs rebase on Nov 19, 2020
  82. naumenkogs commented at 5:49 pm on November 19, 2020: member

    @sipa and I decided to switch to “extension sketches”: instead of requesting a bisection, someone who failed to decode the difference from the initial sketch may ask for an extension of it. The responder then only has to send syndromes of higher order.

    The main motivation is that bisection seems to add a lot of complexity (see b5c92a41e4cc0599504cf838d20212f1a403e573). Extensions have all the same properties, but they’re more CPU-costly. We think it’s fine to ignore this disadvantage for sketches of small capacity (which is what we expect to have here).

    So, for now, please review up to a30e861c9cada8fbc400d07255516839480d9e9d (but also feel free to review further if you’re curious).

    I will replace bisection with extensions and update the BIP soon.

  83. naumenkogs force-pushed on Nov 19, 2020
  84. DrahtBot removed the label Needs rebase on Nov 19, 2020
  85. DrahtBot added the label Needs rebase on Nov 20, 2020
  86. naumenkogs commented at 9:39 pm on November 21, 2020: member
    Okay, now the BIP PR is up-to-date with this code, and this stuff is ready for full review again.
  87. naumenkogs force-pushed on Nov 21, 2020
  88. DrahtBot removed the label Needs rebase on Nov 21, 2020
  89. DrahtBot added the label Needs rebase on Nov 25, 2020
  90. naumenkogs force-pushed on Nov 25, 2020
  91. DrahtBot removed the label Needs rebase on Nov 25, 2020
  92. naumenkogs force-pushed on Nov 30, 2020
  93. ariard commented at 1:41 pm on December 1, 2020: member

    Starting the review by digging into the BIP :)

    A few remarks:

    1. “improves privacy as a side-effect”: it would be great to explain and document this claim, maybe add a section in Rationale?
    2. “truncated transaction IDs”: AFAIU, Erlay introduces new shortened transaction identifiers for regular tx-relay, so maybe it would be better to call them “truncated WTXIDs”? If BIP 330 implies BIP 339, it could be better to remove references to transaction IDs, at least where it’s ambiguous.
    3. You mentioned switching to extension sketches and that the updated BIP was ready for review; I think you need to update the “intended protocol flow” diagram, remove reqbisec, and update the Rationale?
    4. You might move the “Local State” section before the messages; it could be easier to explain the q-coefficient before using it in “New messages”.
  94. naumenkogs force-pushed on Dec 1, 2020
  95. DrahtBot added the label Needs rebase on Dec 2, 2020
  96. naumenkogs force-pushed on Dec 4, 2020
  97. DrahtBot removed the label Needs rebase on Dec 4, 2020
  98. DrahtBot added the label Needs rebase on Dec 7, 2020
  99. in src/net_processing.cpp:2379 in a6f979194a outdated
    2374+            bool be_recon_requestor, be_recon_responder;
    2375+            // Currently reconciliation requests flow only in one direction inbound->outbound.
    2376+            if (pfrom.IsFullOutboundConn()) {
    2377+                be_recon_requestor = true;
    2378+                be_recon_responder = false;
    2379+            } else {
    


    sdaftuar commented at 8:12 pm on December 10, 2020:
    We should probably not send a sendrecon message to block-relay only peers? Here we should just check to see if it’s an inbound peer, right?
  100. in src/protocol.h:268 in a6f979194a outdated
    259@@ -260,6 +260,14 @@ extern const char* CFCHECKPT;
    260  * @since protocol version 70016 as described by BIP 339.
    261  */
    262 extern const char* WTXIDRELAY;
    263+/**
    264+ * Contains 2 1-byte bools, a 4-byte version number and an 8-byte salt.
    265+ * Indicates that a node is willing to participate in transaction reconciliation,
    266+ * either as a sender or a receiver.
    267+ * The salt is used to compute short txids needed for efficient reconciliation.
    268+ * @since protocol version 80001 as described by BIP 330
    


    sdaftuar commented at 8:15 pm on December 10, 2020:
    I believe BIP 330 doesn’t reference protocol version 80001, and the code in this commit seems to send it as long as the peer has version 70016 or higher.
  101. in src/net_processing.cpp:162 in 2ac6f49703 outdated
    145@@ -146,6 +146,10 @@ static constexpr uint32_t MAX_GETCFILTERS_SIZE = 1000;
    146 static constexpr uint32_t MAX_GETCFHEADERS_SIZE = 2000;
    147 /** the maximum percentage of addresses from our addrman to return in response to a getaddr message. */
    148 static constexpr size_t MAX_PCT_ADDR_TO_SEND = 23;
    149+/** Static component of the salt used to compute short txids for transaction reconciliation. */
    150+static const std::string RECON_STATIC_SALT = "Tx Relay Salting";
    151+/** Default coefficient used to estimate set difference for tx reconciliation. */
    152+static constexpr double DEFAULT_RECON_Q = 0.02;
    


    sdaftuar commented at 8:17 pm on December 10, 2020:
    Just a note, I think the BIP draft refers to a default value of q=0.1. Perhaps if we end up leaving our default at 0.02, then we could update the BIP to have the same default value suggestion.
  102. in src/net_processing.cpp:2620 in 2ac6f49703 outdated
    2616@@ -2559,6 +2617,62 @@ void PeerManager::ProcessMessage(CNode& pfrom, const std::string& msg_type, CDat
    2617         return;
    2618     }
    2619 
    2620+    // Received from an inbound peer planning to reconcilie transactions with us, or
    


    sdaftuar commented at 8:19 pm on December 10, 2020:
    spelling nit: should say “reconcile”
  103. in src/net_processing.cpp:2944 in 2ac6f49703 outdated
    2622+    // If received from outgoing, adds the peer to the reconciliation queue.
    2623+    // Feature negotiation of tx reconciliation should happen between VERSION and
    2624+    // VERACK, to avoid relay problems from switching after a connection is up.
    2625+    if (msg_type == NetMsgType::SENDRECON) {
    2626+        if (!pfrom.m_tx_relay) return;
    2627+        if (peer->m_recon_state != nullptr) return; // Do not support reconciliation salt/version updates.
    


    sdaftuar commented at 8:21 pm on December 10, 2020:
    Should we only consider negotiation to have succeeded if we’ve already received a wtxidrelay message from this peer? If so, I think we need to specify the order in which wtxidrelay and sendrecon get sent during handshaking in the BIP.
  104. in src/net_processing.cpp:2987 in 2ac6f49703 outdated
    2633+        if (recon_version != 1) return;
    2634+
    2635+         // Do not flood through inbound connections which support reconciliation to save bandwidth.
    2636+         bool flood_to = false;
    2637+         if (pfrom.IsInboundConn()) {
    2638+             if (!recon_sender) return;
    


    sdaftuar commented at 8:25 pm on December 10, 2020:

    A comment here explaining this setup for an inbound peer that sends us a SENDRECON with sender=false would be helpful. Here, it seems that we don’t set up any reconciliation to occur with an inbound peer that doesn’t want to reconcile with us, because we only send reconciliations to our outbound peers, and require that inbounds who want to reconcile be the requestor.

    Perhaps a comment that reminds the reader that if we give up on setting up reconciliation with a peer, we default to flooding to that peer?

  105. in src/net_processing.cpp:2990 in 2ac6f49703 outdated
    2666+        peer->m_recon_state->m_salt = full_salt;
    2667+
    2668+        // Reconcile with all outbound peers supporting reconciliation (even if we flood to them),
    2669+        // to not miss transactions they have for us but won't flood.
    2670+        if (peer->m_recon_state->m_responder) {
    2671+            m_recon_queue.push_back(&pfrom);
    


    sdaftuar commented at 8:40 pm on December 10, 2020:
    I think that if an inbound peer advertised support for both requesting sketches and responding to sketch requests, we would then add them to our recon queue, but the comment here says that we are only doing this for outbound peers. Can you clarify the correct behavior in that situation?

    naumenkogs commented at 5:19 pm on December 11, 2020:
    Probably gonna be restrictive for now.

    naumenkogs commented at 5:20 pm on December 11, 2020:
    Updating the code and the BIP
  106. in src/net_processing.cpp:836 in 5db8069b56 outdated
    832@@ -827,6 +833,30 @@ static void FindNextBlocksToDownload(NodeId nodeid, unsigned int count, std::vec
    833 
    834 } // namespace
    835 
    836+size_t PeerManager::GetTxRelayOutboundCountByRelayType(bool flooding) const
    


    sdaftuar commented at 8:51 pm on December 10, 2020:
    If I read correctly, even in the final version of this PR this function is only invoked with flooding=true. Can we simplify this function a bit to reflect that?
  107. in src/net_processing.cpp:840 in 5db8069b56 outdated
    832@@ -827,6 +833,30 @@ static void FindNextBlocksToDownload(NodeId nodeid, unsigned int count, std::vec
    833 
    834 } // namespace
    835 
    836+size_t PeerManager::GetTxRelayOutboundCountByRelayType(bool flooding) const
    837+{
    838+    size_t result = 0;
    839+    m_connman.ForEachNode([flooding, &result](CNode* pnode) {
    840+        if (!pnode->m_tx_relay)
    


    sdaftuar commented at 8:52 pm on December 10, 2020:
    Style comment: I’ve noticed in a few places that you have one-line if statements; our style guide requires that if the body of the if isn’t on the same line, then we must use curly braces. So that’s something to fix up throughout the new code.

    naumenkogs commented at 5:10 pm on December 11, 2020:
    Fixed this in the first batch of commits, will also fix for the rest.
  108. in src/net_processing.cpp:843 in 5db8069b56 outdated
    838+    size_t result = 0;
    839+    m_connman.ForEachNode([flooding, &result](CNode* pnode) {
    840+        if (!pnode->m_tx_relay)
    841+            return;
    842+        if (!pnode->IsFullOutboundConn() && !pnode->IsManualConn())
    843+            return;
    


    sdaftuar commented at 8:55 pm on December 10, 2020:

    I haven’t quite thought through exactly how flooding works in Erlay, but I believe this logic is written so that if we configure a node to have 8 manual outbound peers in addition to the 8 full outbound relay peers, and the 8 manual peers are all assigned to be the flood peers, and then they all go offline, we would be left with 0 flood peers? Is it a problem if we are using fewer than our 8 flood peers?

    If it is, perhaps we should consider enabling flooding on a connection after an existing flood peer disconnects?


    naumenkogs commented at 8:29 am on December 11, 2020:

    Haha, I was planning to bring this up later in the review cycle. I didn’t expect someone to find it that fast :) Yeah, what you’re describing is possible. Perhaps we should have a loop checking that we have enough (8) outbound flood peers, and if not, switch some existing recon peers to flooding.

    Will add a commit for this.

  109. in src/net_processing.cpp:2378 in 9347a73f01 outdated
    2369@@ -2370,6 +2370,28 @@ static void ProcessGetCFCheckPt(CNode& peer, CDataStream& vRecv, const CChainPar
    2370     connman.PushMessage(&peer, std::move(msg));
    2371 }
    2372 
    2373+/**
    2374+ * Announce transactions a peer is missing after reconciliation is done.
    2375+ * No need to add transactions to peer's filter or do checks
    2376+ * because it was already done when adding to the reconciliation set.
    2377+ */
    2378+void static AnnounceTxs(std::vector<uint256> remote_missing_wtxids, CNode* pto, CNetMsgMaker msgMaker, CConnman* connman)
    


    sdaftuar commented at 9:00 pm on December 10, 2020:
    I assume you could pass this vector by (const) reference? (And maybe all the other stuff can be passed by reference too…).
  110. in src/net_processing.cpp:2694 in 9347a73f01 outdated
    2377+ */
    2378+void static AnnounceTxs(std::vector<uint256> remote_missing_wtxids, CNode* pto, CNetMsgMaker msgMaker, CConnman* connman)
    2379+{
    2380+    if (remote_missing_wtxids.size() != 0) {
    2381+        std::vector<CInv> remote_missing_invs;
    2382+        for (uint256 wtxid: remote_missing_wtxids) {
    


    sdaftuar commented at 9:10 pm on December 10, 2020:
    nit: Here and in many of the loops in this PR, I think you could make these const uint256& wtxid : ..., to save a copy. Won’t point it out in every spot but probably could be cleaned up throughout the PR at some point.

    sipa commented at 11:34 pm on December 18, 2020:
    Or even const auto& wtxid : ... to be sure no copy is created, regardless of the type of the container.
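
    For example, the loop from the quoted hunk could be written as follows (a sketch; it assumes the MSG_WTX inv type added earlier in this PR):

        for (const auto& wtxid : remote_missing_wtxids) {
            remote_missing_invs.push_back(CInv(MSG_WTX, wtxid));
        }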
  111. sdaftuar commented at 9:14 pm on December 10, 2020: member

    Concept ACK on Erlay – thanks for working on this.

    My review approach is to focus on the non-minisketch portions of this PR for now. I’ve read through the BIP draft and gone through the first 5 commits. As this is a big PR I think it’ll take me a while to go through the rest, so I’m leaving these comments for now and will come back to this later when I have time for additional review.

  112. naumenkogs force-pushed on Dec 11, 2020
  113. in src/net_processing.cpp:1064 in e8fd1cb590 outdated
    934@@ -917,7 +935,7 @@ void PeerManager::ReattemptInitialBroadcast(CScheduler& scheduler) const
    935 
    936         if (tx != nullptr) {
    937             LOCK(cs_main);
    938-            RelayTransaction(txid, tx->GetWitnessHash(), m_connman);
    939+            RelayTransaction(txid, tx->GetWitnessHash(), m_connman, false);
    


    sdaftuar commented at 1:18 pm on December 11, 2020:
    I think a comment here explaining why we don’t flood these transactions would be helpful.
  114. in src/net_processing.cpp:1830 in e8fd1cb590 outdated
    1555         if (state->m_wtxid_relay) {
    1556-            pnode->PushTxInventory(wtxid);
    1557+            pnode->PushTxInventory(wtxid, flood);
    1558         } else {
    1559-            pnode->PushTxInventory(txid);
    1560+            pnode->PushTxInventory(txid, flood);
    


    sdaftuar commented at 1:27 pm on December 11, 2020:
    Here, if a peer doesn’t support reconciliation, then the flood bool doesn’t matter, we’ll always flood – right? If that’s correct, perhaps we could either set flood=true for all non-reconciliation peers, or leave a comment that explains that the parameter doesn’t matter in this case.
  115. in src/net_processing.cpp:2298 in e8fd1cb590 outdated
    2104@@ -2087,7 +2105,7 @@ void PeerManager::ProcessOrphanTx(std::set<uint256>& orphan_work_set)
    2105 
    2106         if (AcceptToMemoryPool(m_mempool, state, porphanTx, &removed_txn, false /* bypass_limits */)) {
    2107             LogPrint(BCLog::MEMPOOL, "   accepted orphan tx %s\n", orphanHash.ToString());
    2108-            RelayTransaction(orphanHash, porphanTx->GetWitnessHash(), m_connman);
    2109+            RelayTransaction(orphanHash, porphanTx->GetWitnessHash(), m_connman, true);
    


    sdaftuar commented at 1:29 pm on December 11, 2020:
    We track where we got the orphan from; should we use that to determine whether to flood (rather than setting flood=true for all orphans)?

    naumenkogs commented at 12:40 pm on December 16, 2020:
    I think you’re right, although we should be extra careful to make sure no privacy leak is introduced w.r.t topology.
  116. in src/net_processing.cpp:523 in ed736b0c38 outdated
    458+     * Salt is generated randomly per-connection to prevent linking of
    459+     * connections belonging to the same physical node.
    460+     * Also, salts should be different per-connection to prevent halting
    461+     * of relay of particular transactions due to collisions in short IDs.
    462+     */
    463+    uint64_t m_local_recon_salt;
    


    sdaftuar commented at 1:37 pm on December 11, 2020:
    Can we make this const and just always initialize it when a Peer object is created? Then we could avoid having to worry about locking issues with this (right now it seems like there should be some kind of lock that guards it).
  117. in src/net_processing.cpp:806 in 489c98c077 outdated
    516+         */
    517+        double m_local_q;
    518+    };
    519+
    520+    /// nullptr if we're not reconciling (neither passively nor actively) with this peer.
    521+    std::unique_ptr<ReconState> m_recon_state;
    


    sdaftuar commented at 1:40 pm on December 11, 2020:
    I think this probably should have a lock that protects access to this object?
  118. in src/net_processing.cpp:535 in 489c98c077 outdated
    474+     * asymmetrical, there is always a requestor and a responder. At the end of the
    475+     * sequence, nodes are supposed to exchange transactions, so that both of them
    476+     * have all relevant transactions. For more protocol details, refer to BIP-0330.
    477+     */
    478+    struct ReconState {
    479+        ReconState(){};
    


    sdaftuar commented at 1:42 pm on December 11, 2020:
    nit: perhaps the constructor can take initial values for all the members, so there’s no risk that the members go uninitialized? It looks like the way you are currently using this is to instantiate the object and immediately set everything, so I think this would be an easy change.
  119. in src/net_processing.h:162 in 489c98c077 outdated
    148+     * Transaction reconciliation should happen with peers in the same order,
    149+     * because the efficiency gain is the highest when reconciliation set difference
    150+     * is predictable. This queue is used to maintain the order of
    151+     * peers chosen for reconciliation.
    152+     */
    153+    std::deque<CNode*> m_recon_queue;
    


    sdaftuar commented at 1:43 pm on December 11, 2020:
    I think this probably needs a lock to synchronize access to this?

    sipa commented at 11:21 pm on December 18, 2020:

    In commit “Handle reconciliation support announcement”:

    The comment of m_recon_queue needing a lock doesn’t seem resolved?

  120. in src/net_processing.cpp:4948 in e8fd1cb590 outdated
    4527@@ -4504,11 +4528,11 @@ bool PeerManager::SendMessages(CNode* pto)
    4528                 bool fSendTrickle = pto->HasPermission(PF_NOBAN);
    4529                 if (pto->m_tx_relay->nNextInvSend < current_time) {
    4530                     fSendTrickle = true;
    4531-                    if (pto->IsInboundConn()) {
    4532-                        pto->m_tx_relay->nNextInvSend = std::chrono::microseconds{m_connman.PoissonNextSendInbound(count_microseconds(current_time), INVENTORY_BROADCAST_INTERVAL)};
    4533-                    } else {
    4534-                        // Use half the delay for outbound peers, as there is less privacy concern for them.
    4535+                    if (peer->m_recon_state || !pto->IsInboundConn()) {
    


    sdaftuar commented at 3:14 pm on December 11, 2020:

    So inbound peers that are reconciling with us effectively get their own poisson timer – can you explain why this is an ok change?

    It seems like an inbound peer that constantly tries to reconcile with us could be a more effective spy than before this change, but maybe I’m missing something.


    sdaftuar commented at 3:17 pm on December 11, 2020:
    Also, it might be helpful for code readers if the check for whether we’re reconciling with a peer were a function with a descriptive name (peer->ReconEnabled() or something), rather than checking whether the m_recon_state object is a nullptr. But that’s just a style nit, you can take it or leave it.

    naumenkogs commented at 2:25 pm on March 15, 2021:

    It seems like an inbound peer that constantly tries to reconcile with us could be a more effective spy than before this change, but maybe I’m missing something.

    We don’t respond to reconciliations right away, there is a shared timer for those responses. That’s why I thought it’s fine to reduce the delay here.

  121. in src/net_processing.cpp:5043 in e8fd1cb590 outdated
    4630+                            // Always flood to non-reconciliation nodes supporting tx relay.
    4631+                            // For reconciliation nodes, flood if flood_to is set up and transaction is meant for flooding.
    4632+                            vInv.push_back(inv);
    4633+                        else {
    4634+                            // Storing to populate the reconciliation set.
    4635+                            txs_to_reconcile.push_back(txinfo.tx->GetWitnessHash());
    


    sdaftuar commented at 3:24 pm on December 11, 2020:

    If we’re reconciling with a peer, does it make sense to limit this loop to INVENTORY_BROADCAST_MAX transactions? That is primarily a bandwidth throttling behavior, not a privacy behavior – and since reconciliation is already bandwidth reducing itself, it might be beneficial to throw more things into the sketch sooner?

    On the other hand, maybe it works out just fine if both parties are throttling like this and will have similarly sized sketches anyway… Depends a bit on how much variation there might be between the number of times each side’s poisson timer could fire in the window between reconciliations?

  122. in src/net_processing.cpp:4880 in e8fd1cb590 outdated
    4680+                            // We don't care about a laggy peer (1) because we probably can't help them even if we flood transactions.
    4681+                            // However, exploiting (2) should not prevent us from relaying certain transactions.
    4682+                            // Since computing a sketch over too many elements is too expensive, we will just flood some transactions here.
    4683+                            std::vector<uint256> txs_to_flood = std::vector<uint256>(txs_to_reconcile.end() - recon_set_overflow, txs_to_reconcile.end());
    4684+                            txs_to_reconcile.resize(txs_to_reconcile.size() - recon_set_overflow);
    4685+                            AnnounceTxs(txs_to_flood, *pto, msgMaker, m_connman);
    


    sdaftuar commented at 3:29 pm on December 11, 2020:

    Would it make more sense perhaps to announce the earliest transactions in the queue, rather than the most recent ones?

    It’s possible this logic is fairly optimal, but here is what I was concerned about:

    • an inbound adversary could negotiate erlay support but then never send a reconciliation request, and so this queue fills up until we start just inv’ing them everything new anyway. Since we reduce the poisson timer for peers we think we’re reconciling with, this could be a way to game a quicker INV time to be a better spy node.

      • this doesn’t seem to actually work because they’d have to wait for 2 poisson timers to go off before getting the prior batch of announcements, which is approximately the same (other than integer division) as the same inv timer that inbound peers normally get. Though perhaps this would get them their own poisson timer, rather than having to use the bucket that all inbounds share? That seems not great – an adversary could use multiple inbound connections this way to effectively get a small poisson timer (on average).
    • Countervailing idea is that if the peer is just held up for some reason, the older transactions are more likely to reconcile successfully than the newest ones, so it’s probably bandwidth minimizing in the honest case to hang on to the older ones in the reconciliation set.

  123. in src/net_processing.cpp:135 in 9832017da2 outdated
    122@@ -123,12 +123,12 @@ static constexpr std::chrono::hours AVG_LOCAL_ADDRESS_BROADCAST_INTERVAL{24};
    123 static constexpr std::chrono::seconds AVG_ADDRESS_BROADCAST_INTERVAL{30};
    124 /** Average delay between trickled inventory transmissions in seconds.
    125  *  Blocks and peers with noban permission bypass this, outbound peers get half this delay. */
    126-static const unsigned int INVENTORY_BROADCAST_INTERVAL = 5;
    127+static const unsigned int INVENTORY_BROADCAST_INTERVAL = 2;
    


    sdaftuar commented at 3:32 pm on December 11, 2020:
    FYI, I believe that because this number used to be odd, the ratio of announcement delay to inbound versus outbound peers will change from 2.5 in the old code to 2 in the new code. Not sure if that is cause for any concern though.

    naumenkogs commented at 1:44 pm on December 16, 2020:
    Yeah, so I think the ratio matters most. It was 2.5 and now it’s 2, so a bit different. But I’m also not 100% sure whether this difference matters, and preserving 2.5 would require operating with floats, so I decided not to, at least for now.
  124. in src/net_processing.cpp:140 in 9832017da2 outdated
    128 /** Maximum rate of inventory items to send per second.
    129  *  Limits the impact of low-fee transaction floods. */
    130 static constexpr unsigned int INVENTORY_BROADCAST_PER_SECOND = 7;
    131 /** Maximum number of inventory items to send per transmission. */
    132-static constexpr unsigned int INVENTORY_BROADCAST_MAX = INVENTORY_BROADCAST_PER_SECOND * INVENTORY_BROADCAST_INTERVAL;
    133+static constexpr unsigned int INVENTORY_BROADCAST_MAX = INVENTORY_BROADCAST_PER_SECOND * INVENTORY_BROADCAST_INTERVAL * 4;
    


    sdaftuar commented at 3:34 pm on December 11, 2020:
    Can you explain why this needs to change, and how you arrived at multiplying by 4?

    naumenkogs commented at 8:57 am on December 17, 2020:

    INVENTORY_BROADCAST_PER_SECOND is probably derived from the block size, so it shouldn’t be changed. INVENTORY_BROADCAST_INTERVAL is just how long we wait between reconciliations with different peers (say 2 seconds).

    Now, imagine a non-reachable node (which only learns from reconciliations) reconciled with a dysfunctional peer. The backlog of what it should learn during the next successful reconciliation is then twice the regular size. INVENTORY_BROADCAST_MAX should accommodate these cases, because otherwise what we’ll get is at least inefficiency.

    “*4” is just an intuitive amortization for these cases.

    I probably should add the explanation to a commit message once we agree this makes sense.
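
    For concreteness, with the constants in the hunks above this works out to INVENTORY_BROADCAST_MAX = 7 * 2 * 4 = 56 inventory items per transmission, versus 7 * 5 = 35 before this change.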

  125. in src/net_processing.cpp:177 in 3fc23c90ed outdated
    175@@ -176,6 +176,12 @@ static const unsigned int MAX_RECON_SET = 10000;
    176  * Less frequent reconciliations would introduce high transaction relay latency.
    177  */
    178 static constexpr std::chrono::microseconds RECON_REQUEST_INTERVAL{std::chrono::seconds{16}};
    179+/**
    180+ * Interval between responding to peers' reconciliation requests.
    181+ * We don't respond to reconciliation requests right away because that would enable monitoring
    182+ * when we receive transactions (privacy leak).
    


    sdaftuar commented at 3:46 pm on December 11, 2020:
    Ah, I didn’t realize we respond to these asynchronously. If this isn’t already in the BIP, perhaps it would be helpful to specify there that reconciliation responses are not guaranteed to be sent out in sync with other messages peers might send (such as pings, getdata, etc)?
  126. in src/net_processing.cpp:4181 in 3fc23c90ed outdated
    4039+        uint16_t peer_recon_set_size, peer_q;
    4040+        vRecv >> peer_recon_set_size >> peer_q;
    4041+        peer->m_recon_state->m_incoming_recon = Peer::ReconState::ReconPhase::INIT_REQUESTED;
    4042+        peer->m_recon_state->m_remote_set_size = peer_recon_set_size;
    4043+        peer->m_recon_state->m_remote_q = double(peer_q / Q_PRECISION);
    4044+        peer->m_recon_state->m_next_recon_respond = NextReconRespond(current_time);
    


    sdaftuar commented at 3:51 pm on December 11, 2020:
    Should we snapshot our reconciliation set as of when we receive this request? If we have a fixed delay and no snapshot, then I’m not sure how much privacy we gain by the 2 second delay (other than from throttling reconciliation requests from a single peer).

    naumenkogs commented at 9:20 am on December 17, 2020:

    We gain privacy by not allowing 2 peers to monitor our mempool, because we respond to them at the same time (every 2 seconds).

    Snapshotting earlier would hide the transactions received during the last 2 seconds and disable this kind of monitoring, but then reconciliations would not be so “fresh” (txs received during the last 2s would go into the next one). Not sure this kind of monitoring is that dangerous, especially since we also have a Poisson delay on adding to recon sets.

  127. in src/net_processing.cpp:197 in 43f06991b5 outdated
    194+* but the result will be nonsense (false-positive decoding).
    195+* Given this coef, a false positive probability will be of 1 in 2**coef.
    196+*/
    197+static constexpr unsigned int RECON_FALSE_POSITIVE_COEF = 16;
    198+static_assert(RECON_FALSE_POSITIVE_COEF <= 256,
    199+    "Reducing reconciliation false positives beyond 1 in 2**256 is not supported");
    


    sdaftuar commented at 3:57 pm on December 11, 2020:
    We should try to figure out a better way to organize all these constants and the complexity of the logic away from the rest of net_processing – not sure yet what to suggest but wanted to flag that this is now quite a lot of esoteric stuff we’re adding to the top of this file.

    naumenkogs commented at 9:22 am on December 17, 2020:
    Yeah, definitely, I have the same feeling. Perhaps it belongs better in ReconState{}.
  128. in src/net_processing.cpp:5180 in 5162519249 outdated
    4873+        // Message: reconciliation response
    4874+        //
    4875+        if (peer->m_recon_state) {
    4876+            // Respond to a requested reconciliation to enable efficient transaction exchange.
    4877+            // Respond only periodically to a) limit CPU usage for sketch computation,
    4878+            // and, b) limit transaction posesssion privacy leak.
    


    sdaftuar commented at 4:14 pm on December 11, 2020:
    spelling nit: possession
  129. in src/net_processing.cpp:4888 in 5162519249 outdated
    4883+                if (response_sketch != nullptr) {
    4884+                    // It is possible to create a sketch if we had at least one transaction to send to the peer.
    4885+                    size_t ser_size = minisketch_serialized_size(response_sketch);
    4886+                    uint8_t skdata[MAX_SKETCH_CAPACITY * BYTES_PER_SKETCH_CAPACITY];
    4887+                    minisketch_serialize(response_sketch, skdata);
    4888+                    response_skdata.resize(ser_size);
    


    sdaftuar commented at 4:16 pm on December 11, 2020:

    Do we need to invoke minisketch_destroy on the response_sketch to avoid leaking memory?

    I’m also confused about why we need to serialize the sketch into a temporary array and then move it into another vector before sending our response.
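
    Something like this sketch would address both points (assuming the minisketch C API calls already used in this PR: serialize straight into the vector, then free the sketch):

        std::vector<uint8_t> response_skdata;
        if (response_sketch != nullptr) {
            // Serialize directly into the vector, then destroy the sketch to avoid leaking it.
            response_skdata.resize(minisketch_serialized_size(response_sketch));
            minisketch_serialize(response_sketch, response_skdata.data());
            minisketch_destroy(response_sketch);
        }
        m_connman.PushMessage(pto, msgMaker.Make(NetMsgType::SKETCH, response_skdata));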

  130. in src/net_processing.cpp:4894 in 5162519249 outdated
    4889+                    std::copy(skdata, skdata + ser_size, response_skdata.begin());
    4890+                }
    4891+                m_connman.PushMessage(pto, msgMaker.Make(NetMsgType::SKETCH, response_skdata));
    4892+
    4893+                peer->m_recon_state->m_incoming_recon = Peer::ReconState::ReconPhase::INIT_RESPONDED;
    4894+                peer->m_recon_state->m_local_set.clear();
    


    sdaftuar commented at 4:20 pm on December 11, 2020:

    I thought we needed to hang on to a snapshot of this in order to respond to sketch-extension requests?

    EDIT: Ah, this happens in a later commit.

  131. in src/net_processing.cpp:742 in e0627a7473 outdated
    689+         */
    690+        void FinalizeReconciliation(bool clear_local_set, LocalQAction action, size_t actual_local_missing, size_t actual_remote_missing)
    691+        {
    692+            // According to the erlay spec, reconciliation is initiated by inbound peers.
    693+            if (m_sender) {
    694+                assert(m_incoming_recon != ReconPhase::NONE);
    


    sdaftuar commented at 5:42 pm on December 11, 2020:
    This assert worried me a bit, since I figured that the FinalizeReconciliation could be triggered by a peer with a reconcildiff message without having sent a previous reqreconcil. It looks like there is a guard to prevent that from happening, but it’s not clear to me that assert() is what we want here.
  132. naumenkogs force-pushed on Dec 11, 2020
  133. in src/net_processing.cpp:2970 in 60a723ee67 outdated
    2650+            // We currently don't support reconciliations with outbound peers which
    2651+            // don't want to be reconciliation responders (send us their sketches),
    2652+            // or want to be reconciliation senders (request our sketches).
    2653+            // Just ignore SENDRECON and use normal flooding for transaction relay with them.
    2654+            if (recon_sender) return;
    2655+            if (!recon_responder) return;
    


    sdaftuar commented at 5:47 pm on December 11, 2020:

    So – this logic means that we won’t do any reconciliation with peers if their sender/responder settings don’t match exactly what we expect inbound and outbound peers to do. Should we add some sort of acknowledgement message so that both sides know whether reconciliation is enabled on the link?

    Our own logic seems to change based on whether we think we’re reconciling with a peer; if some other implementation turns on both sending and responding (for instance) and then we silently ignore all their reconciliation requests, that seems suboptimal for everyone. Likewise, if we think we’ve enabled reconciliation with a peer but it has a different policy, it would be better for our software to know that reconciliation isn’t going to actually happen.


    naumenkogs commented at 3:22 pm on December 16, 2020:
    Right, this implementation prioritized simplicity here. Perhaps I should add sending RECONACK, and also think about whether at least some of the non-matching scenarios are compatible.

    sipa commented at 10:44 pm on December 18, 2020:

    BIP comment:

    I think it’s possible to specify the behavior exactly without the need for an ACK message, but perhaps this is a detail you don’t actually want in the BIP:

    • We send (us_sender, us_responder, us_version)
    • We receive (they_sender, they_responder, they_version)
    • Define out_recon = us_sender && they_responder
    • Define in_recon = they_sender && us_responder
    • If !(out_recon || in_recon), reconciliation is disabled.
    • If exactly one of out_recon or in_recon, reconciliation (with us as requestor if out_recon, as responder otherwise) is enabled with protocol version min(us_version, they_version).
    • If both out_recon and in_recon, reconciliation is enabled with version min(us_version, they_version), in some tie-breaking direction (prefer outbound->inbound for example, or pick the one with the lowest nonce, …)?
    • If no SENDRECON is received by the time VERACK happens, reconciliation is disabled.

    This also means a new recon protocol can be introduced in a backward-compatible way. If the other party announced recon protocol 2, things will just default to protocol 1 without problems (which means forcing any client that supports protocol 2 to also support protocol 1). If a change would be made that is so incompatible that we don’t expect clients to support the old one anymore, a different negotiation message would be used, or 0 could be sent.

    In commit “Handle reconciliation support announcement”:

    The code would be something like this to accommodate that:

    /* must match announcement logic */
    static constexpr uint32_t RECON_VERSION = 1;
    bool us_sender = !IsInBound(), us_responder = IsInBound();

    bool they_sender, they_responder;
    vRecv >> they_sender >> they_responder >> recon_version >> remote_salt;
    recon_version = std::min(recon_version, RECON_VERSION);
    if (recon_version < 1) return;
    bool recon_sender = us_sender && they_responder, recon_responder = us_responder && they_sender;
    // If we ever announce us_sender && us_responder, this will need tie-breaking (picking the outbound side as sender)
    assert(!(recon_sender && recon_responder));

    naumenkogs commented at 11:19 am on March 16, 2021:
    @sipa good idea, ack.
  134. in src/net_processing.cpp:779 in 95a6ab5d2d outdated
    925@@ -899,6 +926,14 @@ void PeerManager::UpdateNextReconRequest(std::chrono::microseconds now)
    926     m_next_recon_request = now + RECON_REQUEST_INTERVAL / m_recon_queue.size();
    927 }
    928 
    929+std::chrono::microseconds PeerManager::NextReconRespond(std::chrono::microseconds now)
    930+{
    931+    if (m_next_recon_respond < now) {
    932+        m_next_recon_respond = now + RECON_RESPONSE_INTERVAL;
    


    sdaftuar commented at 6:11 pm on December 11, 2020:
    I’m wondering if this might be better as a Poisson timer, but I don’t quite understand the implications of all this yet to reach a conclusion. Intuition: with a fixed delay like this, an adversary with 2 inbound connections can use set reconciliation to measure the set difference between the node’s transactions-to-announce in known and arbitrarily precise time slices.

    naumenkogs commented at 3:18 pm on December 16, 2020:

    I also don’t have a good answer, but it’s not that simple an attack: we already apply a Poisson delay when adding transactions to the reconciliation sets.

    Having another Poisson timer here, well, I don’t think it makes anything worse. Maybe you’re right.
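
    (For illustration only: a minimal sketch of what a randomized response delay could look like, drawing the wait from an exponential distribution with the same 2-second mean instead of adding a fixed RECON_RESPONSE_INTERVAL. The function name and the direct use of std::exponential_distribution are placeholders; an actual implementation would more likely reuse an existing helper such as PoissonNextSend.)

     #include <chrono>
     #include <cstdint>
     #include <random>

     // Sketch: exponentially distributed (Poisson-process) response delay with a 2s mean,
     // so the exact response time is no longer predictable by the requesting peer.
     std::chrono::microseconds NextReconRespondRandomized(std::chrono::microseconds now)
     {
         static std::mt19937_64 rng{std::random_device{}()};
         std::exponential_distribution<double> delay{1.0 / 2'000'000.0}; // mean of 2,000,000 microseconds
         return now + std::chrono::microseconds{static_cast<int64_t>(delay(rng))};
     }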

  135. in src/net_processing.cpp:688 in bcade239e2 outdated
    647+        {
    648+            std::vector<uint32_t> short_ids;
    649+            for (uint256 wtxid: local_set) {
    650+                uint32_t short_txid = ComputeShortID(wtxid);
    651+                short_ids.push_back(short_txid);
    652+                m_local_short_id_mapping.insert(std::pair<uint32_t, uint256>(short_txid, wtxid));
    


    sdaftuar commented at 6:16 pm on December 11, 2020:
    Should the m_local_short_id_mapping be cleared out in this function?

    naumenkogs commented at 3:12 pm on December 16, 2020:

    Why so early? The point of the mapping is to refer to it later, when the peer requests transactions from us by short ID. That’s why it should be kept until FinalizeReconciliation.

    Although, as currently implemented, I notice there’s a bug: we may lose the short IDs of transactions received during extension when calling this after the extension is over. I need to fix this part.

  136. in src/net_processing.cpp:3987 in 19065cb401 outdated
    4219+
    4220+        std::vector<uint8_t> skdata;
    4221+        vRecv >> skdata;
    4222+
    4223+        // Attempt to decode the received sketch with a local sketch.
    4224+        if (skdata.size() / BYTES_PER_SKETCH_CAPACITY > MAX_SKETCH_CAPACITY) return;
    


    sdaftuar commented at 6:19 pm on December 11, 2020:
    I think this is a protocol violation and leaves the link in a permanently broken reconciliation state, since a reconcildiff would never be sent. Perhaps we should disconnect the peer if this happens?

    naumenkogs commented at 3:00 pm on December 16, 2020:
    Not really permanently broken, they still have a chance to send us a proper sketch. But yeah, either finalizing or disconnecting would work. Making it disconnect for now.
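
    (i.e. roughly the following in the sketch-handling code; setting fDisconnect is the usual way a message handler requests that the peer be dropped:)

     if (skdata.size() / BYTES_PER_SKETCH_CAPACITY > MAX_SKETCH_CAPACITY) {
         // Peer sent a sketch larger than the protocol allows: disconnect instead of silently ignoring.
         pfrom.fDisconnect = true;
         return;
     }
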
  137. in src/net_processing.cpp:4232 in 19065cb401 outdated
    4227+        uint8_t remote_sketch_serialized[MAX_SKETCH_CAPACITY * BYTES_PER_SKETCH_CAPACITY];
    4228+        std::copy(skdata.begin(), skdata.end(), remote_sketch_serialized);
    4229+        minisketch_deserialize(remote_sketch, remote_sketch_serialized);
    4230+
    4231+        minisketch* working_sketch; // Contains sketch of the set difference
    4232+        minisketch* local_sketch = peer->m_recon_state->ComputeSketch(peer->m_recon_state->m_local_set, remote_sketch_capacity);
    


    sdaftuar commented at 6:21 pm on December 11, 2020:
    I think we need to call minisketch_destroy on these somewhere to avoid a memory leak right?
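
    (For illustration: one way to avoid calling minisketch_destroy on every early-return path is to hold the sketch in an RAII wrapper. A minimal sketch over the C API, assuming only minisketch_create/minisketch_destroy from minisketch.h; the C++ wrapper sipa suggests later in the thread achieves the same thing.)

     #include <minisketch/include/minisketch.h>

     #include <cstdint>
     #include <memory>

     struct MinisketchDeleter {
         void operator()(minisketch* sketch) const { if (sketch) minisketch_destroy(sketch); }
     };
     using MinisketchPtr = std::unique_ptr<minisketch, MinisketchDeleter>;

     // The sketch is released automatically when the pointer goes out of scope,
     // including on the early returns in the message handlers.
     MinisketchPtr MakeSketch(uint32_t capacity)
     {
         return MinisketchPtr{minisketch_create(32 /* field size */, 0 /* default implementation */, capacity)};
     }
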
  138. in src/net_processing.cpp:4269 in c14e819d5f outdated
    4261@@ -4252,6 +4262,16 @@ void PeerManager::ProcessMessage(CNode& pfrom, const std::string& msg_type, CDat
    4262             AnnounceTxs(remote_missing, pfrom, msgMaker, m_connman);
    4263             m_connman.PushMessage(&pfrom, msgMaker.Make(NetMsgType::RECONCILDIFF, uint8_t(RECON_SUCCESS), local_missing));
    4264             peer->m_recon_state->FinalizeReconciliation(true, Peer::ReconState::LocalQAction::Q_RECOMPUTE, local_missing.size(), remote_missing.size());
    4265+        } else {
    4266+            // Initial reconciliation failed.
    4267+            // Store the received sketch and the local sketch, request extension.
    4268+
    4269+            assert(remote_sketch != nullptr);
    


    sdaftuar commented at 6:37 pm on December 11, 2020:
    Is it possible for the peer to send us a message that causes remote_sketch to be nullptr here?

    naumenkogs commented at 1:59 pm on December 16, 2020:
    Adding protection for this case.
  139. naumenkogs force-pushed on Dec 17, 2020
  140. in src/net_processing.cpp:2672 in 9da045b6f5 outdated
    2667+        } else {
    2668+            salt1 = ToString(local_salt);
    2669+            salt2 = ToString(remote_salt);
    2670+        }
    2671+        uint256 full_salt;
    2672+        CSHA256().Write((unsigned char*)RECON_STATIC_SALT.data(), RECON_STATIC_SALT.size()).
    


    sipa commented at 10:28 pm on December 18, 2020:

    BIP comment, and commit “Handle reconciliation support announcement”:

    Should we use a BIP340 tagged hash here (see TaggedHash() in src/hash.h)?

  141. in src/net_processing.cpp:490 in 9da045b6f5 outdated
    485+
    486+        /// Whether this peer will respond to reconciliation requests.
    487+        bool m_responder;
    488+
    489+        /**
    490+         * Whether we should flood transactions to the peer.
    


    sipa commented at 10:32 pm on December 18, 2020:

    In commit “Handle reconciliation support announcement”:

    This comment isn’t entirely clear to me. Flood transactions to the peer, as opposed to what (given that in the next line you say it’s not in conflict with reconciliation, so it’s not clear to me if this means instead of, or in addition to, reconciliation).

  142. in src/net_processing.cpp:508 in 9da045b6f5 outdated
    503+         * These short IDs will be salted so that they are not the same
    504+         * across all pairs of peers, because otherwise it would enable network-wide
    505+         * collisions which may (intentionally or not) halt relay of certain transactions.
    506+         * Both of the peers contribute to the salt.
    507+         */
    508+        uint256 m_salt;
    


    sipa commented at 10:33 pm on December 18, 2020:

    In commit “Handle reconciliation support announcement”:

    The salt is only 128 bits, right? Why uint256? In the codebase we usually call SipHash salts k0 and k1.

  143. in src/net_processing.cpp:812 in 9da045b6f5 outdated
    517+         * to better predict future set size differences.
    518+         */
    519+        double m_local_q;
    520+    };
    521+
    522+    RecursiveMutex cs_recon_state;
    


    sipa commented at 10:35 pm on December 18, 2020:

    In commit “Handle reconciliation support announcement”

    Is it necessary to use a recursive mutex here? If you can avoid them, Mutex is more efficient and easier to reason about.


    naumenkogs commented at 9:44 am on December 22, 2020:
    I’m not 100% sure how to do it right, but simple Mutex doesn’t seem to work when GetFloodingOutboundsCount is called in handling NetMsgType::SENDRECON. It just hangs.
  144. in src/net_processing.cpp:2663 in 9da045b6f5 outdated
    2658+            // TODO: Flood only through a limited number of outbound connections.
    2659+            flood_to = true;
    2660+        }
    2661+
    2662+        uint64_t local_salt = peer->m_local_recon_salt;
    2663+        std::string salt1, salt2;
    


    sipa commented at 11:16 pm on December 18, 2020:

    In commit “Handle reconciliation support announcement”

    This conversion to decimal strings doesn’t seem to match the BIP. I would think you’d do:

    uint64_t salt1 = local_salt, salt2 = remote_salt;
    if (salt1 > salt2) std::swap(salt1, salt2);
    uint256 full_salt = (CHashWriter(SER_GETHASH, 0) << MakeSpan(RECON_STATIC_SALT) << salt1 << salt2).GetSHA256();
    

    If the BIP is changed to use tagged hashes, you’d do

    static const auto RECON_SALT_HASHER = TaggedHash(RECON_STATIC_SALT);
    uint256 full_salt = (CHashWriter(RECON_SALT_HASHER) << salt1 << salt2).GetSHA256();
    

    instead.

  145. in src/net.h:1199 in 49a18d73df outdated
    1196     {
    1197         if (m_tx_relay == nullptr) return;
    1198         LOCK(m_tx_relay->cs_tx_inventory);
    1199         if (!m_tx_relay->filterInventoryKnown.contains(hash)) {
    1200-            m_tx_relay->setInventoryTxToSend.insert(hash);
    1201+            m_tx_relay->setInventoryTxToSend.insert(std::pair<uint256, bool>(hash, flood));
    


    sipa commented at 11:42 pm on December 18, 2020:

    In commit “Distinguish transactions to flood and to reconcile”:

    You can use

    m_tx_relay->setInventoryTxToSend.emplace(hash, flood);
    

    here instead.

  146. in src/net_processing.cpp:171 in 49a18d73df outdated
    163+ * Sets the bound on the following objects:
    164+ * - reconciliation set
    165+ * - reconciliation set snapshot
    166+ * - reconciliation short-full id mapping
    167+ */
    168+static const unsigned int MAX_RECON_SET = 10000;
    


    sipa commented at 11:48 pm on December 18, 2020:

    In commit “Distinguish transactions to flood and to reconcile”

    Would it make sense to reuse MAX_PEER_TX_ANNOUNCEMENTS here? It was reduced to 5000 in #19988.

  147. in src/net_processing.cpp:2942 in 9da045b6f5 outdated
    2625+    // from an outgoing peer demonstrating readiness to do reconciliations.
    2626+    // If received from outgoing, adds the peer to the reconciliation queue.
    2627+    // Feature negotiation of tx reconciliation should happen between VERSION and
    2628+    // VERACK, to avoid relay problems from switching after a connection is up.
    2629+    if (msg_type == NetMsgType::SENDRECON) {
    2630+        if (!pfrom.m_tx_relay) return;
    


    sipa commented at 1:07 am on December 19, 2020:

    In commit “Handle reconciliation support announcement” (and BIP)

    Do we want to ignore or disconnect if this message arrives after VERACK?

  148. in src/net_processing.cpp:5118 in 49a18d73df outdated
    4644@@ -4608,9 +4645,17 @@ bool PeerManager::SendMessages(CNode* pto)
    4645                             continue;
    4646                         }
    4647                         if (pto->m_tx_relay->pfilter && !pto->m_tx_relay->pfilter->IsRelevantAndUpdate(*txinfo.tx)) continue;
    4648-                        // Send
    4649+
    4650+                        bool flood_tx = it->second;
    4651+                        if (!peer->m_recon_state || (peer->m_recon_state->m_flood_to && flood_tx))
    


    sipa commented at 1:28 am on December 19, 2020:

    In commit “Distinguish transactions to flood and to reconcile”:

    Nit: coding style (use { … } for multi-line ifs).

  149. in src/net.h:1022 in 49a18d73df outdated
    1008@@ -1009,9 +1009,10 @@ class CNode
    1009 
    1010         mutable RecursiveMutex cs_tx_inventory;
    1011         CRollingBloomFilter filterInventoryKnown GUARDED_BY(cs_tx_inventory){50000, 0.000001};
    1012-        // Set of transaction ids we still have to announce.
    1013-        // They are sorted by the mempool before relay, so the order is not important.
    1014-        std::set<uint256> setInventoryTxToSend;
    1015+        // Set of transaction ids we still have to announce, and whether we may flood them
    1016+        // in case if peer is meant to receive flooding, as opposed to reconcile.
    1017+        // Transactions are sorted by the mempool before relay, so the order is not important.
    


    sipa commented at 0:36 am on December 20, 2020:

    In commit “Distinguish transactions to flood and to reconcile”

    Perhaps it’s useful to comment on the exact semantics for the bool parameter here. IIRC it is:

    • false: use reconciliation if negotiated, flood otherwise
    • true: flood unless reconciliation negotiated and this is not a flooding peer
  150. in src/net_processing.cpp:160 in 80bde2b1aa outdated
    146@@ -146,6 +147,65 @@ static constexpr uint32_t MAX_GETCFILTERS_SIZE = 1000;
    147 static constexpr uint32_t MAX_GETCFHEADERS_SIZE = 2000;
    148 /** the maximum percentage of addresses from our addrman to return in response to a getaddr message. */
    149 static constexpr size_t MAX_PCT_ADDR_TO_SEND = 23;
    150+/** Static component of the salt used to compute short txids for transaction reconciliation. */
    151+static const std::string RECON_STATIC_SALT = "Tx Relay Salting";
    152+/** Used to convert a floating point reconciliation coefficient q to an int for transmission. */
    153+static constexpr double Q_PRECISION{2 << 12};
    


    sipa commented at 1:39 am on December 20, 2020:
    BIP comment: why 12 bits? You could have 16 bits of accuracy with the 2 bytes that it takes.

    naumenkogs commented at 6:23 pm on December 24, 2020:

    q is computed as (total_missing - abs(local_missing - remote_missing)) / min_size

    So the bounds on q are [0…2], I think? If abs = 0, then total_missing = 2 * min_size (at most) is an upper bound. If abs != 0, well, ideally we need to solve this equation properly :)

    So it should be (2 << 14) - 1 I think, assuming [0..2] for now.
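
    (To make the arithmetic concrete: if q is assumed to stay within [0, 2], the two transmitted bytes can cover that whole range. A small illustrative sketch; the names are not taken from the PR:)

     #include <cmath>
     #include <cstdint>

     // Assumes 0 <= q <= 2; (2 << 14) - 1 = 32767, so q = 2 encodes to 65534,
     // which still fits in the two bytes sent on the wire.
     constexpr double Q_PRECISION_FULL{(2 << 14) - 1};

     uint16_t EncodeQ(double q) { return static_cast<uint16_t>(std::lround(q * Q_PRECISION_FULL)); }
     double DecodeQ(uint16_t q_int) { return q_int / Q_PRECISION_FULL; }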

  151. in src/net_processing.cpp:188 in 80bde2b1aa outdated
    183+ * when we receive transactions (privacy leak).
    184+ */
    185+static constexpr std::chrono::microseconds RECON_RESPONSE_INTERVAL{std::chrono::seconds{2}};
    186+/** The size of the field, used to compute sketches to reconcile transactions (see BIP-330). */
    187+static constexpr unsigned int RECON_FIELD_SIZE = 32;
    188+static_assert(RECON_FIELD_SIZE % 8 == 0, "Field size should be divisible by 8");
    


    sipa commented at 1:41 am on December 20, 2020:
    Why is this strictly needed? (apart from the BIP specifying that it is 32, so any other value is protocol breaking).
  152. in src/net_processing.cpp:704 in 80bde2b1aa outdated
    699+                minisketch_add_uint64(sketch, short_id);
    700+            }
    701+            return sketch;
    702+        }
    703+
    704+        minisketch* ComputeExtendedSketch() {
    


    sipa commented at 1:43 am on December 20, 2020:
    I’ve PR’ed https://github.com/sipa/minisketch/pull/28 that adds a safer C++ wrapper around the minisketch* type. You may want to use it instead of doing manual memory management for them. It also adds some convenience functions for dealing with fpbits (CreateFP and DecodeFP).
  153. in test/functional/p2p_erlay.py:45 in 80bde2b1aa outdated
    41+
    42+def mul2(x):
    43+    """Compute 2*x in GF(2^FIELD_BITS)"""
    44+    return (x << 1) ^ (FIELD_MODULUS if x.bit_length() >= FIELD_BITS else 0)
    45+
    46+def mul(x, y):
    


    sipa commented at 1:44 am on December 20, 2020:
    I’ve PR’ed https://github.com/sipa/minisketch/pull/26 which adds a pure Python (slow) full reimplementation of minisketch. You may want to use that instead (it also supports decoding).
  154. sipa commented at 1:46 am on December 20, 2020: member
    First set of comments.
  155. naumenkogs force-pushed on Dec 21, 2020
  156. naumenkogs force-pushed on Dec 22, 2020
  157. naumenkogs force-pushed on Dec 22, 2020
  158. naumenkogs force-pushed on Dec 22, 2020
  159. naumenkogs force-pushed on Dec 22, 2020
  160. naumenkogs force-pushed on Dec 23, 2020
  161. naumenkogs force-pushed on Dec 24, 2020
  162. DrahtBot removed the label Needs rebase on Dec 24, 2020
  163. naumenkogs force-pushed on Dec 27, 2020
  164. DrahtBot added the label Needs rebase on Jan 2, 2021
  165. naumenkogs force-pushed on Jan 9, 2021
  166. DrahtBot removed the label Needs rebase on Jan 9, 2021
  167. naumenkogs force-pushed on Jan 10, 2021
  168. DrahtBot added the label Needs rebase on Jan 11, 2021
  169. DrahtBot commented at 9:34 pm on January 11, 2021: member

    🐙 This pull request conflicts with the target branch and needs rebase.

    Want to unsubscribe from rebase notifications on this pull request? Just convert this pull request to a “draft”.

  170. in src/net.h:559 in f472f3077e outdated
    555@@ -556,9 +556,11 @@ class CNode
    556 
    557         mutable RecursiveMutex cs_tx_inventory;
    558         CRollingBloomFilter filterInventoryKnown GUARDED_BY(cs_tx_inventory){50000, 0.000001};
    559-        // Set of transaction ids we still have to announce.
    560-        // They are sorted by the mempool before relay, so the order is not important.
    561-        std::set<uint256> setInventoryTxToSend;
    562+        // Set of transaction ids we still have to announce, and whether we may flood them:
    


    jnewbery commented at 10:24 pm on January 17, 2021:
    This is no longer a set.
  171. in src/net_processing.cpp:173 in f472f3077e outdated
    168+ * This value allows to reconcile ~100 transactions (7 tx/s * 16s) during normal system operation at capacity.
    169+ * More frequent reconciliations would cause significant constant bandwidth overhead due to
    170+ * reconciliation metadata (sketch sizes etc.), which would nullify the efficiency.
    171+ * Less frequent reconciliations would introduce high transaction relay latency.
    172+ */
    173+static constexpr std::chrono::microseconds RECON_REQUEST_INTERVAL{std::chrono::seconds{16}};
    


    jnewbery commented at 10:25 pm on January 17, 2021:
    static constexpr std::chrono::microseconds RECON_REQUEST_INTERVAL{16s};
    

    Saves repeating the chrono type

  172. in src/net_processing.cpp:182 in f472f3077e outdated
    177+ * when we receive transactions (privacy leak).
    178+ */
    179+static constexpr std::chrono::microseconds RECON_RESPONSE_INTERVAL{std::chrono::seconds{2}};
    180+
    181+/** Represents a reconciliation result, used to decide what to do when the reconciliation is over. */
    182+enum ReconResult {
    


    jnewbery commented at 10:26 pm on January 17, 2021:
    Any reason not to use a bool here?
  173. in src/net_processing.cpp:2363 in f472f3077e outdated
    2354@@ -2267,6 +2355,29 @@ static void ProcessGetCFCheckPt(CNode& peer, CDataStream& vRecv, const CChainPar
    2355     connman.PushMessage(&peer, std::move(msg));
    2356 }
    2357 
    2358+/**
    2359+ * Announce transactions a peer is missing after reconciliation is done.
    2360+ * No need to add transactions to peer's filter or do checks
    2361+ * because it was already done when adding to the reconciliation set.
    2362+ */
    2363+void static AnnounceTxs(const std::vector<uint256>& remote_missing_wtxids, CNode& pto, const CNetMsgMaker& msgMaker, CConnman& connman)
    


    jnewbery commented at 10:29 pm on January 17, 2021:
    It’s easy enough to instantiate a new CNetMsgMaker from the CNode, rather than passing it as an argument. If you make this function a member of PeerManagerImpl you can also avoid passing the CConnman&.
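
    (Roughly something like this, assuming the negotiated version is available via CNode::GetCommonVersion() as elsewhere in net_processing; the signature is illustrative:)

     void PeerManagerImpl::AnnounceTxs(const std::vector<uint256>& remote_missing_wtxids, CNode& pto)
     {
         if (remote_missing_wtxids.empty()) return;
         // Build the message maker locally instead of passing it in, and use the class's own m_connman.
         const CNetMsgMaker msgMaker(pto.GetCommonVersion());
         // ... build the INV messages and m_connman.PushMessage(&pto, ...)
     }
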
  174. in src/net_processing.cpp:2365 in f472f3077e outdated
    2360+ * No need to add transactions to peer's filter or do checks
    2361+ * because it was already done when adding to the reconciliation set.
    2362+ */
    2363+void static AnnounceTxs(const std::vector<uint256>& remote_missing_wtxids, CNode& pto, const CNetMsgMaker& msgMaker, CConnman& connman)
    2364+{
    2365+    if (remote_missing_wtxids.size() != 0) {
    


    jnewbery commented at 10:30 pm on January 17, 2021:

    Consider inverting this if condition to make this a guard clause and avoid deep indentations below.

        if (remote_missing_wtxids.size() == 0) return;
    
  175. in src/net_processing.cpp:2366 in f472f3077e outdated
    2361+ * because it was already done when adding to the reconciliation set.
    2362+ */
    2363+void static AnnounceTxs(const std::vector<uint256>& remote_missing_wtxids, CNode& pto, const CNetMsgMaker& msgMaker, CConnman& connman)
    2364+{
    2365+    if (remote_missing_wtxids.size() != 0) {
    2366+        std::vector<CInv> remote_missing_invs;
    


    jnewbery commented at 10:31 pm on January 17, 2021:
    Perhaps reserve max(size of remote_missing_wtxids, MAX_INV_SIZE) to avoid reallocations.

    naumenkogs commented at 11:25 am on January 19, 2021:
    I think this should be min, because the vector never exceeds MAX_INV_SIZE.

    jnewbery commented at 11:54 am on January 19, 2021:
    yes, you’re right. Should be min.
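
    (i.e. something along these lines, with MAX_INV_SIZE standing in for whatever per-message INV cap is used:)

     std::vector<CInv> remote_missing_invs;
     remote_missing_invs.reserve(std::min<size_t>(remote_missing_wtxids.size(), MAX_INV_SIZE));
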
  176. in src/net_processing.cpp:4182 in f472f3077e outdated
    3942@@ -3751,6 +3943,165 @@ void PeerManager::ProcessMessage(CNode& pfrom, const std::string& msg_type, CDat
    3943         return;
    3944     }
    3945 
    3946+    std::chrono::microseconds current_time = GetTime<std::chrono::microseconds>();
    3947+
    3948+    // Record an (expected) reconciliation request with parameters to respond when time comes.
    3949+    // All initial reconciliation responses will be done at the same time to prevent tx-related privacy leaks.
    3950+    if (msg_type == NetMsgType::REQRECON) {
    


    jnewbery commented at 10:32 pm on January 17, 2021:
    A bit of space/comments in this code block would make it easier to read.
  177. in src/net_processing.cpp:4307 in f472f3077e outdated
    4080+        return;
    4081+    }
    4082+
    4083+    // Among transactions requested by short ID here, we should send only those transactions
    4084+    // sketched (stored in local set snapshot), because otherwise we would leak privacy (mempool content).
    4085+    if (msg_type == NetMsgType::RECONCILDIFF) {
    


    jnewbery commented at 10:33 pm on January 17, 2021:
    Again, spacing/comments would make this more legible.
  178. in src/net_processing.h:14 in f472f3077e outdated
    10@@ -11,6 +11,7 @@
    11 #include <sync.h>
    12 #include <txrequest.h>
    13 #include <validationinterface.h>
    14+#include <minisketch/include/minisketch.h>
    


    jnewbery commented at 10:34 pm on January 17, 2021:
    sort
  179. in src/net_processing.h:37 in f472f3077e outdated
    31@@ -31,6 +32,19 @@ static const bool DEFAULT_PEERBLOOMFILTERS = false;
    32 static const bool DEFAULT_PEERBLOCKFILTERS = false;
    33 /** Threshold for marking a node to be discouraged, e.g. disconnected and added to the discouragement filter. */
    34 static const int DISCOURAGEMENT_THRESHOLD{100};
    35+/** The size of the field, used to compute sketches to reconcile transactions (see BIP-330). */
    36+static constexpr unsigned int RECON_FIELD_SIZE = 32;
    37+static constexpr unsigned int BYTES_PER_SKETCH_CAPACITY = RECON_FIELD_SIZE / 8;
    


    jnewbery commented at 10:35 pm on January 17, 2021:
    This doesn’t need to be in the header, since it’s only used inside net_processing.cpp.
  180. in src/net_processing.h:280 in f472f3077e outdated
    277+        /**
    278+         * Reconciliation involves computing a space-efficient representation of transaction identifiers (a sketch).
    279+         * A sketch has a capacity meaning it allows reconciling at most a certain number of elements. (see BIP-330).
    280+         * Considering whether we are going to send a sketch to a peer or use locally, we estimate the set difference.
    281+         */
    282+        Minisketch ComputeSketch(const std::set<uint256> local_set, uint16_t& capacity)
    


    jnewbery commented at 10:38 pm on January 17, 2021:
    Inside a structure inside Peer feels like the wrong place for a lot of this complex logic. I think ideally, Peer would continue to be a struct (i.e. data members only) and the logic would be contained in a separate module.
  181. in src/net_processing.h:410 in f472f3077e outdated
    407+            }
    408+            return remote_missing;
    409+        }
    410+    };
    411+
    412+    RecursiveMutex cs_recon_state;
    


    jnewbery commented at 10:39 pm on January 17, 2021:

    Don’t use old cs nomenclature for mutexes:

        RecursiveMutex m_recon_state_mutex;
    

    Also prefer to use a Mutex over a RecursiveMutex in new code.


    naumenkogs commented at 3:24 pm on January 19, 2021:
    I think I can’t make it non-recursive because of the call inside GetFloodingOutboundsCount. How would you suggest overcoming this issue?
  182. in src/net_processing.h:571 in f472f3077e outdated
    566+     * Transaction reconciliation should happen with peers in the same order,
    567+     * because the efficiency gain is the highest when reconciliation set difference
    568+     * is predictable. This queue is used to maintain the order of
    569+     * peers chosen for reconciliation.
    570+     */
    571+    RecursiveMutex cs_recon_queue;
    


    jnewbery commented at 10:40 pm on January 17, 2021:
    Again, prefer a non-reenrant mutex and don’t use the cs nomenclature.

    naumenkogs commented at 3:00 pm on January 19, 2021:
    I can switch to non-recursive if UpdateNextReconRequest() takes m_recon_queue.size() as an argument, instead of accessing it inside. Do you think that would be preferable?

    jnewbery commented at 4:30 pm on January 19, 2021:
    I haven’t looked into your branch in great detail, but it looks like UpdateNextReconRequest() is only called in one place, which already holds m_recon_queue_mutex. Could you make the function EXCLUSIVE_LOCKS_REQUIRED(m_recon_queue_mutex) and not retake the mutex inside?
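
    (A sketch of the annotation pattern being suggested, using the thread-safety macros from sync.h; the names follow the PR:)

     // Declaration: callers must already hold m_recon_queue_mutex.
     void UpdateNextReconRequest(std::chrono::microseconds now) EXCLUSIVE_LOCKS_REQUIRED(m_recon_queue_mutex);

     void PeerManagerImpl::UpdateNextReconRequest(std::chrono::microseconds now)
     {
         AssertLockHeld(m_recon_queue_mutex); // no LOCK() taken here, so a non-recursive Mutex suffices
         m_next_recon_request = now + RECON_REQUEST_INTERVAL / m_recon_queue.size();
     }
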
  183. in test/functional/p2p_erlay.py:6 in f472f3077e outdated
    0@@ -0,0 +1,595 @@
    1+#!/usr/bin/env python3
    2+# Copyhigh (c) 2016-2017 The Bitcoin Core developers
    3+# Distributed under the MIT software license, see the accompanying
    4+# file COPYING or http://www.opensource.org/licenses/mit-license.php.
    5+"""Test reconciliation-based transaction relay protocol.
    6+
    


    jnewbery commented at 10:41 pm on January 17, 2021:
    Why this space?
  184. in test/functional/p2p_erlay.py:25 in f472f3077e outdated
    20+from enum import IntEnum
    21+from io import BytesIO
    22+import random
    23+import hashlib
    24+import time
    25+import struct
    


    jnewbery commented at 10:42 pm on January 17, 2021:
    Please sort, with standard library imports first.
  185. in test/functional/p2p_erlay.py:159 in f472f3077e outdated
    154+        msg = msg_getdata(inv=[CInv(MSG_WTX, h=wtxid) for wtxid in ask_wtxids])
    155+        self.send_message(msg)
    156+
    157+class ReconResult(IntEnum):
    158+    RECON_FAILED = 0
    159+    RECON_SUCCESS = 1
    


    jnewbery commented at 10:42 pm on January 17, 2021:
    Why not a bool?
  186. in test/functional/test_framework/messages.py:137 in f472f3077e outdated
    132+def ser_uint128(u):
    133+    rs = b""
    134+    for i in range(4):
    135+        rs += struct.pack("<I", u & 0xFFFFFFFF)
    136+        u >>= 32
    137+    return rs
    


    jnewbery commented at 10:43 pm on January 17, 2021:
    I think that these are unneeded now that truncated txids aren’t used
  187. in test/functional/test_framework/messages.py:240 in f472f3077e outdated
    235+    for i in range(nit):
    236+        t = struct.unpack("<B", f.read(1))[0]
    237+        r.append(t)
    238+    return r
    239+
    240+
    


    jnewbery commented at 10:43 pm on January 17, 2021:
    why two blank lines?
  188. in test/functional/test_framework/p2p.py:30 in f472f3077e outdated
    26@@ -27,6 +27,7 @@
    27 import struct
    28 import sys
    29 import threading
    30+import socket
    


    jnewbery commented at 10:44 pm on January 17, 2021:
    sort
  189. in test/functional/test_framework/p2p.py:66 in f472f3077e outdated
    59@@ -59,6 +60,11 @@
    60     msg_pong,
    61     msg_sendaddrv2,
    62     msg_sendcmpct,
    63+    msg_sendrecon,
    


    jnewbery commented at 10:44 pm on January 17, 2021:
    sort
  190. in test/functional/test_framework/p2p.py:116 in f472f3077e outdated
    112@@ -107,6 +113,11 @@
    113     b"verack": msg_verack,
    114     b"version": msg_version,
    115     b"wtxidrelay": msg_wtxidrelay,
    116+    b"sendrecon": msg_sendrecon,
    


    jnewbery commented at 10:44 pm on January 17, 2021:
    sort
  191. in test/functional/test_framework/p2p.py:429 in f472f3077e outdated
    407@@ -379,6 +408,7 @@ def on_sendaddrv2(self, message): pass
    408     def on_sendcmpct(self, message): pass
    409     def on_sendheaders(self, message): pass
    410     def on_tx(self, message): pass
    411+    def on_sendrecon(self, message): pass
    


    jnewbery commented at 10:44 pm on January 17, 2021:
    sort
  192. jnewbery commented at 10:53 pm on January 17, 2021: member

    Just some style comments so far. A couple of high-level points:

    1. I agree with the comment here #18261 (review) that we should aim to separate this from the rest of net_processing. This PR currently has about 850 LOC change in net_processing.{cpp|h}, out of a total of around 5700 lines. That means ~15% of net_processing is erlay code after this PR (and over half of net_processing.h is erlay specific). Would it be possible to split the erlay logic into its own subcomponent? One immediate benefit for you is that there would be far fewer disruptive rebases if you did that.
    2. Could the minisketch code and tests be split into their own PR? It seems like there is pretty wide support for incorporating erlay. Reviewing and merging minisketch first would make this PR a lot smaller and more focused on just the p2p changes.
  193. naumenkogs force-pushed on Jan 19, 2021
  194. naumenkogs force-pushed on Jan 21, 2021
  195. in src/net_processing.cpp:246 in c452e5dbd6 outdated
    241+     * of relay of particular transactions due to collisions in short IDs.
    242+     */
    243+    const uint64_t m_local_recon_salt;
    244+
    245+    Mutex m_recon_state_mutex;
    246+    /// nullptr if we're not reconciling (neither passively nor actively) with this peer.
    


    jnewbery commented at 9:02 am on January 25, 2021:
    Please use doxygen style comments.
  196. in src/net_processing.h:10 in c452e5dbd6 outdated
     6@@ -7,6 +7,7 @@
     7 #define BITCOIN_NET_PROCESSING_H
     8 
     9 #include <net.h>
    10+#include <reconciliation.h>
    


    jnewbery commented at 9:03 am on January 25, 2021:
    This isn’t needed in the header.
  197. in src/net_processing.cpp:3410 in c452e5dbd6 outdated
    3406+
    3407+            // Flood those transactions which were received either via flooding, or inbound reconciliation,
    3408+            // but NOT via outbound reconciliation. Flooding then is mainly used for initial propagation
    3409+            // of new transactions across a network of reachable nodes quickly.
    3410+            LOCK(peer->m_recon_state_mutex);
    3411+            bool flood = !(peer->m_recon_state && peer->m_recon_state->m_responder);
    


    jnewbery commented at 9:04 am on January 25, 2021:
    Use a WITH_LOCK() macro or code block here to avoid holding the mutex for the rest of this block.
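
    (For example, a minimal sketch using the WITH_LOCK macro from sync.h so the mutex is held only while evaluating this expression; the condition is the one from the PR:)

     const bool flood{WITH_LOCK(peer->m_recon_state_mutex,
         return !(peer->m_recon_state && peer->m_recon_state->m_responder);)};
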
  198. in src/net_processing.cpp:4163 in c452e5dbd6 outdated
    4159@@ -3944,6 +4160,163 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
    4160         return;
    4161     }
    4162 
    4163+    std::chrono::microseconds current_time = GetTime<std::chrono::microseconds>();
    


    jnewbery commented at 9:06 am on January 25, 2021:
    Please place this inside the REQRECON message handling code block (we probably want to split each message handler into its own function in the future, so we should minimize the use of code outside those code blocks).
  199. in src/net_processing.cpp:4919 in c452e5dbd6 outdated
    4915@@ -4543,6 +4916,7 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
    4916         // Message: inventory
    4917         //
    4918         std::vector<CInv> vInv;
    4919+        PeerRef peer = GetPeerRef(pto->GetId());
    


    jnewbery commented at 9:08 am on January 25, 2021:
    Not required. There is already a local peer in this function.
  200. in src/net.h:693 in c452e5dbd6 outdated
    689@@ -688,12 +690,12 @@ class CNode
    690         }
    691     }
    692 
    693-    void PushTxInventory(const uint256& hash)
    694+    void PushTxInventory(const uint256& hash, const bool flood)
    


    jnewbery commented at 9:10 am on January 25, 2021:
    No need to mark pass-by-value arguments as const.
  201. in src/net_processing.h:72 in c452e5dbd6 outdated
    69@@ -69,6 +70,6 @@ class PeerManager : public CValidationInterface, public NetEventsInterface
    70 };
    71 
    72 /** Relay transaction to every node */
    73-void RelayTransaction(const uint256& txid, const uint256& wtxid, const CConnman& connman) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
    74+void RelayTransaction(const uint256& txid, const uint256& wtxid, const CConnman& connman, bool flood) EXCLUSIVE_LOCKS_REQUIRED(cs_main);
    


    jnewbery commented at 9:13 am on January 25, 2021:
    Consider making flood=false the default, so that call sites don’t need to be updated for this change.
  202. in src/reconciliation.h:6 in c452e5dbd6 outdated
    0@@ -0,0 +1,335 @@
    1+#include <minisketch/include/minisketch.h>
    


    jnewbery commented at 9:14 am on January 25, 2021:
    Add copyright boilerplate

    jnewbery commented at 9:17 am on January 25, 2021:
    This file should include all the standard library headers that it uses (eg chrono, set, etc)

    jnewbery commented at 9:18 am on January 25, 2021:
    Add include guards
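
    (i.e. the usual header skeleton; the years and the guard name here are only illustrative:)

     // Copyright (c) 2021 The Bitcoin Core developers
     // Distributed under the MIT software license, see the accompanying
     // file COPYING or http://www.opensource.org/licenses/mit-license.php.

     #ifndef BITCOIN_RECONCILIATION_H
     #define BITCOIN_RECONCILIATION_H

     #include <minisketch/include/minisketch.h>

     #include <chrono>
     #include <set>
     // ...

     #endif // BITCOIN_RECONCILIATION_H
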
  203. in src/reconciliation.h:14 in c452e5dbd6 outdated
    0@@ -0,0 +1,335 @@
    1+#include <minisketch/include/minisketch.h>
    2+
    3+/** Static component of the salt used to compute short txids for transaction reconciliation. */
    


    jnewbery commented at 9:15 am on January 25, 2021:
    Can you try to limit these lines to 80 or 100 chars to avoid wrapping.

    naumenkogs commented at 10:27 am on February 11, 2021:
    This particular line is less than 100 chars, so I assume you referred to the other ones which are longer than 100. I see a couple, gonna limit them.
  204. in src/reconciliation.h:85 in c452e5dbd6 outdated
    80+    static constexpr double DEFAULT_RECON_Q = 0.02;
    81+
    82+    ReconState(bool requestor, bool responder, bool flood_to, uint64_t k0, uint64_t k1) :
    83+        m_requestor(requestor), m_responder(responder), m_flood_to(flood_to), m_k0(k0), m_k1(k1), m_local_q(DEFAULT_RECON_Q) {}
    84+
    85+    /// Whether this peer will send reconciliation requests.
    


    jnewbery commented at 9:16 am on January 25, 2021:
    doxygen
  205. in src/reconciliation.h:78 in c452e5dbd6 outdated
    73+ * One reconciliation round consists of a sequence of messages. The sequence is
    74+ * asymmetrical, there is always a requestor and a responder. At the end of the
    75+ * sequence, nodes are supposed to exchange transactions, so that both of them
    76+ * have all relevant transactions. For more protocol details, refer to BIP-0330.
    77+ */
    78+struct ReconState {
    


    jnewbery commented at 9:21 am on January 25, 2021:
    This should really be a class since it has function members.
  206. in src/net_processing.cpp:3383 in ce83d3fbcc outdated
    3379+
    3380+            // Flood those transactions which were received either via flooding, or inbound reconciliation,
    3381+            // but NOT via outbound reconciliation. Flooding then is mainly used for initial propagation
    3382+            // of new transactions across a network of reachable nodes quickly.
    3383+            LOCK(peer->m_recon_state_mutex);
    3384+            bool flood = !(peer->m_recon_state && peer->m_recon_state->m_responder);
    


    mzumsande commented at 8:37 pm on February 5, 2021:
    Does this work as intended in the intermediate stage where not all of our peers support Erlay? If we are a non-reachable node that supports Erlay but has just one older pre-Erlay outbound peer - wouldn’t we flood all of its transactions also to our Erlay-supporting peers because flood == true here? (Considering the goal that “only well-connected publicly reachable nodes flood transactions to other publicly reachable nodes via outbound connections”).

    naumenkogs commented at 11:18 am on February 10, 2021:

    “Only well-connected publicly reachable nodes flood transactions to other publicly reachable nodes via outbound connections” is only an end-goal once most of the network implements Erlay.

    In the meantime, I think this behavior is better because it’s the most intuitive: for legacy peers, we keep the legacy behavior (flooding). Technically, changing this condition wouldn’t break anything, but I think it’s just less intuitive.

    I should probably expand the comment to address this misunderstanding though.


    mzumsande commented at 9:14 pm on February 22, 2021:

    But the legacy behavior also affects Erlay peers (because if a non-reachable node received transactions from legacy peers, it would act as if it was a reachable hub and forward them via flooding to other outbound peers, including Erlay ones).

    Apart from this specific spot that could easily be adjusted if needed, I think that when there are two relay mechanisms with different time scales existing in parallel, this could lead to some non-intuitive consequences - such as legacy relay being dominant and cannibalizing Erlay relay way into the regime when the majority of nodes already support Erlay.

    Would it maybe be worth doing a mixed-network simulation run and measuring the respective contributions at different ratios of Erlay to legacy nodes, if this is possible within the existing simulation framework?


    naumenkogs commented at 2:54 pm on March 15, 2021:
    @mzumsande sorry, I just saw this question. Yeah, that might be useful once I’m done with the big refactor for this… Intuitively, I don’t think legacy peers would cannibalize Erlay’s efficiency.
  207. in src/protocol.h:276 in f63ad15426 outdated
    271+ * Requests a reconciliation, and provides local reconciliation set size
    272+ * and coefficient used to accurately estimate reconciliation set difference
    273+ * for a peer to construct a set sketch.
    274+ * Peer should respond with "sketch" message.
    275+ */
    276+extern const char *REQRECON;
    


    mzumsande commented at 8:42 pm on February 5, 2021:
    This message is called “reqreconcil” in the BIP.
  208. in src/net.h:561 in 094c4ed8cb outdated
    550@@ -551,9 +551,11 @@ class CNode
    551 
    552         mutable RecursiveMutex cs_tx_inventory;
    553         CRollingBloomFilter filterInventoryKnown GUARDED_BY(cs_tx_inventory){50000, 0.000001};
    554-        // Set of transaction ids we still have to announce.
    555-        // They are sorted by the mempool before relay, so the order is not important.
    556-        std::set<uint256> setInventoryTxToSend;
    557+        // Transaction ids we still have to announce, and whether we may flood them:
    558+        // - true: flood unless peer correctly negotiated reconciliation and we didn't choose that peer for flooding.
    559+        // - false: use reconciliation unless it's not supported by the peer, flood otherwise.
    


    ariard commented at 4:12 pm on February 9, 2021:

    094c4ed

    This comment is really unclear. What about: “Transaction ids and their associated announcement protocols:

    • true, use flooding
    • false, use reconciliation”

    I think it’s easier to use this map if it stays blind w.r.t. protocol negotiation and flooding-peer selection.


    naumenkogs commented at 11:33 am on February 10, 2021:
    How do we reflect that, even if it’s “false, use reconciliation”, the transaction can still be flooded when reconciliation is not supported by a given peer?

    ariard commented at 1:49 pm on February 10, 2021:
    See this comment which I believe is superseding this discussion.
  209. in src/net_processing.cpp:160 in aff98fcc1e outdated
    146@@ -147,6 +147,12 @@ static constexpr uint32_t MAX_GETCFILTERS_SIZE = 1000;
    147 static constexpr uint32_t MAX_GETCFHEADERS_SIZE = 2000;
    148 /** the maximum percentage of addresses from our addrman to return in response to a getaddr message. */
    149 static constexpr size_t MAX_PCT_ADDR_TO_SEND = 23;
    150+/**
    151+ * When considering whether we should flood to an outbound connection supporting reconciliation,
    152+ * see how many outbound connections are already used for flooding. Flood only if the limit is not reached.
    


    ariard commented at 4:44 pm on February 9, 2021:

    aff98fc

    Maybe specify “Count how many outbound connections are already used for flooding, including the ones not supporting reconciliation at all”. Until I read GetFloodingOutboundsCount, I had doubts about whether this limit also covers outbound peers that don’t support reconciliation.

  210. in src/net_processing.cpp:398 in 34a485e0f1 outdated
    353+     * because the efficiency gain is the highest when reconciliation set difference
    354+     * is predictable. This queue is used to maintain the order of
    355+     * peers chosen for reconciliation.
    356+     */
    357+    Mutex m_recon_queue_mutex;
    358+    std::deque<CNode*> m_recon_queue GUARDED_BY(m_recon_queue_mutex);
    


    ariard commented at 4:55 pm on February 9, 2021:

    34a485e

    m_recon_resp_queue better ?

  211. in src/net_processing.cpp:359 in f63ad15426 outdated
    352@@ -353,6 +353,13 @@ class PeerManagerImpl final : public PeerManager
    353     /** Send a version message to a peer */
    354     void PushNodeVersion(CNode& pnode, int64_t nTime);
    355 
    356+    /**
    357+     * Reconciliations are requested periodically:
    358+     * every RECON_REQUEST_INTERVAL seconds we pick a peer from the queue.
    


    ariard commented at 5:07 pm on February 9, 2021:

    f63ad15

    Looking at the UpdateNextReconRequest implementation, I think this should mention that the delay decreases linearly with the number of reconciliation responders available.

    IIUC, you assume that the more reconciliation peers we have, the better our local transaction set propagates, so we should space out our requests to save bandwidth? If this is correct, maybe link to the relevant section of the Erlay paper, or wherever else it’s documented.
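
    (Worked example of the interval from the quoted code: with RECON_REQUEST_INTERVAL = 16 s and, say, 8 reconciling outbound peers in m_recon_queue, the next request fires 16/8 = 2 seconds after the previous one, so each individual peer is still reconciled with roughly once every 16 seconds no matter how many responders are available.)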

  212. in src/net_processing.cpp:1064 in ce83d3fbcc outdated
    1031+            // non-reachable nodes never flood transactions from other nodes (they have no inbounds).
    1032+            // Thus, making them flood these would tell a receiver that these indeed belong to the
    1033+            // flooding non-reachable nodes. Instead, we relay them via reconciliation,
    1034+            // in which case a receiver can't distinguish them from transactions we reconciled
    1035+            // with some other peer.
    1036+            RelayTransaction(txid, tx->GetWitnessHash(), m_connman, false);
    


    ariard commented at 5:44 pm on February 9, 2021:

    ce83d3fb

    Thanks for the comment, but I don’t think non-reachability is the source of the leak in this scenario. Non-reachable nodes may open connections to outbound peers that don’t support reconciliation, and thus rely on flooding for transaction announcements, at least until Erlay is widely deployed.

    I think the leak would come from selecting a different tx-announcement protocol for a rebroadcast tx than the one previously negotiated with the peer, regardless of whether that is flooding or reconciliation.

    For inbound->outbound, it’s better to say “flooding is initiated by inbound towards outbound peers”. Not sure that -> to signify initiation is a well-understood comment notation…


    naumenkogs commented at 2:14 pm on March 15, 2021:

    Non-reachable nodes may open connections to outbound not supporting reconciliations, thus relying on flooding for transaction announcements, at least until Erlay is well-deployed.

    Sure. But for reconciliation connections, the current policy is to flood only what was received via inbound. So if we rebroadcast to a reconciling peer via flooding, they will know it’s a rebroadcast, because a non-reachable node never receives anything via inbound (by definition).

    Continue here

  213. in src/net_processing.cpp:1724 in ce83d3fbcc outdated
    1694+            pnode->PushTxInventory(wtxid, flood);
    1695         } else {
    1696-            pnode->PushTxInventory(txid);
    1697+            // Reconciliations are not supported for non-wtxid peers,
    1698+            // so we always use flooding.
    1699+            pnode->PushTxInventory(txid, true);
    


    ariard commented at 5:57 pm on February 9, 2021:

    ce83d3fb

    Is the tx-announcement protocol marker in m_transactions_to_announce really required, given that ultimately you will check the peer’s m_recon_state and m_flood to fan out between vInv/txs_to_reconcile? A non-wtxid peer should have m_recon_state == null anyway.

    I think it’s better to remove this notion of announcement-protocol selection attached to the tx itself and instead only consider the link.


    naumenkogs commented at 1:49 pm on March 15, 2021:

    I know it’s more complicated logic, but I think it’s necessary. m_flood inside the peer’s ReconState is one factor, but it’s not sufficient, because we can’t flood all transactions to them.

    If we did, there would be too much flooding: a non-reachable node always sets m_flood to true for its 8 outbound peers, so it would flood every transaction it gets to them, which is too much bandwidth.

    I suggest flooding only transactions received via inbound flooding, so this is never the case for non-reachable nodes.

    I’m open to suggestions on other ways to limit flooding, but I think this one makes the most sense.

  214. ariard commented at 6:06 pm on February 9, 2021: member

    Still doing a first pass over the PR; some comments are really minor, but I think a few are more substantial.

    At first sight, I think this PR could benefit from better code organization (e.g. split requester/responder state into separate structs, encapsulate processing of the new messages, …), but I need to finish the high-level review before proposing anything.

  215. in src/net_processing.cpp:4171 in 7e5d826532 outdated
    4166+    // All initial reconciliation responses will be done at the same time to prevent tx-related privacy leaks.
    4167+    if (msg_type == NetMsgType::REQRECON) {
    4168+        LOCK(peer->m_recon_state_mutex);
    4169+        if (peer->m_recon_state == nullptr) return;
    4170+        if (!peer->m_recon_state->m_requestor) return;
    4171+        if (peer->m_recon_state->m_incoming_recon != RECON_NONE) return;
    


    dergoegge commented at 7:41 pm on February 9, 2021:
    Should we disconnect the peer instead of ignoring the message? In all three cases the peer “misbehaved” and sent an unsolicited REQRECON message.
  216. dergoegge commented at 8:11 pm on February 9, 2021: member
    Concept ACK on Erlay - Reducing bandwidth usage and increasing connectivity is great! The slightly increased transaction propagation times are an OK tradeoff IMO.
  217. in src/net.h:563 in c452e5dbd6 outdated
    556-        std::set<uint256> setInventoryTxToSend;
    557+        // Transaction ids we still have to announce, and whether we may flood them:
    558+        // - true: flood unless peer correctly negotiated reconciliation and we didn't choose that peer for flooding.
    559+        // - false: use reconciliation unless it's not supported by the peer, flood otherwise.
    560+        // Transactions are sorted by the mempool before relay, so the order is not important.
    561+        std::map<uint256, bool> m_transactions_to_announce;
    


    unseddd commented at 1:23 am on February 10, 2021:
    nit: since order is not important, maybe std::unordered_map for efficiency?
  218. in src/reconciliation.h:27 in c452e5dbd6 outdated
     9+ * This value allows to reconcile ~100 transactions (7 tx/s * 16s) during normal system operation at capacity.
    10+ * More frequent reconciliations would cause significant constant bandwidth overhead due to
    11+ * reconciliation metadata (sketch sizes etc.), which would nullify the efficiency.
    12+ * Less frequent reconciliations would introduce high transaction relay latency.
    13+ */
    14+static constexpr std::chrono::microseconds RECON_REQUEST_INTERVAL{16s};
    


    unseddd commented at 2:24 am on February 10, 2021:

    nit: consider using smth like:

    static constexpr std::chrono::microseconds RECON_REQUEST_INTERVAL{16000000};
    

    using the ""s operator requires a using namespace std::chrono_literals or similar: https://en.cppreference.com/w/cpp/chrono/operator%22%22s.

    Same for RECON_RESPONSE_INTERVAL.

  219. unseddd commented at 4:21 am on February 10, 2021: none

    Added a couple fuzz harnesses in my erlay branch, if you want them.

    Also contains some other minor fixups, like a change for src/test/fuzz/net.cpp (for the flood param):

    diff --git a/src/test/fuzz/net.cpp b/src/test/fuzz/net.cpp
    index 31b99600e..7d48a7ec3 100644
    --- a/src/test/fuzz/net.cpp
    +++ b/src/test/fuzz/net.cpp
    @@ -86,7 +86,8 @@ FUZZ_TARGET_INIT(net, initialize_net)
                     node.AddKnownTx(inv_opt->hash);
                 },
                 [&] {
    -                node.PushTxInventory(ConsumeUInt256(fuzzed_data_provider));
    +                node.PushTxInventory(ConsumeUInt256(fuzzed_data_provider), false);
    +                node.PushTxInventory(ConsumeUInt256(fuzzed_data_provider), true);
                 },
                 [&] {
                     const std::optional<CService> service_opt = ConsumeDeserializable<CService>(fuzzed_data_provider);
    
  220. unseddd commented at 4:31 am on February 10, 2021: none

    Concept ACK

    I understand that the requestor being the one to compute the sketches protects public nodes against potential DoS. Are there any protections for requestor(s)?

    Thinking about the scenario where one public node requests a sketch from a malicious public node who returns parameters for high diff sketches as a DoS.

  221. jnewbery commented at 10:27 am on February 10, 2021: member

    I think the approach needs rethinking for the reasons stated here: #18261#pullrequestreview-570100861. The Erlay logic in this PR is spread across many functions in net_processing.cpp and reconciliation.h, which means that it’s impossible to test that logic in isolation. I suppose that’s why there aren’t any unit tests included in this PR.

    I think it would be much cleaner to have a fully separate module, similar to how txrequest was separated from the rest of net_processing. That module maintains its own private state and exposes a limited number of interface methods to net_processing. That means that it’s possible to very thoroughly unit test and fuzz test that module in isolation.

  222. jonatack commented at 10:43 am on February 10, 2021: member

    Needs rebase.

    Edit: This has 166 hidden comments, so apologies if I missed something, but are there any special build steps or flags to pass? With gcc 10.2.1 I’m seeing 16 occurrences of the same build warning (see comment below) and the functional tests are all failing for me with detected inconsistent lock order for 'peer->m_recon_state_mutex' in net_processing.cpp:4936 (in thread 'msghand'). Trying with clang now.

    Edit: no warnings with clang 9 but the functional tests fail for the same reason.

    Edit: updated, rebooted, cleared build cache, made distclean, rinse, repeat…same result.

    Edit: the unit tests also hang for me at Entering directory '/home/jon/projects/bitcoin/bitcoin/src/minisketch'

    Edit: if the minisketch unit tests run indefinitely, it may be good to not include them in make check (or a bounded run time version)

  223. in src/minisketch/include/minisketch.h:306 in c452e5dbd6 outdated
    288+    {
    289+        std::vector<uint64_t> result(max_elements);
    290+        ssize_t ret = minisketch_decode(m_minisketch.get(), max_elements, result.data());
    291+        if (ret == -1) return {};
    292+        result.resize(ret);
    293+        return std::move(result);
    


    jonatack commented at 10:49 am on February 10, 2021:

    with gcc 10.2.1

    In file included from ./reconciliation.h:1,
                     from ./net_processing.h:10,
                     from rpc/net.cpp:13:
    ./minisketch/include/minisketch.h: In member function std::optional<std::vector<long unsigned int> > Minisketch::Decode(size_t) const:
    ./minisketch/include/minisketch.h:293:25: warning: redundant move in return statement [-Wredundant-move]
      293 |         return std::move(result);
          |                ~~~~~~~~~^~~~~~~~
    ./minisketch/include/minisketch.h:293:25: note: remove std::move call
    
  224. in src/net_processing.cpp:4943 in c452e5dbd6 outdated
    4932@@ -4559,22 +4933,23 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
    4933 
    4934             if (pto->m_tx_relay != nullptr) {
    4935                 LOCK(pto->m_tx_relay->cs_tx_inventory);
    4936+                LOCK(peer->m_recon_state_mutex);
    


    jonatack commented at 11:54 am on February 10, 2021:
    After building with both gcc 10.2.1 and clang 9, functional tests are failing with AssertionError: Unexpected stderr Assertion failed: detected inconsistent lock order for 'peer->m_recon_state_mutex' in net_processing.cpp:4936 (in thread 'msghand')
  225. in src/net_processing.cpp:2944 in 34a485e0f1 outdated
    2814+    // Feature negotiation of tx reconciliation should happen between VERSION and
    2815+    // VERACK, to avoid relay problems from switching after a connection is up.
    2816+    if (msg_type == NetMsgType::SENDRECON) {
    2817+        if (!pfrom.m_tx_relay) return;
    2818+        LOCK(peer->m_recon_state_mutex);
    2819+        if (peer->m_recon_state != nullptr) return; // Do not support reconciliation salt/version updates.
    


    ariard commented at 1:54 pm on February 10, 2021:

    34a485e

    This check seems to assume we may in the future have a new reconciliation protocol for which we would reuse ReconState but with an earlier signaling mechanism. Given that SENDRECON is versioned, I would expect a version bump to introduce such a hypothetical reconciliation protocol rather than a new message that already spawns the ReconState allocation.

    Thus I think this check can be removed.

  226. in src/net_processing.cpp:2987 in 34a485e0f1 outdated
    2853+        uint256 full_salt = (CHashWriter(RECON_SALT_HASHER) << salt1 << salt2).GetSHA256();
    2854+
    2855+        peer->m_recon_state = MakeUnique<ReconState>(recon_requestor, recon_responder, flood_to, full_salt.GetUint64(0), full_salt.GetUint64(1));
    2856+
    2857+        // Reconcile with all outbound peers supporting reconciliation (even if we flood to them),
    2858+        // to not miss transactions they have for us but won't flood.
    


    ariard commented at 2:56 pm on February 10, 2021:

    34a485e

    I can’t find the section in the paper detailing the purpose of reconciliation as a redundant tx-announcement mechanism for strategic peers. Given that RECON_REQUEST_INTERVAL is greater than INVENTORY_BROADCAST_INTERVAL, I guess the motivation isn’t about improving the latency of transactions discovered during flooding rounds towards strategic peers.


    naumenkogs commented at 2:11 pm on March 15, 2021:

    “Strategic” is not a bidirectional link property; it’s decided locally by every peer. We might flood stuff to them, but they won’t flood to us (because they only flood to outbound peers).

    In that case, we would never get their transactions if we didn’t reconcile.

  227. pinheadmz commented at 3:19 pm on February 10, 2021: member

    Update: With a make clean, branch compiled fine. Sorry Gleb!

    I had issues compiling this branch as well. Full error: https://pastebin.com/6mc9eRed

    Mostly errors like this:

      CXXLD    libbitcoinconsensus.la

    Undefined symbols for architecture x86_64:
      "ChaCha20::ChaCha20(unsigned char const*, unsigned long)", referenced from:
    

    Command: ./autogen.sh && ./configure && make && make check

    I didn’t git clean or anything, probably had another PR branch up in my local repo before attempting to compile, looked good up until libbitcoinconsensus.la

    System info:

    --> uname -a
    Darwin   18.7.0 Darwin Kernel Version 18.7.0: Tue Aug 20 16:57:14 PDT 2019; root:xnu-4903.271.2~2/RELEASE_X86_64 x86_64
    --> gcc -v
    Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/usr/include/c++/4.2.1
    Apple clang version 11.0.0 (clang-1100.0.33.8)
    Target: x86_64-apple-darwin18.7.0
    Thread model: posix
    InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
    
  228. in src/net_processing.cpp:3423 in ce83d3fbcc outdated
    3374@@ -3358,7 +3375,13 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
    3375             // requests for it.
    3376             m_txrequest.ForgetTxHash(tx.GetHash());
    3377             m_txrequest.ForgetTxHash(tx.GetWitnessHash());
    3378-            RelayTransaction(tx.GetHash(), tx.GetWitnessHash(), m_connman);
    3379+
    3380+            // Flood those transactions which were received either via flooding, or inbound reconciliation,
    3381+            // but NOT via outbound reconciliation. Flooding then is mainly used for initial propagation
    3382+            // of new transactions across a network of reachable nodes quickly.
    


    ariard commented at 3:27 pm on February 10, 2021:

    ce83d3fb

    If I understand this comment correctly, you mean that a transaction discovered through an accepted reconciliation will be marked for reconciliation for its future announcements, no matter the peer type (strategic outbound / non-strategic outbound / inbound).

    I don’t get the rationale of such an approach; propagation of such transactions could be accelerated if we flooded them to our outbound peers.

    Also, you may consider renaming “inbound reconciliation” -> “initiated reconciliation” and “outbound reconciliation” -> “accepted reconciliation”. Even if reconciliation roles are decided as a function of our peer selection, they should be clearly dissociated as orthogonal concepts.


    naumenkogs commented at 2:49 pm on March 15, 2021:

    If I understand this comment correctly, you mean that a transaction discovered through an accepted reconciliation will be marked for reconciliation for its future announcements, no matter the peer type (strategic outbound / non-strategic outbound / inbound).

    Correct if you mean accepted=outbound (see confusion below)

    I don’t get the rationale of such an approach; propagation of such transactions could be accelerated if we flooded them to our outbound peers.

    Discussed here

    Also, you may consider renaming “inbound reconciliation” -> “initiated reconciliation” and “outbound reconciliation” -> “accepted reconciliation”. Even if reconciliation roles are decided as a function of our peer selection, they should be clearly dissociated as orthogonal concepts.

    ACK, although I think “inbound” corresponds to the accepted role, not the initiated one.

  229. pinheadmz commented at 3:59 pm on February 10, 2021: member

    @naumenkogs the new test I think has the wrong permissions:

    -rwxr-xr-x    1 matthewzipkin  staff    5718 Feb  3 10:10 p2p_disconnect_ban.py
    -rwxr-xr-x    1 matthewzipkin  staff    3833 Dec  3 15:09 p2p_dos_header_tree.py
    -rw-r--r--    1 matthewzipkin  staff   28920 Feb 10 09:34 p2p_erlay.py
    -rwxr-xr-x    1 matthewzipkin  staff    5741 Feb  3 10:10 p2p_eviction.py
    -rwxr-xr-x    1 matthewzipkin  staff    5185 Feb  3 10:10 p2p_feefilter.py
    
    $ test/functional/p2p_erlay.py
    -bash: test/functional/p2p_erlay.py: Permission denied
    
  230. in src/net_processing.cpp:5090 in ce83d3fbcc outdated
    4895+                            // Since we reconcile frequently, it either means:
    4896+                            // (1) a peer for some reason does not request reconciliations from us for a long while, or
    4897+                            // (2) really a lot of valid fee-paying transactions were dumped on us at once.
    4898+                            // We don't care about a laggy peer (1) because we probably can't help them even if we flood transactions.
    4899+                            // However, exploiting (2) should not prevent us from relaying certain transactions.
    4900+                            // Since computing a sketch over too many elements is too expensive, we will just flood some transactions here.
    


    ariard commented at 4:10 pm on February 10, 2021:

    ce83d3fb

    I think we should have this discussion about simply throwing away transactions to stay under some SKETCH_MAX_SIZE: sort them by feerate, keep the ones above the mark, discard the others.

    We’re already making the same kind of assumption elsewhere: if a transaction is under the mempool min fee, we won’t relay it further. The same can apply to reconciliation: we don’t allocate sketch resources for low-fee transactions.

    That said, I feel it should be part of a wider conversation about how we better price network resources while still letting what is “near confirmation” flow through the network.
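
    A minimal sketch of the truncation idea above (the SKETCH_MAX_ELEMENTS constant, the entry struct, and the cached fee/size fields are all hypothetical), assuming the reconciliation set keeps a feerate alongside each short ID:

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct ReconSetEntry {
            uint64_t short_id;  // short tx ID used for sketching
            int64_t fee;        // cached fee in satoshis (assumed available)
            int32_t vsize;      // cached virtual size
            double Feerate() const { return double(fee) / vsize; }
        };

        constexpr size_t SKETCH_MAX_ELEMENTS{3000}; // hypothetical cap

        // Keep the SKETCH_MAX_ELEMENTS highest-feerate entries, discard the rest.
        void TruncateReconSet(std::vector<ReconSetEntry>& recon_set)
        {
            if (recon_set.size() <= SKETCH_MAX_ELEMENTS) return;
            std::nth_element(recon_set.begin(), recon_set.begin() + SKETCH_MAX_ELEMENTS, recon_set.end(),
                             [](const ReconSetEntry& a, const ReconSetEntry& b) { return a.Feerate() > b.Feerate(); });
            recon_set.resize(SKETCH_MAX_ELEMENTS);
        }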

  231. ariard commented at 4:29 pm on February 10, 2021: member

    Thanks for working on this; the scope of topics covered is really wide and interesting: privacy attacks, network resource usage, latency, …

    What I’m still not able to reason about easily is the selection of the tx-announcement protocol, which AFAIU is a function of the peer type (strategic outbound, non-strategic outbound, inbound), the origin of the transaction considered (initial broadcast, relay-from-flooding, relay-from-initiated-reconciliation, relay-from-accepted-reconciliation), and lastly the sketch processing limits.

    I think it’s mostly a matter of tying the BIP/paper more tightly to the implementation comments, and marking any divergences from them.

    W.r.t. encapsulation, it could at least help to move the new message handlers into their own methods, the same way it has been done for BIP157 (ProcessReconcilDiff, ProcessSketch, …).

  232. jonatack commented at 0:02 am on February 11, 2021: member

    @naumenkogs the new test I think has the wrong permissions: -bash: test/functional/p2p_erlay.py: Permission denied

    Confirming @pinheadmz’ comment, I needed to run chmod 755 test/functional/p2p_erlay.py on the file. The test then fails with Assertion failed: detected inconsistent lock order for 'cs_main' in net_processing.cpp:2933 (in thread 'msghand')

    Traceback (most recent call last):
      File "/home/jon/projects/bitcoin/bitcoin/test/functional/p2p_erlay.py", line 583, in <module>
        ReconciliationTest().main()
      File "/home/jon/projects/bitcoin/bitcoin/test/functional/test_framework/test_framework.py", line 149, in main
        exit_code = self.shutdown()
      File "/home/jon/projects/bitcoin/bitcoin/test/functional/test_framework/test_framework.py", line 278, in shutdown
        self.stop_nodes()
      File "/home/jon/projects/bitcoin/bitcoin/test/functional/test_framework/test_framework.py", line 525, in stop_nodes
        node.stop_node(wait=wait, wait_until_stopped=False)
      File "/home/jon/projects/bitcoin/bitcoin/test/functional/test_framework/test_node.py", line 334, in stop_node
        raise AssertionError("Unexpected stderr {} != {}".format(stderr, expected_stderr))
    AssertionError: Unexpected stderr Assertion failed: detected inconsistent lock order for 'cs_main' in net_processing.cpp:2933 (in thread 'msghand'), details in debug log. != 
    [node 0] Cleaning up leftover process
    
  233. Add minisketch dependency fae78730d9
  234. Squashed 'src/minisketch/' content from commit a570de261
    git-subtree-dir: src/minisketch
    git-subtree-split: a570de26152a0ec224c38f8157f796030cb0bd20
    156004af80
  235. Merge commit '156004af80c20f0967ac867ca900086bcc17d37f' as 'src/minisketch' 5cafbbcd30
  236. Announce reconciliation support
    If a peer supports wtxid relay, our node should notify them
    that we are willing to participate in tx reconciliation.
    At this point we generate a salt used for computing
    short tx IDs for reconciliations over this connection.
    01bf7ee3d2
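    A rough sketch of the step this commit describes, assuming the draft SENDRECON message carries (requestor flag, responder flag, version, salt); the actual field order and encoding are defined by the BIP-330 draft and may differ:

        #include <limits>
        #include <net.h>
        #include <netmessagemaker.h>
        #include <random.h>
        #include <version.h>

        void AnnounceReconciliationSupport(CConnman& connman, CNode& peer, uint32_t recon_version)
        {
            // Per-connection salt, later combined with the peer's salt to key short tx IDs.
            const uint64_t local_salt{GetRand(std::numeric_limits<uint64_t>::max())};
            const bool we_request{!peer.IsInboundConn()};  // we initiate reconciliations on outbound links
            const bool we_respond{peer.IsInboundConn()};   // we respond to them on inbound links
            connman.PushMessage(&peer, CNetMsgMaker(PROTOCOL_VERSION).Make(
                NetMsgType::SENDRECON, we_request, we_respond, recon_version, local_salt));
        }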
  237. naumenkogs marked this as a draft on Feb 11, 2021
  238. pinheadmz commented at 2:20 pm on February 11, 2021: member
    @jonasnick I ran chmod a+x and the test ran and passed.
  239. jonatack commented at 2:58 pm on February 11, 2021: member

    the test ran and passed.

    Building with --enable-debug? The lock order errors seem pertinent to these changes. I even rebuilt again a few hours later yesterday after updating my depends; same thing. It’s odd if no one else is seeing them.

  240. jonatack commented at 3:00 pm on February 11, 2021: member
    I suppose if no one is seeing the issues, I’ll wait until the next push here and look at what the CI says.
  241. Handle reconciliation support announcement
    Once we receive a message from a peer signalling
    reconciliation support, we initialize the data structures
    required to perform reconciliations. If we are planning to
    initiate reconciliations with this peer (if it's an outbound connection),
    add this peer to the reconciliation queue.
    fb0d2575bf
  242. Limit transaction flooding
    Count how many flooding and how many reconciling
    outbound peers we have, which is useful for deciding whether a new
    peer should use flooding or reconciliation.

    It helps to save bandwidth and presumably reduces the privacy leak.
    86ee3c6235
  243. Remove node from reconciliation queue when disconnecting 9fa4f359dc
  244. Add a function to announce transactions after reconciliation
    Have a separate function to announce transactions to a peer (via INVs)
    considering INV message limitations.
    13f953a729
  245. Prepare to distinguish transactions to flood and to reconcile
    We need to assign and keep track of the way we're going to relay a transaction,
    based on the connection type it arrived from, to
    enable the transaction reconciliation protocol.
    f03dfadaa4
  246. naumenkogs force-pushed on Feb 13, 2021
  247. Distinguish transactions to flood and to reconcile
    Flooding and reconciliation are two different ways to announce
    transactions to the peers.
    For transactions received from inbound links, use flooding to enable
    a rapid relay across the reachable nodes.
    
    Use reconciliation (add transactions to the reconciliation sets) for
    all transactions which were received from outbound links to enable
    a slower but more efficient relay to unreachable nodes.
    f8eb773720
  248. Reduce tx broadcast interval
    Since reconciliation is naturally slower than flooding, and also
    since we limit the flooding aspect, we should reduce the intervals
    between broadcasts (flooding out or adding to reconciliation sets).
    Otherwise, transaction relay will be too slow
    (and probably less efficient).
    
    Note that for privacy reasons the ratio between inbound and outbound
    delays matter much more than the actual delays. That ratio is preserved
    here, so it is not a privacy degradation.
    bbb771b33b
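    The ratio argument can be made concrete with hypothetical numbers (the actual constants in this PR may differ):

        #include <chrono>
        using namespace std::chrono_literals;

        // Hypothetical intervals, for illustration only.
        constexpr auto OUTBOUND_INV_INTERVAL_OLD = 2000ms;
        constexpr auto INBOUND_INV_INTERVAL_OLD  = 5000ms;
        constexpr auto OUTBOUND_INV_INTERVAL_NEW = 1000ms;
        constexpr auto INBOUND_INV_INTERVAL_NEW  = 2500ms;

        // Halving both intervals preserves the 5:2 inbound:outbound ratio,
        // which is what the privacy argument relies on.
        static_assert(2 * INBOUND_INV_INTERVAL_OLD == 5 * OUTBOUND_INV_INTERVAL_OLD);
        static_assert(2 * INBOUND_INV_INTERVAL_NEW == 5 * OUTBOUND_INV_INTERVAL_NEW);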
  249. Request tx reconciliation as scheduled
    If the peer is next in the queue for reconciliation,
    and enough time has passed since the previous
    reconciliation, consider the peer for reconciliation.

    If there is no pending reconciliation request to the peer,
    send one; otherwise just move it to the end of the queue and
    update the global next reconciliation request time.
    787763e384
  250. Process incoming reconciliation request
    Upon receiving a reconciliation request, a node stores
    it and schedules a response.
    Do not respond to a request right away, as that would enable
    a DoS attack and allow monitoring of the transactions a node has.
    Instead, respond to all reconciliation requests at the same time
    after a small delay.
    28e8c77281
  251. Add helper to compute reconciliation tx short id
    Short ids are used to compute reconciliation sketches.
    da38a6f169
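    A minimal sketch of such a helper, assuming the two SipHash keys are the halves of the combined per-connection salt and the short ID is a 32-bit truncation of the SipHash output (the exact mapping, e.g. whether 0 is excluded, is specified by BIP-330 and may differ):

        #include <crypto/siphash.h>
        #include <uint256.h>
        #include <cstdint>

        uint32_t ComputeShortID(uint64_t k0, uint64_t k1, const uint256& wtxid)
        {
            // Keyed hash, so different connections map the same wtxid to unrelated short IDs.
            const uint64_t s{SipHashUint256(k0, k1, wtxid)};
            return static_cast<uint32_t>(s & 0xFFFFFFFF);
        }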
  252. Add helper to compute sketches for tx reconciliation
    A sketch is a representation of a list of transaction IDs,
    which enables reconciliation (an efficient sync of lists
    between peers).
    218c99e387
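    A rough sketch of building such a sketch over 32-bit short IDs with the vendored minisketch C API; the field size, implementation index and capacity here are illustrative rather than the PR's actual parameters:

        #include <minisketch.h>
        #include <cstdint>
        #include <vector>

        std::vector<unsigned char> BuildSketch(const std::vector<uint32_t>& short_ids, size_t capacity)
        {
            // A capacity-c sketch can reconstruct a set difference of at most c elements.
            minisketch* sketch = minisketch_create(32 /* field bits */, 0 /* default implementation */, capacity);
            for (const uint32_t id : short_ids) {
                minisketch_add_uint64(sketch, id);
            }
            std::vector<unsigned char> serialized(minisketch_serialized_size(sketch));
            minisketch_serialize(sketch, serialized.data());
            minisketch_destroy(sketch);
            return serialized;
        }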
  253. Respond to a reconciliation request
    When the time comes, a node computes a sketch of the
    local transactions based on the parameters sent
    in the reconciliation request, and sends that sketch
    to the requestor. Clear local state.
    2c92c14ae7
  254. Add a finalize reconciliation function
    This currently unused function is supposed to be used once
    a reconciliation round is done. It cleans the state corresponding
    to the passed reconciliation.
    3ddecd312c
  255. Add a function to identify local/remote missing txs
    When the sketches from both sides are combined successfully,
    the diff is produced. Then this diff can (together with the local txs)
    be used to identify which transactions are missing locally and remotely.
    b05ed3f51f
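    A rough sketch of this step: merge the peer's sketch into ours, decode the symmetric difference, and split it by membership in our local short-ID set (error handling and the PR's actual data structures omitted):

        #include <minisketch.h>
        #include <cstdint>
        #include <set>
        #include <vector>

        bool ComputeMissing(minisketch* our_sketch, const minisketch* their_sketch,
                            const std::set<uint32_t>& local_short_ids,
                            std::vector<uint32_t>& missing_locally,   // they have, we don't
                            std::vector<uint32_t>& missing_remotely)  // we have, they don't
        {
            // After merging, our_sketch describes the symmetric difference of the two sets.
            minisketch_merge(our_sketch, their_sketch);
            std::vector<uint64_t> diff(minisketch_capacity(our_sketch));
            const ssize_t count{minisketch_decode(our_sketch, diff.size(), diff.data())};
            if (count < 0) return false; // decoding failed: a sketch extension is needed
            for (ssize_t i = 0; i < count; ++i) {
                const uint32_t short_id{static_cast<uint32_t>(diff[i])};
                if (local_short_ids.count(short_id)) {
                    missing_remotely.push_back(short_id);
                } else {
                    missing_locally.push_back(short_id);
                }
            }
            return true;
        }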
  256. Handle reconciliation sketch and successful decoding
    Send/request missing transactions, clear the state,
    and send a finalization message.
    1b5f82f79c
  257. Request extension if decoding failed
    If, after decoding a reconciliation sketch, it turns out
    to be insufficient to find the set difference, request an extension.
    Store the initial sketches so that we are able to process
    the extension sketch.
    f2def2e8da
  258. Handle reconciliation extension request
    If a peer failed to reconcile based on our initial response sketch,
    they will ask us for a sketch extension. Store this request to respond to it later.
    681609a508
  259. Prepare to respond to extension requests
    Add 2 variables tracking reconciliation state:
    (1) recon set snapshot and (2) capacity snapshot.
    (1) is used to store transactions that arrived after we sent out
    an initial sketch, but before the reconciliation is over, since
    these transactions should not go into a sketch extension.
    (2) is used to set the capacity of the extended sketch because
    it provides a good estimation (efficiency/failure tradeoff).
    d7ee88968f
  260. Respond to an extension request
    Compute a sketch with extra capacity and send
    those extra syndromes to the peer.
    It is safe to respond to an extension request without a delay
    because these requests are already rate-limited by the initial requests.
    3df995e6d0
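    A rough sketch of the extension response, relying on the minisketch property that a lower-capacity sketch of a set is a serialization prefix of a higher-capacity sketch of the same set; the doubling of capacity and the 32-bit field size are illustrative:

        #include <minisketch.h>
        #include <cstdint>
        #include <vector>

        std::vector<unsigned char> BuildExtension(const std::vector<uint32_t>& snapshot_short_ids,
                                                  size_t initial_capacity)
        {
            const size_t extended_capacity{initial_capacity * 2};
            minisketch* sketch = minisketch_create(32, 0, extended_capacity);
            for (const uint32_t id : snapshot_short_ids) minisketch_add_uint64(sketch, id);
            std::vector<unsigned char> full(minisketch_serialized_size(sketch));
            minisketch_serialize(sketch, full.data());
            minisketch_destroy(sketch);
            // The first initial_capacity syndromes were already sent with the initial
            // sketch; ship only the extra ones (4 bytes per syndrome for a 32-bit field).
            return {full.begin() + initial_capacity * 4, full.end()};
        }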
  261. Handle extension sketch
    If a peer responded to our request with a sketch extension,
    attempt to decode it to find missing transactions by combining
    it with the initial sketch.
    On success, share/request missing transactions.
    On failure, send all the transactions we had for the peer.
    54bc4302ca
  262. Add a function to get wtxids by shortids
    At the end of a reconciliation round, a peer may ask us
    for transactions by their short id. Add a function for a local
    lookup short_id->wtxid.
    c90ba70b69
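    A minimal sketch of such a lookup over the reconciliation set snapshot (data structure and names illustrative):

        #include <uint256.h>
        #include <cstdint>
        #include <map>
        #include <vector>

        std::vector<uint256> GetWTXIDsByShortIDs(const std::map<uint32_t, uint256>& snapshot_by_short_id,
                                                 const std::vector<uint32_t>& requested_short_ids)
        {
            std::vector<uint256> result;
            for (const uint32_t short_id : requested_short_ids) {
                const auto it{snapshot_by_short_id.find(short_id)};
                if (it != snapshot_by_short_id.end()) result.push_back(it->second);
                // Short IDs we no longer know about are simply skipped.
            }
            return result;
        }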
  263. Handle reconciliation finalization message
    Once a peer tells us reconciliation is done, we should behave as follows:
    - if it was successful, just respond with the transactions they asked for
      by short ID.
    - if it was a full failure, respond with all local transactions from the reconciliation
      set snapshot.
    - if it was a partial failure (only the low or the high part failed after a bisection),
      respond with all transactions which were asked for by short ID,
      and announce the local txs which belong to the failed chunk.
    836568f9d3
  264. tests: support (inbound) connecting to mininode 3627a06200
  265. Add tests for set reconciliation c5389c20b8
  266. naumenkogs force-pushed on Feb 13, 2021
  267. in src/net_processing.cpp:2561 in 01bf7ee3d2 outdated
    2554@@ -2546,6 +2555,22 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
    2555 
    2556         if (greatest_common_version >= WTXID_RELAY_VERSION) {
    2557             m_connman.PushMessage(&pfrom, msg_maker.Make(NetMsgType::WTXIDRELAY));
    2558+
    2559+            // Reconciliation is supported only when wtxid relay is supported, and only for
    2560+            // those connections which (at least might) support transaction relay.
    2561+            if (pfrom.IsFullOutboundConn() || pfrom.IsInboundConn() || pfrom.IsManualConn()) {
    


    amitiuttarwar commented at 5:53 am on February 14, 2021:
    can you extract this into its own CNode function with a name that captures the intent & a case statement for all the connection types? you can see ExpectServicesFromConn as an example.
  268. in src/net_processing.cpp:2861 in fb0d2575bf outdated
    2856+            if (!recon_responder) return;
    2857+            // TODO: Flood only through a limited number of outbound connections.
    2858+            flood_to = true;
    2859+        }
    2860+
    2861+        uint64_t local_salt = peer->m_local_recon_salt;
    


    amitiuttarwar commented at 6:26 am on February 14, 2021:

    this salt calculation (and probably also the flood_to logic) seems like an example of code that can be extracted into the reconciliation module rather than added to ProcessMessage.

    it would make sense to me for the flow to be something like:

    • ProcessMessage does internal state checks (m_tx_relay exists, wtxid is enabled)
    • extract the message from vRecv
    • pass it along into the module to handle
    • module returns saying “no-op” or “here is a ReconState object”
    • if it exists, ProcessMessage stores the object on the peer

    I suspect we could go even further, but this would be a start. Ideally the module would abstract this nuanced Erlay-specific logic away from net_processing, even if it’s per-peer. The txrequest module is a good example of this, e.g. the PeerInfo struct and how it’s applied within the module to make choices, with the decisions surfaced to net_processing.
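
    A rough shape of the flow described above (all names hypothetical), where net_processing keeps only the cheap checks and the module owns the negotiation logic:

        #include <cstdint>
        #include <memory>

        struct ReconState; // per-peer state, stored on the peer once negotiation succeeds

        class TxReconciliation {
        public:
            // Returns nullptr on a no-op (e.g. duplicate or unsupported SENDRECON),
            // otherwise a fully initialized per-peer state object.
            std::unique_ptr<ReconState> HandleSendRecon(bool requestor, bool responder,
                                                        uint32_t version, uint64_t remote_salt,
                                                        bool is_outbound_peer);
        };

        // In ProcessMessage (pseudocode level):
        //   if (!pfrom.m_tx_relay) return;                          // internal state checks
        //   vRecv >> requestor >> responder >> version >> salt;     // extract the message
        //   auto state = m_reconciliation.HandleSendRecon(...);     // module decides
        //   if (state) peer->m_recon_state = std::move(state);      // store it on the peer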


    naumenkogs commented at 9:33 am on February 14, 2021:

    Yeah, I’m planning to do something similar in the next couple of days. That’s why I marked the PR as a draft for now :)

    Thank you for the design suggestion!

  269. in src/net_processing.cpp:938 in 86ee3c6235 outdated
    933+{
    934+    size_t result = 0;
    935+    m_connman.ForEachNode([this, &result, &skip_node](const CNode* pnode) {
    936+        if (!pnode->m_tx_relay) return;
    937+        if (pnode->GetId() == skip_node.GetId()) return;
    938+        if (!pnode->IsFullOutboundConn() && !pnode->IsManualConn()) return;
    


    amitiuttarwar commented at 6:31 am on February 14, 2021:
    I find (!A && !B) to be very confusing. Again, it would be good to extract this into its own function with a switch statement.
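
    One way to express that suggestion: a dedicated predicate with an explicit switch over ConnectionType instead of (!A && !B); the helper name is hypothetical:

        #include <cassert>
        #include <net.h>

        bool IsFloodingCandidate(ConnectionType conn_type)
        {
            switch (conn_type) {
                case ConnectionType::OUTBOUND_FULL_RELAY:
                case ConnectionType::MANUAL:
                    return true;
                case ConnectionType::INBOUND:
                case ConnectionType::FEELER:
                case ConnectionType::BLOCK_RELAY:
                case ConnectionType::ADDR_FETCH:
                    return false;
            } // no default: the compiler warns if a new ConnectionType goes unhandled
            assert(false);
            return false;
        }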
  270. in src/net_processing.cpp:932 in 86ee3c6235 outdated
    928@@ -914,6 +929,28 @@ void PeerManagerImpl::PushNodeVersion(CNode& pnode, int64_t nTime)
    929     }
    930 }
    931 
    932+size_t PeerManagerImpl::GetFloodingOutboundsCount(const CNode& skip_node) const
    


    amitiuttarwar commented at 6:42 am on February 14, 2021:

    instead of recalculating whenever we get a SENDRECON message, could this logic be significantly simplified by just storing a count that gets updated when nodes do something to change their state? the characteristics being checked are all established early in the connection (connection type, m_tx_relay struct, flood_to being set)

    I think this should remove the need for the “skip node” logic in this function, but if not, the node being processed can be checked and one subtracted if needed.

  271. amitiuttarwar commented at 6:43 am on February 14, 2021: contributor
    making my way through the first couple commits
  272. naumenkogs commented at 1:24 pm on March 2, 2021: member

    An update here. Based on some suggestions, I moved reconciliation to a separate module. I suggest doing some initial review of that in the PR in my repo. Could you folks help to evaluate my new approach? @jnewbery @amitiuttarwar @sipa @jonatack (everyone else is welcome too)

    This way we could save the review effort of those contributors who are less concerned about modularity: first settle the modular design, then proceed to the other questions.

    Once that is done, I’ll get back here and welcome full review again.

  273. amitiuttarwar commented at 7:42 pm on March 8, 2021: contributor
    @naumenkogs I took a first look and I see that most of the comments I’ve left here are still applicable. I’m happy to do a deeper dive on the branch once you’ve addressed the outstanding comments.
  274. in src/net_processing.cpp:2984 in c5389c20b8
    2979+        uint64_t salt1 = local_salt, salt2 = remote_salt;
    2980+        if (salt1 > salt2) std::swap(salt1, salt2);
    2981+        static const auto RECON_SALT_HASHER = TaggedHash(RECON_STATIC_SALT);
    2982+        uint256 full_salt = (CHashWriter(RECON_SALT_HASHER) << salt1 << salt2).GetSHA256();
    2983+
    2984+        peer->m_recon_state = MakeUnique<ReconState>(recon_requestor, recon_responder, flood_to, full_salt.GetUint64(0), full_salt.GetUint64(1));
    


    fanquake commented at 0:39 am on March 12, 2021:
  275. naumenkogs commented at 9:00 pm on March 23, 2021: member
    Closing in favor of #21515. I believe I addressed most of the comments from here.
  276. naumenkogs closed this on Mar 23, 2021

  277. naumenkogs commented at 8:08 am on March 28, 2021: member
    #21515 is ready for review now
  278. DrahtBot locked this on Aug 16, 2022
