p2p: Fill reconciliation sets and request reconciliation (Erlay) #26283

naumenkogs wants to merge 7 commits into bitcoin:master from naumenkogs:2022-10-erlay2, changing 10 files (+752 −14)
  1. naumenkogs commented at 7:29 am on October 8, 2022: member

    See #28765

    First, this PR enables keeping track of per-peer reconciliation sets, containing those transactions which we intend to exchange efficiently. The remaining transactions are announced via flooding, as usual.

    Second, this PR enables periodically initiating a reconciliation round via a new p2p message.

    Erlay Project Tracking: #28646

  2. naumenkogs marked this as a draft on Oct 8, 2022
  3. naumenkogs commented at 7:33 am on October 8, 2022: member

    Marking this draft until

    1. ~~Previous PR is merged~~ Follow-up is merged
    2. There is one in-code TODO I have to resolve (it is minor, but it has to be improved)
    3. I add unit and functional tests for these features.

    Another task is to sync the parent PR with this version. I have a local branch that compiles, but the tests of full Erlay there need a little care. If you want to see how this PR works in the broader context, the parent PR should be good enough to get the idea. Otherwise, I intend to update it soon.

  4. DrahtBot added the label P2P on Oct 8, 2022
  5. DrahtBot commented at 4:15 pm on October 8, 2022: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Code Coverage

    For detailed information about the code coverage, see the test coverage report.

    Reviews

    See the guideline for information on the review process. A summary of reviews will appear here.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #28463 (p2p: Increase inbound capacity for block-relay only connections by mzumsande)
    • #21515 (Erlay: bandwidth-efficient transaction relay protocol by naumenkogs)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  6. DrahtBot added the label Needs rebase on Oct 13, 2022
  7. in src/net_processing.cpp:5756 in 5070c97459 outdated
    5741+                                    // 1. Flood parent+child.
    5742+                                    // 2. Reconcile parent+child.
    5743+                                    // 3. Flood parent, reconcile child.
    5744+                                    // We choose (2) because it has the easiest implementation.
    5745+                                    // The latency impact is not that bad:
    5746+                                    // 1. If the parent is in the reocnciliation set, the two
    


    ariard commented at 6:17 pm on October 18, 2022:
    s/reocnciliation set/reconciliation set/g
  8. in src/net_processing.cpp:5793 in 5070c97459 outdated
    5778+                            // We don't care about a laggy peer (1) because we probably can't help them even if we flood transactions.
    5779+                            // However, exploiting (2) should not prevent us from relaying certain transactions.
    5780+                            //
    5781+                            // Transactions which don't make it to the set due to the limit are announced via fan-out.
    5782+                            const size_t recon_set_size = m_txreconciliation->GetPeerSetSize(pto->GetId());
    5783+                            if (!flood_target && txs_to_reconcile.size() + recon_set_size < MAX_PEER_TX_ANNOUNCEMENTS) {
    


    ariard commented at 6:41 pm on October 18, 2022:
    Could be <=, as MAX_PEER_TX_ANNOUNCEMENTS should be the maximum number of elements/transactions we announce to our peers. Note that while only counts strictly below the limit are considered at L1409 in net_processing.cpp, this is different for the processing of NOTFOUND (L4793, still in net_processing.cpp).

    naumenkogs commented at 9:26 am on November 4, 2022:
    Applied <=, but I don’t get the second sentence :(

    ariard commented at 7:56 pm on February 22, 2023:
    Well we might have some small discrepancy in old code, doesn’t seem to matter here.
  9. in src/node/txreconciliation.h:127 in 5070c97459 outdated
    120+    bool ShouldFloodTo(uint256 wtxid, NodeId peer_id) const;
    121+
    122+    /**
    123+     * Check whether a particular transaction is being currently reconciled with a given peer.
    124+     */
    125+    bool CurrentlyReconcilingTx(NodeId peer_id, const uint256 wtxid) const;
    


    ariard commented at 6:52 pm on October 18, 2022:
    Could be called IsAlreadyInPeerSet or something more descriptive, conveying that the transaction is already in the peer's local set but not announced yet.
  10. in src/net_processing.cpp:5860 in 5070c97459 outdated
    5834 
    5835+        //
    5836+        // Message: reconciliation request
    5837+        //
    5838+        {
    5839+            if (!m_chainman.ActiveChainstate().IsInitialBlockDownload()) {
    


    ariard commented at 7:07 pm on October 18, 2022:
    Note that the current inv-based transaction announcement (L5812) isn't gated on IBD being over, though I agree it makes sense to save bandwidth, as the node is unlikely to be able to validate most of the transactions it receives due to the inaccuracy of its local UTXO set.
  11. in src/node/txreconciliation.h:93 in 5070c97459 outdated
    86+     * the peer just announced the transaction to us.
    87+     */
    88+    void TryRemovingFromReconSet(NodeId peer_id, const uint256 wtxid_to_remove);
    89+
    90+    /**
    91+     * Step 2. If a it's time to request a reconciliation from the peer, this function will return
    


    ariard commented at 7:10 pm on October 18, 2022:
    s/if a it’s time/if it’s time/g
  12. in src/net_processing.cpp:5853 in 5070c97459 outdated
    5836+        // Message: reconciliation request
    5837+        //
    5838+        {
    5839+            if (!m_chainman.ActiveChainstate().IsInitialBlockDownload()) {
    5840+                if (m_txreconciliation && m_txreconciliation->IsPeerRegistered(pto->GetId())) {
    5841+                    auto reconciliation_request_data = m_txreconciliation->MaybeRequestReconciliation(pto->GetId());
    


    ariard commented at 7:18 pm on October 18, 2022:
    reconciliation_request_data could be called reconciliation_request_parameters to make clear that we're sending reconciliation config info and not the set itself.
  13. in src/net_processing.cpp:164 in 5070c97459 outdated
    162  *  Limits the impact of low-fee transaction floods. */
    163 static constexpr unsigned int INVENTORY_BROADCAST_PER_SECOND = 7;
    164 /** Maximum number of inventory items to send per transmission. */
    165-static constexpr unsigned int INVENTORY_BROADCAST_MAX = INVENTORY_BROADCAST_PER_SECOND * count_seconds(INBOUND_INVENTORY_BROADCAST_INTERVAL);
    166+static constexpr unsigned int INVENTORY_BROADCAST_MAX = INVENTORY_BROADCAST_PER_SECOND *
    167+    count_seconds(std::max(INBOUND_INVENTORY_BROADCAST_INTERVAL, INBOUND_INVENTORY_BROADCAST_INTERVAL_RECON));
    


    ariard commented at 7:29 pm on October 18, 2022:
    Not sure what you aimed to achieve here with the std::max between INBOUND_INVENTORY_BROADCAST_INTERVAL and INBOUND_INVENTORY_BROADCAST_INTERVAL_RECON, as the former is always greater than the latter. Maybe this could rather be a static_assert to enforce the ordering at compile time in case of future changes.
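
    A minimal sketch of the static_assert alternative (illustrative only; the constants are the ones from the quoted diff):

        static_assert(INBOUND_INVENTORY_BROADCAST_INTERVAL >= INBOUND_INVENTORY_BROADCAST_INTERVAL_RECON,
                      "the flooding interval is expected to be the larger of the two");
        static constexpr unsigned int INVENTORY_BROADCAST_MAX =
            INVENTORY_BROADCAST_PER_SECOND * count_seconds(INBOUND_INVENTORY_BROADCAST_INTERVAL);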
  14. in src/net_processing.cpp:5611 in 5070c97459 outdated
    5595                     fSendTrickle = true;
    5596                     if (pto->IsInboundConn()) {
    5597-                        tx_relay->m_next_inv_send_time = NextInvToInbounds(current_time, INBOUND_INVENTORY_BROADCAST_INTERVAL);
    5598+                        if (supports_recon) {
    5599+                            // Use shorter intervals for reconciliation peers because we use
    5600+                            // low-fanout, and 1) we need to make faster; 2) we won't get much
    


    ariard commented at 7:31 pm on October 18, 2022:
    Could you expand the comment here on why "we need to make faster", or point towards another code comment or BIP section where it's explained.
  15. in src/node/txreconciliation.cpp:29 in 5070c97459 outdated
    20+
    21+/** Announce transactions via full wtxid to a limited number of inbound and outbound peers. */
    22+constexpr double INBOUND_FANOUT_DESTINATIONS_FRACTION = 0.1;
    23+constexpr double OUTBOUND_FANOUT_DESTINATIONS_FRACTION = 0.1;
    24+/** Coefficient used to estimate reconciliation set differences. */
    25+constexpr double RECON_Q = 0.25;
    


    ariard commented at 7:36 pm on October 18, 2022:
    Could add a comment if the rationale behind the selection of those values is specified in the BIP, the paper, or other research.
  16. in src/node/txreconciliation.cpp:44 in 5070c97459 outdated
    31+/**
    32+ * Interval between initiating reconciliations with peers.
    33+ * This value allows to reconcile ~(7 tx/s * 8s) transactions during normal operation.
    34+ * More frequent reconciliations would cause significant constant bandwidth overhead
    35+ * due to reconciliation metadata (sketch sizes etc.), which would nullify the efficiency.
    36+ * Less frequent reconciliations would introduce high transaction relay latency.
    


    ariard commented at 7:44 pm on October 18, 2022:
    Curious, why do we care about low transaction relay latency? I would say lower latency a) makes it harder for a deanonymization attacker to observe the original transaction broadcast, b) disincentivizes transaction issuers from front-running the standard tx-relay rules to place their transactions first in mempools in case of congestion, and c) improves UX for instant/0conf flows. Do we have more properties?

    naumenkogs commented at 10:26 am on November 4, 2022:
    I thought it’s kinda obvious for 0conf experience, yes. Not even accepting payment, but at least seeing it in the network… Not sure what to include in the codebase.

    ariard commented at 7:58 pm on February 22, 2023:
    I think this is covered accurately by the current comment warning that less frequent reconciliations would introduce high transaction relay latency. It's a trade-off between bandwidth and 0conf UX to be aware of, I would say.
  17. in src/node/txreconciliation.cpp:145 in 5070c97459 outdated
    131+     */
    132+    std::chrono::microseconds m_next_recon_request GUARDED_BY(m_txreconciliation_mutex){0};
    133+    void UpdateNextReconRequest(std::chrono::microseconds now) EXCLUSIVE_LOCKS_REQUIRED(m_txreconciliation_mutex)
    134+    {
    135+        // We have one timer for the entire queue. This is safe because we initiate reconciliations
    136+        // with outbound connections, which are unlikely to game this timer in a serious way.
    


    ariard commented at 8:00 pm on October 18, 2022:
    “which are unlikely to game this timer in a serious way”, though is there a timeout in the rest of the patchset to evict lazy/buggy outbound peers which would stall the reconciliation, leaving us stale on Phase::INIT_REQUESTED, and which would prevent MaybeRequestReconciliation() from moving forward due to the check at L258?

    naumenkogs commented at 10:29 am on November 4, 2022:
    I don't think Phase::INIT_REQUESTED is somehow related to being stuck… if one peer stays in that state, it doesn't prevent us from requesting recon from other peers. It will only delay things by m_next_recon_request once?

    ariard commented at 7:59 pm on February 22, 2023:
    Gotcha, just means the malicious outbound peer will never move forward its own reconciliation.
  18. ariard commented at 8:01 pm on October 18, 2022: contributor
    Just a few comments from a first pass over the post-#23443 commits.
  19. in src/net_processing.cpp:5839 in 5070c97459 outdated
    5821@@ -5611,11 +5822,31 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
    5822                             tx_relay->m_tx_inventory_known_filter.insert(txid);
    5823                         }
    5824                     }
    5825+
    5826+                    // Populating local reconciliation set.
    5827+                    if (txs_to_reconcile.size() != 0) {
    


    aureleoules commented at 8:36 am on October 26, 2022:

    1d9b1f7d4bbf5ca5a8f43180b994b7b2fc03447f

    0                    if (!txs_to_reconcile.empty()) {
    
  20. in src/node/txreconciliation.cpp:47 in 5070c97459 outdated
    38+constexpr std::chrono::microseconds RECON_REQUEST_INTERVAL{8s};
    39+
    40+/**
    41+ * Represents phase of the current reconciliation round with a peer.
    42+ */
    43+enum Phase {
    


    aureleoules commented at 8:36 am on October 26, 2022:

    5070c97459282346cdcff7af08914c702462fe0d

    0enum class Phase {
    
  21. in src/node/txreconciliation.cpp:144 in 5070c97459 outdated
    134+    {
    135+        // We have one timer for the entire queue. This is safe because we initiate reconciliations
    136+        // with outbound connections, which are unlikely to game this timer in a serious way.
    137+        size_t we_initiate_to_count = std::count_if(m_states.begin(), m_states.end(),
    138+            [](std::pair<NodeId, std::variant<uint64_t, TxReconciliationState>> indexed_state) {
    139+                auto* cur_state = std::get_if<TxReconciliationState>(&indexed_state.second);
    


    aureleoules commented at 8:38 am on October 26, 2022:

    7cffb4cc0211b0494a7563d9a80a67633fa61255

    0                const auto* cur_state = std::get_if<TxReconciliationState>(&indexed_state.second);
    
  22. in src/node/txreconciliation.cpp:234 in 5070c97459 outdated
    244+        AssertLockNotHeld(m_txreconciliation_mutex);
    245+        if (!IsPeerRegistered(peer_id)) return std::nullopt;
    246+        LOCK(m_txreconciliation_mutex);
    247+        auto& recon_state = std::get<TxReconciliationState>(m_states.find(peer_id)->second);
    248+
    249+        if (m_queue.size() > 0) {
    


    aureleoules commented at 8:38 am on October 26, 2022:
    0        if (!m_queue.empty()) {
    
  23. in src/node/txreconciliation.cpp:305 in 5070c97459 outdated
    300+        auto recon_state = m_states.find(peer_id);
    301+        return (recon_state != m_states.end() &&
    302+                std::holds_alternative<TxReconciliationState>(recon_state->second));
    303+    }
    304+
    305+    bool ShouldFloodTo(uint256 wtxid, NodeId peer_id) const EXCLUSIVE_LOCKS_REQUIRED(!m_txreconciliation_mutex)
    


    aureleoules commented at 8:39 am on October 26, 2022:

    4cb08ebcd6190535bc2300e35557dd8b730b8951

    0    bool ShouldFloodTo(const uint256& wtxid, NodeId peer_id) const EXCLUSIVE_LOCKS_REQUIRED(!m_txreconciliation_mutex)
    
  24. in src/node/txreconciliation.cpp:305 in 5070c97459 outdated
    315+
    316+        const bool we_initiate = recon_state.m_we_initiate;
    317+        // Find all peers of the similar reconciliation direction.
    318+        std::for_each(m_states.begin(), m_states.end(),
    319+            [&eligible_peers, we_initiate](std::pair<NodeId, std::variant<uint64_t, TxReconciliationState>> indexed_state) {
    320+                const auto cur_state = std::get<TxReconciliationState>(indexed_state.second);
    


    aureleoules commented at 8:40 am on October 26, 2022:

    4cb08ebcd6190535bc2300e35557dd8b730b8951

    0                const auto& cur_state = std::get<TxReconciliationState>(indexed_state.second);
    
  25. in src/node/txreconciliation.cpp:304 in 5070c97459 outdated
    314+        std::vector<NodeId> eligible_peers;
    315+
    316+        const bool we_initiate = recon_state.m_we_initiate;
    317+        // Find all peers of the similar reconciliation direction.
    318+        std::for_each(m_states.begin(), m_states.end(),
    319+            [&eligible_peers, we_initiate](std::pair<NodeId, std::variant<uint64_t, TxReconciliationState>> indexed_state) {
    


    aureleoules commented at 8:41 am on October 26, 2022:

    4cb08ebcd6190535bc2300e35557dd8b730b8951

    0            [&eligible_peers, we_initiate](auto indexed_state) {
    
  26. in src/node/txreconciliation.cpp:353 in 5070c97459 outdated
    335+
    336+        const size_t peer_index = it - eligible_peers.begin();
    337+        return txidHasher(wtxid) % flood_index_modulo == peer_index % flood_index_modulo;
    338+    }
    339+
    340+    bool CurrentlyReconcilingTx(NodeId peer_id, const uint256 wtxid) const EXCLUSIVE_LOCKS_REQUIRED(!m_txreconciliation_mutex)
    


    aureleoules commented at 8:41 am on October 26, 2022:

    1d9b1f7d4bbf5ca5a8f43180b994b7b2fc03447f: there are other instances where wtxid is not passed by ref

    0    bool CurrentlyReconcilingTx(NodeId peer_id, const uint256& wtxid) const EXCLUSIVE_LOCKS_REQUIRED(!m_txreconciliation_mutex)
    
  27. in src/node/txreconciliation.cpp:126 in c60235bacc outdated
    121+        const bool we_initiate = !is_peer_inbound && is_peer_recon_responder;
    122+
    123+        // If we ever announce support for both requesting and responding, this will need
    124+        // tie-breaking. For now, this is mutually exclusive because both are based on the
    125+        // inbound flag.
    126+        assert(!(they_initiate && we_initiate));
    


    aureleoules commented at 9:02 am on October 26, 2022:

    c60235bacceecbe7f83bb70f761a0adb3e5bf65b nit but I think it’s easier to read

    0        assert(!they_initiate || !we_initiate);
    

    naumenkogs commented at 12:46 pm on November 4, 2022:
    Not sure it’s any better
  28. in src/node/txreconciliation.cpp:129 in c60235bacc outdated
    124+        // tie-breaking. For now, this is mutually exclusive because both are based on the
    125+        // inbound flag.
    126+        assert(!(they_initiate && we_initiate));
    127+
    128+        // The peer set both flags to false, we treat it as a protocol violation.
    129+        if (!(they_initiate || we_initiate)) return PROTOCOL_VIOLATION;
    


    aureleoules commented at 9:03 am on October 26, 2022:

    c60235bacceecbe7f83bb70f761a0adb3e5bf65b nit but I think it’s easier to read

    0        if (!they_initiate && !we_initiate) return PROTOCOL_VIOLATION;
    
  29. aureleoules commented at 9:24 am on October 26, 2022: contributor
    First code-review pass, left some minor comments.
  30. naumenkogs force-pushed on Nov 4, 2022
  31. naumenkogs force-pushed on Nov 4, 2022
  32. naumenkogs force-pushed on Nov 4, 2022
  33. DrahtBot removed the label Needs rebase on Nov 4, 2022
  34. DrahtBot added the label Needs rebase on Nov 4, 2022
  35. naumenkogs force-pushed on Nov 8, 2022
  36. DrahtBot removed the label Needs rebase on Nov 8, 2022
  37. naumenkogs force-pushed on Nov 10, 2022
  38. naumenkogs force-pushed on Nov 10, 2022
  39. naumenkogs force-pushed on Nov 10, 2022
  40. in src/node/txreconciliation.cpp:347 in 913c2cbcea outdated
    353+        // 3. Otherwise, return false.
    354+        const size_t starting_point = txidHasher(wtxid) % eligible_peers.size();
    355+        if (starting_point <= peer_position && peer_position < starting_point + round_down_flood_targets) {
    356+            return true;
    357+        } else if (peer_position == starting_point + round_down_flood_targets) {
    358+            return rand() < (flood_targets - round_down_flood_targets) * RAND_MAX;
    


    aureleoules commented at 12:48 pm on November 11, 2022:
    Using PRNGs is usually discouraged; is this safe here?

    naumenkogs commented at 1:08 pm on November 11, 2022:
    I think so: since txidHasher is salted, the peer would have no control over it, and thus can't exploit this rand(). But I'll keep this comment open in case someone has an objection.

    sipa commented at 3:50 pm on November 30, 2022:
    You should really use our own PRNG (FastRandomContext), not the standard library built-in one which has no guarantees about its quality or performance. If you really can’t use FastRandomContext for some reason, use C++ RNG functions, not C ones (rand() is not thread-safe, in addition to possibly being poor quality).
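
    For illustration, a FastRandomContext-based version of the fractional flood decision could look roughly like this (a sketch only; where the random context lives, e.g. a member vs. a local, is left open):

        // Flood to one extra peer with probability equal to the fractional part of flood_targets.
        FastRandomContext rng;
        constexpr uint64_t PRECISION{1 << 20};
        const double fraction{flood_targets - round_down_flood_targets};
        return rng.randrange(PRECISION) < fraction * PRECISION;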
  41. DrahtBot added the label Needs rebase on Nov 30, 2022
  42. naumenkogs force-pushed on Nov 30, 2022
  43. naumenkogs marked this as ready for review on Nov 30, 2022
  44. DrahtBot removed the label Needs rebase on Nov 30, 2022
  45. in src/node/txreconciliation.cpp:202 in 624e098805 outdated
    131@@ -124,6 +132,35 @@ class TxReconciliationTracker::Impl
    132         return ReconciliationRegisterResult::SUCCESS;
    133     }
    134 
    135+    void AddToSet(NodeId peer_id, const std::vector<uint256>& txs_to_reconcile) EXCLUSIVE_LOCKS_REQUIRED(!m_txreconciliation_mutex)
    136+    {
    137+        AssertLockNotHeld(m_txreconciliation_mutex);
    138+        Assume(txs_to_reconcile.size() > 0);
    139+        assert(IsPeerRegistered(peer_id));
    


    sipa commented at 3:45 pm on November 30, 2022:

    This will lock m_txreconciliation_mutex inside IsPeerRegistered(), check the state, and unlock again. On the next line we will then lock it again, and get a reference to the actual state. I don’t think that’s what we want:

    • It doesn’t guarantee that the state didn’t change in between the unlocking in IsPeerRegistered and getting the state two lines below, so the assertion doesn’t actually prevent race conditions.
    • It’s inefficient to lock and unlock twice.

    You should check the TxReconciliationState within the same critical section as using it (perhaps by making IsPeerRegistered require m_txreconciliation_mutex being held?).
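
    For illustration, sipa's parenthetical suggestion could look roughly like this (a sketch; IsPeerRegisteredInternal is a hypothetical variant of IsPeerRegistered annotated to require the mutex):

        AssertLockNotHeld(m_txreconciliation_mutex);
        LOCK(m_txreconciliation_mutex);
        // Check and use the reconciliation state within the same critical section.
        if (!IsPeerRegisteredInternal(peer_id)) return; // hypothetical helper, EXCLUSIVE_LOCKS_REQUIRED(m_txreconciliation_mutex)
        auto& recon_state = std::get<TxReconciliationState>(m_states.find(peer_id)->second);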

  46. in src/node/txreconciliation.cpp:220 in 624e098805 outdated
    152+    }
    153+
    154+    void TryRemovingFromSet(NodeId peer_id, const uint256& wtxid_to_remove) EXCLUSIVE_LOCKS_REQUIRED(!m_txreconciliation_mutex)
    155+    {
    156+        AssertLockNotHeld(m_txreconciliation_mutex);
    157+        assert(IsPeerRegistered(peer_id));
    


    sipa commented at 3:46 pm on November 30, 2022:
    Same thing here: check the state within the same critical section as using it.
  47. in src/node/txreconciliation.cpp:208 in 624e098805 outdated
    140+        LOCK(m_txreconciliation_mutex);
    141+        auto& recon_state = std::get<TxReconciliationState>(m_states.find(peer_id)->second);
    142+
    143+        size_t added = 0;
    144+        for (auto& wtxid: txs_to_reconcile) {
    145+            if (recon_state.m_local_set.insert(wtxid).second) {
    


    sipa commented at 3:47 pm on November 30, 2022:
    Shorter: added += recon_state.m_local_set.insert(wtxid).second;.
  48. in src/node/txreconciliation.cpp:207 in 624e098805 outdated
    139+        assert(IsPeerRegistered(peer_id));
    140+        LOCK(m_txreconciliation_mutex);
    141+        auto& recon_state = std::get<TxReconciliationState>(m_states.find(peer_id)->second);
    142+
    143+        size_t added = 0;
    144+        for (auto& wtxid: txs_to_reconcile) {
    


    sipa commented at 3:48 pm on November 30, 2022:
    Style nit (feel free to ignore, it’s not in the style guide, but I don’t think we use anything like this anywhere): space before :.
  49. in src/node/txreconciliation.cpp:295 in 51f1c19076 outdated
    202+    bool ShouldFloodTo(NodeId peer_id, const uint256& wtxid) const EXCLUSIVE_LOCKS_REQUIRED(!m_txreconciliation_mutex)
    203+    {
    204+        AssertLockNotHeld(m_txreconciliation_mutex);
    205+        if (!IsPeerRegistered(peer_id)) return true;
    206+        LOCK(m_txreconciliation_mutex);
    207+        const auto recon_state = std::get<TxReconciliationState>(m_states.find(peer_id)->second);
    


    sipa commented at 3:51 pm on November 30, 2022:
    This creates a copy of the TxReconciliationState. Sure you don’t want to get a reference here?
  50. in src/node/txreconciliation.cpp:114 in 51f1c19076 outdated
    85@@ -72,6 +86,12 @@ class TxReconciliationTracker::Impl
    86 private:
    87     mutable Mutex m_txreconciliation_mutex;
    88 
    89+    /**
    90+     * We need a ReconciliationTracker-wide randomness to decide to which peers we should flood a
    91+     * given transaction based on a (w)txid.
    92+     */
    93+    const SaltedTxidHasher txidHasher;
    


    sipa commented at 3:52 pm on November 30, 2022:
    Style: m_txid_hasher.
  51. in src/node/txreconciliation.cpp:324 in 51f1c19076 outdated
    230+        } else {
    231+            flood_targets = eligible_peers.size() * INBOUND_FANOUT_DESTINATIONS_FRACTION;
    232+            if (flood_targets == 0) return false;
    233+        }
    234+
    235+        const size_t round_down_flood_targets = floor(flood_targets);
    


    sipa commented at 3:54 pm on November 30, 2022:
    No need for floor() here. Casting a floating-point type to integer type will automatically round down.
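
    In other words (sketch):

        // The conversion truncates toward zero, which equals floor() for the non-negative flood_targets.
        const auto round_down_flood_targets = static_cast<size_t>(flood_targets);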
  52. sipa commented at 3:56 pm on November 30, 2022: member
    Some comments on the first commits.
  53. naumenkogs force-pushed on Dec 1, 2022
  54. naumenkogs force-pushed on Dec 1, 2022
  55. in src/protocol.h:268 in 54777ce347 outdated
    263@@ -264,6 +264,11 @@ extern const char* WTXIDRELAY;
    264  * txreconciliation, as described by BIP 330.
    265  */
    266 extern const char* SENDTXRCNCL;
    267+/**
    268+ * Contains a 4-byte local reconciliation set size and 4-byte q-coefficient.
    


    ariard commented at 4:49 pm on December 2, 2022:
    The comment could describe what the message actually contains and which protocol phase it triggers (as the other p2p message comments do). From the BIP: “The reqrecon message initiates a reconciliation round”.
  56. in src/node/txreconciliation.cpp:297 in 54777ce347 outdated
    312+        AssertLockNotHeld(m_txreconciliation_mutex);
    313+        LOCK(m_txreconciliation_mutex);
    314+        if (!IsPeerRegistered(peer_id)) return true;
    315+        const auto& recon_state = std::get<TxReconciliationState>(m_states.find(peer_id)->second);
    316+
    317+        // We assume that reconciliation is always initiated from inbound to outbound to avoid
    


    ariard commented at 6:17 pm on December 2, 2022:
    If you can, use the same wording as the BIP here (“The initiator of the P2P connection assumes the role of reconciliation initiator”); otherwise it's obscure how initiator/responder maps onto the connection direction.

    naumenkogs commented at 7:55 am on December 5, 2022:
    This comment was outdated; I think it's not needed here at all… deleting for now.
  57. in src/node/txreconciliation.cpp:93 in 54777ce347 outdated
    89@@ -53,6 +90,17 @@ class TxReconciliationState
    90      */
    91     uint64_t m_k0, m_k1;
    92 
    93+    /**
    94+     * Store all wtxids which we would announce to the peer (policy checks passed, etc.)
    


    ariard commented at 6:33 pm on December 2, 2022:

    Have you considered enforcing policy checks like feefilter when sending REQTXRCNCL instead of when filling the reconciliation set at INV sending time?

    Due to RECON_REQUEST_INTERVAL, I think some transactions could become stale, wasting bandwidth in case of fast-paced mempool congestion changes (not excluding the case where a few transaction issuers on the network provoke spikes by themselves). Though if correct, this is more likely a marginal performance improvement better left for a follow-up.


    naumenkogs commented at 8:00 am on December 5, 2022:
    This would complicate the code a lot, I think… policy-check stuff would have to be moved out of net_processing.cpp. Yeah, I see your concern (although I tested Erlay on mainnet and it was ok); let's see if other reviewers think it's needed now.

    glozow commented at 2:58 pm on December 7, 2022:
    To clarify, we’re considering the scenario in which our peer’s fee filter changes between adding tx to txs_to_reconcile and sending the inv after reconciliation? Given that RECON_REQUEST_INTERVAL is 8sec and AVG_FEEFILTER_BROADCAST_INTERVAL is 10min, I don’t think this is too much of a concern (at least for this PR)?
  58. in src/node/txreconciliation.cpp:213 in 54777ce347 outdated
    216+        size_t added = 0;
    217+        for (auto& wtxid: txs_to_reconcile) {
    218+            added += recon_state.m_local_set.insert(wtxid).second;
    219+        }
    220+
    221+        LogPrint(BCLog::NET, "Added %i new transactions to the reconciliation set for peer=%d. " /* Continued */
    


    ariard commented at 6:38 pm on December 2, 2022:
    Can update this log statement and a few others to BCLog::TXRECONCILIATION.
  59. in src/node/txreconciliation.cpp:224 in 99acf14cf4 outdated
    154+        AssertLockNotHeld(m_txreconciliation_mutex);
    155+        LOCK(m_txreconciliation_mutex);
    156+        assert(IsPeerRegistered(peer_id));
    157+        auto& recon_state = std::get<TxReconciliationState>(m_states.find(peer_id)->second);
    158+
    159+        recon_state.m_local_set.erase(wtxid_to_remove);
    


    mzumsande commented at 7:34 pm on December 2, 2022:
    If adding is logged, maybe also log removal similarly?

    naumenkogs commented at 8:09 am on December 5, 2022:
    The removal is currently per-tx (not batched), so it would make the logs really annoying… Which raises another question: should I refactor it to make it batched where possible (on hearing an INV batch from the given peer)?
  60. in src/node/txreconciliation.cpp:321 in 89beb9043e outdated
    231+        double flood_targets;
    232+        if (we_initiate) {
    233+            flood_targets = OUTBOUND_FANOUT_DESTINATIONS;
    234+        } else {
    235+            flood_targets = eligible_peers.size() * INBOUND_FANOUT_DESTINATIONS_FRACTION;
    236+            if (flood_targets == 0) return false;
    


    mzumsande commented at 9:42 pm on December 2, 2022:
    Using == to compare floating-point numbers seems not ideal due to precision issues, but it seems like we can never hit the return here anyway since we assert that eligible_peers.size() > 0 above, so maybe the entire line could be deleted or be an assert?
  61. in src/node/txreconciliation.cpp:331 in 89beb9043e outdated
    241+        const auto it = std::find(eligible_peers.begin(), eligible_peers.end(), peer_id);
    242+        Assume(it != eligible_peers.end());
    243+        const size_t peer_position = it - eligible_peers.begin();
    244+        // The requirements to this algorithm is the following:
    245+        // 1. Every transaction should be assigned to *some* peer, at least assuming a static list
    246+        // of peers. For this function that means no randomness.
    


    mzumsande commented at 10:05 pm on December 2, 2022:
    since you later use insecure_rand.randrange, “no randomness” seems a bit misleading.
  62. in src/node/txreconciliation.cpp:333 in 89beb9043e outdated
    243+        const size_t peer_position = it - eligible_peers.begin();
    244+        // The requirements to this algorithm is the following:
    245+        // 1. Every transaction should be assigned to *some* peer, at least assuming a static list
    246+        // of peers. For this function that means no randomness.
    247+        // 2. The choice doesn't leak the internal order of peers (m_states) to the external
    248+        // observer. This is achieved by hashing the txid.
    


    mzumsande commented at 10:09 pm on December 2, 2022:
    nit: wtxid
  63. in src/node/txreconciliation.cpp:329 in 89beb9043e outdated
    239+        const size_t round_down_flood_targets = flood_targets;
    240+
    241+        const auto it = std::find(eligible_peers.begin(), eligible_peers.end(), peer_id);
    242+        Assume(it != eligible_peers.end());
    243+        const size_t peer_position = it - eligible_peers.begin();
    244+        // The requirements to this algorithm is the following:
    


    mzumsande commented at 10:29 pm on December 2, 2022:
    I don’t think I understand this algorithm yet. Say we have 20 inbound peers (0…19), as in the example. What if the starting point is 19 because of the hash of the wtxid? Then we’d only flood the tx to peer 19, but not to peer 0 (because the algorithm doesn’t wrap around) - so we’d flood it to 1 target out of 20, not meeting the goal of flooding to 2.4 targets even ignoring the fraction - is that intended?

    naumenkogs commented at 8:47 am on December 5, 2022:
    You are right, it is broken. At least it was documented well enough you found the issue :)
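
    One way to make the window wrap around, purely as a sketch of the idea (uniform_draw stands in for a uniform value in [0, 1); this is not necessarily the fix that was applied):

        const size_t n = eligible_peers.size();
        // Distance from the starting point, wrapping around the end of the peer list.
        const size_t distance = (peer_position + n - starting_point) % n;
        if (distance < round_down_flood_targets) return true;
        // Cover the fractional part of flood_targets probabilistically.
        if (distance == round_down_flood_targets) return uniform_draw < flood_targets - round_down_flood_targets;
        return false;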
  64. mzumsande commented at 11:48 pm on December 2, 2022: contributor
    Some comments on the first 3 commits.
  65. naumenkogs force-pushed on Dec 5, 2022
  66. in src/node/txreconciliation.cpp:220 in e480bed661 outdated
    225+
    226+    void TryRemovingFromSet(NodeId peer_id, const uint256& wtxid_to_remove) EXCLUSIVE_LOCKS_REQUIRED(!m_txreconciliation_mutex)
    227+    {
    228+        AssertLockNotHeld(m_txreconciliation_mutex);
    229+        LOCK(m_txreconciliation_mutex);
    230+        assert(IsPeerRegistered(peer_id));
    


    dergoegge commented at 3:48 pm on December 6, 2022:

    I would suggest not having an assert here. Generally (imo) we should only add asserts when crashing the node is better than continuing. In this case simply adding an if (!IsPeerRegistered()) return; is better than crashing if the user of the API makes a mistake. You would be treating non-existence of the peer the same as the peer having an empty set (i.e. nothing is removed as it didn't exist in the first place).

    Additionally, the way you are using this API right now is not generally safe.

    0if (m_txreconciliation && m_txreconciliation->IsPeerRegistered(pfrom.GetId())) {
    1    m_txreconciliation->TryRemovingFromSet(pfrom.GetId(), wtxid);
    2}
    

    The peer could be de-registered from the tracker in between the IsPeerRegistered and TryRemovingFromSet calls causing the node to crash for no good reason. This is only safe right now because cs_main is held here as well as when a peer is forgotten from the tracker. Since the tracker has its own mutex it should not rely on another mutex such as cs_main.
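
    A sketch of the non-asserting variant (treating an unregistered peer like a peer with an empty set, and doing the registration check inline under the same lock):

        void TryRemovingFromSet(NodeId peer_id, const uint256& wtxid_to_remove) EXCLUSIVE_LOCKS_REQUIRED(!m_txreconciliation_mutex)
        {
            AssertLockNotHeld(m_txreconciliation_mutex);
            LOCK(m_txreconciliation_mutex);
            const auto it = m_states.find(peer_id);
            // Nothing to remove if the peer is not (fully) registered.
            if (it == m_states.end() || !std::holds_alternative<TxReconciliationState>(it->second)) return;
            std::get<TxReconciliationState>(it->second).m_local_set.erase(wtxid_to_remove);
        }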

  67. in src/net_processing.cpp:5724 in e480bed661 outdated
    5722+                        State(pto->GetId())->m_recently_announced_invs.insert(txid);
    5723+                        State(pto->GetId())->m_recently_announced_invs.insert(wtxid);
    5724+
    5725+                        bool adding_to_recon_set = false;
    5726+                        // Check if peer supports reconciliations.
    5727+                        if (supports_recon) {
    


    dergoegge commented at 4:04 pm on December 6, 2022:

    I think it would make sense to move this block to its own function. This entire block for the tx announcement logic in SendMessages is already quite big.

    Maybe this can even move to the tracker?


    glozow commented at 3:39 pm on December 7, 2022:
    Agree, though it wouldn’t make sense to have txreconciliation module depend on mempool. Would probably recommend grabbing the txiter ahead of time and passing in the parent wtxids.

    dergoegge commented at 4:23 pm on December 7, 2022:

    Adding to this:

    Afaict it’s impossible for our fuzzers to exercise this code at the moment. Encapsulating this logic within the TxReconciliationTracker would make that possible.


    naumenkogs commented at 11:47 am on December 19, 2022:
    Could you review 15006207099882aa73c18738e5013bf0c313be93 separately? If that's roughly what you asked for and you are satisfied, I will squash it into the original commit.
  68. in src/node/txreconciliation.cpp:202 in e480bed661 outdated
    208+    void AddToSet(NodeId peer_id, const std::vector<uint256>& txs_to_reconcile) EXCLUSIVE_LOCKS_REQUIRED(!m_txreconciliation_mutex)
    209+    {
    210+        AssertLockNotHeld(m_txreconciliation_mutex);
    211+        Assume(txs_to_reconcile.size() > 0);
    212+        LOCK(m_txreconciliation_mutex);
    213+        assert(IsPeerRegistered(peer_id));
    


    dergoegge commented at 4:09 pm on December 6, 2022:
    Same thing here about the assert.
  69. dergoegge changes_requested
  70. in src/node/txreconciliation.cpp:332 in ffbb2b814e outdated
    238+        const auto it = std::find(eligible_peers.begin(), eligible_peers.end(), peer_id);
    239+        Assume(it != eligible_peers.end());
    240+        const size_t peer_position = it - eligible_peers.begin();
    241+        // The requirements to this algorithm is the following:
    242+        // 1. Every transaction should be assigned to *some* peer.
    243+        // 2. The choice doesn't leak the internal order of peers (m_states) to the external
    


    mzumsande commented at 8:38 pm on December 6, 2022:

    One more conceptual question for the flooding algorithm: Doesn’t this still leak internal order by flooding preferentially to adjacent nodes in eligible_peers if flood_targets > 1?

    Would it be a simpler alternative to have a deterministic randomizer that takes the wtxid, and then pick flood_targets random peers (and pick a last one only with an appropriate probability if flood_targets is not an integer), in the same way that PeerManagerImpl::RelayAddress() picks 2 peers for addr relay? (but without the 24h reset happening there)


    naumenkogs commented at 9:04 am on December 9, 2022:

    Doesn’t this still leak internal order by flooding preferentially to adjacent nodes in eligible_peers if flood_targets > 1?

    Good point. Yes. I should think about the fix.

    Would it be a simpler alternative to have a deterministic randomizer that takes the wtxid, and then pick flood_targets random peers (and pick a last one only with an appropiate probability if flood_targets is not an integer), in the same way that way PeerManagerImpl::RelayAddress() picks 2 peers for addr relay? (but without the 24h reset happening there)

    I’ll try.


    naumenkogs commented at 11:46 am on December 19, 2022:
    Could you review if 02e480d174e4edb1ffedb643fc454ad9b77e9001 is any better? Looking at the following commit may add some useful context 15006207099882aa73c18738e5013bf0c313be93
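
    A sketch of the RelayAddress-style selection discussed above (deterministic per wtxid and independent of the internal order of m_states; the salt members and n_targets are assumptions for illustration, and n_targets <= eligible_peers.size() is assumed):

        // Score each eligible peer with a salted hash of (wtxid, peer_id) and flood to the
        // n_targets highest-scoring peers, similar to how RelayAddress() picks addr destinations.
        const CSipHasher base_hasher{m_flood_salt_k0, m_flood_salt_k1};
        std::vector<std::pair<uint64_t, NodeId>> scored;
        scored.reserve(eligible_peers.size());
        for (const NodeId candidate : eligible_peers) {
            const uint64_t score{CSipHasher(base_hasher).Write(wtxid.begin(), wtxid.size()).Write(candidate).Finalize()};
            scored.emplace_back(score, candidate);
        }
        std::partial_sort(scored.begin(), scored.begin() + n_targets, scored.end(), std::greater<>{});
        return std::any_of(scored.begin(), scored.begin() + n_targets,
                           [peer_id](const auto& s) { return s.second == peer_id; });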
  71. in src/net_processing.cpp:5719 in 869c682423 outdated
    5677@@ -5678,7 +5678,11 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
    5678                         }
    5679                         if (tx_relay->m_bloom_filter && !tx_relay->m_bloom_filter->IsRelevantAndUpdate(*txinfo.tx)) continue;
    5680                         // Send
    5681-                        State(pto->GetId())->m_recently_announced_invs.insert(hash);
    5682+
    5683+                        // Make a transaction requestable by both txid and wtxid, to avoid making
    5684+                        // an assumption that a child arrives after the parent.
    5685+                        State(pto->GetId())->m_recently_announced_invs.insert(txid);
    


    mzumsande commented at 10:16 pm on December 6, 2022:

    Trying to understand commit 869c6824235ff2af09d24b13cfab9a6cabd52b00: “It's possible that a parent will be scheduled to relay same time as child, but a child arrives earlier.” Is this only possible with reconciliation? The previous commit tries to make sure that the child is also scheduled for recon, if the parent was - so can it only happen in the case where the recon set is at max capacity?

    Also, since now twice as many entries will be saved in m_recently_announced_invs, does it need a larger capacity?


    naumenkogs commented at 9:18 am on December 19, 2022:

    Is this only possible with reconciliation?

    Yes I think so

    so can it only happen in the case where the recon set is at max capacity?

    Not only that. The order is lost when set difference is decoded. The decoding party only knows wtxids and will request them in random order, via a regular INV. The ordering could be handled on the responder side, but I’m afraid it would be hard and messy. I can try though.

    The logic in the last commit is not super-reliable either, so I think it’s better to have belt-and-suspenders like this.

    Unless they break something else of course.

    Also, since now twice as many entries will be saved in m_recently_announced_invs, does it need a larger capacity?

    good point.

  72. in src/net_processing.cpp:156 in 9f64cfe63e outdated
    161+ *     2a) announcing both ways simultaneously (inefficiency);
    162+ *     2b) inference based on the announced transactions (privacy leak).
    163+ *     That's why it's ok to make this delay low as well, and lower delay is generally good to
    164+ *     facilitate good transaction relay speed when slow reconciliations prevail. */
    165 static constexpr auto INBOUND_INVENTORY_BROADCAST_INTERVAL{5s};
    166+static constexpr auto INBOUND_INVENTORY_BROADCAST_INTERVAL_RECON{2s};
    


    mzumsande commented at 10:45 pm on December 6, 2022:
    Commit 9f64cfe63e3b833e0f9faf17004cf51c4c925aec: “Note that for privacy reasons the ratio between inbound and outbound delays matter much more than the actual delays. That ratio is preserved here, so it is not a privacy degradation.” The ratio is not exactly preserved: the old one was 5s/2s, now it's 2s/1s.
  73. in src/node/txreconciliation.cpp:239 in ef5a0457ae outdated
    245+        if (!m_queue.empty()) {
    246+            // Request transaction reconciliation periodically to efficiently exchange transactions.
    247+            // To make reconciliation predictable and efficient, we reconcile with peers in order
    248+            // based on the queue, taking a delay between requests.
    249+            auto current_time = GetTime<std::chrono::seconds>();
    250+            if (m_next_recon_request <= current_time && m_queue.front() == peer_id) {
    


    mzumsande commented at 11:40 pm on December 6, 2022:
    I think it might be possible for a peer to abuse this logic and prevent us from reconciling with other peers: if an attacker sends a SENDTXRCNCL but stalls the VERACK, they would be registered and added to m_queue - but SendMessages() will never get to the part where REQTXRCNCL is sent for this peer because we abort early here before handshake completion. Therefore, once that peer is at the front of the queue, it won't get cleared and prevents us from reconciling with other peers while connected (up to DEFAULT_PEER_CONNECT_TIMEOUT = 60 seconds).

    naumenkogs commented at 1:02 pm on December 12, 2022:
    I don't see that yet. No matter what happens to this peer, it will be moved to the end of the queue, and the next peer will be requested in a couple of seconds.

    mzumsande commented at 3:32 pm on December 12, 2022:

    I don’t see that yet. No matter what happens to this peer, it will be moved to the end of the queue, and the next peer will be requested in couple seconds.

    I was thinking of the following:
    1. Peer X gets connected, sends SENDTXRCNCL, and RegisterPeer puts it at the back of the queue.
    2. X doesn't send VERACK.
    3. We do regular reconciliation requests with all the other peers in the queue, so now X is at the front of it.
    4. Since X is not successfully connected, SendMessages aborts early and we don't reach the code where REQTXRCNCL is processed.
    5. X won't be removed from the front of the queue, and because of that we don't request reconciliations from any other peer.


    naumenkogs commented at 3:33 pm on December 12, 2022:
    ahhh good point, should be addressed indeed.

    naumenkogs commented at 10:24 am on December 13, 2022:
    This is a dangerous piece of code… This could also happen if the peer enters IBD again or something.

    mzumsande commented at 3:13 pm on December 13, 2022:
    Yeah - just to bring this alternative up, if we uncoupled peers and had every peer on its separate timer (instead of a queue), would that lead to major efficiency losses?

    naumenkogs commented at 6:04 pm on December 13, 2022:
    I started with having two separate calls — one for queue management, the other for sending REQTXRCNCL. I’ll submit that shortly. Then I’ll think about your idea.

    naumenkogs commented at 9:35 am on October 16, 2023:

    I made this code extra safe:

    • it’s very early in SendMessages;
    • it’s called even if in IBD;

    Also a reminder that this only applies to outbound peers.

    Having separate queues for every peer would nullify all our previous tuning and performance benchmarks (both mine and from others); that's why I'd rather stick to the current approach.


    naumenkogs commented at 10:57 am on October 17, 2023:
    In the end, I decided to introduce independent timers, because my previous idea became too ugly. I think the way I implemented it doesn't nullify the previous benchmarks, and it should be robust to manipulation.
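
    For illustration, an independent per-peer timer could be as simple as the following (a sketch, assuming a m_next_recon_request field is moved into TxReconciliationState; whether the per-peer spacing should be RECON_REQUEST_INTERVAL itself or scaled by the number of outbound reconciliation peers is part of the tuning mentioned above):

        // Each peer keeps its own schedule instead of taking a turn in a shared queue.
        if (now < recon_state.m_next_recon_request) return false;
        recon_state.m_next_recon_request = now + RECON_REQUEST_INTERVAL;
        return true;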
  74. in src/node/txreconciliation.cpp:28 in e480bed661 outdated
    27+constexpr double OUTBOUND_FANOUT_DESTINATIONS = 1;
    28+/**
    29+ * If there's a chance a transaction is not streamlined along the first couple hops, it would take
    30+ *  very long to relay.
    31+ */
    32+static_assert(OUTBOUND_FANOUT_DESTINATIONS >= 1);
    


    glozow commented at 3:54 pm on December 7, 2022:

    in ffbb2b814e82284c7c77c5a5b23058cbc7c0149c:

    Based on this, is “The requirements to this algorithm is the following: 1. Every transaction should be assigned to some peer.” a hard requirement? If so, the “recon parent -> recon child” logic in f0685a1781 (net_processing) seems to break this, since you might have chosen just 1 peer to flood the child to, but then switch to recon because you're reconciling the parent with that peer. It doesn't seem like ShouldFloodTo uses any state about whether we've chosen other peers for flooding, so we might end up not flooding the tx to anybody? Am I missing something / is this okay?


    naumenkogs commented at 9:49 am on December 19, 2022:

    Every transaction should be assigned to some peer." a hard requirement?

    I meant relayed in some way (either flooded or reconciled), so that the transaction doesn't get stuck. That's it. That resolves your question, right? Fixing the code comment for now.

  75. in src/net_processing.cpp:5851 in e480bed661 outdated
    5846 
    5847+        //
    5848+        // Message: reconciliation request
    5849+        //
    5850+        {
    5851+            if (!m_chainman.ActiveChainstate().IsInitialBlockDownload() && m_txreconciliation) {
    


    glozow commented at 4:16 pm on December 7, 2022:
    IIUC with this PR we’ll request reconciliation with each other but since no NetMsgType::REQTXRCNCL handling is added, we ignore the request. That’s fine, but should we also have the logic for falling back to flooding the wtxids in the reconciliation set when it gets full / after a timeout?

    naumenkogs commented at 12:04 pm on December 12, 2022:
    We're talking about someone enabling the -txreconciliation=1 CLI flag… Why do you think this would be useful for such a tester?
  76. in src/node/txreconciliation.h:84 in e480bed661 outdated
    75@@ -76,6 +76,35 @@ class TxReconciliationTracker
    76     ReconciliationRegisterResult RegisterPeer(NodeId peer_id, bool is_peer_inbound,
    77                                               uint32_t peer_recon_version, uint64_t remote_salt);
    78 
    79+    /**
    80+     * Step 1. Add new transactions we want to announce to the peer to the local reconciliation set
    81+     * of the peer, so that those transactions will be reconciled later.
    82+     * The caller *must* check that the peer is registered for reconciliations.
    83+     */
    84+    void AddToSet(NodeId peer_id, const std::vector<uint256>& txs_to_reconcile);
    


    dergoegge commented at 4:17 pm on December 7, 2022:
    It would be useful if this returned the number of added wtxids.
  77. in src/node/txreconciliation.h:91 in e480bed661 outdated
    86+    /**
    87+     * Before Step 2, we might want to remove a wtxid from the reconciliation set, for example if
    88+     * the peer just announced the transaction to us.
    89+     * The caller *must* check that the peer is registered for reconciliations.
    90+     */
    91+    void TryRemovingFromSet(NodeId peer_id, const uint256& wtxid_to_remove);
    


    dergoegge commented at 4:17 pm on December 7, 2022:
    It would be useful if this returned whether or not the wtxid was removed.
  78. in src/node/txreconciliation.h:100 in e480bed661 outdated
     96+     * know what we need:
     97+     * - size of our reconciliation set for the peer
     98+     * - our q-coefficient with the peer, formatted to be transmitted as integer value
     99+     * If the peer was not previously registered for reconciliations, returns nullopt.
    100+     */
    101+    std::optional<std::pair<uint16_t, uint16_t>> MaybeRequestReconciliation(NodeId peer_id);
    


    dergoegge commented at 4:21 pm on December 7, 2022:
    0    std::optional<std::pair<uint16_t, uint16_t>> MaybeRequestReconciliation(NodeId peer_id, std::chrono::microsecond now);
    

    Passing in the time here would be nice for testing.

    • We only have one global mock time currently (globals are always a bit annoying when writing unit/fuzz tests, as you have to reset state after each test)
    • The current global mock time is only accurate in intervals of seconds
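
    A sketch of what a unit test could then look like (illustrative; the registration calls follow the existing txreconciliation_tests pattern, while the time-taking MaybeRequestReconciliation overload is the suggestion above):

        TxReconciliationTracker tracker(TXRECONCILIATION_VERSION);
        const NodeId peer_id{0};
        tracker.PreRegisterPeer(peer_id);
        tracker.RegisterPeer(peer_id, /*is_peer_inbound=*/false, /*peer_recon_version=*/1, /*remote_salt=*/1);

        // The test drives the clock through the parameter instead of SetMockTime().
        const auto t0 = std::chrono::microseconds{0};
        const auto request_at_t0 = tracker.MaybeRequestReconciliation(peer_id, t0);
        const auto request_later = tracker.MaybeRequestReconciliation(peer_id, t0 + std::chrono::seconds{9});
        // Assertions on request_at_t0 / request_later would go here.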
  79. dergoegge commented at 4:28 pm on December 7, 2022: member

    I’ve drafted up a fuzz target for the TxReconciliationTracker here: https://github.com/dergoegge/bitcoin/commits/2022-10-erlay2

    I think having a fuzz target is worthwhile, now that there is quite a bit more going on besides registering/forgetting peers. I am happy to PR that separately after this is merged.

    Some suggestions inline after working on that target:

  80. in src/node/txreconciliation.cpp:320 in e480bed661 outdated
    318+        std::vector<NodeId> eligible_peers;
    319+        const bool we_initiate = recon_state.m_we_initiate;
    320+        // Find all peers of the same reconciliation direction.
    321+        std::for_each(m_states.begin(), m_states.end(),
    322+                      [&eligible_peers, we_initiate](auto indexed_state) {
    323+                          const auto& cur_state = std::get<TxReconciliationState>(indexed_state.second);
    


    dergoegge commented at 5:02 pm on December 7, 2022:

    This throws a std::bad_variant_access for peers that aren’t fully registered yet (found by the fuzzer).

    Maybe create an internal helper method that only iterates through all fully registered peers/states to avoid this entirely?

  81. naumenkogs force-pushed on Dec 19, 2022
  82. in src/net_processing.cpp:5375 in 1500620709 outdated
    5370+    // We must look into the reconciliation queue first. Since the queue applies to all peers,
    5371+    // this peer might block other reconciliation if we don't make this call regularly and
    5372+    // unconditionally.
    5373+    bool reconcile = false;
    5374+    if (m_txreconciliation) {
    5375+        // reconcile = m_txreconciliation->IsPeerNextToReconcileWith(pto->GetId(), current_time);
    


    aureleoules commented at 4:16 pm on December 19, 2022:
    e894824cea7bbb835c762fc57b9523f76c51339f: commented-out code here
  83. in src/net_processing.cpp:5808 in 1500620709 outdated
    5803+        {
    5804+            if (!m_chainman.ActiveChainstate().IsInitialBlockDownload()) {
    5805+                if (reconcile) {
    5806+                    const auto reconciliation_request_params = m_txreconciliation->InitiateReconciliationRequest(pto->GetId());
    5807+                    if (reconciliation_request_params) {
    5808+                        const auto [local_set_size, local_q_formatted] = (*reconciliation_request_params);
    


    aureleoules commented at 4:17 pm on December 19, 2022:
    e894824cea7bbb835c762fc57b9523f76c51339f no need for parentheses
  84. in src/node/txreconciliation.cpp:375 in 1500620709 outdated
    373+        const double inbound_destinations = all_inbound_peers * INBOUND_FANOUT_DESTINATIONS_FRACTION;
    374+        const size_t outbound_destinations = std::min<int>(0, OUTBOUND_FANOUT_DESTINATIONS - already_flooded_to_outbound);
    375+
    376+        std::vector<std::pair<uint64_t, NodeId>> best_inbound_peers(size_t(inbound_destinations) + 1, std::make_pair(0, 0));
    377+        std::vector<std::pair<uint64_t, NodeId>> best_outbound_peers(outbound_destinations, std::make_pair(0, 0));
    378+        ;
    


    aureleoules commented at 4:18 pm on December 19, 2022:

    02e480d174e4edb1ffedb643fc454ad9b77e9001

  85. in src/node/txreconciliation.h:79 in 1500620709 outdated
    75@@ -76,6 +76,44 @@ class TxReconciliationTracker
    76     ReconciliationRegisterResult RegisterPeer(NodeId peer_id, bool is_peer_inbound,
    77                                               uint32_t peer_recon_version, uint64_t remote_salt);
    78 
    79+    /** TODO */
    


    aureleoules commented at 4:21 pm on December 19, 2022:
    e894824cea7bbb835c762fc57b9523f76c51339f Forgotten todo?
  86. in src/node/txreconciliation.cpp:405 in 1500620709 outdated
    403+        return result;
    404+    }
    405+
    406+    // For a given peer, see whether the transaction should be flooded
    407+    bool ShouldFloodTo(const uint256& wtxid, const std::vector<uint256> parents_wtxids,
    408+                       const std::vector<uint256> wtxids_to_reconcile, CSipHasher deterministic_randomizer, NodeId peer_id,
    


    aureleoules commented at 4:21 pm on December 19, 2022:
    15006207099882aa73c18738e5013bf0c313be93 Should maybe pass by const ref for parents_wtxids and wtxids_to_reconcile, same for TxReconciliationTracker::ShouldFloodTo.
  87. aureleoules commented at 6:36 pm on December 19, 2022: contributor
    Left some comments. Is it possible to add some unit tests for the ShouldFloodTo function?
  88. naumenkogs commented at 8:32 am on December 20, 2022: member
    @aureleoules fixed all you pointed out. Note that the last two commits are WIP, as I’m looking for a confirmation that those two are indeed an improvement. If yes, then I will squash them with the earlier commits, add tests, etc. You could state your opinion about them too (each commit separately) :).
  89. naumenkogs force-pushed on Dec 20, 2022
  90. ariard commented at 8:13 pm on February 16, 2023: contributor

    Curious, why do we care about low transaction relay latency? I would say lower latency a) makes it harder for a deanonymization attacker to observe the original transaction broadcast, b) disincentivizes transaction issuers from front-running the standard tx-relay rules to place their transactions first in mempools in case of congestion, and c) improves UX for instant/0conf flows. Do we have more properties?

    Yeah, low transaction relay latency is still something to reconsider in light of all the time-value DoS that you might face as a Lightning node running on top of Bitcoin Core. It has implications for things like #21224.

  91. naumenkogs force-pushed on Feb 20, 2023
  92. naumenkogs force-pushed on Feb 21, 2023
  93. naumenkogs force-pushed on Feb 21, 2023
  94. naumenkogs force-pushed on Feb 21, 2023
  95. naumenkogs force-pushed on Feb 21, 2023
  96. naumenkogs force-pushed on Feb 21, 2023
  97. naumenkogs force-pushed on Feb 21, 2023
  98. naumenkogs force-pushed on Feb 22, 2023
  99. in src/net_processing.cpp:3936 in 37a471c986 outdated
    3719@@ -3699,6 +3720,9 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
    3720                 if (!fAlreadyHave && !m_chainman.ActiveChainstate().IsInitialBlockDownload()) {
    3721                     AddTxAnnouncement(pfrom, gtxid, current_time);
    3722                 }
    3723+                if (m_txreconciliation && m_txreconciliation->IsPeerRegistered(pfrom.GetId()) && gtxid.IsWtxid()) {
    3724+                    m_txreconciliation->TryRemovingFromSet(pfrom.GetId(), gtxid.GetHash());
    


    ariard commented at 6:53 pm on February 22, 2023:
    Don’t know if transaction received through BIP152’s BLOCKTXN should be TryRemovingFromSet().

    naumenkogs commented at 7:04 am on February 23, 2023:
    What you described means that the peer clearly has the transaction, thus it can safely be removed from the corresponding set (“try” means it’s ok if the tx is not there). What’s the problem with that?
  100. in src/net_processing.cpp:148 in 37a471c986 outdated
    152+ *  For reconciliation peers the delay is chosen according the following
    153+ *  considerations:
    154+ *  1. Reconciliation. When the transaction is reconciled, this delay is applied to adding to
    155+ *     reconciliation sets, not actual reconciliation (less frequent). That should happen rather
    156+ *     fast, so that sets are in sync and reconciliation is efficient. At the same time, not too
    157+ *     fast to avoid privacy leaks (e.g., infer connections via set probing).
    


    ariard commented at 6:55 pm on February 22, 2023:
    If the "fast" value (i.e., 2s) has been reasoned out in the BIP or the paper, that section could be referenced here too, I think.
  101. in src/node/txreconciliation.h:99 in 37a471c986 outdated
    92+     *
    93+     * The caller *must* check that the peer is registered for reconciliations.
    94+     */
    95+    bool TryRemovingFromSet(NodeId peer_id, const uint256& wtxid_to_remove);
    96+
    97+    bool IsPeerNextToReconcileWith(NodeId peer_id, std::chrono::microseconds now);
    


    ariard commented at 7:04 pm on February 22, 2023:
    Could add a comment about what TxReconciliationTracker::IsPeerNextToReconcileWith() does, e.g. the assumptions about the peer queue and which phase we're in.
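    A possible shape for such a comment, as an editor's sketch; the wording is guessed from this thread (a rotating queue of peers we initiate with, plus a timer since the last request) and is not the PR's actual documentation:

        #include <chrono>
        #include <cstdint>

        using NodeId = int64_t; // stand-in for the real NodeId type

        /**
         * Whether we should initiate a reconciliation round with this peer now,
         * i.e. the peer is at the front of the queue of peers we reconcile with
         * and enough time has passed since the previous request we sent out.
         *
         * The caller must check that the peer is registered for reconciliations.
         */
        bool IsPeerNextToReconcileWith(NodeId peer_id, std::chrono::microseconds now);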
  102. in src/net_processing.cpp:102 in 37a471c986 outdated
     98@@ -99,7 +99,9 @@ static constexpr int32_t MAX_PEER_TX_REQUEST_IN_FLIGHT = 100;
     99 /** Maximum number of transactions to consider for requesting, per peer. It provides a reasonable DoS limit to
    100  *  per-peer memory usage spent on announcements, while covering peers continuously sending INVs at the maximum
    101  *  rate (by our own policy, see INVENTORY_BROADCAST_PER_SECOND) for several minutes, while not receiving
    102- *  the actual transaction (from any peer) in response to requests for them. */
    103+ *  the actual transaction (from any peer) in response to requests for them.
    104+ *  Also limits a maximum number of elements to store in the reconciliation set.
    


    ariard commented at 7:35 pm on February 22, 2023:
    Not sure this new comment still holds with the introduction of MAX_SET_SIZE=3000.
  103. in src/node/txreconciliation.cpp:211 in 37a471c986 outdated
    206+        if (!IsPeerRegistered(peer_id)) return 0;
    207+        auto& recon_state = std::get<TxReconciliationState>(m_states.find(peer_id)->second);
    208+
    209+        size_t added = 0;
    210+        for (auto& wtxid : txs_to_reconcile) {
    211+            added += recon_state.m_local_set.insert(wtxid).second;
    


    ariard commented at 7:47 pm on February 22, 2023:
    Correct, the max set size (MAX_SET_SIZE) is enforced by ShouldFanoutTo(), which happens before the call to AddToSet(); I don't know if it would be more conservative to add another check here as well.
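    As a hypothetical standalone sketch of the extra defensive check mentioned here (a later revision quoted further down in this thread does add a similar bound inside AddToSet), with a plain std::set standing in for the real reconciliation state:

        #include <cstddef>
        #include <cstdint>
        #include <iostream>
        #include <set>

        using Wtxid = uint64_t;                   // stand-in for uint256
        constexpr std::size_t MAX_SET_SIZE{3000}; // value used in the PR

        // Refuse to grow the set past the bound; the caller is then expected to
        // announce this transaction via fan-out (INV) instead.
        bool AddToSet(std::set<Wtxid>& local_set, Wtxid wtxid)
        {
            if (local_set.size() >= MAX_SET_SIZE) return false;
            return local_set.insert(wtxid).second;
        }

        int main()
        {
            std::set<Wtxid> local_set;
            for (Wtxid w = 0; w < MAX_SET_SIZE + 10; ++w) {
                if (!AddToSet(local_set, w)) {
                    std::cout << "set full, falling back to fan-out at wtxid " << w << '\n';
                    break;
                }
            }
        }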
  104. naumenkogs force-pushed on Feb 23, 2023
  105. in src/node/txreconciliation.cpp:342 in 00df0aefff outdated
    340+                }
    341+            }
    342+        };
    343+
    344+        for (auto indexed_state : m_states) {
    345+            const auto& cur_state = std::get<TxReconciliationState>(indexed_state.second);
    


    dergoegge commented at 3:20 pm on March 14, 2023:
    Same as #26283 (review)

    dergoegge commented at 3:44 pm on March 14, 2023:
    Could use std::visit.
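    A minimal standalone sketch of the std::visit pattern being suggested, with simplified stand-in state types instead of the tracker's real uint64_t/TxReconciliationState alternatives:

        #include <cstddef>
        #include <cstdint>
        #include <iostream>
        #include <type_traits>
        #include <unordered_map>
        #include <variant>

        using NodeId = int64_t;

        struct PreRegisteredState { uint64_t salt; };          // stand-in for the uint64_t alternative
        struct ReconciliationState { std::size_t set_size; };  // stand-in for TxReconciliationState

        int main()
        {
            std::unordered_map<NodeId, std::variant<PreRegisteredState, ReconciliationState>> states;
            states.emplace(0, PreRegisteredState{123});
            states.emplace(1, ReconciliationState{17});

            for (const auto& [peer_id, state] : states) {
                const NodeId id{peer_id}; // avoid capturing a structured binding directly
                std::visit([id](const auto& s) {
                    using T = std::decay_t<decltype(s)>;
                    if constexpr (std::is_same_v<T, ReconciliationState>) {
                        std::cout << "peer " << id << " is registered, set size " << s.set_size << '\n';
                    } else {
                        std::cout << "peer " << id << " has only announced support (salt " << s.salt << ")\n";
                    }
                }, state);
            }
        }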
  106. in src/net_processing.cpp:5731 in dcde537b5b outdated
    5683+
    5684+                        bool fanout = true;
    5685+                        if (reconciles_txs) {
    5686+                            LOCK(m_mempool.cs);
    5687+                            auto txiter = m_mempool.GetIter(txinfo.tx->GetHash());
    5688+                            assert(txiter);
    


    mzumsande commented at 9:20 pm on March 15, 2023:
    I think it might be possible that, in between the spot where txinfo is queried and the time we call m_mempool.GetIter(), some other thread removes the tx from the mempool (because no lock is held), so that this assert would be hit.
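    A hypothetical standalone sketch of the safer pattern this points at: treat the mempool lookup result as optional and fall back to fan-out (or skip) instead of asserting, since the transaction may have been removed in the meantime. The types here are simplified stand-ins, not Bitcoin Core's mempool API.

        #include <cstdint>
        #include <initializer_list>
        #include <iostream>
        #include <optional>
        #include <unordered_map>

        using Txid = uint64_t; // stand-in for uint256

        struct TxEntry { int descendant_count{1}; };

        // Stand-in for an optional-returning mempool lookup.
        std::optional<const TxEntry*> GetIter(const std::unordered_map<Txid, TxEntry>& mempool, Txid txid)
        {
            const auto it = mempool.find(txid);
            if (it == mempool.end()) return std::nullopt;
            return &it->second;
        }

        int main()
        {
            const std::unordered_map<Txid, TxEntry> mempool{{1, TxEntry{2}}};

            for (const Txid txid : {Txid{1}, Txid{2}}) {
                const auto entry = GetIter(mempool, txid);
                if (!entry) {
                    // The tx may have been evicted or replaced since we looked it up:
                    // fall back to fan-out (or simply skip) instead of hitting an assert.
                    std::cout << "tx " << txid << " no longer in mempool, fanning out\n";
                    continue;
                }
                std::cout << "tx " << txid << " has " << (*entry)->descendant_count << " in-mempool descendants\n";
            }
        }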
  107. in src/node/txreconciliation.cpp:215 in dcde537b5b outdated
    210+                                         bool we_initiate, float limit) const EXCLUSIVE_LOCKS_REQUIRED(m_txreconciliation_mutex)
    211+    {
    212+        // To handle fractional values, we add one peer optimistically and then probabilistically
    213+        // drop it later.
    214+        // Initiate the randomness here so that it's not influenced by the following code.
    215+        double fractional_peer;
    


    mzumsande commented at 2:40 pm on March 16, 2023:
    Having a mix of float (limit) and double (fractional_peer) is probably not ideal.
  108. in src/node/txreconciliation.cpp:209 in dcde537b5b outdated
    204@@ -192,6 +205,87 @@ class TxReconciliationTracker::Impl
    205         LOCK(m_txreconciliation_mutex);
    206         return IsPeerRegistered(peer_id);
    207     }
    208+
    209+    std::vector<NodeId> GetFanoutTargets(const uint256& wtxid, CSipHasher& deterministic_randomizer,
    


    mzumsande commented at 2:48 pm on March 16, 2023:
    wtxid is unused
  109. mzumsande commented at 3:49 pm on March 16, 2023: contributor
    I found the algorithm to pick fanout peers in commit "p2p: Initiate reconciliation round" rather hard to understand. It would be helpful to have more documentation in this part of the code and in the function descriptions of ShouldFanoutTo() and GetFanoutTargets(), especially since this is not covered in the BIP.
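    To illustrate the fractional-target idea referred to in the quoted comment ("add one peer optimistically and then probabilistically drop it later"), here is an equivalent standalone sketch: take the integer part of the target and keep one extra destination with probability equal to the remaining fraction, so the expected number of fan-out destinations matches the target. The PR derives its randomness from a deterministic SipHash-based randomizer; this sketch uses std::mt19937_64 purely for illustration.

        #include <cmath>
        #include <iostream>
        #include <random>

        // Returns how many fan-out destinations to use for one transaction given a
        // fractional target (e.g. 1.4): floor(target) always, plus one more with
        // probability equal to the fractional part.
        int DrawFanoutDestinations(double target, std::mt19937_64& rng)
        {
            const int base = static_cast<int>(std::floor(target));
            std::bernoulli_distribution keep_extra(target - base);
            return base + (keep_extra(rng) ? 1 : 0);
        }

        int main()
        {
            std::mt19937_64 rng{42};
            const double target{1.4};

            long long total{0};
            constexpr int rounds{100000};
            for (int i = 0; i < rounds; ++i) total += DrawFanoutDestinations(target, rng);

            // Averages out to roughly 1.4 destinations per transaction.
            std::cout << "average destinations: " << static_cast<double>(total) / rounds << '\n';
        }

    Matching the target only in expectation keeps per-transaction behaviour unpredictable to observers while still controlling average fan-out bandwidth.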
  110. in src/net_processing.cpp:4039 in 00df0aefff outdated
    4042@@ -4021,6 +4043,10 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
    4043             return;
    4044         }
    4045 
    4046+        if (m_txreconciliation && m_txreconciliation->IsPeerRegistered(pfrom.GetId())) {
    4047+            m_txreconciliation->TryRemovingFromSet(pfrom.GetId(), wtxid);
    4048+        }
    


    dergoegge commented at 4:49 pm on March 16, 2023:
    0        if (m_txreconciliation) m_txreconciliation->TryRemovingFromSet(pfrom.GetId(), wtxid);
    

    You are already checking internally that the peer is registered.

  111. in src/net_processing.cpp:5747 in 00df0aefff outdated
    5743+                            } else {
    5744+                                size_t inbounds_fanouted = 0, outbounds_fanouted = 0;
    5745+                                m_connman.ForEachNode([&inbounds_fanouted, &outbounds_fanouted, this](CNode* pnode) {
    5746+                                    inbounds_fanouted += pnode->IsInboundConn() && pnode->m_relays_txs;
    5747+                                    outbounds_fanouted += pnode->IsFullOutboundConn() && pnode->m_relays_txs && !m_txreconciliation->IsPeerRegistered(pnode->GetId());
    5748+                                });
    


    dergoegge commented at 5:08 pm on March 16, 2023:
     0                                {
     1                                    LOCK(m_peer_mutex);
     2                                    for (auto [id, peer] : m_peer_map) {
     3                                        const auto state{State(id)};
     4                                        if (!state) continue;
     5
     6                                        if (auto tx_relay = peer->GetTxRelay()) {
     7                                            LOCK(tx_relay->m_bloom_filter_mutex);
     8                                            inbounds_fanouted += state->m_is_inbound && tx_relay->m_relay_txs;
     9                                            outbounds_fanouted += !state->m_is_inbound && tx_relay->m_relay_txs && !m_txreconciliation->IsPeerRegistered(id);
    10                                        }
    11                                    }
    12                                }
    

    See https://github.com/bitcoin/bitcoin/pull/27270

  112. in src/net_processing.cpp:5745 in 00df0aefff outdated
    5741+                                // sets for this one child is not good either.
    5742+                                fanout = true;
    5743+                            } else {
    5744+                                size_t inbounds_fanouted = 0, outbounds_fanouted = 0;
    5745+                                m_connman.ForEachNode([&inbounds_fanouted, &outbounds_fanouted, this](CNode* pnode) {
    5746+                                    inbounds_fanouted += pnode->IsInboundConn() && pnode->m_relays_txs;
    


    dergoegge commented at 1:51 pm on March 29, 2023:
    0                                    inbounds_fanouted += pnode->IsInboundConn() && pnode->m_relays_txs && !m_txreconciliation->IsPeerRegistered(pnode->GetId());
    

    Right? Because otherwise this includes Erlay peers, which we are not flooding to by default.

  113. in src/net_processing.cpp:5751 in 00df0aefff outdated
    5747+                                    outbounds_fanouted += pnode->IsFullOutboundConn() && pnode->m_relays_txs && !m_txreconciliation->IsPeerRegistered(pnode->GetId());
    5748+                                });
    5749+
    5750+                                auto fanout_randomizer = m_connman.GetDeterministicRandomizer(RANDOMIZER_ID_FANOUTTARGET);
    5751+                                fanout = m_txreconciliation->ShouldFanoutTo(wtxid, fanout_randomizer, pto->GetId(),
    5752+                                                                            std::make_pair(m_connman.GetNodeCount(ConnectionDirection::In), inbounds_fanouted),
    


    dergoegge commented at 1:56 pm on March 29, 2023:
    m_connman.GetNodeCount(ConnectionDirection::In) includes all inbound connections, even ones that do not want transactions relayed to them. IIUC, you want this to be "number of all inbound peers that sent fRelay=true", and the second member of the pair should be "number of all non-erlay inbound peers that sent fRelay=true".
  114. DrahtBot added the label Needs rebase on Mar 30, 2023
  115. in test/functional/p2p_reqtxrcncl.py:36 in 00df0aefff outdated
    31+        self.extra_args = [['-txreconciliation']]
    32+
    33+    def run_test(self):
    34+        t = int(time.time())
    35+        self.nodes[0].setmocktime(t)
    36+        self.generate(self.nodes[0], 200) # mature coinbase UTXO used later
    


    brunoerg commented at 5:12 pm on May 30, 2023:

    In 00df0aefff938e849bdab5278d6658d5c9d5d064: if the purpose is maturing them, I think we could do:

    0        self.generate(self.nodes[0], COINBASE_MATURITY) # mature coinbase UTXO used later
    
  116. in src/net_processing.cpp:5860 in 30c39c1316 outdated
    5794@@ -5787,6 +5795,21 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
    5795         if (!vInv.empty())
    5796             m_connman.PushMessage(pto, msgMaker.Make(NetMsgType::INV, vInv));
    5797 
    5798+        //
    5799+        // Message: reconciliation request
    5800+        //
    5801+        {
    5802+            if (!m_chainman.ActiveChainstate().IsInitialBlockDownload()) {
    


    brunoerg commented at 6:55 pm on May 30, 2023:
    In 30c39c1316a7b5c1914654f2a1487309b6e550ac: if we're in IBD, wouldn't we move that peer to the end of the queue needlessly (i.e., without actually making a reconciliation request)?
  117. in src/net_processing.cpp:204 in 00df0aefff outdated
    200@@ -184,6 +201,8 @@ static constexpr double MAX_ADDR_RATE_PER_SECOND{0.1};
    201 static constexpr size_t MAX_ADDR_PROCESSING_TOKEN_BUCKET{MAX_ADDR_TO_SEND};
    202 /** The compactblocks version we support. See BIP 152. */
    203 static constexpr uint64_t CMPCTBLOCKS_VERSION{2};
    204+/** Used to determine whether to fanout or to reconcile a transaction with a given peer */
    


    ariard commented at 0:27 am on August 8, 2023:
    Could say "low-fanout flooding", as that's the term used above to describe this transaction announcement strategy.
  118. naumenkogs force-pushed on Aug 16, 2023
  119. naumenkogs force-pushed on Aug 16, 2023
  120. naumenkogs force-pushed on Aug 16, 2023
  121. DrahtBot removed the label Needs rebase on Aug 16, 2023
  122. DrahtBot added the label CI failed on Aug 16, 2023
  123. naumenkogs force-pushed on Aug 16, 2023
  124. maflcko commented at 9:19 am on August 17, 2023: member
    Looks like the functional test times out in CI?
  125. naumenkogs force-pushed on Aug 17, 2023
  126. naumenkogs force-pushed on Aug 18, 2023
  127. naumenkogs force-pushed on Aug 18, 2023
  128. in src/node/txreconciliation.cpp:461 in 9d8e2d5c55 outdated
    171@@ -166,5 +172,5 @@ void TxReconciliationTracker::ForgetPeer(NodeId peer_id)
    172 
    173 bool TxReconciliationTracker::IsPeerRegistered(NodeId peer_id) const
    174 {
    175-    return m_impl->IsPeerRegistered(peer_id);
    176+    return m_impl->IsPeerRegisteredExternal(peer_id);
    


    ariard commented at 1:07 am on August 19, 2023:

    Note to reviewers: this is used in AddToSet(), which is itself called during the selection of transactions to relay, at NetMsgType::INV generation in SendMessages().

    This allows a single m_txreconciliation_mutex lock acquisition to cover the insert into the m_states entry.

  129. in src/node/txreconciliation.cpp:32 in 194e2559fc outdated
    16@@ -17,7 +17,7 @@ namespace {
    17 /** Static salt component used to compute short txids for sketch construction, see BIP-330. */
    18 const std::string RECON_STATIC_SALT = "Tx Relay Salting";
    19 const HashWriter RECON_SALT_HASHER = TaggedHash(RECON_STATIC_SALT);
    20-
    21+constexpr size_t MAX_SET_SIZE = 3000;
    


    ariard commented at 1:19 am on August 19, 2023:
    "The maximum number of wtxids stored in a peer's local set, bounded to limit the memory used by the reconciliation sets and the short-id mappings, and the CPU used for sketch computation" (as documented in AddToSet).
  130. in src/node/txreconciliation.cpp:157 in 194e2559fc outdated
    152+        //
    153+        // Transactions which don't make it to the set due to the limit are announced via fan-out.
    154+        if (recon_state.m_local_set.size() >= MAX_SET_SIZE) return false;
    155+
    156+        Assume(recon_state.m_local_set.insert(wtxid).second);
    157+        LogPrintLevel(BCLog::TXRECONCILIATION, BCLog::Level::Debug, "Added a transaction to the reconciliation set for peer=%d. " /* Continued */
    


    ariard commented at 1:34 am on August 19, 2023:
    The wtxid could be added to the LogPrintLevel call to ease debugging.
  131. in src/net_processing.cpp:5662 in dfde4393b3 outdated
    5656@@ -5650,9 +5657,13 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
    5657         }
    5658 
    5659         if (auto tx_relay = peer->GetTxRelay(); tx_relay != nullptr) {
    5660+                // Lock way before it's used to maintain lock ordering.
    5661+                LOCK(m_mempool.cs);
    5662+                LOCK(m_peer_mutex);
    


    ariard commented at 1:54 am on August 19, 2023:
    I think LOCK2 could be used.
  132. in src/net_processing.cpp:5932 in dfde4393b3 outdated
    5757+                            auto txiter = m_mempool.GetIter(txinfo.tx->GetHash());
    5758+                            if (txiter) {
    5759+                                if ((*txiter)->GetCountWithDescendants() > 1) {
    5760+                                    // If a transaction has in-mempool children, always fanout it.
    5761+                                    // Until package relay is implemented, this is needed to avoid
    5762+                                    // breaking parent+child relay expectations in some cases.
    


    ariard commented at 2:13 am on August 19, 2023:
    Note to reviewers, intersections with #27463 and BIP331.
  133. in src/net_processing.cpp:5939 in dfde4393b3 outdated
    5764+                                    // Potentially reconciling parent+child would mean that for every
    5765+                                    // child we need to to check if any of the parents is currently
    5766+                                    // reconciled so that the child isn't fanouted ahead. But then
    5767+                                    // it gets tricky when reconciliation sets are full: a) the child
    5768+                                    // can't just be added; b) removing parents from reconciliation
    5769+                                    // sets for this one child is not good either.
    


    ariard commented at 2:33 am on August 19, 2023:

    I think the issue can be even worse if you consider the receiver-initiated approach, where the receiver might already have the child from another peer while a BIP331 getpkgtxns is in flight. Both the parent and child wtxid reconciliation set entries are going to be wasted, I think.

    I believe the best approach for combining this with package relay support, for now, is just to check that it doesn't raise privacy or CPU-robustness concerns (at least until a package relay design is finalized and deployed), and to bite the bullet in terms of bandwidth-savings performance.


    naumenkogs commented at 8:29 am on October 16, 2023:
    I assume what you’re saying is that “your code works, but the problem it solves is harder than what the comment says”. Correct me if I’m wrong.
  134. ariard commented at 2:42 am on August 19, 2023: contributor

    Reviewed up to dfde439 - Commit “p2p: Add transactions to reconciliation sets”

    About Erlay:

  135. naumenkogs force-pushed on Aug 21, 2023
  136. naumenkogs force-pushed on Aug 22, 2023
  137. naumenkogs force-pushed on Aug 22, 2023
  138. ariard commented at 11:09 pm on August 22, 2023: contributor
    Sounds like p2p_reqtxrcncl.py is the source of the failing checks for all the unsuccessful CI tasks.
  139. naumenkogs force-pushed on Aug 23, 2023
  140. naumenkogs force-pushed on Aug 25, 2023
  141. DrahtBot removed the label CI failed on Aug 25, 2023
  142. ariard commented at 3:34 pm on August 25, 2023: contributor
    All checks good, will review more.
  143. in src/node/txreconciliation.cpp:410 in 5b886367c8 outdated
    295+            // number of inbound targets.
    296+            const double inbound_targets = (inbounds_nonrcncl_tx_relay + inbound_rcncl_peers) * INBOUND_FANOUT_DESTINATIONS_FRACTION;
    297+            destinations = inbound_targets - inbounds_nonrcncl_tx_relay;
    298+        }
    299+
    300+        if (destinations < 0.01) {
    


    ariard commented at 3:03 pm on August 28, 2023:
    If my understanding is correct, with too few inbound/outbound connections, and given the limit is absolute rather than relative, we might get reconciliation failures and fall back to vInv announcements. A node operator could be running with a low maxconnections.
  144. in src/net_processing.cpp:156 in 89de0f71dc outdated
    152+ *     2a) announcing both ways simultaneously (inefficiency);
    153+ *     2b) inference based on the announced transactions (privacy leak).
    154+ *     That's why it's ok to make this delay low as well, and lower delay is generally good to
    155+ *     facilitate good transaction relay speed when slow reconciliations prevail. */
    156 static constexpr auto INBOUND_INVENTORY_BROADCAST_INTERVAL{5s};
    157+static constexpr auto INBOUND_INVENTORY_BROADCAST_INTERVAL_RECON{2s};
    


    ariard commented at 3:21 pm on August 28, 2023:
    If INBOUND_INVENTORY_BROADCAST_INTERVAL is expected to stay greater than INBOUND_INVENTORY_BROADCAST_INTERVAL_RECON, that could also be enforced with a static_assert.
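    A minimal sketch of what such a compile-time check might look like; the constant values are copied from the quoted diff, and only the static_assert is the (hypothetical) addition:

        #include <chrono>

        using namespace std::chrono_literals;

        // Values copied from the quoted diff; the static_assert is the suggested addition.
        static constexpr auto INBOUND_INVENTORY_BROADCAST_INTERVAL{5s};
        static constexpr auto INBOUND_INVENTORY_BROADCAST_INTERVAL_RECON{2s};

        // Reconciling peers are meant to get the shorter (or equal) fan-out delay;
        // fail the build if that relationship is ever flipped by accident.
        static_assert(INBOUND_INVENTORY_BROADCAST_INTERVAL_RECON <= INBOUND_INVENTORY_BROADCAST_INTERVAL,
                      "fan-out delay for reconciling peers should not exceed the regular inbound delay");

        int main() { return 0; }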
  145. in src/node/txreconciliation.cpp:102 in 0ec1b427ec outdated
     96@@ -97,6 +97,13 @@ class TxReconciliationTracker::Impl
     97      */
     98     std::unordered_map<NodeId, std::variant<uint64_t, TxReconciliationState>> m_states GUARDED_BY(m_txreconciliation_mutex);
     99 
    100+    /**
    101+     * Maintains a queue of reconciliations we should initiate. To achieve higher bandwidth
    102+     * conservation and avoid overflows, we should reconcile in the same order, because then it’s
    


    ariard commented at 4:19 pm on August 28, 2023:
    The comment could be more precise; if I'm understanding correctly, it could say that the order of reconciling peers should be maintained from one reconciliation round to the next.
  146. ariard commented at 4:21 pm on August 28, 2023: contributor
    Reviewed until 272f0d5fa: “p2p: Initiate reconciliation round”.
  147. naumenkogs force-pushed on Sep 6, 2023
  148. in test/functional/p2p_reqtxrcncl.py:37 in 180052c214 outdated
    32+        self.extra_args = [['-txreconciliation']]
    33+
    34+    def run_test(self):
    35+        t = int(time.time())
    36+        self.nodes[0].setmocktime(t)
    37+        self.generate(self.nodes[0], COINBASE_MATURITY) # mature coinbase UTXO used later
    


    brunoerg commented at 7:39 pm on September 12, 2023:
    In 180052c214738a15e1f027cae96ff2f1431ef4cd: Do we really need to mature them?
  149. in src/protocol.h:269 in 180052c214 outdated
    264@@ -265,6 +265,12 @@ extern const char* WTXIDRELAY;
    265  * txreconciliation, as described by BIP 330.
    266  */
    267 extern const char* SENDTXRCNCL;
    268+/**
    269+ * Contains a 4-byte local reconciliation set size and 4-byte q-coefficient
    


    ariard commented at 4:40 pm on September 21, 2023:
    I think the pair of values returned by InitiateReconciliationRequest is std::optional<std::pair<uint16_t, uint16_t>>, so this should say 2-byte local reconciliation set size and 2-byte q-coefficient.
  150. naumenkogs force-pushed on Oct 13, 2023
  151. naumenkogs commented at 11:19 am on October 13, 2023: member
    Still have to review a couple of comments, especially the fuzzing suggested by @dergoegge
  152. DrahtBot added the label CI failed on Oct 13, 2023
  153. refactor: Add a pre-mutexed version of IsPeerRegistered
    The pre-mutexed version is useful for external calls, while
    the regular version will be used internally.
    fa013421ac
  154. p2p: Functions to add/remove wtxids to tx reconciliation sets
    They will be used later on.
    8c658b9a06
  155. naumenkogs force-pushed on Oct 17, 2023
  156. naumenkogs force-pushed on Oct 17, 2023
  157. naumenkogs force-pushed on Oct 18, 2023
  158. naumenkogs commented at 4:18 pm on October 18, 2023: member
    This PR is good for review. Fuzzing will come in a separate PR.
  159. in src/node/txreconciliation.cpp:261 in 863a334dae outdated
    256+            //
    257+            // In both cases, we should not delay our reconciliations, but these peers should
    258+            // be moved to the end of the queue to get the next chance.
    259+            //
    260+            // Otherwise, we should bump the global timer and allow to proceed for reconciliation.
    261+            const bool their_time_passed = recon_state.m_last_txrcncl + RECON_REQUEST_INTERVAL < now;
    


    ariard commented at 3:53 am on October 19, 2023:
    The naming is ambiguous: their_time_passed could be read as "we skip their reconciliation round until next time", though it actually means "enough time has passed since the last txrcncl".
  160. ariard commented at 3:53 am on October 19, 2023: contributor
    Will review the PR again from scratch.
  161. in src/net_processing.cpp:5543 in 863a334dae outdated
    5534@@ -5509,8 +5535,22 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
    5535 
    5536     PeerRef peer = GetPeerRef(pto->GetId());
    5537     if (!peer) return false;
    5538+
    5539     const Consensus::Params& consensusParams = m_chainparams.GetConsensus();
    5540 
    5541+    const auto current_time{GetTime<std::chrono::microseconds>()};
    5542+
    5543+    // We must look into the reconciliation queue early. Since the queue applies to all peers,
    


    ariard commented at 1:50 am on October 24, 2023:
    Note the call to IsPeerNextToReconcileWith() is before MaybeDiscourageAndDisconnect(). If a peer is misbehaving and a candidate for disconnection, maybe we should disconnect first and let the reconciliation slot be used by another peer.

    naumenkogs commented at 8:41 am on October 27, 2023:

    This would make a difference only once, which is fine for (lower-risk) outbound peers I think.

    The problem with calling IsPeerNextToReconcileWith later is that the queue won't be queried. Say a peer finds a way to exploit MaybeDiscourageAndDisconnect (in combination with something else) so that it always returns but never disconnects. In that case the peer will be stuck at the front of the queue forever.

    I don't think what I described is currently possible (fDisconnect can only go from false to true), but I'm afraid it might be in the future.

  162. in src/net_processing.cpp:4249 in 3ec03ffa35 outdated
    4245@@ -4241,6 +4246,8 @@ void PeerManagerImpl::ProcessMessage(CNode& pfrom, const std::string& msg_type,
    4246             return;
    4247         }
    4248 
    4249+        if (m_txreconciliation) m_txreconciliation->TryRemovingFromSet(pfrom.GetId(), wtxid);
    


    brunoerg commented at 2:20 pm on October 24, 2023:
    In 3ec03ffa35a37625ff66fcf135f851c1220cde99: shouldn't it go inside if (AlreadyHaveTx(GenTxid::Wtxid(wtxid)))?

    naumenkogs commented at 8:50 am on October 27, 2023:
    This code should certainly go earlier than the return in that if. I will move it. This looks like a bug, although it only brings some moderate corner-case inefficiency. I should see whether I can test it in later PRs.
  163. p2p: Add transactions to reconciliation sets
    Transactions eligible for reconciliation are added to the
    reconciliation sets. For the remaining txs, low-fanout is used.
    b07029a67c
  164. p2p: Introduce reconciliation-fanout tx broadcast interval
    For a subset of reconciling peers we announce transactions
    via low fanout. We need to set lower intervals for that to
    achieve lower relay latency.
    
    Note that for privacy reasons the ratio between inbound and outbound
    delays matter more than the actual delays.
    a27e71d4f6
  165. p2p: Add peers to reconciliation queue on negotiation
    When we're finalizing negotiation, we should add the peers
    for which we will initiate reconciliations to the queue.
    3252f54340
  166. in test/functional/p2p_reqtxrcncl.py:68 in 863a334dae outdated
    63+            peer2 = self.nodes[0].add_outbound_p2p_connection(ReqTxrcnclReceiver(), wait_for_verack=True, p2p_idx=4)
    64+
    65+        self.wallet = MiniWallet(self.nodes[0])
    66+        self.wallet.rescan_utxos()
    67+        utxos = self.wallet.get_utxos()
    68+        for utxo in utxos:
    


    brunoerg commented at 12:33 pm on October 25, 2023:

    In 863a334daeac9489e8a09c9a9f651f97081e4b87: I think you could use send_self_transfer_multi to send all utxos.

     0diff --git a/test/functional/p2p_reqtxrcncl.py b/test/functional/p2p_reqtxrcncl.py
     1index d43267f8ba..ed8d1e0fb4 100755
     2--- a/test/functional/p2p_reqtxrcncl.py
     3+++ b/test/functional/p2p_reqtxrcncl.py
     4@@ -65,9 +65,7 @@ class ReqTxRcnclTest(BitcoinTestFramework):
     5         self.wallet = MiniWallet(self.nodes[0])
     6         self.wallet.rescan_utxos()
     7         utxos = self.wallet.get_utxos()
     8-        for utxo in utxos:
     9-            # We want all transactions to be childless.
    10-            self.wallet.send_self_transfer(from_node=self.nodes[0], utxo_to_spend=utxo)
    11+        self.wallet.send_self_transfer_multi(from_node=self.nodes[0], utxos_to_spend=utxos, fee_per_output=1310)
    
  167. naumenkogs force-pushed on Oct 27, 2023
  168. DrahtBot removed the label CI failed on Oct 27, 2023
  169. in test/functional/test_framework/messages.py:1907 in 3637986f0c outdated
    1902+        self.q = struct.unpack("<H", f.read(2))[0]
    1903+
    1904+    def serialize(self):
    1905+        r = b""
    1906+        r += struct.pack("<I", self.set_size)
    1907+        r += struct.pack("<I", self.q)
    


    maflcko commented at 11:25 am on October 27, 2023:

    style-nit: For new code it may be good to use the built-in int.(to/from)_bytes helper, which has the following benefits:

    • No tuples returned in unpack
    • The number of bytes and the endianness are expressed as plain integers or English words, instead of cryptic single-character ASCII format symbols
    • No import is needed
  170. in test/functional/test_framework/messages.py:1911 in 3637986f0c outdated
    1906+        r += struct.pack("<I", self.set_size)
    1907+        r += struct.pack("<I", self.q)
    1908+        return r
    1909+
    1910+    def __repr__(self):
    1911+        return "msg_reqtxrcncl(set_size=%lu, q=%lu)" %\
    


    maflcko commented at 11:26 am on October 27, 2023:
    question: Does the %lu do anything? Could just use modern f-strings instead for new code?
  171. maflcko commented at 11:26 am on October 27, 2023: member
    left a test nit, feel free to ignore
  172. p2p: Initiate reconciliation round
    When the time comes for the peer, we send a
    reconciliation request with the parameters which
    will help the peer to construct a (hopefully) sufficient
    reconciliation sketch for us. We will then use that
    sketch to find missing transactions.
    c93bd5508e
  173. test: functional test for reqtxrcncl b518d26666
  174. in src/net_processing.cpp:5972 in ec61607c77 outdated
    5964@@ -5953,6 +5965,21 @@ bool PeerManagerImpl::SendMessages(CNode* pto)
    5965         if (!vInv.empty())
    5966             m_connman.PushMessage(pto, msgMaker.Make(NetMsgType::INV, vInv));
    5967 
    5968+        //
    5969+        // Message: reconciliation request
    5970+        //
    5971+        {
    5972+            if (!m_chainman.IsInitialBlockDownload()) {
    


    brunoerg commented at 1:24 pm on October 27, 2023:
    In ec61607c77f23b819c6803489808af0a9a916694: why do we have to check IBD here again? I think reconcile will only be true if we're not in IBD.

    naumenkogs commented at 7:37 am on October 30, 2023:
    You are right. This check became redundant in the last code changes.

    naumenkogs commented at 7:52 am on October 30, 2023:
    Fixing a bug in this code too: I mixed up the self-IBD check and the peer-IBD check.

    naumenkogs commented at 7:59 am on October 30, 2023:

    And I guess we don't have a good way to see whether a peer entered IBD again… so we will spam it either with INVs or with REQTXRCNCL. At least it's not a performance degradation with Erlay, I guess.

    When #28170 is ready, we could send a "don't send me txs" once we get into this post-sync IBD.

  175. naumenkogs force-pushed on Oct 30, 2023
  176. naumenkogs commented at 9:07 am on November 1, 2023: member
    I decided to split this off into #28765.
  177. naumenkogs closed this on Nov 1, 2023

  178. bitcoin locked this on Oct 31, 2024
