Overhaul transaction request logic #19988

pull sipa wants to merge 9 commits into bitcoin:master from sipa:202009_txrequest_rand_wtxid changing 18 files +2297 −447
  1. sipa commented at 7:21 am on September 21, 2020: member

    This replaces the transaction request logic with an encapsulated class that maintains all the state surrounding it. By keeping it stand-alone, it can be easily tested (unit tests and fuzz tests are included).

    The major changes are:

    • Announcements from outbound (and whitelisted) peers are now always preferred over those from inbound peers. This used to be the case for the first request (by delaying the first request from inbound peers), with only a bias afterwards. The 2s delay for requests from inbound peers still exists, but after that, if viable outbound peers remain for any given transaction, they will always be tried first.
    • No more hard cap of 100 in-flight transactions per peer, as there is less need for it (memory usage is linear in the number of announcements but independent of the number in flight, and CPU usage isn’t affected by it). Furthermore, if only one peer announces a transaction, and it already has over 100 in flight, we still want to request it from them. The cap is replaced with a rule that announcements from such overloaded peers get an additional 2s delay (possibly combined with the existing 2s delays for inbound connections, and for txid peers when wtxid peers are available); a sketch of how these delays compose appears after this list.
    • The limit of 100000 tracked announcements is reduced to 5000; this was excessive. This can be bypassed using the PF_RELAY permission (to accommodate locally dumping a batch of many transactions).
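
    For concreteness, a rough sketch of how these delays compose (the constant names match those used in net_processing later in this thread; the exact composition is a net_processing policy choice, not part of txrequest):

     // Sketch: compose the per-announcement request delay from the rules above.
     auto delay = std::chrono::microseconds{0};
     if (!preferred) delay += INBOUND_PEER_TX_DELAY;                          // inbound (non-preferred) peer
     if (!wtxid_relay && g_wtxid_relay_peers > 0) delay += TXID_RELAY_DELAY;  // txid peer while wtxid peers exist
     if (overloaded) delay += OVERLOADED_PEER_TX_DELAY;                       // peer has too much outstanding
     const auto reqtime = current_time + delay;  // earliest time this announcement may be requested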

    This replaces #19184, rebased on #18044 and with many small changes.

  2. fanquake added the label P2P on Sep 21, 2020
  3. sipa force-pushed on Sep 21, 2020
  4. jonatack commented at 11:00 am on September 21, 2020: member
    Initial light Concept ACK based on first reading of the code, the documentation in txrequest.h, and thinking about the differences with respect to the current tx request logic. Debug build clean and local tests green at each commit. The new txrequest fuzzer is running so far without issues.
  5. DrahtBot commented at 11:34 am on September 21, 2020: member

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Conflicts

    No conflicts as of last run.

    Coverage

    Coverage  | Change (pull 19988, 0ad8af22ac7bd7a897a25f2ddcf678565ad8bd76) | Reference (master, 380705ef4f007463dc7951376c7bedef1f01a169)
    ----------|----------------------------------------------------------------|--------------------------------------------------------------
    Lines     | +0.1109 %                                                      | 90.7277 %
    Functions | +0.1378 %                                                      | 86.3344 %
    Branches  | +0.0613 %                                                      | 52.0367 %

    Updated at: 2020-10-09T08:51:32.622660.

  6. instagibbs commented at 2:59 pm on September 21, 2020: member
    committing to review soon(TM)
  7. sipa force-pushed on Sep 21, 2020
  8. jonatack commented at 6:11 pm on September 21, 2020: member
    Reviewed the first commit, “Add txrequest module”. Overall looks good. Various minor suggestions in https://github.com/jonatack/bitcoin/commits/pr-19988-review-suggestions to not add noise here; feel free to pick and choose.
  9. sipa force-pushed on Sep 22, 2020
  10. sipa commented at 0:58 am on September 22, 2020: member
    Pushed an update, incorporating @jonatack’s nits above, and addressing a number of @ariard’s comments on #19184. I also moved the entire implementation to txrequest.cpp, hidden behind a TxRequestTracker::Impl object. I hope this makes it easier to distinguish the comments in the .h (now entirely about the observable behavior, without implementation details) from those in the .cpp (which works quite differently). This also fixed an AppVeyor issue with MSVC failing to compile.
  11. in src/txrequest.h:28 in 2cac00639e outdated
    23+ * The following information is tracked per peer/tx combination ("announcement"):
    24+ * - Which peer announced it (through their NodeId)
    25+ * - The txid or wtxid of the transaction (collectively called "txhash" in what follows)
    26+ * - Whether it was a tx or wtx announcement (see BIP339).
    27+ * - What the earliest permitted time is that that transaction can be requested from that peer (called "reqtime").
    28+ * - Whether it's from a "preferred" peer or not (outbound and whitelisted peers are preferred).
    


    MarcoFalke commented at 3:30 pm on September 22, 2020:
     * - Whether it's from a "preferred" peer or not (outbound and noban peers are preferred).
    

    style-nit: Would be good to mention the exact permission flag, because legacy whitelisted is discouraged, has been deprecated, and might be removed some time in the future.

    (Same below)


    sipa commented at 3:17 am on September 23, 2020:

    I’d like to keep txrequest mostly about the decision logic and data structure, while leaving net_processing responsible for the actual policy choices (“what delays are given to which peers/requests”, “what timeouts are used”, “when exactly is a peer considered preferred”, “when exactly is a peer considered overloaded”, “how many announcements can be tracked per peer”).

    Of course, the explanation here is ideally as comprehensive as possible and not full of “See some random comment in net_processing for the details” references. Still, do you think it would be reasonable to say instead:

    Whether it’s from a “preferred” peer or not (what that means is up to the caller, but this is designed to correspond mostly to outbound peers, or others that are more trusted)

    ?


    sipa commented at 0:10 am on September 24, 2020:
    Done, approximately.
  12. in src/txrequest.h:30 in 1b23bedf57 outdated
    25+ * - The txid or wtxid of the transaction (collectively called "txhash" in what follows)
    26+ * - Whether it was a tx or wtx announcement (see BIP339).
    27+ * - What the earliest permitted time is that that transaction can be requested from that peer (called "reqtime").
    28+ * - Whether it's from a "preferred" peer or not (outbound and whitelisted peers are preferred).
    29+ * - Whether the peer was the "first" to announce this txhash within its class (see further for details).
    30+ * - Whether or not the transaction was requested already, and if so, when it times out (called "exptime").
    


    ajtowns commented at 2:23 am on September 23, 2020:
    exptime is slightly ambiguous with “expected time” to me, maybe replace with “timeout” or “expiry_time” or just “expiry” ?

    sipa commented at 0:10 am on September 24, 2020:
    Changed to “expiry”.
  13. in src/txrequest.h:29 in 1b23bedf57 outdated
    24+ * - Which peer announced it (through their NodeId)
    25+ * - The txid or wtxid of the transaction (collectively called "txhash" in what follows)
    26+ * - Whether it was a tx or wtx announcement (see BIP339).
    27+ * - What the earliest permitted time is that that transaction can be requested from that peer (called "reqtime").
    28+ * - Whether it's from a "preferred" peer or not (outbound and whitelisted peers are preferred).
    29+ * - Whether the peer was the "first" to announce this txhash within its class (see further for details).
    


    ajtowns commented at 2:30 am on September 23, 2020:
    “(see further for details)” is true of all these points, I think.

    sipa commented at 0:10 am on September 24, 2020:
    I was trying to refer to the specific section on the “first” marker. Made that more explicit.
  14. in src/txrequest.h:69 in 1b23bedf57 outdated
    64+ *   - If any preferred peers are available, non-preferred peers are not considered for what follows.
    65+ *
    66+ *     Rationale: preferred peers (outbound, whitelisted) are chosen by us, so are less likely to be under attacker
    67+ *                control.
    68+ *
    69+ *   - Among the remaining candidates, choose the one with the first marker if one exists (there can be at most
    


    ajtowns commented at 2:33 am on September 23, 2020:
    This was confusing – I read it as “the first of many markers” rather than “a marker indicating it was first” and wondered what these “markers” were. Adding quotes around “first” consistently might be better?

    sipa commented at 0:11 am on September 24, 2020:
    Done.
  15. in src/txrequest.h:81 in 1b23bedf57 outdated
    76+ *
    77+ *   - If no remaining candidates have the first marker, pick a uniformly random peer among the candidates.
    78+ *
    79+ *     Rationale: if the first mechanism failed, random assignments are hard to influence for attackers.
    80+ *
    81+ *   The overall effect of these rules is that an attacker that races announcements to us can delay
    


    ajtowns commented at 2:35 am on September 23, 2020:
    Incomplete sentence?

    sipa commented at 0:11 am on September 24, 2020:
    Removed. It was a leftover of what turned into the last comment section.
  16. in src/txrequest.h:95 in 1b23bedf57 outdated
    101+ * Complexity:
    102+ * - Memory usage is proportional to the total number of tracked announcements (Size()) plus the number of
    103+ *   peers with a nonzero number of tracked announcements.
    104+ * - CPU usage is generally logarithmic in the total number of tracked announcements, plus the number of
    105+ *   announcements affected by an operation (amortized O(1) per announcement).
    106+ */
    


    ajtowns commented at 2:41 am on September 23, 2020:
    Add a reference to https://allquantor.at/blockchainbib/pdf/miller2015topology.pdf to document/explain the invblock terminology? “What does this have to do with an INV message about a block?”

    sipa commented at 0:11 am on September 24, 2020:
    Done.
  17. in src/txrequest.h:92 in 1b23bedf57 outdated
    87+ *   the first marker already.
    88+ *
    89+ *   Rationale: these restrictions avoid giving the speed benefit to honest but overloaded peers, and also
    90+ *              reduce the impact an attacker who races the network for many transaction can have.
    91+ *
    92+ * Together these rules strike a balance between being fast in non-adverserial conditions and being influenceable by
    


    ajtowns commented at 2:42 am on September 23, 2020:
    “and not being influencable by attackers”

    sipa commented at 0:11 am on September 24, 2020:
    Done, approximately.
  18. in src/txrequest.h:156 in 1b23bedf57 outdated
    151+
    152+    /** Deletes all entries for a given peer.
    153+     *
    154+     * It should be called when a peer goes offline.
    155+     */
    156+    void DeletedPeer(uint64_t uint64_t);
    


    ajtowns commented at 2:53 am on September 23, 2020:
    “uint64_t” as a parameter name?

    sipa commented at 0:12 am on September 24, 2020:

    Apparently that’s not actually a 128-bit type ;)

    Fixed.

  19. in src/txrequest.h:122 in 1b23bedf57 outdated
    116+    };
    117+
    118+private:
    119+    // Avoid littering this header file with implementation details.
    120+    class Impl;
    121+    std::unique_ptr<Impl> m_impl;
    


    ajtowns commented at 3:00 am on September 23, 2020:

    Isn’t this exactly the same performance as having a subclass and making the methods virtual, except with all the dispatching written out explicitly? ie, could instead write something like:

     class TxRequestTracker
     {
     protected:
         TxRequestTracker() { } // pure virtual class, must instantiate via subclass
     public:
         virtual ~TxRequestTracker();
         virtual void DeletedPeer(uint64_t peer) = 0;
         ...
     };
     std::unique_ptr<TxRequestTracker> CreateTxRequestTracker(bool deterministic = false);

     static std::unique_ptr<TxRequestTracker> g_txrequest = CreateTxRequestTracker() GUARDED_BY(cs_main);
    

    then hide the subclass in txrequest.cpp? Of course, “exactly the same” means no objective reason to prefer changing to this.


    sipa commented at 0:18 am on September 24, 2020:

    It’s not exactly the same, I think. Calling member functions on a TxRequestTracker would incur a lookup in the vtable to find the code to execute. The current solution has link-time determined code flow, and only an extra indirection to find the object’s storage.

    I don’t think either is remotely relevant for performance here, and the subclass approach you suggest is probably somewhat lighter syntactically. I’m happy to change it if other reviewers agree.
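
    For illustration, a minimal sketch of the pimpl forwarding being discussed (not the verbatim PR code; DeletedPeer is later renamed to DisconnectedPeer):

     // txrequest.h: only an opaque pointer to the implementation is exposed.
     class TxRequestTracker {
         class Impl;
         const std::unique_ptr<Impl> m_impl;
     public:
         void DeletedPeer(uint64_t peer);
     };

     // txrequest.cpp: the call target is fixed at link time; the only runtime cost
     // is the indirection through m_impl to reach the object's storage.
     void TxRequestTracker::DeletedPeer(uint64_t peer) { m_impl->DeletedPeer(peer); }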


    ajtowns commented at 4:46 am on September 24, 2020:
    Ah, good point! That seems a sufficient reason, so going to mark this as resolved.
  20. in src/txrequest.h:134 in 1b23bedf57 outdated
    129+    TxRequestTracker(const TxRequestTracker&) = delete;
    130+    TxRequestTracker& operator=(const TxRequestTracker&) = delete;
    131+
    132+    // Move constructors.
    133+    TxRequestTracker(TxRequestTracker&&) = default;
    134+    TxRequestTracker& operator=(TxRequestTracker&&) = default;
    


    ajtowns commented at 3:05 am on September 23, 2020:
    Are these needed for fuzz testing, or could they be deleted as well? (copy constructors are implicitly deleted because of the unique ptr, so I think currently this is just making the implicit defaults explicit)

    sipa commented at 0:19 am on September 24, 2020:
    Gone.
  21. sipa commented at 3:06 am on September 23, 2020: member
    @sr-gi You may be interested in this.
  22. in src/txrequest.h:173 in 1b23bedf57 outdated
    200+     *    to preference/first rules, and tiebreaking using a deterministic salted hash of peer and txhash).
    201+     *  - The selected entries are sorted in order of announcement (even if multiple were added at the same time, or
    202+     *    even when the clock went backwards while they were being added), converted to GenTxids using their
    203+     *    is_wtxid flag, and returned.
    204+     */
    205+    std::vector<GenTxid> GetRequestable(uint64_t peer, std::chrono::microseconds now);
    


    ajtowns commented at 3:35 am on September 23, 2020:

    Does anything limit the size of the returned vector? I think the main constraints are those in net_processing.cpp:RequestTx which could leave as many as 100k entries in CANDIDATE state for a given peer, so this could be a 4MB vector, which seems like it might be larger than desirable?

    It’s also constrained by how many txs can be INVed by a peer in between calls to GetRequestable, so in normal circumstances I’d expect this to be perfectly fine. I think you could get the worst case by quickly sending two max-size INV messages and two max-size NOTFOUND messages once the first request comes in.

    (I think this has to return a copy of the data items, because you want to iterate over them and request them, which would then modify the data structure, and that would invalidate the iterator you were using if you hadn’t made a copy)


    sipa commented at 0:21 am on September 24, 2020:
    I’ve added a commit that just reduces MAX_PEER_TX_ANNOUNCEMENTS; 100000 was ridiculous. It can be bypassed using the PF_RELAY permission.
  23. in src/txrequest.h:223 in c50b25c2c4 outdated
    214@@ -215,6 +215,12 @@ class TxRequestTracker {
    215 
    216     /** Access to the internal PriorityComputer (for testing) */
    217     const PriorityComputer& GetPriorityComputer() const;
    218+
    219+    /** Run internal consistency check (test only). */
    220+    void SanityCheck() const;
    221+
    222+    /** Run a time-dependent consistency check (can only be called immediately after GetRequestable; test only). */
    223+    void TimeSanityCheck(std::chrono::microseconds now) const;
    


    ajtowns commented at 3:44 am on September 23, 2020:
    “Run a check on consistency of request times after a call to GetRequestable (requires the same timestamp as was passed to GetRequestable)” might be a better description? (At first glance I thought it might have been timing the sanity check, or doing a limited sanity check that checked different things depending on how much time it was taking to run)

    sipa commented at 0:23 am on September 24, 2020:

    Renamed to PostGetRequestableSanityCheck, and updated comment.

    It’s not just a consistency check of request times.


    ajtowns commented at 5:09 am on September 24, 2020:
    Not sure I follow, the only asserts are comparisons between entry.m_time and now ? Or should I have said “entry times” because “request” times might only mean REQUESTED entries?

    sipa commented at 5:16 am on September 24, 2020:
    m_time is the reqtime for CANDIDATE_*, but expiry for REQUESTED entries.
  24. in src/net_processing.cpp:747 in c7601cdb31 outdated
    793-
    794-    peer_download_state.m_tx_process_time.emplace(process_time, gtxid);
    795+    auto state = State(nodeid);
    796+    auto delay = std::chrono::microseconds{0};
    797+    bool preferred = state->fPreferredDownload;
    798+    bool overloaded = g_txrequest.CountInFlight(nodeid) >= MAX_PEER_TX_IN_FLIGHT;
    


    ajtowns commented at 4:18 am on September 23, 2020:
    Maybe this should be overloaded == ... || g_txrequest.CountCandidates() >= 5000 or similar? I’m not sure I have a plausible enough scenario where this would be a benefit to justify the added code though.

    sipa commented at 0:22 am on September 24, 2020:

    With MAX_PEER_TX_ANNOUNCEMENTS reduced, is that still needed?

    I’m a bit hesitant about this, as the number of announcements tracked for a peer is somewhat dependent on other peers’ behavior.


    ajtowns commented at 5:01 am on September 24, 2020:
    Yeah, sounds good.
  25. in src/net_processing.cpp:866 in c7601cdb31 outdated
    771-    if (use_txid_delay) process_time += TXID_RELAY_DELAY;
    772-
    773-    return process_time;
    774-}
    775-
    776-void RequestTx(CNodeState* state, const GenTxid& gtxid, std::chrono::microseconds current_time) EXCLUSIVE_LOCKS_REQUIRED(cs_main)
    


    ajtowns commented at 4:30 am on September 23, 2020:
    I think if you were to make RequestTx() a method of class PeerManager, you could make g_txrequest be a private member PeerManager::m_txrequest GUARDED_BY(cs_main) instead of a global. OTOH, might be better to not do that until other globals that PeerManager methods rely on (like mapNodeState) are also moved. cc @jnewbery

    sipa commented at 0:23 am on September 24, 2020:
    I haven’t paid too much attention to that, but I’m happy to change this if desirable.

    jnewbery commented at 11:36 am on September 24, 2020:

    Yes, I think this would be an improvement for encapsulation and for being explicit about when the TxRequestManager is constructed/destructed. It’s my understanding that global static objects can be constructed before main() or later.

    There isn’t any requirement to wait for mapNodeState to move to PeerManager.

    I have a branch that moves TxRequestTracker and RequestTx() to be members of PeerManager here: https://github.com/jnewbery/bitcoin/tree/pr19988.1. The move is in a separate commit in that branch for ease of review, but should be squashed into Change transaction request logic to use txrequest.
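
    In outline (a sketch based on names already used in this thread, not the exact branch contents):

     class PeerManager {
         // ...
     private:
         /** Tracks announced transactions and decides what to request from whom. */
         TxRequestTracker m_txrequest GUARDED_BY(cs_main);
         /** Register a tx announcement, applying the appropriate request delays. */
         void RequestTx(CNodeState* state, const GenTxid& gtxid, std::chrono::microseconds current_time)
             EXCLUSIVE_LOCKS_REQUIRED(cs_main);
     };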


    sipa commented at 5:33 pm on September 24, 2020:
    Done.
  26. in test/functional/p2p_tx_download.py:129 in c7601cdb31 outdated
    120-            GETDATA_TX_INTERVAL + MAX_GETDATA_RANDOM_DELAY)
    121+        timeout = 2 + INBOUND_PEER_TX_DELAY + GETDATA_TX_INTERVAL
    122         self.log.info("Tx should be received at node 1 after {} seconds".format(timeout))
    123         self.sync_mempools(timeout=timeout)
    124 
    125-    def test_in_flight_max(self):
    


    ajtowns commented at 5:00 am on September 23, 2020:
    Should replace this test with one that checks we start applying OVERLOAD_PEER_TX_DELAY ?

    sipa commented at 7:05 pm on September 23, 2020:
    @ajtowns Feel like writing such a test?

    sipa commented at 6:24 pm on September 27, 2020:
    Done.
  27. ajtowns commented at 5:22 am on September 23, 2020: member
    Here’s a whole bunch of nits from a first pass at the PR, prior to looking at any of the implementation details. As at 2cac00639e5eb22eb6b076bcc001196d958a4f2e
  28. sipa force-pushed on Sep 24, 2020
  29. sipa force-pushed on Sep 24, 2020
  30. in src/net_processing.cpp:82 in 06d708c7cb outdated
    74@@ -75,7 +75,7 @@ static const unsigned int MAX_INV_SZ = 50000;
    75 /** Maximum number of in-flight transactions from a peer */
    76 static constexpr int32_t MAX_PEER_TX_IN_FLIGHT = 100;
    77 /** Maximum number of announced transactions from a peer */
    78-static constexpr int32_t MAX_PEER_TX_ANNOUNCEMENTS = 2 * MAX_INV_SZ;
    79+static constexpr int32_t MAX_PEER_TX_ANNOUNCEMENTS = 5000;
    


    ajtowns commented at 5:49 am on September 24, 2020:
    Not sure if it’s worth adding any explicit justification for picking that number. But for the record, my thinking is: if our peers use the same INVENTORY_BROADCAST_INTERVAL, INVENTORY_BROADCAST_MAX params as us (and assuming they also halve the interval when they consider us an outbound, ie when we consider them an inbound), then 5000 is about 16 minutes worth of INVs for our outbounds, and 5 minutes for our inbounds. (More presuming we actually receive txs and thus clear entries out, or already had them)

    sipa commented at 10:56 pm on September 25, 2020:
    I added some rationale.

    ajtowns commented at 5:00 am on September 26, 2020:
    Rationale looks fine to me :+1:
  31. in src/txrequest.cpp:676 in 7949d08406 outdated
    671+TxRequestTracker::TxRequestTracker(bool deterministic)
    672+{
    673+    m_impl = MakeUnique<TxRequestTracker::Impl>(deterministic);
    674+}
    675+
    676+TxRequestTracker::~TxRequestTracker() {}
    


    jnewbery commented at 12:00 pm on September 24, 2020:

    Consider using the default syntax to indicate that you only included this in the cpp file so the m_impl unique ptr can be destructed:

    TxRequestTracker::~TxRequestTracker() = default;
    

    sipa commented at 5:34 pm on September 24, 2020:
    Of course! Done.
  32. in src/txrequest.h:112 in 7949d08406 outdated
    107+ */
    108+class TxRequestTracker {
    109+public:
    110+    /** A functor with embedded salt that computes priority of a txhash/peer combination. Lower priorities are
    111+     *  selected first. */
    112+    class PriorityComputer {
    


    jnewbery commented at 1:38 pm on September 24, 2020:

    It looks like this is only in the header so that the unit and fuzz tests can call GetPriorityComputer() and get the computer, and the only reason they do that is so they can call PriorityComputer::operator()().

    Could you move all this to the implementation file, and add a public method ComputePriority to be used by the tests that calculates the priority internally and returns it?
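
    Presumably something along these lines (a sketch mirroring the m_computer call quoted later in this thread, not the exact code):

     // txrequest.h: test-only helper; PriorityComputer itself stays hidden in txrequest.cpp.
     uint64_t ComputePriority(const uint256& txhash, uint64_t peer, bool preferred, bool first) const;

     // txrequest.cpp: forward to the hidden implementation's salted computer.
     uint64_t TxRequestTracker::ComputePriority(const uint256& txhash, uint64_t peer, bool preferred, bool first) const
     {
         return m_impl->ComputePriority(txhash, peer, preferred, first);
     }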


    sipa commented at 5:34 pm on September 24, 2020:
    Nice, done.
  33. in src/txrequest.cpp:673 in 7949d08406 outdated
    668+    }
    669+};
    670+
    671+TxRequestTracker::TxRequestTracker(bool deterministic)
    672+{
    673+    m_impl = MakeUnique<TxRequestTracker::Impl>(deterministic);
    


    jnewbery commented at 2:06 pm on September 24, 2020:

    Consider using an initializer list here instead of setting m_impl in the ctor function body.

    If you did that, you could make m_impl const, which is fine since you’ve deleted the move constructor and assignment.

    const unique_ptr communicates that this m_impl is always the object that TxRequestTracker owns. See Herb Sutter’s Leak-Freedom talk at https://youtu.be/JfmTagWcqoE?t=571

     diff --git a/src/txrequest.cpp b/src/txrequest.cpp
     index f6c387f8f4..b7d90a1ba5 100644
     --- a/src/txrequest.cpp
     +++ b/src/txrequest.cpp
     @@ -668,10 +668,8 @@ public:
          }
      };
      
     -TxRequestTracker::TxRequestTracker(bool deterministic)
     -{
     -    m_impl = MakeUnique<TxRequestTracker::Impl>(deterministic);
     -}
     +TxRequestTracker::TxRequestTracker(bool deterministic) :
     +    m_impl{MakeUnique<TxRequestTracker::Impl>(deterministic)} {}
      
      TxRequestTracker::~TxRequestTracker() {}
      
     diff --git a/src/txrequest.h b/src/txrequest.h
     index 6e892eebe1..83f6dbe2fe 100644
     --- a/src/txrequest.h
     +++ b/src/txrequest.h
     @@ -119,7 +119,7 @@ public:
      private:
          // Avoid littering this header file with implementation details.
          class Impl;
     -    std::unique_ptr<Impl> m_impl;
     +    const std::unique_ptr<Impl> m_impl;
      
      public:
          //! Construct a TxRequestTracker.
    

    sipa commented at 5:34 pm on September 24, 2020:
    Done.
  34. jnewbery commented at 2:07 pm on September 24, 2020: member
    Just a couple of minor style comments so far.
  35. in src/txrequest.h:138 in 164f504024 outdated
    144+
    145+    /** Deletes all entries for a given peer.
    146+     *
    147+     * It should be called when a peer goes offline.
    148+     */
    149+    void DeletedPeer(uint64_t peer);
    


    instagibbs commented at 4:55 pm on September 24, 2020:

    Deleted (past tense) is a bit weird since it’s actually deleting the peer.

    At a risk of making it too verbose via self-documenting: DeletePeerEntries?

    Alternatively we can name it like some of the other functions, e.g. ReceivedInv et al.: DisconnectedPeer, describing when it’s to be called and leaving the first sentence in the comment.


    sipa commented at 6:11 pm on September 24, 2020:

    Ah, my thinking behind the name was the caller informing TxRequestTracker “Hey I have deleted peer X, you may want to forget about them”.

    DisconnectedPeer sounds better.


    sipa commented at 11:43 pm on September 24, 2020:
    Renamed to DisconnectedPeer.
  36. in src/txrequest.h:143 in 164f504024 outdated
    149+    void DeletedPeer(uint64_t peer);
    150+
    151+    /** Deletes all entries for a given txhash.
    152+     *
    153+     * This should be called when a transaction is successfully added to the mempool, seen in a block, or for
    154+     * whatever reason we no longer care about it. The is_wtxid flag of gtxid is ignored.
    


    instagibbs commented at 4:58 pm on September 24, 2020:
     * whatever reason we no longer care about it. Only the hash field of gtxid is used.
    

    instagibbs commented at 5:01 pm on September 24, 2020:
    Maybe talk about what it’s used for instead of what it’s not, or just change the interface to make it clear.

    sipa commented at 11:44 pm on September 24, 2020:
    Changed the interface to take just a uint256.
  37. in src/txrequest.h:127 in 164f504024 outdated
    133+    //   download after their reqtime has passed.
    134+    //
    135+    // - REQUESTED entries represent transactions that have been requested, and which we're awaiting a response for
    136+    //   from that peer. Their expiry value determines when the request times out.
    137+    //
    138+    // - COMPLETED entries represent transactions that have been requested from a peer, but they timed out, a
    


    instagibbs commented at 5:08 pm on September 24, 2020:

    COMPLETED sounds more like FAILED.

    COMPLETED is also hit when the transaction is received properly, no?


    sipa commented at 6:12 pm on September 24, 2020:
    I’m not sure what you’re suggesting here. FAILED seems like a bad name as it indeed doesn’t imply failure.

    instagibbs commented at 6:22 pm on September 24, 2020:
    apologies, I wrote my comment in two frames of mind. I meant to say the comment doesn’t mention “success” as a possibility to transition to this state. Update the comment and I’m happy.

    sipa commented at 11:44 pm on September 24, 2020:
    Done. Better?

    instagibbs commented at 0:56 am on September 25, 2020:
    yes, thank you
  38. in src/txrequest.h:173 in 164f504024 outdated
    193+     *    to preference/first rules, and tiebreaking using a deterministic salted hash of peer and txhash).
    194+     *  - The selected entries are sorted in order of announcement (even if multiple were added at the same time, or
    195+     *    even when the clock went backwards while they were being added), converted to GenTxids using their
    196+     *    is_wtxid flag, and returned.
    197+     */
    198+    std::vector<GenTxid> GetRequestable(uint64_t peer, std::chrono::microseconds now);
    


    instagibbs commented at 5:20 pm on September 24, 2020:
    std::vector<GenTxid> ExpireRequestedAndGetRequestable(uint64_t peer, std::chrono::microseconds now);
    

    to make it less reliant on comments for intent


    sipa commented at 6:16 pm on September 24, 2020:

    I’m not entirely sure, as the caller really doesn’t care about the fact that it also expires things. It just wants to know what should be requested.

    It’s not an implementation detail, as the effect of expiration is observable (CountInFlight(), CountTracked() and Size() go down), but I think that makes them specification details rather than the caller’s intent.

  39. in src/txrequest.h:201 in 71f551d840 outdated
    207@@ -208,6 +208,15 @@ class TxRequestTracker {
    208 
    209     /** Access to the internal PriorityComputer (for testing) */
    210     const PriorityComputer& GetPriorityComputer() const;
    211+
    212+    /** Run internal consistency check (testing only). */
    


    instagibbs commented at 5:26 pm on September 24, 2020:

    Wouldn’t hurt to make this block not publicly callable somehow.

    Just an idea in case it’s not crazy: https://stackoverflow.com/a/23267346


    jnewbery commented at 5:53 pm on September 24, 2020:
    I’d prefer to move these into a friend TxRequestTrackerTester class (I thought I’d left a review comment suggesting that, but I appear to have lost it).

    sipa commented at 6:17 pm on September 24, 2020:
    @jnewbery How do you suggest to do that in combination with the impl pattern?

    jnewbery commented at 9:15 pm on September 24, 2020:
    Hmmm yes, very good question! I wonder if something along the lines of https://stackoverflow.com/a/34054983/933705 might work (but then again, it might not be worth it just to avoid putting these three test helper functions in the public interface).

    sipa commented at 11:46 pm on September 24, 2020:
    I think it’s not unreasonable to have sanity checking functions for involved data structures in code that actually deals with that data structure. It could possibly be called at runtime using something similar to -checkmempool too, if we wanted.

    jnewbery commented at 6:17 am on September 25, 2020:
    I agree. That seems reasonable.
  40. in src/net_processing.cpp:1233 in 3454757a66 outdated
    1200@@ -1201,6 +1201,13 @@ void PeerManager::BlockConnected(const std::shared_ptr<const CBlock>& pblock, co
    1201             }
    1202         }
    1203     }
    1204+    {
    


    instagibbs commented at 5:32 pm on September 24, 2020:
    tangentially-related nit: Comments above function are getting pretty stale.

    sipa commented at 11:46 pm on September 24, 2020:
    Updated.
  41. sipa force-pushed on Sep 24, 2020
  42. in src/net_processing.cpp:2882 in cb2aa36ac6 outdated
    2875@@ -2869,8 +2876,15 @@ void PeerManager::ProcessMessage(CNode& pfrom, const std::string& msg_type, CDat
    2876 
    2877         TxValidationState state;
    2878 
    2879+        if (wtxid != txid) {
    2880+            // Regardless of whether the transaction we received is valid and acceptable or not,
    2881+            // we don't need the version with this exact witness anymore.
    2882+            m_txrequest.ForgetTx(GenTxid{true, wtxid});
    


    instagibbs commented at 5:37 pm on September 24, 2020:
    The witness-ness of GenTxid doesn’t matter. I find the interface distracting since it needs to be passed in.

    sipa commented at 6:53 pm on September 24, 2020:

    My hope here was that making TxRequestTracker deal solely with GenTxids would simplify the interface. The caller just has these gtxids that it hands to TxRequestTracker, and gets gtxids back. The idea was that the knowledge about when the is_wtxid flag mattered in certain contexts could be restricted to TxRequestTracker. I don’t think this idea really works. This optimization (in “Expedite removal”) specifically breaks it: it’s calling ForgetTx in a situation where we actually don’t have the tx already, relying on the fact that if txid and wtxid differ, it must be a witness transaction, and thus we certainly don’t need other transactions with the same witness.

    I think there are two solutions to it:

    • Add to ForgetTx logic that if a txid is provided, it deletes both txid and wtxid announcements, but if a wtxid is provided only wtxid announcements are deleted. This would avoid some of the complexity here in net_processing, but technically break the amortized O(1) complexity claim - you could call ForgetTx(wtxid) repeatedly, and it would every time consume time proportional to the number of txid announcements with the same hash.
    • Give up on the perfect abstraction, and just make ForgetTx take a txhash (and rename it to ForgetTxHash, I guess).

    sipa commented at 11:50 pm on September 24, 2020:

    Ok, I’ve looked into this more, and I believe the code looks a lot better with ForgetTxHash that just takes a uint256. This means the caller needs to understand the semantics for when calling with a txid/wtxid is fine, but that was the case already - just less obviously so. So I’ve switched the code to that.

    I’ve also removed the “if (txid != wtxid) ForgetTx(wtxid);” optimization. I think it’s equivalent to just calling ForgetTxHash now with whatever is added to mempool/orphanpool/recentrejects, and the latter is far more obviously correct anyway (these things will be treated as AlreadyHaveTx() anyway, so deleting them early isn’t going to change anything).
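
    The resulting interface is roughly (per the description above; the authoritative comment lives in txrequest.h):

     /** Deletes all announcements for the given txhash (txid and wtxid announcements alike).
      *  The caller is responsible for knowing when passing a txid vs a wtxid is appropriate. */
     void ForgetTxHash(const uint256& txhash);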

  43. instagibbs commented at 5:40 pm on September 24, 2020: member

    Need to review the big commit “Change transaction request logic to use txrequest” but dropping some general comments now

    Edit: glanced over that commit, it’s really in the weeds for me to say anything sensible without a lot more review time

    approach ACK at least

  44. sipa force-pushed on Sep 24, 2020
  45. in src/txrequest.h:90 in fd9f5c7653 outdated
    73+ *
    74+ *     Rationale: in non-attack scenarios we want to give one chance to request from the fastest peer to reduce
    75+ *                latency, and reduce risk of breaking chains of dependent transactions. An attacker who races the
    76+ *                network can exploit this to delay us learning about a transaction, but it is available only once
    77+ *                per txhash.
    78+ *
    


    ajtowns commented at 3:23 am on September 25, 2020:

    Maybe “choose the first (non-overloaded) peer to have announced the transaction (managed by the “first” marker, see below)” would be a better description – ie focus on the aim, rather than the mechanism?

    Might make sense to follow the “Rationale: in non-attack scenarios…” with “This is implemented via using a “first” marker, with the following rules:” rather than having those rules be in a separate section a bit lower.


    sipa commented at 10:57 pm on September 25, 2020:
    I’ve merged the “first” marker section into this bulletpoint, and incorporated your suggested text.

    ajtowns commented at 5:17 am on September 26, 2020:

    “The “first” marker is given to announcements for which at the time they are received:” -> “given to announcements at the time they are received, provided:” might be clearer?

    “(within the class of preferred or non-preferred announcements)” – everywhere else refers to preferred peers, not announcements. Might make sense to explicitly clarify that a peer may switch its preferred status at any time, but that preferred status is sticky to announcements so if a peer is now non-preferred earlier announcements when it was preferred will still be prioritised, and vice-versa? I think that scenario is exercised by the fuzzer but not by net_processing?


    sipa commented at 7:41 am on September 27, 2020:

    Done, and addressed the “preferredness” being changeable in the first section.

    You’re right that fuzzer tests this, but isn’t reachable through the current net_processing layer.

  46. in src/txrequest.h:93 in fd9f5c7653 outdated
    88+ *
    89+ *   Rationale: these restrictions avoid giving the speed benefit to honest but overloaded peers, and also
    90+ *              reduce the extent to which attackers that race the network in announcing all transactions
    91+ *              can break chains of dependent transactions.
    92+ *
    93+ * Together these rules strike a balance between being fast in non-adverserial conditions and avoiding influenceable
    


    ajtowns commented at 3:26 am on September 25, 2020:
    “avoiding influenceable” doesn’t make sense?

    sipa commented at 10:58 pm on September 25, 2020:
    It are English perfect.

    ajtowns commented at 4:55 am on September 26, 2020:
    They is now anyhoo
  47. in src/txrequest.cpp:32 in fd9f5c7653 outdated
    27+class TxRequestTracker::Impl {
    28+    //! The various states a (txid,node) pair can be in.
    29+    //! Note that CANDIDATE is split up into 3 substates (DELAYED, BEST, READY), allowing more efficient
    30+    //! implementation. Also note that the sorting order of EntryTxHash relies on the specific order of values in
    31+    //! this enum.
    32+    enum class State : uint8_t {
    


    ajtowns commented at 6:11 am on September 25, 2020:
    Since Impl is now only defined in the cpp file, I think you can reasonably move State and the like outside of the class, ideally sticking it in an anonymous namespace so the internal class methods get internal linkage. Main benefit would be that it’s easier to see what the actual data being put in Impl is, I think.

    sipa commented at 10:58 pm on September 25, 2020:
    I like that. Moved all the type definitions out of ::Impl.
  48. in src/txrequest.cpp:272 in fd9f5c7653 outdated
    267+        peerit->second.m_requested += it->GetState() == State::REQUESTED;
    268+    }
    269+
    270+    //! Convert a CANDIDATE_DELAYED entry into a CANDIDATE_READY. If this makes it the new best CANDIDATE_READY
    271+    //! (and no REQUESTED exists) and better than the CANDIDATE_BEST (if any), it becomes the new CANDIDATE_BEST.
    272+    void PromoteCandidateReady(typename Index::index<ByTxHash>::type::iterator it)
    


    ajtowns commented at 6:22 am on September 25, 2020:
    This isn’t templated, so typename isn’t needed, I think? Making an alias for the iterators in general might be useful: template<typename Tag> using IndexIter = typename Index::index<Tag>::type::iterator; and void PromoteCandidateReady(IndexIter<ByTxHash> it)?

    sipa commented at 10:59 pm on September 25, 2020:

    I’m pretty sure it was needed at some point.

    I’ve added an Iter type alias as suggested (outside of Impl, yay).
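
    That is, roughly (using the alias name sipa mentions adopting):

     //! Alias for an iterator into one of the multi_index views, e.g. Iter<ByTxHash>.
     template<typename Tag>
     using Iter = typename Index::index<Tag>::type::iterator;

     void PromoteCandidateReady(Iter<ByTxHash> it);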

  49. in src/txrequest.cpp:698 in 512e0f9076 outdated
    560+    {
    561+        return m_computer(txhash, peer, preferred, first);
    562+    }
    563+
    564+    void SanityCheck() const
    565+    {
    


    ajtowns commented at 7:28 am on September 25, 2020:

    The comments in txrequest.h are great. I feel like txrequest.cpp could use more though – there’s no real overview of how the optimised data structures are laid out, and what the deeper invariants are. Maybe it would make sense to use the SanityCheck functions as a way of documenting them? Many of the comments are already there and sufficient, but maybe make it the first function in the class, and make it a bit more readable? If you construct the Counts and PeerInfo maps in separate, static GenSanityCheckFoo functions, what’s left seems pretty close to documentation:

     // m_peerinfo can be exactly recalculated from the Index.
     //
     // This verifies the data in it, including the invariant
     // that no entries with m_total_announcements==0 exist.
     assert(m_peerinfo == GenSanityCheckPeerInfo(m_index));

     for (auto& el : GenSanityCheckCounts(m_index, m_computer)) {
         const uint256& hash = el.first;
         Counts& c = el.second;
         // Cannot have only COMPLETED peers (txid should have been deleted)
         assert(c.m_candidate_delayed + c.m_candidate_ready + c.m_candidate_best + c.m_requested > 0);
    

    sipa commented at 10:59 pm on September 25, 2020:
    I haven’t addressed this yet.

    sipa commented at 11:15 pm on September 29, 2020:
    Done. Also included some more comments from your branch linked above.
  50. in src/txrequest.cpp:602 in 512e0f9076 outdated
    597+            uint8_t m_or_all_per_txhash = 0;
    598+        };
    599+
    600+        std::map<uint256, Counts> table;
    601+        for (const auto& a : m_index) {
    602+            auto& entry = table[a.m_txhash];
    


    ajtowns commented at 7:36 am on September 25, 2020:
    for (const Entry& entry : m_index) { Counts& counts = table[entry.m_txhash]; ?

    sipa commented at 11:00 pm on September 25, 2020:
    I’ve changed it to c for Counts objects and e for Entry objects.

    ajtowns commented at 4:58 am on September 26, 2020:
    Similar change would be good for the peerinfo reconstruction; it has auto& entry = for a PeerInfo.
  51. in src/txrequest.cpp:54 in fd9f5c7653 outdated
    49+    /** A flag (in Entry::m_per_txhash) to indicate that for that txhash,
    50+     *  new preferred announcements are not eligible to get the 'first' marker. */
    51+    static constexpr uint8_t TXHASHINFO_NO_MORE_PREFERRED_FIRST = 1;
    52+    /** A flag (in Entry::m_per_txhash) to indicate that for that txhash,
    53+     *  new non-preferred announcements are not eligible to get the 'first' marker. */
    54+    static constexpr uint8_t TXHASHINFO_NO_MORE_NONPREFERRED_FIRST = 2;
    


    ajtowns commented at 8:09 am on September 25, 2020:
    = (1 << 1) ? Maybe I’m dumb but it took me a bit to realise these are getting or’ed together. Seems slightly weird to have m_per_txhash : 2 instead of m_last_preferred : 1 and m_last_nonpreferred : 1 (“If you ain’t first, you’re last” - Ricky Bobby), but I guess it simplifies the code that updates these values enough to make sense.

    sipa commented at 11:00 pm on September 25, 2020:
    At some point I had more flags than 2. I’ve changed it to two bools now (this makes the SanityCheck code for it a lot more readable in particular).
  52. in src/txrequest.cpp:166 in fd9f5c7653 outdated
    161+
    162+        //! Extract the EntryPeer from this Entry.
    163+        EntryPeer ExtractPeer() const { return EntryPeer{m_peer, GetState() == State::CANDIDATE_BEST, m_txhash}; }
    164+
    165+        //! Extract the EntryTxHash from this Entry.
    166+        EntryTxHash ExtractTxid(const PriorityComputer& computer) const
    


    ajtowns commented at 8:38 am on September 25, 2020:
    Might make sense to drop this method and move the logic directly into EntryTxHashExtractor. If done that way, then can move the PriorityComputer definition after Entry and move the Entry::ComputePriority logic into a PriorityComputer::ComputePriority(const Entry&) const method (changing the entry.ComputePriority(computer) calls to computer.ComputePriority(entry) calls). That pretty much lets the definition of Entry be the first thing in the file which seems a lot more logical.

    sipa commented at 11:01 pm on September 25, 2020:
    Yep, done. To match the “functor” concept I’m overloading operator() instead of having a ComputePriority function.
  53. in src/txrequest.cpp:28 in fd9f5c7653 outdated
    23+
    24+static const uint256 UINT256_ZERO;
    25+
    26+/** Actual implementation for TxRequestTracker's data structure. */
    27+class TxRequestTracker::Impl {
    28+    //! The various states a (txid,node) pair can be in.
    


    ajtowns commented at 9:40 am on September 25, 2020:
    (txid,peer)

    sipa commented at 11:02 pm on September 25, 2020:
    (txhash,peer) actually, fixed in many places.
  54. ajtowns commented at 10:01 am on September 25, 2020: member
    Sorry, more nits, still working on getting to the meat. git branch with patches for some of them at https://github.com/ajtowns/bitcoin/tree/202009-txrequest-ideas which might hopefully make it easier to evaluate them.
  55. sipa force-pushed on Sep 25, 2020
  56. sipa force-pushed on Sep 26, 2020
  57. sipa force-pushed on Sep 27, 2020
  58. sipa commented at 4:56 pm on September 27, 2020: member

    I think using .CountInFlight() isn’t actually the right way to determine being overloaded. If a peer dumps 5000 INVs on us at once, they’ll all be inserted at a time when inflight==0, and thus all be eligible to get the “first” marker, and won’t get any delay penalty.

    On the other hand, I think using CountTracked is suboptimal as well, as it includes entries in COMPLETED state as well, which have no bearing on how loaded the peer currently is.

    I think we should use inflight + (entries in CANDIDATE_* state). There is something to be said about excluding CANDIDATE_READY, as those are requests that will most likely go to other peers, but that would complicate specification significantly (as CANDIDATE_READY isn’t observable right now). Also, CANDIDATE_DELAYED is sometimes also subject to this, but it’s too early to tell.
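
    A sketch of that load metric (CountLoad is the name used in the iteration reviewed below, assuming a per-peer CountCandidates() helper alongside the CountInFlight() already discussed; not necessarily the verbatim implementation):

     // "load" = requests currently in flight (REQUESTED) plus announcements still in
     // any CANDIDATE_* state; COMPLETED entries are deliberately excluded, since they
     // say nothing about how busy the peer currently is.
     size_t CountLoad(uint64_t peer) const
     {
         return CountInFlight(peer) + CountCandidates(peer);
     }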

  59. sipa force-pushed on Sep 27, 2020
  60. sipa commented at 6:25 pm on September 27, 2020: member
    I’ve made that change, and re-instated a “max in flight” P2P test.
  61. sipa force-pushed on Sep 27, 2020
  62. sipa force-pushed on Sep 27, 2020
  63. in src/net_processing.cpp:751 in 242146dce0 outdated
    802+    //   - INBOUND_PEER_TX_DELAY for announcements from non-preferred connections
    803+    //   - TXID_RELAY_DELAY for announcements from txid peers while wtxid peers are available
    804+    //   - OVERLOADED_PEER_TX_DELAY for announcements from overloaded peers
    805+    auto delay = std::chrono::microseconds{0};
    806+    bool preferred = state->fPreferredDownload;
    807+    bool overloaded = m_txrequest.CountLoad(nodeid) >= MAX_PEER_TX_LOAD;
    


    ajtowns commented at 0:51 am on September 28, 2020:

    I think using .CountInFlight() isn’t actually the right way to determine being overloaded. If a peer dumps 5000 INVs on us at once, they’ll all be inserted at a time when inflight==0, and thus all be eligible to get the “first” marker, and won’t get any delay penalty.

    I’m not sure the original behaviour didn’t make more sense? If you have a peer that drops 5000 INVs at once, then gets requests for all of them 2s later, and replies to all those requests before sending any further INVs, is there any reason to consider that peer to be overloaded?

    I think we’re not worrying about attackers getting the “first” flag – that’s addressed by limiting the impact to one attacker per txid – but even if we were, I don’t think this makes sense as an attack? You’re blocking 5000 txs for a single 60 second period, but your other peers are only going to announce ~1000 txs each over that period, and you have to successfully race them to be the first to announce the tx for it to matter. But if you’re able to ensure you announce first, then all you need to do is announce first by 2-6 seconds, and you delay a request from honest peers by 54-58 seconds rather than 60 seconds?

    OTOH, I don’t think delaying a huge batch of 4900 txs by (an additional) 2s is much of a problem either.

    Either way, it may make sense for noban or relay permission to override considering a peer to be overloaded?


    sipa commented at 2:44 am on September 28, 2020:

    The reason I don’t like using just in-flight is that it has this strong dependency on local ordering of events.

    If you receive 100 INVs, then do a round of SendMessages, and request them, followed by 4900 of them, they get punished (no first, penalty OVERLOADED_PEER_TX_DELAY).

    If you receive 100 INVs, then 4900 INVs, and then do a round of SendMessages, they don’t get punished.

    I think we’re not worrying about attackers getting the “first” flag

    Only minimally, but I think we are. There are plenty of spy peers that do weird things, and if they announce abundantly, they can interfere somewhat with ordering of dependent transactions (after the first request, they go out randomly, so parents and children may go to different peers, and the parent may arrive first).

    OTOH, I don’t think delaying a huge batch of 4900 txs by (an additional) 2s is much of a problem either.

    Judging by actual numbers, having 100 candidates simultaneously for any non-obviously-broken-peer basically doesn’t happen.

    Either way, it may make sense for noban or relay permission to override considering a peer to be overloaded?

    Yes, that may be a good idea.


    ajtowns commented at 5:14 am on September 28, 2020:

    Only minimally, but I think we are. There are plenty of spy peers that do weird things, and if they announce abundantly, they can interfere somewhat with ordering of dependent transactions (after the first request, they go out randomly, so parents and children may go to different peers, and the parent may arrive first).

    Judging by actual numbers, having 100 candidates simultaneously for any non-obviously-broken-peer basically doesn’t happen.

    Hmm. With the new change you could kind-of make that happen:

    • find a victim who is an inbound connection
    • make up 100 transactions
    • INV them to your victim, but don’t respond to their GETDATA
    • send the 100 transactions to everyone else
    • everyone else will announce to victim
    • all the victim’s peers have 100+ CANDIDATE_* announcements, making all the victim’s peers appear overloaded for the next real transaction that comes along

    (That said, I’m still not really seeing how being falsely marked as “overloaded” is a problem)

    If you receive 100 INVs, then do a round of SendMessages, and request them, followed by 4900 of them, they get punished … If you receive 100 INVs, then 4900 INVs, and then do a round of SendMessages, they don’t get punished.

    Yeah. I’m looking at that as: “If you request 100 INVs, and then receive 4900 more INVs before receiving a response to your request, then the peer is overloaded” and “If all your requests have been responded to and you receive 4900 more INVs, then you don’t have a reason to think the peer is overloaded”. ie, “overloaded” == “am I getting responses, or am I just getting more and more announcements?” I think that means the peer is able to prioritise “respond to GETDATA” above “forward on more INVs” and thus ensure they’re never overloaded, even if they’re announcing txs in massive bursts, no matter what anyone else is doing (except when they do two large INVs in less than the RTT maybe).


    sipa commented at 10:08 pm on September 28, 2020:

    That’s a pretty convincing argument.

    If we assume that getting the “overloaded” treatment is relevant at all, it seems worse that an attacker can cause it to occur in “third party” connections than possibly missing out on getting that effect himself.

    I also don’t see how a large at-once dump can be at all exploited. It means an attacker needs an excessive amount of transactions to begin with. If those are bogus/invalid or self-generated ones, the first marker has no effect as only attacker nodes will have it. If they’re valid under-attack transactions, it means there is something seriously wrong already that such a batch of transactions is being propagated (remember the max tx rate of INVs…), and the first marker isn’t going to change much.

    I’m going to revert this change.


    sipa commented at 1:02 am on September 29, 2020:
    Done.

    sipa commented at 1:29 am on September 29, 2020:

    Either way, it may make sense for noban or relay permission to override considering a peer to be overloaded?

    Also done.

  64. in src/net_processing.cpp:779 in 242146dce0 outdated
    805+    auto delay = std::chrono::microseconds{0};
    806+    bool preferred = state->fPreferredDownload;
    807+    bool overloaded = m_txrequest.CountLoad(nodeid) >= MAX_PEER_TX_LOAD;
    808+    if (!preferred) delay += INBOUND_PEER_TX_DELAY;
    809+    if (!state->m_wtxid_relay && g_wtxid_relay_peers > 0) delay += TXID_RELAY_DELAY;
    810+    if (overloaded) delay += OVERLOADED_PEER_TX_DELAY;
    


    ajtowns commented at 1:20 am on September 28, 2020:

    I wonder if it wouldn’t make more sense to move the overloaded judgement entirely into ReceivedInv and instead pass the peer’s MAX_PEER_TX_LOAD in, and have OVERLOADED_PEER_TX_DELAY passed in when constructing m_txrequest. So ReceivedInv would then do if (CountLoad(peer) > max_load) { first = false; reqtime += m_overloaded_delay; }.

    The reason I say that is I think the “first” marker logic doesn’t really make sense if you don’t bump the reqtime for overloaded peers – you’d end up with the overloaded peer being selected while the first non-overloaded peer was still waiting for its reqtime to arrive.

    The !preferred delay could also be handled the same way. I guess you’d need to pass in current_time and peer_delay = TXID_RELAY_DELAY (when appropriate) (so to speak) for that to work. Again, there needs to be a positive delay for !preferred peers or the logic won’t work right, and while it can vary amongst announcements in limited ways without breaking things, I think it would be hard to do that correctly, so being set at the txrequest level wouldn’t be a big problem.

    That might also have the advantage of being able to pass in delay == 0 rather than reqtime == microseconds::min() which seems a little bit more obvious.


    sipa commented at 2:47 am on September 28, 2020:

    I did this at first, but I like the current approach a lot more. It leaves the actual policy decisions inside net_processing, without needing configuration flags etc. in the txrequest interface.

    Not a very strong opinion, but I like the fact that txrequest is mostly the data structure, and its responsibility is consistency and efficiency - not policy parameters. There are currently a lot of future improvements that can be made without changing txrequest - and that, due to the nature of its current interface, are already covered by fuzz testing.


    ajtowns commented at 8:46 am on September 28, 2020:

    There are currently a lot of future improvements that can be made without changing txrequest - and that, due to the nature of its current interface, are already covered by fuzz testing.

    Kind-of? I would have said the fuzz testing only really tests the optimisations work; it doesn’t really check that the basic implementation makes “sense”?

    eg, we could check the “don’t request dependent transactions out of order” goal by adding a constraint that all peers announce tx i prior to tx j, and then checking that tx i is requested before tx j, but I think that would currently fail in the fuzzer due to random delays (and also fail in net_processing in rare cases, eg where a tx is only announced by one node, j is announced less than 2 seconds after i and the node becomes non-overloaded in that same period, or where all the non-wtxid-relay peers disappear in between).

    I’m trying out some logging to see if the “first” marker actually does much good in practice. We’re calling GetRequestable/RequestedTx ten times a second for every peer (provided various queues don’t fill up anyway), so I think you’d have to be pretty unlucky for it to even matter.

     --- a/src/txrequest.cpp
     +++ b/src/txrequest.cpp
     @@ -528,6 +528,29 @@ public:
              // in between, which preserve the state of other GenTxids).
              assert(it != m_index.get<ByPeer>().end());
              assert(it->GetState() == State::CANDIDATE_BEST);
     +        {
     +            auto bytxhashit = m_index.project<ByTxHash>(it);
     +            auto bytxhashit_pre = bytxhashit;
     +            int delayed = 0;
     +            int candidates = 0;
     +            while (bytxhashit_pre != m_index.get<ByTxHash>().begin()) {
     +                --bytxhashit_pre;
     +                if (bytxhashit_pre->GetState() != State::CANDIDATE_DELAYED || bytxhashit_pre->m_txhash != it->m_txhash) break;
     +                ++delayed;
     +            }
     +            ++bytxhashit;
     +            while (bytxhashit != m_index.get<ByTxHash>().end() && bytxhashit->GetState() != State::COMPLETED && bytxhashit->m_txhash == it->m_txhash) {
     +                ++candidates;
     +                ++bytxhashit;
     +            }
     +            int completed = 0;
     +            while (bytxhashit != m_index.get<ByTxHash>().end() && bytxhashit->GetState() == State::COMPLETED && bytxhashit->m_txhash == it->m_txhash) {
     +                ++completed;
     +                ++bytxhashit;
     +            }
     +            assert(bytxhashit == m_index.get<ByTxHash>().end() || bytxhashit->m_txhash != it->m_txhash);
     +            LogPrintf("ABCD requested txid=%s preferred=%d first=%d delayed=%d candidates=%d completed=%d peer=%d\n", it->m_txhash.ToString(), it->m_preferred, it->m_first, delayed, candidates, completed, it->m_peer);
     +        }
              Modify<ByPeer>(it, [expiry](Entry& entry) {
                  entry.SetState(State::REQUESTED);
                  entry.m_time = expiry;
    

    After running it for a few minutes with ~20 peers, only 0.17% of requests actually had other announcements in CANDIDATE_READY, so for 99.82% of txs the first marker didn’t make any difference. About 1.6% of requests weren’t for a “first” marked announcement (a tor outbound managed to overload itself I think). Will let it go ’til it fills up the disk with logs or whatever, and see what it looks like then.


    sipa commented at 1:18 am on September 29, 2020:

    Kind-of? I would have said the fuzz testing only really tests the optimisations work; it doesn’t really check that the basic implementation makes “sense”?

    Yeah, kind of.

    The fuzz test only verifies consistency of the optimized data structure, and observational equivalence with the naive reimplementation. That naive reimplementation is hopefully close enough to the written specification in txrequest.h that we can say the test actually verifies correctness according to the specification. It however does not test whether that specification actually results in desirable high-level behavior. That’s what the scenario unit tests are for, but they’re not nearly as detailed.

    Still, as far as the specification goes, by keeping the policy out of txrequest and its tests, there is evidence that the data structure correctly implements it, and this doesn’t depend on coincidences like “non-preferred requests always have a reqtime in the future” that are currently true, but maybe won’t always be true. The interface doesn’t force such relations, so the fuzz tests should be covering both situations where they do and don’t hold.
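
    For illustration, a minimal sketch of that observational-equivalence pattern (not the actual fuzz harness; NaiveTracker, DecodeOperation and ApplyOperation are hypothetical helpers, and the GetRequestable signature is simplified):

        void FuzzOneInput(FuzzedDataProvider& provider)
        {
            TxRequestTracker real;   // the optimized multi_index-based implementation
            NaiveTracker naive;      // straightforward reimplementation of the txrequest.h spec
            while (provider.remaining_bytes() > 0) {
                // Decode one operation (ReceivedInv, RequestedTx, DisconnectedPeer, ...)
                // from the fuzz input, and apply it to both implementations.
                const auto op = DecodeOperation(provider);
                ApplyOperation(real, op);
                ApplyOperation(naive, op);
                // The observable output must match, whatever the internal representation.
                assert(real.GetRequestable(op.peer, op.time) == naive.GetRequestable(op.peer, op.time));
            }
            real.SanityCheck(); // internal consistency of the optimized data structure
        }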

    Furthermore, I’m not sure there is much of a simplification in the interface that can be made by moving more into txrequest?

    • Right now: ReceivedInv(peer, gtxid, preferred, overloaded, reqtime).
    • Potentially: ReceivedInv(peer, gtxid, preferred, have_wtxid_peers, current_time).
    • With the addition of bypassing overloaded under PF_RELAY, you’d even need to pass an additional pf_relay_peer argument.

    The fact that this last change could be made by just changing the parameters to ReceivedInv and not changing the interface - or the fuzz tests - is an argument in favor of it, IMO.

    Several potential changes, like making delays dynamic as a function of load/in-flight count, or adding randomness to them, are equally possible without impacting txrequest.


    ajtowns commented at 3:44 am on September 29, 2020:

    Yeah, agree with what you wrote. (Well, I was thinking you’d call ReceivedInv(peer, gtxid, preferred, max_load, current_time) and pass max_load=max() to bypass overloaded, fwiw)

    I’m more thinking of the API in terms of what the caller has to do to ensure the txrequest module behaves “sensibly” – avoids requesting txs out of order, avoids double-requesting a tx, etc. If there’s a way to make txrequest behave “sensibly” no matter how you call the API, that might be better than just documenting the expectations, or the consequences of violating the expectations. But I don’t fully understand what sensibly means or what restrictions that implies yet.


    sipa commented at 4:06 am on September 29, 2020:

    I see what you mean now.

    There is no strict reason why we couldn’t have both. There could be a class around TxRequestTracker that adds the policy and sanity checking of inputs, and just forwards to the inner TxRequestTracker. That would let the inner part still be tested for consistency, while permitting testing of the higher level construct for higher level “goal” properties.

    As the logic in that outer layer would just be (part of) what is now in TxRequest(), I don’t think this is worth it - but maybe at some point it is.
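
    A rough sketch of what such an outer layer could look like, reusing the delay computation quoted from net_processing elsewhere in this PR (the wrapper class itself is hypothetical):

        // Hypothetical policy wrapper: turns peer properties into (preferred, overloaded,
        // reqtime) and forwards to the inner TxRequestTracker, which remains a pure
        // data structure that can be fuzzed for consistency on its own.
        class PolicyTxRequestTracker {
            TxRequestTracker m_inner;
        public:
            void ReceivedInv(uint64_t peer, const GenTxid& gtxid, bool preferred,
                             bool have_wtxid_peers, bool pf_relay, std::chrono::microseconds now)
            {
                auto delay = std::chrono::microseconds{0};
                if (!preferred) delay += NONPREF_PEER_TX_DELAY;
                if (!gtxid.IsWtxid() && have_wtxid_peers) delay += TXID_RELAY_DELAY;
                const bool overloaded = !pf_relay && m_inner.CountInFlight(peer) >= MAX_PEER_TX_IN_FLIGHT;
                if (overloaded) delay += OVERLOADED_PEER_TX_DELAY;
                m_inner.ReceivedInv(peer, gtxid, preferred, overloaded, now + delay);
            }
            // GetRequestable, RequestedTx, DisconnectedPeer, ... would forward unchanged.
        };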


    ajtowns commented at 4:33 am on September 29, 2020:
    Yeah. I don’t think a wrapper would really be an improvement on documenting the expectations, and agree that sticking fixed params in txrequest is really too restrictive. Going to mark this thread as resolved, but still continue thinking about it.
  65. sipa force-pushed on Sep 28, 2020
  66. ajtowns commented at 1:56 am on September 28, 2020: member
    Thread for the updated overloaded checking :)
  67. sipa force-pushed on Sep 28, 2020
  68. sr-gi commented at 3:06 pm on September 28, 2020: none
    Started having a look at this; hope I’m not late to the party.
  69. sipa force-pushed on Sep 29, 2020
  70. in src/txrequest.h:74 in 9e23f6e106 outdated
    69+ *   - If any preferred peers are available, non-preferred peers are not considered for what follows.
    70+ *
    71+ *     Rationale: preferred peers (outbound, whitelisted) are chosen by us, so are less likely to be under attacker
    72+ *                control.
    73+ *
    74+ *   - Among the remaining candidates, choose the first (non-overloaded) peer to have announced the transaction,
    


    ajtowns commented at 4:27 am on September 29, 2020:

    Okay, so I’ve run some stats on this now. Results (for me) after just under 230k calls to RequestedTx are:

    • in 77.9% of cases, it’s requesting the tx from a preferred peer, with no alternative (non-delayed) candidates
    • in 18.2% of cases, it’s requesting the tx from a non-preferred peer, with no alternative (n-d) candidates
    • in 1.3% of cases, it’s requesting the tx from a preferred peer, with the only (n-d) alternative candidates being non-preferred peers
    • in 1.1% of cases, it’s requesting the tx from a preferred peer using the first marker, and other preferred candidates are available (!)
    • in 1.0% of cases, it’s requesting the tx from a non-preferred peer using the first marker, with other non-preferred candidates available (and no preferred candidates available) (!)
    • in 0.2% of cases, it’s requesting the tx from a non-preferred peer using random selection
    • in 0.02% of cases, it’s requesting the tx from a preferred peer using random selection

    (The remaining ~0.3% I didn’t sort through)

    This implies that the first marker is only relevant in ~2.1% of cases – the rest of the time it’s determined by the first peer’s reqtime being reached at a call to GetRequestable() prior to any other announcement for that tx. That occurs because (unless I’ve missed something) we’re calling GetRequestable() for each peer about ten times per second (100ms delay in CConnman::ThreadMessageHandler between calls to SendMessages(pnode)), which means the “first” marker only matters for preferred nodes when the first two preferred nodes to announce a tx manage to both set their reqtimes in the same ~100ms period and thus get activated in the same call to GetRequestable.

    I think what the “first” marker is actually doing is slightly different to what I was kind-of expecting. ie, “reqtime” and “preferredness” are almost always what’s used to actually choose which announcement to request first, and the “first” marker is a backup. It’s only relevant when (a) txrequest is used less frequently than every 100ms via some other implementation, (b) the 100ms delay is extended, eg due to cs_main being held while adding a new block or something, or (c) there’s a race and we want a consistent tie-breaker.

    I haven’t instrumented it, but I’m not convinced this does much for dependent transactions. Assuming B spends an output of A, and peers P and Q both announce A prior to B, you could still see announcements as {Q:A, P:A, P:B} then call GetRequestable(P) and have A assigned to Q and B assigned to P, and requested immediately, with A requested from Q moments later. But if that’s not a big problem, what’s the advantage in locking A to Q via the “first” marker?

    As it stands, I want to argue that the “first” marker is an unnecessary complication, and maybe we should remove it. But I’m not really convinced – for one, that claim relies on calling GetRequestable very frequently, and maybe we want the flexibility to not do that? But if we called it for a given peer once a second instead of 10x a second, that adds another 0.5s delay on average to requesting a tx, which is pretty significant compared to the existing delays, so maybe relying on calling GetRequestable frequently is perfectly reasonable?


    sipa commented at 6:44 am on September 29, 2020:

    That’s great data, and perhaps not very surprising: we’d hope to pick good peers as outbounds, so most announcements will come in from preferred peers (and even more within the first 2s after the first announcement from inbound connections). As we fetch immediately from preferred connections in general, the “first” marker shouldn’t do much.

    Given that information, perhaps I wouldn’t have added this to the first iteration of this logic. Now that it’s implemented, I could argue that it may improve robustness in some cases:

    • During the rollout of BIP339, we can expect to see many more transactions fetched from delayed (but possibly preferred) peers, as just one wtxid peer will cause a delay on all announcements from other peers. It’s hard to predict/measure what the impact of this will be.
    • I can imagine we’d add (possibly random) delays even to fetches from preferred peers, for privacy reasons.
    • Very well connected nodes (with way more inbound than outbound connections) may see a larger percentage fetched from non-preferred connections.

    ajtowns commented at 7:54 am on September 29, 2020:

    I think all those cases would just change the distribution between the cases where the “first” marker doesn’t matter – ie they’re entirely determined by reqtime and preferredness.

    I do think it might improve robustness somewhere though…

    Also, if, in the future, we changed the code to either update current_time less, or to target specific reqtimes (“anything between t=0.0 and t=0.499 will be requested at t=1.234”), it might start mattering

    During the rollout of BIP339, we can expect to see many more transactions fetched from delayed (but possibly preferred) peers, as just one wtxid peer will cause a delay on all announcements from other peers.

    That’s a good point! I’ve restarted my node since the above numbers, and don’t have any logging as to whether any of them were wtxidrelay. However post-restart, I have two inbound wtxidrelay peers, so should already be suffering from BIP339 rollout. And I guess that means all my preferred nodes should be degraded to the same priority as the non-preferred wtxid nodes, which in turn probably explains why I got such a high percentage of non-preferred announcements as the very first request for a txid.

    Yeah, collecting RequestedTx calls by peerid and whether they were preferred or not gives me:

    • 1049 preferred=0 peer=2717
    • 1500 preferred=0 peer=3993
    • 3255 preferred=0 peer=3815
    • 3270 preferred=1 peer=13
    • 14216 preferred=1 peer=15
    • 14651 preferred=1 peer=14
    • 16918 preferred=1 peer=11
    • 18635 preferred=1 peer=2
    • 30675 preferred=0 peer=1163
    • 37776 preferred=1 peer=1
    • 39962 preferred=1 peer=16
    • 52112 preferred=1 peer=0

    (ignoring peers with under 1k requests). Which looks to me like the top 9 peers include the 8 outbounds I connected to initially, and a random inbound wtxidrelay peer which connected about 3h after my peer was restarted. That single node accounts for about 2/3rds of the “18.2% of cases” by the looks.

    So I might disable wtxidrelay, and restart my node again, I think, which ironically seems like the easiest way to simulate an all-wtxid-relay world…


    ajtowns commented at 11:53 pm on September 29, 2020:

    Ok, updated stats based on 257k RequestedTx calls with wtxidrelay disabled (added a false && to the ifs in net_processing).

    I’m seeing 98.04% of requests being trivial cases – there are no alternatives that have hit reqtime, and this is the first request of a tx. Despite having lots of (up to ~50?) inbounds, it’s still choosing outbounds for almost all txs.

    Preferredness only mattered in 0.67% of cases, and only in 0.59% of cases did it matter for the first request.

    “First” marker only mattered in 0.77% of cases, and only ever for choosing between two non-preferred peers; though it was at least (almost) always for the first actual request for the txid.

    FWIW, ignoring the first hour, I got 520 “accepted orphan tx” messages. I’m running this node with mempool expiry set to 24h so that might be a bit high.

    86.59% 222556 ABCD  requested  preferred=1  first=1  candidates=[-,-]  completed=[-,-]
    11.45%  29429 ABCD  requested  preferred=0  first=1  candidates=[-,-]  completed=[-,-]
     0.18%    455 ABCD  requested  preferred=1  first=0  candidates=[-,-]  completed=[Y,-]
     0.09%    222 ABCD  requested  preferred=0  first=1  candidates=[-,-]  completed=[Y,-]
     0.08%    201 ABCD  requested  preferred=0  first=0  candidates=[-,-]  completed=[-,-]
     0.05%    127 ABCD  requested  preferred=0  first=0  candidates=[-,-]  completed=[Y,Y]
     0.04%    111 ABCD  requested  preferred=1  first=0  candidates=[-,-]  completed=[Y,Y]
     0.03%     79 ABCD  requested  preferred=0  first=0  candidates=[-,-]  completed=[Y,-]
     0.01%     17 ABCD  requested  preferred=1  first=0  candidates=[-,-]  completed=[-,Y]
     0.01%     15 ABCD  requested  preferred=0  first=0  candidates=[-,-]  completed=[-,Y]

     0.59%   1524 ABCD  requested  preferred=1  first=1  candidates=[-,Y]  completed=[-,-]
     0.02%     64 ABCD  requested  preferred=1  first=0  candidates=[Y,Y]  completed=[-,Y]
     0.02%     51 ABCD  requested  preferred=1  first=0  candidates=[Y,Y]  completed=[Y,Y]
     0.02%     50 ABCD  requested  preferred=1  first=0  candidates=[-,Y]  completed=[Y,-]
     0.01%     38 ABCD  requested  preferred=1  first=0  candidates=[-,Y]  completed=[Y,Y]
     0.01%     32 ABCD  requested  preferred=1  first=0  candidates=[Y,-]  completed=[Y,-]
     0.01%     24 ABCD  requested  preferred=1  first=0  candidates=[Y,-]  completed=[Y,Y]
     0.00%     10 ABCD  requested  preferred=1  first=0  candidates=[Y,Y]  completed=[Y,-]
     0.00%      1 ABCD  requested  preferred=1  first=0  candidates=[Y,-]  completed=[-,Y]
     0.00%      1 ABCD  requested  preferred=1  first=0  candidates=[-,Y]  completed=[-,Y]

     0.77%   1978 ABCD  requested  preferred=0  first=1  candidates=[-,Y]  completed=[-,-]
     0.01%     20 ABCD  requested  preferred=0  first=0  candidates=[-,Y]  completed=[Y,Y]
     0.00%      8 ABCD  requested  preferred=0  first=1  candidates=[-,Y]  completed=[Y,-]
     0.00%      2 ABCD  requested  preferred=0  first=0  candidates=[-,Y]  completed=[Y,-]
     0.00%      1 ABCD  requested  preferred=0  first=0  candidates=[-,Y]  completed=[-,Y]
    

    Just considering the first request for a tx (ie, candidates=[-,-] completed=[-,-] cases) the number of announcements for the tx in CANDIDATE_DELAYED was distributed something like:

    48839 ABCD requested preferred=1 delayed=0
    37457 ABCD requested preferred=1 delayed=1
    32913 ABCD requested preferred=1 delayed=2
    27587 ABCD requested preferred=1 delayed=3
    22798 ABCD requested preferred=1 delayed=4
    17525 ABCD requested preferred=1 delayed=5
    12564 ABCD requested preferred=1 delayed=6
     8071 ABCD requested preferred=1 delayed=7
     5148 ABCD requested preferred=1 delayed=8
     2965 ABCD requested preferred=1 delayed=9
     1822 ABCD requested preferred=1 delayed=10

     4437 ABCD requested preferred=0 delayed=0
     2025 ABCD requested preferred=0 delayed=5
     1981 ABCD requested preferred=0 delayed=6
     1780 ABCD requested preferred=0 delayed=4
    

    which looks pretty reasonable, I think. (I didn’t break the delayed count into preferred/non-preferred, so the delayed= figure should include both inbounds and any overloaded outbounds)

    I wasn’t quite as lucky with my outbounds staying around the entire time by the looks, but got a much more even-looking distribution of requests between the different preferences:

     1048 ABCD requested preferred=0 peer=2013
     1294 ABCD requested preferred=0 peer=4076
     1359 ABCD requested preferred=0 peer=4096
     1725 ABCD requested preferred=0 peer=235
     2193 ABCD requested preferred=0 peer=3370
     2630 ABCD requested preferred=0 peer=1950
     2694 ABCD requested preferred=0 peer=2304
     2843 ABCD requested preferred=0 peer=2095
     3618 ABCD requested preferred=0 peer=1853

        9 ABCD requested preferred=1 peer=6529
       21 ABCD requested preferred=1 peer=5853
       25 ABCD requested preferred=1 peer=1737
       39 ABCD requested preferred=1 peer=1624
       44 ABCD requested preferred=1 peer=30
       57 ABCD requested preferred=1 peer=644
      853 ABCD requested preferred=1 peer=22
     2312 ABCD requested preferred=1 peer=11
     5470 ABCD requested preferred=1 peer=202
    13974 ABCD requested preferred=1 peer=0
    14062 ABCD requested preferred=1 peer=2238
    14427 ABCD requested preferred=1 peer=14
    16520 ABCD requested preferred=1 peer=16
    23714 ABCD requested preferred=1 peer=3385
    37178 ABCD requested preferred=1 peer=24
    46702 ABCD requested preferred=1 peer=29
    49527 ABCD requested preferred=1 peer=44

    (ignoring non-preferred peers that didn’t have >1000 requests)

    EDIT: I’m guessing the lack of “used the first marker to tie-break between two preferred peers” cases means that none of my preferred peers were overloaded for any length of time, and always had reqtime=min(), and thus the ThreadMessageHandler loop would see the INV and immediately request it in a single iteration with no chance to consider any other peer, no matter how simultaneous the announcements were.

    The previous 1.1% figure was due to the wtxidrelay delay causing all my preferred peers to have a 2s delay, so that if the outbounds were within 100ms there’d be a tie-breaker, ie they’d get added to the index, then both their reqtimes would pass while the thread was sleeping, and both would progress to READY/BEST in a single GetRequestable call.


    ajtowns commented at 3:31 am on September 30, 2020:

    Am currently running a test to see if removing “first” markers has a noticeable effect on orphans (or anything else), but my current feeling is that the benefits of “first” markers don’t justify the complexity. I’ve had a quick go at pulling the feature out into its own commit at https://github.com/ajtowns/bitcoin/commits/202009-txoverhaul-sep-first to see how much complexity that is, fwiw.

    I think the robustness that it adds is preserving the “first announcer” when multiple candidate announcements become ready at the same time. At present it only does that when their reqtimes were very similar from the word go, relative to how often GetRequestable is called.

    One thing that might be worth considering is extending that towards a firmer guarantee, eg if a later, equally-preferred peer’s announcement ends up with an earlier reqtime (due to time going backwards or randomized delays, maybe), automatically adjust the “first” peer’s announcement’s reqtime to the same figure. (That would make a difference even with the current net_processing code in this PR, I think. Namely in the case where a non-wtxid-relay peer announces txid X, then a wtxid-relay node announces wtxid X, ie X has no witness – it’d bump the reqtime of the original peer’s announcement at that point, rather than making it wait for the full TXID_RELAY_DELAY.)

    Even if that’s a good idea though, I’m not really convinced it makes sense to do it now, rather than later, together with code that actually randomizes times and thus makes it relevant, so we can see if it covers all the cases that actually make sense. eg, we’d probably want to bump the reqtime of the “first” preferred announcement upon receipt of a non-preferred announcement with an earlier reqtime, but probably not the reverse…
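
    A sketch of that bumping idea (hypothetical, ignoring the multi_index Modify machinery; MaybeBumpFirst is an invented name):

        // On receiving a new announcement for a txhash, pull the "first"-marked
        // announcement's reqtime forward so it can't be overtaken by an equally-or-less
        // preferred newcomer (eg due to clock skew or randomized delays).
        void MaybeBumpFirst(Entry& first, const Entry& newcomer)
        {
            if (newcomer.m_preferred <= first.m_preferred && newcomer.m_time < first.m_time) {
                first.m_time = newcomer.m_time; // both are still CANDIDATE_DELAYED reqtimes
            }
        }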


    sipa commented at 8:31 pm on September 30, 2020:
    Done, I’ve changed this PR to drop the “first” marker logic, based on your branch + adjusting fuzz tester and some extra comments. The commit to add it back is here: https://github.com/sipa/bitcoin/commits/202009_txrequest_rand_wtxid_first

    ajtowns commented at 4:28 am on October 1, 2020:

    (replying to resolved thread without marking it unresolved)

    With the “first” marker disabled (essentially treating every peer as overloaded), I got:

    324484 ABCD  requested  preferred=1  first=0  candidates=[-,-]  completed=[-,-] -- 81.7%
     61457 ABCD  requested  preferred=0  first=0  candidates=[-,-]  completed=[-,-] -- 15.5%
       599 ABCD  requested  preferred=0  first=0  candidates=[-,-]  completed=[Y,Y]
       122 ABCD  requested  preferred=0  first=0  candidates=[-,-]  completed=[-,Y]
       110 ABCD  requested  preferred=0  first=0  candidates=[-,-]  completed=[Y,-]
        87 ABCD  requested  preferred=1  first=0  candidates=[-,-]  completed=[Y,Y]
        60 ABCD  requested  preferred=1  first=0  candidates=[-,-]  completed=[Y,-]
        13 ABCD  requested  preferred=1  first=0  candidates=[-,-]  completed=[-,Y]

      2949 ABCD  requested  preferred=1  first=0  candidates=[-,Y]  completed=[-,-] -- 0.7%
       543 ABCD  requested  preferred=1  first=0  candidates=[-,Y]  completed=[Y,Y]
       214 ABCD  requested  preferred=1  first=0  candidates=[Y,Y]  completed=[Y,Y]
        51 ABCD  requested  preferred=1  first=0  candidates=[Y,Y]  completed=[-,Y]
        24 ABCD  requested  preferred=1  first=0  candidates=[-,Y]  completed=[-,Y]
         5 ABCD  requested  preferred=1  first=0  candidates=[Y,Y]  completed=[Y,-]
         5 ABCD  requested  preferred=1  first=0  candidates=[-,Y]  completed=[Y,-]
         4 ABCD  requested  preferred=1  first=0  candidates=[Y,-]  completed=[Y,Y]
         3 ABCD  requested  preferred=1  first=0  candidates=[Y,-]  completed=[Y,-]
         1 ABCD  requested  preferred=1  first=0  candidates=[Y,-]  completed=[-,Y]

      3285 ABCD  requested  preferred=0  first=0  candidates=[-,Y]  completed=[-,-] -- 0.8%
      2910 ABCD  requested  preferred=0  first=0  candidates=[-,Y]  completed=[Y,Y] -- 0.7%
       107 ABCD  requested  preferred=0  first=0  candidates=[-,Y]  completed=[-,Y]
         8 ABCD  requested  preferred=0  first=0  candidates=[-,Y]  completed=[Y,-]
    

    I got 561 “accepted orphan tx” entries (ignoring the first hour), compared to 520 last time, which is an 8% increase in orphans while running for about 90% longer and doing 54% more requests; so it seems like any differences in behaviour are lost in the noise.

    Looking at the per-peer behaviour, one non-preferred peer got a lot of requests:

     1088 ABCD requested preferred=0 peer=8236
     1245 ABCD requested preferred=0 peer=7422
     1470 ABCD requested preferred=0 peer=7219
     1486 ABCD requested preferred=0 peer=7712
     1540 ABCD requested preferred=0 peer=7342
    13124 ABCD requested preferred=0 peer=53
    22367 ABCD requested preferred=0 peer=12516

      159 ABCD requested preferred=1 peer=20
      166 ABCD requested preferred=1 peer=7
      746 ABCD requested preferred=1 peer=6
    16624 ABCD requested preferred=1 peer=31
    22917 ABCD requested preferred=1 peer=122
    25830 ABCD requested preferred=1 peer=24
    54666 ABCD requested preferred=1 peer=30
    55279 ABCD requested preferred=1 peer=27
    72306 ABCD requested preferred=1 peer=29
    79637 ABCD requested preferred=1 peer=23
    

    (ignores preferred peers that got less than 100 requests, and non-preferred peers that got less than 1000).

    For peer=12516, almost every request (21625) was when it was the only candidate and no other peer had already been tried for the tx. It looks like it just happened to be fastest most of the time, with plenty of other inbounds (presumably) having already announced when we actually make the request:

       1 delayed=114
    ...
       7 delayed=60
      12 delayed=59
      12 delayed=58
    ...
     744 delayed=12
     904 delayed=11
    1025 delayed=10
    1146 delayed=9
    1252 delayed=8
    1389 delayed=7
    1448 delayed=6
    1335 delayed=5
    1354 delayed=4
    1096 delayed=3
     706 delayed=2
     420 delayed=1
     193 delayed=0
    

    Results for peer=53 look pretty similar to me, though it had a higher proportion of announcements where delayed=0 or delayed=1.

    So I think that just means it’s showing a heavy bias towards fast peers/first announcers? The bias part is fine and desirable, but maybe it would be good to reduce the heaviness in future by mildly varying peers’ delays, either randomly or based on load in some way? Something to worry about in a followup.
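
    A sketch of what such mild variation could look like on the net_processing side (illustrative only; the 100ms bound and the local RNG are not from this PR):

        #include <chrono>
        #include <random>

        // Add up to 100ms of uniform random jitter per announcement, so that no single
        // fast peer deterministically wins every first request.
        std::chrono::microseconds RandomJitter()
        {
            static std::mt19937_64 rng{std::random_device{}()};
            std::uniform_int_distribution<int64_t> dist(0, 100'000); // 0..100ms, in usec
            return std::chrono::microseconds{dist(rng)};
        }

        // ... and when computing the request time:
        // auto reqtime = current_time + delay + RandomJitter();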

  71. in src/net_processing.cpp:753 in 9e23f6e106 outdated
    807+    //   - OVERLOADED_PEER_TX_DELAY for announcements from overloaded peers
    808+    auto delay = std::chrono::microseconds{0};
    809+    bool preferred = state->fPreferredDownload;
    810+    bool overloaded = !node.HasPermission(PF_RELAY) && m_txrequest.CountInFlight(nodeid) >= MAX_PEER_TX_IN_FLIGHT;
    811+    if (!preferred) delay += INBOUND_PEER_TX_DELAY;
    812+    if (!state->m_wtxid_relay && g_wtxid_relay_peers > 0) delay += TXID_RELAY_DELAY;
    


    ajtowns commented at 5:12 am on September 29, 2020:
    Not 100% on topic for this PR I guess, but if (!gtxid.m_is_wtxid && g_wtxid_relay_peers > 0) perhaps? That way we delay requesting a parent by txid when handling orphans, in case we might have already been requesting the parent by wtxid from some peer (even this one, if they were announced/requested/replied out of order for some reason).

    sipa commented at 7:43 pm on September 29, 2020:

    Good point; I think this was overlooked in #19569.

    Fixed in a separate commit.

  72. DrahtBot added the label Needs rebase on Sep 29, 2020
  73. sipa force-pushed on Sep 29, 2020
  74. DrahtBot removed the label Needs rebase on Sep 29, 2020
  75. sipa force-pushed on Sep 29, 2020
  76. sipa force-pushed on Sep 29, 2020
  77. in src/txrequest.h:29 in a5a80aab13 outdated
    24+ * - Which peer announced it (through their NodeId)
    25+ * - The txid or wtxid of the transaction (collectively called "txhash" in what follows)
    26+ * - Whether it was a tx or wtx announcement (see BIP339).
    27+ * - What the earliest permitted time is that that transaction can be requested from that peer (called "reqtime").
    28+ * - Whether it's from a "preferred" peer or not. Which announcements get this flag is determined by the caller, but
    29+ *   this is designed for outbound peers, or other peers that we have a higher level of trust in). Even when the
    


    jnewbery commented at 1:42 pm on September 30, 2020:
    Unmatched ) parse error

    sipa commented at 8:31 pm on September 30, 2020:
    Fixed.
  78. in src/txrequest.h:50 in a5a80aab13 outdated
    45+ *   Rationale: giving a peer multiple chances to announce a transaction would allow them to bias requests in their
    46+ *              favor, worsening invblock attacks. The flip side is that as long as an attacker manages to prevent
    47+ *              us from receiving a transaction, failed announcements (including those from honest peers) will
    48+ *              linger longer, increasing memory usage somewhat. The impact of this is limited by imposing a cap on
    49+ *              the number of tracked announcements per peer.
    50+ *              Invblocking is the practice of announcing transactions but not answer requests for them, in order
    


    jnewbery commented at 2:32 pm on September 30, 2020:
    Maybe personal preference, but I really don’t like the invblock terminology (for this reason: #19988 (review)) and would prefer not to introduce it to the Bitcoin Core codebase.

    sipa commented at 8:31 pm on September 30, 2020:
    I’ve replaced it with “transaction censorship attacks”.
  79. in src/txrequest.h:59 in a5a80aab13 outdated
    54+ * - Announcements are only forgotten about when the peer that announced them went offline, when the transaction
    55+ *   was received successfully, or when no candidates for a transaction remain that haven't been tried already.
    56+ *
    57+ *   Rationale: we need to eventually forget announcements to keep memory bounded, but as long as viable
    58+ *              candidate peers remain, we prefer to avoid fetching from failed ones. As every request has a finite
    59+ *              timeout and we schedule new request as soon a previous one expired, there is always progress being
    


    jnewbery commented at 2:35 pm on September 30, 2020:
    I think this depends on your definition of progress? If all of your peers are announcing but not relaying transactions, then it doesn’t matter how many times you request the tx, you’re never making progress.

    sipa commented at 7:25 pm on September 30, 2020:
    “progress towards forgetting a transaction” is unambiguous I think? Every minute one peer is crossed off the list of candidates, so eventually you will run out. Note that this section isn’t about successfully receiving the transaction, but bounding memory usage.

    sipa commented at 7:35 pm on September 30, 2020:
    Actually, thinking more about this, it isn’t exactly true, because for non-preferred connections we should assume the attacker can disconnect and reconnect, giving them a new opportunity. I think this should be restated as: an attacker can force us to keep a transaction in memory (even in the queues of honest peers) for as long as they can prevent us from receiving the transaction. That’s not necessarily bounded, but in cases where it isn’t, we have bigger problems. Actual OOM is prevented using per-peer queue size limits.

    sipa commented at 8:32 pm on September 30, 2020:
    I’ve adjusted the comments, merging this paragraph into the previous one (they were already kind of circularly referring to each other). Also changed the formula for delay to take this potential for reconnection behavior into account.
  80. in src/txrequest.h:64 in a5a80aab13 outdated
    59+ *              timeout and we schedule new request as soon a previous one expired, there is always progress being
    60+ *              made towards forgetting a transaction - either successfully or unsuccessfully.
    61+ *
    62+ * - Transactions are not requested from a peer until its reqtime has passed.
    63+ *
    64+ *   Rationale: enable net_processing code to define a delay for less-than-ideal peers, so that (presumed) better
    


    jnewbery commented at 2:36 pm on September 30, 2020:
    change ’net_processing’ to ‘caller’

    sipa commented at 8:33 pm on September 30, 2020:
    Done.
  81. in src/txrequest.h:71 in a5a80aab13 outdated
    66+ *
    67+ * - If multiple viable candidate peers exist according to the above rules, pick a peer as follows:
    68+ *
    69+ *   - If any preferred peers are available, non-preferred peers are not considered for what follows.
    70+ *
    71+ *     Rationale: preferred peers (outbound, whitelisted) are chosen by us, so are less likely to be under attacker
    


    jnewbery commented at 2:40 pm on September 30, 2020:
    remove “(outbound, whitelisted)”. The caller chooses which peers are preferred, so it’s best to avoid what we expect the caller to do here.

    sipa commented at 8:33 pm on September 30, 2020:
    Done.
  82. in src/txrequest.h:77 in a5a80aab13 outdated
    72+ *                control.
    73+ *
    74+ *   - Among the remaining candidates, choose the first (non-overloaded) peer to have announced the transaction,
    75+ *     if that one is still a candidate. This is done using a "first" marker that is added to announcements, which
    76+ *     prioritizes an announcement over all others (within the class of preferred or non-preferred announcements).
    77+ *     The "first" marker is given to announcements at the time they are received, provided:
    


    jnewbery commented at 2:45 pm on September 30, 2020:

    Perhaps reword this to be singular i.e. “..marker is given to an announcement at the time it is received … The peer that announced it was not overloaded.”

    Currently you’re mixing plural (“announcements”) with singular (“its txhash”, “the same txhash”).


    sipa commented at 8:33 pm on September 30, 2020:
  83. in src/txrequest.cpp:194 in a5a80aab13 outdated
    189+// The ByTime index is sorted by (0 [CANDIDATE_DELAYED,REQUESTED]; 1 [COMPLETED];
    190+// 2 [CANDIDATE_READY,CANDIDATE_BEST], time)
    191+//
    192+// Uses:
    193+// * Finding CANDIDATE_DELAYED entries whose reqtime has passed, and REQUESTED entries whose expiry has passed.
    194+// * Finding CANDIDATE_READY/BEST entries whose reqtime is in the future (when the clock time when backwards).
    


    jnewbery commented at 3:57 pm on September 30, 2020:
    “when backwards” -> “went backwards”?

    sipa commented at 8:33 pm on September 30, 2020:
    Fixed.
  84. in src/txrequest.cpp:210 in a5a80aab13 outdated
    221+struct PeerInfo {
    222+    size_t m_total = 0; //!< Total number of entries for this peer.
    223+    size_t m_completed = 0; //!< Number of COMPLETED entries for this peer.
    224+    size_t m_requested = 0; //!< Number of REQUESTED entries for this peer.
    225+
    226+    friend bool operator==(const PeerInfo& a, const PeerInfo& b)
    


    jnewbery commented at 4:36 pm on September 30, 2020:
    Perhaps comment that this is only used in sanity testing, or even better remove it entirely and move the logic into TxRequestTracker::Impl::SanityCheck() (since it’s only called in one place).

    jnewbery commented at 5:57 pm on September 30, 2020:
    Actually, I think just a comment saying that this is only used for sanity checking is sufficient.

    sipa commented at 8:33 pm on September 30, 2020:
    Done.
  85. in src/txrequest.cpp:237 in a5a80aab13 outdated
    256+    //! Whether any Entry with m_no_more_first_nonpref exists.
    257+    bool m_any_no_more_first_nonpref = false;
    258+};
    259+
    260+/** (Re)compute the PeerInfo map from the index. Only used for sanity checking. */
    261+std::unordered_map<uint64_t, PeerInfo> RecomputePeerInfo(const Index& index)
    


    jnewbery commented at 4:37 pm on September 30, 2020:
    Consider moving this logic into SanityCheck(), since it’s only called in one place.

    jnewbery commented at 5:58 pm on September 30, 2020:
    Marking as resolved. AJ asked you to separate this, and I don’t have a strong opinion.
  86. in src/txrequest.cpp:271 in a5a80aab13 outdated
    269+    }
    270+    return ret;
    271+}
    272+
    273+/** Compute the TxHashInfo map. Only used for sanity checking. */
    274+std::map<uint256, TxHashInfo> ComputeTxHashInfo(const Index& index, const PriorityComputer& computer)
    


    jnewbery commented at 4:58 pm on September 30, 2020:
    What’s the thinking behind having this logic in a separate function, rather than contained in SanityCheck()? It’s only called in one place.

    jnewbery commented at 5:58 pm on September 30, 2020:
    Marking as resolved. AJ asked you to separate this, and I don’t have a strong opinion.
  87. jnewbery commented at 5:04 pm on September 30, 2020: member

    Still only style comments so far. I’m getting closer to reviewing the actual code.

    I’m really enjoying the pimpl structure. It keeps the interface header file very clean and minimal, and should make recompiling quicker by reducing churn in includes.

  88. sipa force-pushed on Sep 30, 2020
  89. sipa commented at 8:59 pm on September 30, 2020: member

    I’ve removed the “first” marker logic as @ajtowns found that it has little effect in practice (and is unlikely to have one absent other significant changes to the logic), see #19988 (review) and preceding comments. A branch that re-enables it (in a final commit on top) is here: https://github.com/sipa/bitcoin/commits/202009_txrequest_rand_wtxid_first

    Also made some changes to comments, after realizing that we can’t assume that the attacker can’t disconnect/reconnect non-preferred connections.

  90. in src/net_processing.cpp:75 in ee3a74a4e5 outdated
    70@@ -71,22 +71,22 @@ static constexpr std::chrono::minutes PING_INTERVAL{2};
    71 static const unsigned int MAX_LOCATOR_SZ = 101;
    72 /** The maximum number of entries in an 'inv' protocol message */
    73 static const unsigned int MAX_INV_SZ = 50000;
    74-/** Maximum number of in-flight transactions from a peer */
    75+/** Maximum number of in-flight transactions from a peer. It is not a hard limit, but the threshold
    76+ *  at which point pushback mechanisms kick in. */
    


    ariard commented at 11:42 pm on September 30, 2020:

    You can add “pushback mechanisms (OVERLOADED_PEER_TX_DELAY)”.

    A future improvement of the pushback mechanism could be to scale it up by the number of times MAX_PEER_TX_IN_FLIGHT is reached, like m_txrequest.CountInFlightMagnitude(nodeid, MAX_PEER_TX_IN_FLIGHT)? Thus delaying a likely-malicious peer further and further.


    sipa commented at 2:06 am on October 1, 2020:

    You can add “pushback mechanisms (OVERLOADED_PEER_TX_DELAY)”.

    Done.

    A future improvement of the pushback mechanism could be to scale it up by the number of times MAX_PEER_TX_IN_FLIGHT is reached, like m_txrequest.CountInFlightMagnitude(nodeid, MAX_PEER_TX_IN_FLIGHT)? Thus delaying a likely-malicious peer further and further.

    Yes, there are a number of possibilities there.
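
    A sketch of that scaling idea (hypothetical; CountInFlightMagnitude doesn’t exist, but the existing CountInFlight gives the same effect):

        // Scale the pushback delay by how far past the in-flight threshold the peer
        // is, instead of adding a single fixed OVERLOADED_PEER_TX_DELAY.
        const size_t in_flight = m_txrequest.CountInFlight(nodeid);
        if (!node.HasPermission(PF_RELAY) && in_flight >= MAX_PEER_TX_IN_FLIGHT) {
            delay += OVERLOADED_PEER_TX_DELAY * (in_flight / MAX_PEER_TX_IN_FLIGHT);
        }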

  91. in src/net_processing.cpp:85 in ee3a74a4e5 outdated
    83+ *  the actual transaction (from any peer) in response to requests for them. */
    84+static constexpr int32_t MAX_PEER_TX_ANNOUNCEMENTS = 5000;
    85 /** How many microseconds to delay requesting transactions via txids, if we have wtxid-relaying peers */
    86 static constexpr std::chrono::microseconds TXID_RELAY_DELAY{std::chrono::seconds{2}};
    87 /** How many microseconds to delay requesting transactions from inbound peers */
    88 static constexpr std::chrono::microseconds INBOUND_PEER_TX_DELAY{std::chrono::seconds{2}};
    


    ariard commented at 11:47 pm on September 30, 2020:
    I think this delay applies to non-preferred peers, which is strictly a different set from inbound ones, as some of them might have PF_NOBAN==true. The variable name can be updated to reflect this.

    sipa commented at 4:44 pm on October 1, 2020:
    Done.
  92. in src/net_processing.cpp:776 in ee3a74a4e5 outdated
    820+    //   - OVERLOADED_PEER_TX_DELAY for announcements from overloaded peers
    821+    auto delay = std::chrono::microseconds{0};
    822+    bool preferred = state->fPreferredDownload;
    823+    bool overloaded = !node.HasPermission(PF_RELAY) && m_txrequest.CountInFlight(nodeid) >= MAX_PEER_TX_IN_FLIGHT;
    824+    if (!preferred) delay += INBOUND_PEER_TX_DELAY;
    825+    if (!gtxid.IsWtxid() && g_wtxid_relay_peers > 0) delay += TXID_RELAY_DELAY;
    


    ariard commented at 11:55 pm on September 30, 2020:
    It’s documented in the TxRequestTracker specification, but I think you could recall that a preferred, txid-relay peer will always be favored over a non-preferred, wtxid-relay one. So it doesn’t matter that all delay penalties are actually 2 seconds. They order peers inside a class, not across?

    sipa commented at 2:11 am on October 1, 2020:

    It’s documented in the TxRequestTracker specification, but I think you could recall that a preferred, txid-relay peer will always be favored over a non-preferred, wtxid-relay one.

    I’d rather not duplicate the explanations; it’ll just risk things becoming inconsistent and confusing.

    So it doesn’t matter that all delay penalties are actually 2 seconds. They order peers inside a class, not across?

    reqtimes don’t really order peers; they set the earliest request time. Actual requests are picked randomly from all announcements past their reqtime (preferred first, then non-preferred).


    ajtowns commented at 5:11 pm on October 5, 2020:
    Commit for this change says “orphan fetches” but it’s really “fetches of orphan’s parents”…

    sipa commented at 8:09 pm on October 5, 2020:
    Fixed.
  93. in src/net_processing.cpp:768 in ee3a74a4e5 outdated
    822+    bool preferred = state->fPreferredDownload;
    823+    bool overloaded = !node.HasPermission(PF_RELAY) && m_txrequest.CountInFlight(nodeid) >= MAX_PEER_TX_IN_FLIGHT;
    824+    if (!preferred) delay += INBOUND_PEER_TX_DELAY;
    825+    if (!gtxid.IsWtxid() && g_wtxid_relay_peers > 0) delay += TXID_RELAY_DELAY;
    826+    if (overloaded) delay += OVERLOADED_PEER_TX_DELAY;
    827+    auto reqtime = delay.count() ? current_time + delay : std::chrono::microseconds::min();
    


    ariard commented at 0:32 am on October 1, 2020:
    Why not pass current_time only for the false branch of the ternary? The Entry should be promoted to {READY/BEST} in SetTimePoint anyway.

    sipa commented at 2:11 am on October 1, 2020:
    Agree, the current code is just unnecessarily complicated. Fixed.
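
    Presumably the simplification is along these lines (a sketch; see the PR for the actual change):

        // Before: special-case a zero delay with microseconds::min().
        auto reqtime = delay.count() ? current_time + delay : std::chrono::microseconds::min();
        // After: always relative to current_time; a zero delay means "requestable now".
        const auto reqtime = current_time + delay;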

    ajtowns commented at 5:00 am on October 1, 2020:

    With this change, if time goes slightly backwards due to a non-monotonic clock, a later announcement may get requested first, even when no delay is intended:

    • t=1.000 peer=1 preferred=1 txid=X –> ReceivedInv –> reqtime=1.000 state=CANDIDATE_DELAYED
    • t=0.997 GetRequestable(peer=1) –> (no change)
    • t=0.998 peer=2 preferred=1 txid=X –> ReceivedInv –> reqtime=0.999 state=CANDIDATE_DELAYED
    • t=0.999 GetRequestable(peer=2) –> peer=2 txid=X state –> CANDIDATE_BEST

    The previous code would have ensured that when no delay is desired, the announcement will always go to READY or BEST the next time GetRequestable is called.

    (No opinion on whether that edge case is worth the conditional. If it is, might be nice to just pass in (current_time, delay) and have the addition/min done in ReceivedInv though)


    sipa commented at 5:17 pm on October 1, 2020:
    Yes, that was the original reasoning, but I’m not sure clocks going backward is worth extra complexity (it should do something sane, and be tested, but it should be sufficiently rare that the actual behavior doesn’t matter too much).
  94. in src/net_processing.cpp:757 in ee3a74a4e5 outdated
    794-    if (peer_download_state.m_tx_announced.size() >= MAX_PEER_TX_ANNOUNCEMENTS ||
    795-            peer_download_state.m_tx_process_time.size() >= MAX_PEER_TX_ANNOUNCEMENTS ||
    796-            peer_download_state.m_tx_announced.count(gtxid.GetHash())) {
    797-        // Too many queued announcements from this peer, or we already have
    798-        // this announcement
    799+    AssertLockHeld(cs_main); // For m_txrequest
    


    ariard commented at 0:36 am on October 1, 2020:
    Have you considered moving m_txrequest under its own lock, as it’s a well-contained, new data structure? Or is that a) useless, as we already take cs_main independently in all code paths reaching m_txrequest, or b) too much work while this PR is already complex enough?

    sipa commented at 2:13 am on October 1, 2020:
    I don’t think there is a benefit to giving it a separate lock. There may be one at some point, but probably together with the majority of net_processing moving from cs_main to its own lock(s). At this point, cs_main is already held anyway.

    jnewbery commented at 11:14 am on October 5, 2020:
    I also have a slight preference that this is guarded by its own mutex since I don’t like adding yet more non-validation state to cs_main, but agree that it should be straightforward to move it later.

    sipa commented at 6:20 pm on October 5, 2020:
    I agree with the goal of eventually having this not under cs_main, but the hard part will be disentangling the existing code calling into txrequest not covered by cs_main. Just adding another lock in addition to cs_main here isn’t helping with that part, so I prefer doing this as a follow-up (together with probably a bigger part of net_processing also losing cs_main).
  95. in src/txrequest.cpp:65 in ee3a74a4e5 outdated
    60+    std::chrono::microseconds m_time;
    61+    /** What peer the request was from. */
    62+    const uint64_t m_peer;
    63+    /** What sequence number this announcement has. */
    64+    const uint64_t m_sequence : 59;
    65+    /** Whether the request is preferred (giving it priority higher than non-preferred ones). */
    


    ariard commented at 0:41 am on October 1, 2020:
    I think this comment isn’t consistent with other comments spread elsewhere, like “Lower priorities are selected first”.

    sipa commented at 2:13 am on October 1, 2020:
    I’ve removed it; preferredness is explained much better in the .h file now than when this comment was added.
  96. in src/txrequest.cpp:158 in ee3a74a4e5 outdated
    153+
    154+// The ByTxHash index is sorted by (txhash, state, priority [CANDIDATE_READY]; 0 [otherwise])
    155+//
    156+// Uses:
    157+// * Deleting all Entrys with a given txhash in ForgetTxHash.
    158+// * Finding the best CANDIDATE_READY to convert to CANDIDATE_READY, when no other CANDIDATE_READY or REQUESTED
    


    ariard commented at 0:45 am on October 1, 2020:
    to convert to CANDIDATE_BEST?

    sipa commented at 2:13 am on October 1, 2020:
    Done.
  97. in src/txrequest.h:39 in ee3a74a4e5 outdated
    34+ * Transaction requests are then assigned to peers, following these rules:
    35+ *
    36+ * - No transaction is requested as long as another request for the same txhash is outstanding (it needs to fail
    37+ *   first by passing expiry, or a NOTFOUND or invalid transaction has to be received for it).
    38+ *
    39+ *   Rationale: to avoid wasting bandwidth on multiple copies of the same transaction.
    


    ariard commented at 0:48 am on October 1, 2020:
    Is there a caveat here if the same transaction is announced concurrently by txid and wtxid? You may have a download collision due to transaction identifier ambiguity.

    sipa commented at 2:17 am on October 1, 2020:
    As far as txrequest is concerned, a transaction is identified by its txhash. If there is both a txid and a wtxid announcement for the same transaction (and the wtxid differs from the txid), it’ll be treated as two transactions, and both could be fetched. That is exactly the reason why an extra delay for txid announcements was introduced (prior to this PR): to avoid downloading the same thing twice.
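
    A concrete illustration (a sketch using the ReceivedInv signature quoted earlier; the hash values are placeholders):

        // For a segwit transaction, txid != wtxid, so these are two distinct txhashes
        // to the tracker, and both could end up being requested:
        TxRequestTracker tracker;
        const uint256 txid = uint256S("aa");   // placeholder txid
        const uint256 wtxid = uint256S("bb");  // placeholder wtxid of the same tx
        const auto reqtime = std::chrono::microseconds{0};
        tracker.ReceivedInv(/*peer=*/1, GenTxid{/*is_wtxid=*/false, txid},
                            /*preferred=*/true, /*overloaded=*/false, reqtime);
        tracker.ReceivedInv(/*peer=*/2, GenTxid{/*is_wtxid=*/true, wtxid},
                            /*preferred=*/true, /*overloaded=*/false, reqtime);
        // Nothing ties the two announcements together until a transaction is actually
        // received and ForgetTxHash is called for both its txid and its wtxid.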

    ariard commented at 0:09 am on October 2, 2020:
    Okay, so the only download duplication we may have is if all preferred, wtxid-relay peers are really slow, making us request from a txid-relay peer and receive both. Really an edge case once the network is sufficiently upgraded, I guess.

    sipa commented at 0:27 am on October 2, 2020:
    I don’t expect it to be super rare, but the risk of double fetching during rollout is inherent to BIP339 (and exists in the same form in current master, so I don’t think it’s significantly affected by this PR).

    sipa commented at 0:39 am on October 2, 2020:
    Added a note about this.
  98. in src/txrequest.h:42 in ee3a74a4e5 outdated
    37+ *   first by passing expiry, or a NOTFOUND or invalid transaction has to be received for it).
    38+ *
    39+ *   Rationale: to avoid wasting bandwidth on multiple copies of the same transaction.
    40+ *
    41+ * - The same transaction is never requested twice from the same peer, unless the announcement was forgotten in
    42+ *   between, and re-announced. Announcements are forgotten when the peer that sent them went offline, when the
    


    ariard commented at 0:50 am on October 1, 2020:
    I find this confusing. Is “Announcements” denoting all Entrys under the same txhash, or a given (peer, txhash) Entry? In the former case, a peer going offline shouldn’t cause deletion of other different-peer/same-txhash Entries.

    sipa commented at 2:19 am on October 1, 2020:

    This is the specification, there is no concept of Entry here.

    Earlier in the file it says that a txhash/peer combination is called an announcement, so I don’t think this is ambiguous.


    sipa commented at 0:39 am on October 2, 2020:
    I’ve elaborated this a bit. Is it clearer now?

    ariard commented at 2:11 pm on October 2, 2020:
    Yes, better!
  99. in src/txrequest.h:73 in ee3a74a4e5 outdated
    68+ *
    69+ *     Rationale: random assignments are hard to influence for attackers.
    70+ *
    71+ * Together these rules strike a balance between being fast in non-adverserial conditions and minimizing
    72+ * susceptibility to censorship attacks. An attacker that races the network:
    73+ * - Will be unsuccessful if all preferred connections are honest (and there is at least one).
    


    ariard commented at 0:52 am on October 1, 2020:
    At least one what? An attacker-controlled peer, a non-buggy honest preferred connection?

    sipa commented at 2:19 am on October 1, 2020:
    I’ve clarified this, I think.
  100. in src/txrequest.h:134 in ee3a74a4e5 outdated
    129+    /** Adds a new CANDIDATE entry.
    130+     *
    131+     * Does nothing if one already exists for that (txhash, peer) combination (whether it's CANDIDATE, REQUESTED, or
    132+     * COMPLETED). Note that this means a second INV with the same txhash from the same peer will be ignored, even
    133+     * if one is a txid and the other is wtxid (but that shouldn't happen, as BIP339 requires that all announced
    134+     * inventory is exclusively using MSG_WTX). The new entry is given the specified preferred and reqtime values,
    


    ariard commented at 1:01 am on October 1, 2020:
    How is entry uniqueness enforced w.r.t. wtxid/txid? For segwit txns, the ByPeer index can’t tie a txid announcement and a wtxid announcement together; they’re different hashes?

    sipa commented at 2:20 am on October 1, 2020:

    I think that’s explained exactly in this paragraph? “if one already exists for that (txhash, peer) combination”. Uniqueness is on that combination.

    If the txhash is the same, it’s the same. If they’re not, they’re not.


    ariard commented at 0:12 am on October 2, 2020:
    I think I stumbled over “Note that this means a second INV with the same txhash from the same peer will be ignored, even if one is a txid and the other is wtxid”. I interpreted it as “if the first is a txid, do nothing even if a second announcement is a wtxid”. That’s my English here, never mind.

    sipa commented at 0:18 am on October 2, 2020:

    Just so we’re on the same page. The scenario is:

    • Peer P announces txid H
    • Peer P announces wtxid H

    In this case, the second announcement is ignored, because one already exists for peer/txhash combination (P, H). This is harmless for two reasons:

    • The txid and wtxid being identical implies it’s a non-segwit transaction, so it doesn’t matter how we fetch.
    • BIP339 prescribes that all announcements have to be wtxid when enabled (though I now realize that orphan fetching means this can actually still occur; I’ll improve the comment).

    sipa commented at 0:40 am on October 2, 2020:
    I’ve rewritten the comment.

    ariard commented at 2:13 pm on October 2, 2020:
    Yes clearer.
  101. ariard commented at 1:05 am on October 1, 2020: member

    Thanks for the great new rationale comments. Mostly minor points so far; I’ll focus on the testing/fuzzing part.

    I swept back over my comments in #19184; everything sounds fixed. My questions here aim to clear up my confusion and don’t need a fix, but for this one I’m still unsure.

  102. sipa force-pushed on Oct 1, 2020
  103. in src/txrequest.cpp:167 in 5411f1fe30 outdated
    162+    EntryTxHashExtractor(const PriorityComputer& computer) : m_computer(computer) {}
    163+    using result_type = EntryTxHash;
    164+    result_type operator()(const Entry& entry) const
    165+    {
    166+        const State state = entry.GetState();
    167+        const uint64_t prio = (state == State::CANDIDATE_READY) ? m_computer(entry) : 0;
    


    glozow commented at 11:30 am on October 1, 2020:
    Question: why wasn’t entry.IsSelectable() used instead?

    sipa commented at 5:57 pm on October 1, 2020:
    That would also compute the priority for entries in the CANDIDATE_BEST state. That would be harmless as it wouldn’t change behavior, but the computation of priority isn’t exactly free either, so better avoid it when not needed.
  104. in src/txrequest.h:127 in 5411f1fe30 outdated
    117+     *
    118+     * It should be called when a peer goes offline.
    119+     */
    120+    void DisconnectedPeer(uint64_t peer);
    121+
    122+    /** Deletes all entries for a given txhash (both txid and wtxid ones).
    


    glozow commented at 3:41 pm on October 1, 2020:
    I interpret this comment to mean that ForgetTxHash (i.e. the TxRequestTracker) is responsible for deleting the transaction by txid and wtxid, but I don’t think this is the case… that’s PeerManager’s job, right?

    sipa commented at 6:07 pm on October 1, 2020:

    TxRequestTracker is a data structure that manages the set of announced txids/wtxids for all peers, exposes ways to alter that set, and answers queries about it. Its responsibility is maintaining the consistency of that data structure, and doing as it’s told; it isn’t responsible for deciding what mutations to make and when; that’s net_processing’s job.

    This function is the mutator that removes all entries with a given txid or wtxid. It does just that. Net_processing calls it whenever it is no longer interested in a particular txid/wtxid.

  105. in src/net_processing.cpp:776 in 987b27176e outdated
    762@@ -763,7 +763,7 @@ void PeerManager::RequestTx(const CNode& node, const GenTxid& gtxid, std::chrono
    763     bool preferred = state->fPreferredDownload;
    764     bool overloaded = !node.HasPermission(PF_RELAY) && m_txrequest.CountInFlight(nodeid) >= MAX_PEER_TX_IN_FLIGHT;
    765     if (!preferred) delay += NONPREF_PEER_TX_DELAY;
    766-    if (!state->m_wtxid_relay && g_wtxid_relay_peers > 0) delay += TXID_RELAY_DELAY;
    767+    if (!gtxid.IsWtxid() && g_wtxid_relay_peers > 0) delay += TXID_RELAY_DELAY;
    


    glozow commented at 5:31 pm on October 1, 2020:

    In https://github.com/bitcoin/bitcoin/pull/19988/commits/987b27176e41f38b997f7b732f087a518fa678ca

    Please forgive me, but I just want to make sure I’m understanding this commit correctly… we want to add a delay for transactions requested by txid, just in case we’re able to get them by wtxid from another peer. But when we get an orphan, we only have the txid of the missing parent.

    • Using the peer’s m_wtxid_relay doesn’t apply to requests for missing parents, because we can only request by txid, even if the peer is wtxid relay?
    • But using the transaction’s IsWtxid means we’re interested in potentially receiving the missing parent (from anyone) by wtxid before we ask this peer for the missing parent?


    sipa commented at 6:24 pm on October 1, 2020:

    Imagine we have multiple announcements for the same witness transaction, and all peers have different witnesses for it. So the transaction will be known by one txid, but multiple distinct wtxids. We don’t know that these are all for the same transaction, as all the (non-txid) requests have different hashes.

    No matter who we ask, or what we receive, we’ll always be able to delete all txid-based announcements for it (we’ll compute the txid of the received transaction, which will be the same for all). However, we can’t delete all wtxid announcements, only the one for the witness we actually received. Thus we want to prioritize fetching by wtxid, as a successful response to that will allow us to delete at least one wtxid-based announcement plus all txid-based announcements (while a txid-based request only guarantees deleting all txid-based announcements).

    Now imagine that instead all our peers are BIP339 (wtxid) peers, and there are transactions A and B, where B spends one of A’s outputs. For most peers, we receive A first and then B. But from one peer, we only receive B (e.g. because we just connected and weren’t connected when A was sent), and B is fetched first. That makes it an orphan, and its “prevout” entry in its input for A is treated as a txid announcement (we don’t have its wtxid), despite being from a wtxid peer. Thus we end up with a txid announcement for A, and a bunch of wtxid announcements for it (from the other peers). The same reasoning applies here: we want to fetch by one of the wtxid ones first, because it is guaranteed to allow us to forget the txid announcement PLUS at least one wtxid one.

  106. glozow commented at 5:38 pm on October 1, 2020: member
    Concept ACK, I have a few questions so far
  107. sipa force-pushed on Oct 2, 2020
  108. in src/net_processing.cpp:85 in 9c311d0e20 outdated
    85 /** How many microseconds to delay requesting transactions via txids, if we have wtxid-relaying peers */
    86 static constexpr std::chrono::microseconds TXID_RELAY_DELAY{std::chrono::seconds{2}};
    87-/** How many microseconds to delay requesting transactions from inbound peers */
    88-static constexpr std::chrono::microseconds INBOUND_PEER_TX_DELAY{std::chrono::seconds{2}};
    89+/** How many microseconds to delay requesting transactions from non-preferred peers */
    90+static constexpr std::chrono::microseconds NONPREF_PEER_TX_DELAY{std::chrono::seconds{2}};
    


    MarcoFalke commented at 6:55 am on October 2, 2020:
    0static constexpr std::chrono::seconds NONPREF_PEER_TX_DELAY{2};
    

    This can be written shorter, as the compiler will do the chrono conversion for you

    (Same below)


    sipa commented at 8:05 pm on October 6, 2020:
    Done. I’ve changed it to static constexpr auto VARNAME = std::chrono::seconds{2};, as I think it’s slightly more readable to have the unit and the magnitude together.
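    For illustration, both spellings denote the same duration; std::chrono converts seconds to the finer microseconds unit implicitly and losslessly:

        #include <chrono>

        // Equivalent declarations: seconds converts implicitly (and losslessly) to
        // the finer microseconds unit.
        static constexpr std::chrono::microseconds TXID_RELAY_DELAY{std::chrono::seconds{2}};
        static constexpr auto NONPREF_PEER_TX_DELAY = std::chrono::seconds{2};
        static_assert(std::chrono::microseconds{NONPREF_PEER_TX_DELAY}.count() == 2000000, "2s in us");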
  109. in src/txrequest.cpp:210 in 9c311d0e20 outdated
    205+    size_t m_total = 0; //!< Total number of entries for this peer.
    206+    size_t m_completed = 0; //!< Number of COMPLETED entries for this peer.
    207+    size_t m_requested = 0; //!< Number of REQUESTED entries for this peer.
    208+
    209+    /** Compare two PeerInfo objects. Only used for sanity checking. */
    210+    friend bool operator==(const PeerInfo& a, const PeerInfo& b)
    


    jnewbery commented at 8:58 am on October 2, 2020:

    ==() doesn’t need to be a friend, since it’s not accessing any private/protected members:

     0diff --git a/src/txrequest.cpp b/src/txrequest.cpp
     1index bef3460dd6..b7347d8d34 100644
     2--- a/src/txrequest.cpp
     3+++ b/src/txrequest.cpp
     4@@ -206,14 +206,15 @@ struct PeerInfo {
     5     size_t m_completed = 0; //!< Number of COMPLETED entries for this peer.
     6     size_t m_requested = 0; //!< Number of REQUESTED entries for this peer.
     7 
     8-    /** Compare two PeerInfo objects. Only used for sanity checking. */
     9-    friend bool operator==(const PeerInfo& a, const PeerInfo& b)
    10-    {
    11-        return std::tie(a.m_total, a.m_completed, a.m_requested) ==
    12-               std::tie(b.m_total, b.m_completed, b.m_requested);
    13-    }
    14 };
    15 
    16+/** Compare two PeerInfo objects. Only used for sanity checking. */
    17+bool operator==(const PeerInfo& a, const PeerInfo& b)
    18+{
    19+    return std::tie(a.m_total, a.m_completed, a.m_requested) ==
    20+           std::tie(b.m_total, b.m_completed, b.m_requested);
    21+}
    22+
    

    Feels slightly clearer to me, but obviously doesn’t make much difference.


    sipa commented at 6:27 pm on October 2, 2020:
    Done.
  110. in src/txrequest.cpp:316 in 9c311d0e20 outdated
    288+    Index m_index;
    289+
    290+    //! Map with this tracker's per-peer statistics.
    291+    std::unordered_map<uint64_t, PeerInfo> m_peerinfo;
    292+
    293+public:
    


    jnewbery commented at 9:15 am on October 2, 2020:
    This is really just a stylistic thing, but is there a reason to make any members of an impl class private? By definition it doesn’t have an exposed interface, so theoretically everything could just be public. The first two examples at https://en.cppreference.com/w/cpp/language/pimpl are actually declared as structs (although the third example does have a private data member)

    sipa commented at 4:51 pm on October 2, 2020:

    It may be a matter of personal taste.

    In general I think it still makes sense to have non-exposed classes with private/public fields/members. It’s not there to prevent external code from messing with the internals, obviously, but it does still kind of define a layer between what is part of the class’s representation and what is its (even just internally) exposed interface.

    In this case with the pimpl pattern… maybe that’s silly, as the glue layer to forward calls on TxRequestTracker to TxRequestTracker::Impl is super thin, and it’s already obvious what the exposed part is.

    Happy to change it if there are strong opinions.


    jnewbery commented at 5:14 pm on October 2, 2020:
    No strong opinion. Feel free to mark this resolved.
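    For readers following along, a minimal sketch of the pimpl shape being discussed (simplified; the real declarations are in txrequest.h, and the peer type here reflects the code at this point in the thread):

        #include <memory>

        class TxRequestTracker {
            // The definition lives entirely in txrequest.cpp, so the header exposes
            // no implementation details (and no boost::multi_index include).
            class Impl;
            const std::unique_ptr<Impl> m_impl;

        public:
            TxRequestTracker();
            ~TxRequestTracker(); // must be defined where Impl is a complete type
            void DisconnectedPeer(uint64_t peer); // thin forwarder to m_impl
        };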
  111. in src/txrequest.cpp:62 in 9c311d0e20 outdated
    57+    /** Txid or wtxid that was announced. */
    58+    const uint256 m_txhash;
    59+    /** For CANDIDATE_{DELAYED,BEST,READY} the reqtime; for REQUESTED the expiry. */
    60+    std::chrono::microseconds m_time;
    61+    /** What peer the request was from. */
    62+    const uint64_t m_peer;
    


    jnewbery commented at 9:29 am on October 2, 2020:
    Is there a reason not to use NodeId throughout instead of uint64_t? It seems to me that NodeId would be more consistent with net/net_processing, and more readable in function signatures/returns types/typedefs.

    sipa commented at 4:28 pm on October 2, 2020:
    The only reason not to is avoiding a dependency on net.h.

    jnewbery commented at 4:49 pm on October 2, 2020:
    You mean in the header file? Could you make the public functions take int64_t and use NodeId internally in the cpp file (which already includes net.h)?

    sipa commented at 4:54 pm on October 2, 2020:
    I think the observable “c++ file dependencies” are only an approximation for what actually matters in terms of code organization: “conceptual dependencies between modules”. What you’re suggesting is just as much a dependency of txrequest on net in my view - just slightly more hidden.

    sipa commented at 5:00 pm on October 2, 2020:

    All of this to say: I’d like to avoid a dependency on net, and am therefore using uint64_t as opaque peer identifier rather than NodeId.

    But if we feel that’s not worth the cost, it should be changed to use NodeId everywhere (in both the .h and the .cpp).


    jnewbery commented at 5:17 pm on October 2, 2020:

    I do feel like we should use the same type to represent node ids in all components. Perhaps the purest way to do it would be to move the typedef int64_t NodeId somewhere outside net.h, but that seems a bit overkill.

    Even if you don’t use NodeId, is there a reason that you’re using uint64_t rather than int64_t like everywhere else?


    sipa commented at 6:29 pm on October 2, 2020:

    I’ve bitten the bullet and converted everything to NodeId.

    I believe it being a uint64_t dates back to a time when I was trying to squeeze out bits by using less than 64 bits for the peer (and signed bitfields are implementation defined, IIRC). This was a bad idea.
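    For reference, the alias being discussed, as net.h defines it:

        typedef int64_t NodeId;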

  112. in src/txrequest.h:139 in 9c311d0e20 outdated
    134+     *
    135+     * Does nothing if one already exists for that (txhash, peer) combination (whether it's CANDIDATE, REQUESTED, or
    136+     * COMPLETED). Note that the txid/wtxid property is ignored for determining uniqueness, so if an announcement
    137+     * is added for a wtxid H, while one for txid H from the same peer already exists, it will be ignored. This is
    138+     * harmless as the txhashes being equal implies it is a non-segwit transaction, so it doesn't matter how it is
    139+     * fetched. The new announcement is given the specified preferred and reqtime values, and takes it is_wtxid from
    


    jnewbery commented at 9:38 am on October 2, 2020:
    s/takes it/takes its/

    sipa commented at 6:29 pm on October 2, 2020:
    Done.
  113. in src/txrequest.cpp:587 in 9c311d0e20 outdated
    582+
    583+    void RequestedTx(uint64_t peer, const GenTxid& gtxid, std::chrono::microseconds expiry)
    584+    {
    585+        auto it = m_index.get<ByPeer>().find(EntryPeer{peer, true, gtxid.GetHash()});
    586+        // RequestedTx can only be called on CANDIDATE_BEST entries (this is implied by its condition that it can
    587+        // only be called on GenTxids returned by GetRequestable (and only AlreadyHave and RequestedTx can be called
    


    jnewbery commented at 9:47 am on October 2, 2020:
    s/AlreadyHave/ForgetTxHash/

    sipa commented at 6:29 pm on October 2, 2020:
    Done.
  114. in src/txrequest.cpp:599 in 9c311d0e20 outdated
    594+        });
    595+
    596+        // Update the per-txhash data (of the last Entry for this txhash) to reflect that new ones are no longer
    597+        // eligible for the "first" marker.
    598+        auto it_last = std::prev(m_index.get<ByTxHash>().lower_bound(EntryTxHash{gtxid.GetHash(), State::TOO_LARGE, 0}));
    599+        assert(it_last->m_txhash == gtxid.GetHash());
    


    jnewbery commented at 9:51 am on October 2, 2020:
    Delete all this. It’s no longer needed now that there’s no ‘first’ marker.

    sipa commented at 6:29 pm on October 2, 2020:
    Done.
  115. in src/txrequest.cpp:278 in 9c311d0e20 outdated
    264+        if (entry.GetState() == State::CANDIDATE_READY) {
    265+            info.m_priority_best_candidate_ready = std::min(info.m_priority_best_candidate_ready, computer(entry));
    266+        }
    267+        // Also keep track of which peers this txhash has a Entry for (so we can detect duplicates).
    268+        info.m_peers.push_back(entry.m_peer);
    269+        // Track preferred/first.
    


    jnewbery commented at 9:57 am on October 2, 2020:
    Delete

    sipa commented at 6:29 pm on October 2, 2020:
    Done.

    jnewbery commented at 9:37 am on October 5, 2020:
    It’s still here :)

    sipa commented at 8:10 pm on October 5, 2020:
    Really done.
  116. in src/txrequest.h:148 in 9c311d0e20 outdated
    143+        std::chrono::microseconds reqtime);
    144+
    145+    /** Converts the CANDIDATE announcement for the provided peer and gtxid into a REQUESTED one.
    146+     *
    147+     * Expiry is set to the specified value. This can ONLY be called immediately after GetRequestable was called
    148+     * (for the same peer), with only ForgetTx and other RequestedTx calls (both for other txhashes) in
    


    jnewbery commented at 10:06 am on October 2, 2020:
    s/ForgetTx/ForgetTxHash/

    sipa commented at 6:29 pm on October 2, 2020:
    Done.
  117. in src/txrequest.h:151 in 9c311d0e20 outdated
    146+     *
    147+     * Expiry is set to the specified value. This can ONLY be called immediately after GetRequestable was called
    148+     * (for the same peer), with only ForgetTx and other RequestedTx calls (both for other txhashes) in
    149+     * between. Any other non-const operation removes the ability to call RequestedTx.
    150+     */
    151+    void RequestedTx(uint64_t peer, const GenTxid& gtxid, std::chrono::microseconds expiry);
    


    jnewbery commented at 10:12 am on October 2, 2020:

    Possible changes to this interface:

    1. Change the GenTxid argument to uint256. The function doesn’t make any distinction between txids and wtxids, so why make it part of the interface?
    2. Pass a const std::vector<uint256>&, with all txs that have been requested, rather than repeatedly calling the same function with different tx hashes.
    3. Also pass in a const std::vector<uint256>& hashes_to_forget, with all txs to forget.

    Doing all of those would change the condition “This can ONLY be called immediately after GetRequestable was called (for the same peer), with only ForgetTxHash and other RequestedTx calls (both for other txhashes) in between.” to the much stronger condition “This can ONLY be called immediately after a GetRequestable call with no other calls in between”.


    ajtowns commented at 2:08 pm on October 2, 2020:

    Constructing vectors instead of repeatedly calling the function seems worse to me, fwiw, but agree that it would be nice to have a simpler description of the constraint.

    Maybe tying it to the return value of GetRequestable might be better: “each value returned by GetRequestable may be used with either ForgetTxHash or RequestedTx for the given peer, but not both, and at most only once. Any other calls to non-const methods on TxRequestTracker invalidate all the results from GetRequestable”?


    sipa commented at 5:07 pm on October 2, 2020:

    It wouldn’t be too hard to just drop the constraint, and instead specify it as:

    • If no announcement with the specified (peer, txhash) exists, or it isn’t in CANDIDATE state, the call has no effect.
    • The specified announcement is changed from CANDIDATE to REQUESTED.
    • If another announcement for the same txhash was already in REQUESTED state, it is marked COMPLETED.

    It’d be a bit more code, but would make the function fully specified. I’ve avoided it, as there is really no point for that functionality in practice, but it does make things a bit nicer. WDYT?

    If we don’t do that, I like @ajtowns’s way of describing the functionality.


    jnewbery commented at 5:21 pm on October 2, 2020:

    Either is fine. I also like @ajtowns’s description.

    Whichever way we go, I still think the function should be changed to take a txhash - I should have left that as a separate comment.


    sipa commented at 6:29 pm on October 2, 2020:
    Changed to AJ’s description.

    sipa commented at 7:38 pm on October 2, 2020:

    Here is the patch to make RequestedTx fully specified:

     0diff --git a/src/txrequest.cpp b/src/txrequest.cpp
     1index af4b59755b..581b498180 100644
     2--- a/src/txrequest.cpp
     3+++ b/src/txrequest.cpp
     4@@ -590,11 +590,29 @@ public:
     5     void RequestedTx(NodeId peer, const uint256& txhash, std::chrono::microseconds expiry)
     6     {
     7         auto it = m_index.get<ByPeer>().find(ByPeerView{peer, true, txhash});
     8-        // RequestedTx can only be called on CANDIDATE_BEST announcements (this is implied by its condition that it
     9-        // can only be called on GenTxids returned by GetRequestable (and only ForgetTxHash and RequestedTx can be
    10-        // called in between, which preserve the state of other GenTxids).
    11-        assert(it != m_index.get<ByPeer>().end());
    12-        assert(it->GetState() == State::CANDIDATE_BEST);
    13+        if (it == m_index.get<ByPeer>().end()) {
    14+            // There is no CANDIDATE_BEST entry, look for _READY or _DELAYED instead.
    15+            it = m_index.get<ByPeer>().find(ByPeerView{peer, false, txhash});
    16+            if (it == m_index.get<ByPeer>().end() || (it->GetState() != State::CANDIDATE_DELAYED &&
    17+                it->GetState() != State::CANDIDATE_READY)) {
    18+                // There is no CANDIDATE_* entry at all, give up.
    19+                return;
    20+            }
    21+
    22+            // Look for an existing CANDIDATE_BEST to demote to _READY, or REQUESTED to make COMPLETED.
    23+            auto it_old = m_index.get<ByTxHash>().lower_bound(ByTxHashView{txhash, State::CANDIDATE_BEST, 0});
    24+            if (it_old != m_index.get<ByTxHash>().end() && it_old->m_txhash == txhash) {
    25+                if (it_old->GetState() == State::CANDIDATE_BEST) {
    26+                    // It doesn't matter whether we pick CANDIDATE_READY or _DELAYED here, as SetTimePoint()
    27+                    // will correct it at GetRequestable() time. If time only goes forward, it will always be
    28+                    // _READY, so pick that to avoid extra work in SetTimePoint().
    29+                    Modify<ByTxHash>(it_old, [](Announcement& ann) { ann.SetState(State::CANDIDATE_READY); });
    30+                } else if (it_old->GetState() == State::REQUESTED) {
    31+                    Modify<ByTxHash>(it_old, [](Announcement& ann) { ann.SetState(State::COMPLETED); });
    32+                }
    33+            }
    34+        }
    35+
    36         Modify<ByPeer>(it, [expiry](Announcement& ann) {
    37             ann.SetState(State::REQUESTED);
    38             ann.m_time = expiry;
    

    When combined with the corresponding changes in src/test/fuzz/txrequest.cpp it’s actually a net reduction in code. WDYT?


    sipa commented at 2:22 am on October 3, 2020:
    Added as an extra commit on top.

    ajtowns commented at 8:54 am on October 3, 2020:

    I think that, following this change, all the remaining asserts are redundant internal checks, rather than “invalid use of API” ones, which I think is a plus. Also makes perfect sense (in retrospect) that this would then make the fuzz tester a bunch simpler.

    I think that means you can call TxRequestTracker methods with any params and the behaviour is well-defined, reasonably logical, and as efficient as possible. I think this even maintains the amortized complexity, which surprises me.

    So yeah, I like this! Maybe change the “There is no CANDIDATE_BEST entry” comment to note that this path only occurs if the API caller requests a txid that wasn’t just returned from GetRequestable – i.e., they either have a bug due to a race condition (not dealing with the results of GetRequestable for one peer before calling it again later), or they’re doing their own selection of which announcement to request beyond relying on TxRequestTracker’s algorithm?

    Maybe the “give up” case would be clearer if there was a bit more of the “why” spelled out too?

    0            if (it == m_index.get<ByPeer>().end()) {
    1                // This txid wasn't tracked for this peer; caller should have called ReceivedInv() first.
    2                return;
    3            } else if (it->GetState() != State::CANDIDATE_DELAYED && it->GetState() != State::CANDIDATE_READY) {
    4                // Only progress to REQUESTED from CANDIDATE to ensure the state machine is finite
    5                return;
    6            }
    

    sipa commented at 7:03 pm on October 3, 2020:
    Done, switched to the fully-defined version (squashed into the appropriate commits). I added some of @ajtowns’s comments, and more.

    jnewbery commented at 9:57 am on October 5, 2020:
    I also like this. Good change!
  118. in src/net_processing.cpp:756 in 9c311d0e20 outdated
    810-
    811-    peer_download_state.m_tx_process_time.emplace(process_time, gtxid);
    812+    auto state = State(nodeid);
    813+
    814+    // Decide the TxRequestTracker parameters for this announcement:
    815+    // - "overloaded": if at least MAX_PEER_TX_IN_FLIGHT requests are in flight (and no PF_RELAY).
    


    jnewbery commented at 10:20 am on October 2, 2020:
    overloaded isn’t a TxRequestTracker parameter, so I think it’s fine just to have the OVERLOADED_PEER_TX_DELAY comment in the sub-bullet below

    sipa commented at 6:30 pm on October 2, 2020:
    Done.
  119. in src/net_processing.cpp:760 in 9c311d0e20 outdated
    814+    // Decide the TxRequestTracker parameters for this announcement:
    815+    // - "overloaded": if at least MAX_PEER_TX_IN_FLIGHT requests are in flight (and no PF_RELAY).
    816+    // - "preferred": if fPreferredDownload is set (= outbound, or PF_NOBAN permission)
    817+    // - "reqtime": current time plus delays for:
    818+    //   - NONPREF_PEER_TX_DELAY for announcements from non-preferred connections
    819+    //   - TXID_RELAY_DELAY for announcements from txid peers while wtxid peers are available
    


    jnewbery commented at 10:20 am on October 2, 2020:
    s/announcements from txid peers/txid announcements/ (since requesting an orphan tx from a wtxid peer counts as a txid announcement)

    sipa commented at 6:30 pm on October 2, 2020:
    Done.
  120. in src/net_processing.cpp:84 in 9c311d0e20 outdated
    84+static constexpr int32_t MAX_PEER_TX_ANNOUNCEMENTS = 5000;
    85 /** How many microseconds to delay requesting transactions via txids, if we have wtxid-relaying peers */
    86 static constexpr std::chrono::microseconds TXID_RELAY_DELAY{std::chrono::seconds{2}};
    87-/** How many microseconds to delay requesting transactions from inbound peers */
    88-static constexpr std::chrono::microseconds INBOUND_PEER_TX_DELAY{std::chrono::seconds{2}};
    89+/** How many microseconds to delay requesting transactions from non-preferred peers */
    


    jnewbery commented at 10:30 am on October 2, 2020:
    s/How many microseconds/How long/ (commenting that a std::chrono::microseconds constant is microseconds is redundant)

    sipa commented at 6:30 pm on October 2, 2020:
    Done.
  121. in src/txrequest.cpp:576 in 9c311d0e20 outdated
    542+            it_last--;
    543+        } else {
    544+            // No entry for this txhash exists yet.
    545+            it_last = m_index.get<ByTxHash>().end();
    546+        }
    547+
    


    jnewbery commented at 10:38 am on October 2, 2020:
    Delete all this. No longer needed since you removed the ‘first’ marker.

    sipa commented at 6:30 pm on October 2, 2020:
    Done.
  122. in src/txrequest.cpp:533 in 9c311d0e20 outdated
    528+
    529+    void ReceivedInv(uint64_t peer, const GenTxid& gtxid, bool preferred,
    530+        std::chrono::microseconds reqtime)
    531+    {
    532+        // Bail out if we already have a CANDIDATE_BEST entry for this (txhash, peer) combination. The case where
    533+        // there is a non-CANDIDATE_BEST entry already will be caught by the uniqueness property of the ByPeer index
    


    jnewbery commented at 10:39 am on October 2, 2020:

    I don’t understand this comment about catching a non-CANDIDATE_BEST entry automatically. m_index.get<ByPeer>().count() will return 0 in that case.

    Edit: Ah! I’ve just seen the comment below. Perhaps update this comment to say “will be caught by the uniqueness property when we try to emplace the new Entry object”.


    sipa commented at 6:30 pm on October 2, 2020:
    Done.
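    To illustrate the mechanism relied on here (a self-contained toy, not the PR’s code): boost::multi_index unique indices report duplicates through the bool in the pair returned by emplace(), just like std::set::insert.

        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/identity.hpp>
        #include <cassert>

        int main()
        {
            boost::multi_index_container<
                int,
                boost::multi_index::indexed_by<
                    boost::multi_index::ordered_unique<boost::multi_index::identity<int>>
                >
            > idx;
            auto ret = idx.emplace(42);
            assert(ret.second);   // inserted
            ret = idx.emplace(42);
            assert(!ret.second);  // duplicate rejected by the unique index
            return 0;
        }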
  123. in src/txrequest.cpp:133 in 9c311d0e20 outdated
    128+// Each index has a By* type to identify it, a Entry* data type to represent the view of Entry it is sorted by,
    129+// and an Entry*Extractor type to convert an Entry into the Entry* view type.
    130+// See https://www.boost.org/doc/libs/1_54_0/libs/multi_index/doc/reference/key_extraction.html#key_extractors
    131+// for more information about the key extraction concept.
    132+
    133+// The ByPeer index is sorted by (peer, state == CANDIDATE_BEST, txhash)
    


    jnewbery commented at 10:47 am on October 2, 2020:
    Why not just index by (peer, txhash), and then in GetRequestable() just add an if (it_peer->GetState() == State::CANDIDATE_BEST) before adding the hash to selected? It seems like in all other places (DisconnectedPeer(), ReceivedInv() and ReceivedResponse()), we’re actually interested in all the outstanding Entry objects for a given peer.

    ajtowns commented at 2:11 pm on October 2, 2020:
    That would make GetRequestable be O(nr_announcements) instead of O(nr_best) which I think makes the complexity blow out in worst case scenarios.

    jnewbery commented at 2:48 pm on October 2, 2020:
    Yes, makes sense. Thanks!

    sipa commented at 4:12 pm on October 2, 2020:
    Yes, that’s exactly the reason.
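    To illustrate why the boolean key component matters, a sketch of the scan GetRequestable can then do (simplified; names follow the renamed code, and the exact loop shape is approximate):

        // All CANDIDATE_BEST announcements for `peer` sort together under
        // (peer, true, txhash), so the scan starts right at them and stops as soon
        // as it leaves that group: O(nr_best) work instead of O(nr_announcements).
        auto it = m_index.get<ByPeer>().lower_bound(ByPeerView{peer, true, uint256{}});
        while (it != m_index.get<ByPeer>().end() && it->m_peer == peer &&
               it->GetState() == State::CANDIDATE_BEST) {
            selected.emplace_back(it->m_sequence, &*it);
            ++it;
        }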
  124. in src/txrequest.cpp:428 in 9c311d0e20 outdated
    399+
    400+    //! Change the state of an entry to something non-IsSelected(). If it was IsSelected(), the next best entry will
    401+    //! be marked CANDIDATE_BEST.
    402+    void ChangeAndReselect(Iter<ByTxHash> it, State new_state)
    403+    {
    404+        if (it->IsSelected()) {
    


    jnewbery commented at 11:09 am on October 2, 2020:
    Maybe assert that it is not end() at the top of this function?

    sipa commented at 6:44 pm on October 2, 2020:
    Done (and in a few more places).
  125. in src/txrequest.h:158 in 9c311d0e20 outdated
    153+    /** Converts any CANDIDATE or REQUESTED announcement to a COMPLETED one, if one exists.
    154+     *
    155+     * It should be called whenever a transaction or NOTFOUND was received from a peer. When a good transaction is
    156+     * received, ForgetTx should be called instead of (or in addition to) this operation.
    157+     */
    158+    void ReceivedResponse(uint64_t peer, const GenTxid& gtxid);
    


    jnewbery commented at 11:46 am on October 2, 2020:
    This can be changed to take a hash instead of a GenTxid. The function doesn’t do anything with whether it’s a txid or wtxid, so it shouldn’t be part of the interface.

    sipa commented at 6:31 pm on October 2, 2020:
    Done (also for RequestedTx).
  126. in src/net_processing.cpp:2901 in 9c311d0e20 outdated
    2897@@ -2994,9 +2898,7 @@ void PeerManager::ProcessMessage(CNode& pfrom, const std::string& msg_type, CDat
    2898         TxValidationState state;
    2899 
    2900         for (const GenTxid& gtxid : {GenTxid(false, txid), GenTxid(true, wtxid)}) {
    2901-            nodestate->m_tx_download.m_tx_announced.erase(gtxid.GetHash());
    2902-            nodestate->m_tx_download.m_tx_in_flight.erase(gtxid.GetHash());
    2903-            EraseTxRequest(gtxid);
    2904+            m_txrequest.ReceivedResponse(pfrom.GetId(), gtxid);
    


    jnewbery commented at 11:49 am on October 2, 2020:
    No point in calling this twice. ReceivedResponse() only cares about the hash.

    sipa commented at 5:45 pm on October 2, 2020:
    The hash is different in both calls.

    jnewbery commented at 9:32 am on October 5, 2020:

    Oops. Confused myself with the gtxid constructors. These are indeed different!

    Thanks for removing the for loop. Makes it a lot clearer what’s going on.

    It would be better if these lines were above the TxValidationState state; declaration, but that’s maybe unrelated to this PR, since the code you’re replacing should also have been above that declaration.


    sipa commented at 8:10 pm on October 5, 2020:
    Yeah, trying to minimize diff - this shouldn’t matter much.

    jnewbery commented at 9:25 am on October 6, 2020:
    Agree it doesn’t matter at all. It just offends my aesthetic sensibilities. Marking resolved.
  127. in src/net_processing.cpp:4510 in 9c311d0e20 outdated
    4513+                m_txrequest.RequestedTx(pto->GetId(), gtxid, current_time + GETDATA_TX_INTERVAL);
    4514             } else {
    4515                 // We have already seen this transaction, no need to download.
    4516-                state.m_tx_download.m_tx_announced.erase(gtxid.GetHash());
    4517-                state.m_tx_download.m_tx_in_flight.erase(gtxid.GetHash());
    4518+                m_txrequest.ForgetTxHash(gtxid.GetHash());
    


    jnewbery commented at 11:55 am on October 2, 2020:
    Testing my understanding here: do we ever expect to hit this? I think that every action that causes a tx to become AlreadyHave will also cause us to ForgetTxHash, and we won’t add an AlreadyHave tx back into TxRequestTracker. If I’m right, perhaps just add a comment here saying that we don’t expect to hit this and it’s here for belt-and-suspenders.

    sipa commented at 6:31 pm on October 2, 2020:
    Added a comment.
  128. in src/net_processing.cpp:4450 in 9c311d0e20 outdated
    4472-            const GenTxid gtxid = tx_process_time.begin()->second;
    4473-            // Erase this entry from tx_process_time (it may be added back for
    4474-            // processing at a later time, see below)
    4475-            tx_process_time.erase(tx_process_time.begin());
    4476+        for (const GenTxid& gtxid : m_txrequest.GetRequestable(pto->GetId(), current_time)) {
    4477             CInv inv(gtxid.IsWtxid() ? MSG_WTX : (MSG_TX | GetFetchFlags(*pto)), gtxid.GetHash());
    


    jnewbery commented at 12:03 pm on October 2, 2020:
    This CInv only needs to be constructed inside the !AlreadyHaveTx block. It could even be emplaced directly into vGetData.

    sipa commented at 6:31 pm on October 2, 2020:
    Indeed! Done.
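    A sketch of the resulting loop shape (simplified; AlreadyHaveTx’s exact signature and the batching of vGetData are elided):

        for (const GenTxid& gtxid : m_txrequest.GetRequestable(pto->GetId(), current_time)) {
            if (!AlreadyHaveTx(gtxid, m_mempool)) {
                // Only construct the CInv when we actually request, and emplace it
                // directly into the getdata batch.
                vGetData.emplace_back(gtxid.IsWtxid() ? MSG_WTX : (MSG_TX | GetFetchFlags(*pto)),
                                      gtxid.GetHash());
                m_txrequest.RequestedTx(pto->GetId(), gtxid.GetHash(), current_time + GETDATA_TX_INTERVAL);
            } else {
                // Not expected to be reachable in practice; belt-and-suspenders.
                m_txrequest.ForgetTxHash(gtxid.GetHash());
            }
        }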
  129. in src/txrequest.cpp:591 in 9c311d0e20 outdated
    556+        ++m_sequence;
    557+    }
    558+
    559+    //! Find the GenTxids to request now from peer.
    560+    std::vector<GenTxid> GetRequestable(uint64_t peer, std::chrono::microseconds now)
    561+    {
    


    jnewbery commented at 12:09 pm on October 2, 2020:
    This function is now on the hot path and is called for every peer on every SendMessages() loop. Have you done any profiling to see how much time we’ll spend in here on a normally loaded system? I think all of the lookups in here are O(1) in the size of the index, but is the constant factor important?

    ajtowns commented at 2:41 pm on October 2, 2020:

    m_index.get<ByX>().begin() should be O(log_2(n)), ++it should be O(1), .emplace() should be O(log_2(n)) with about 3x constant factor overhead, .modify() should be the same except maybe 6x constant factor overhead. n is limited to about 600k, so log_2(n) is below about 20. With max peers and max announcements, the entire index could be something like 100MB I think, more if you’re removing the bounds for some peers via the relay permission.

    I think worst case complexity is that in SetTimePoint you end up adjusting the state of every announcement, then you find the first BEST for the given peer, which turns out to be every tx for that peer, then iterate through the remaining BEST for that peer adding them to a vector, then sort the vector, and construct the result. I think that’s about:

    600000 * (20 + 2*20) + 20 + 5000 + 5000*12 + 5000 ~= 36M ops
    

    but assuming time doesn’t go backwards, then amortized across multiple calls to GetRequestable, each announcement only causes 7 transitions over its lifetime (delayed, ready, best (+demotion of other), requested, completed (+promotion of other)), and only 4 of those occur via GetRequestable, so amortized cost per tx is something like 4*60+20+1+1*12+1 = 274 ops, so max average cost of calling GetRequestable should be about 1.4M ops.

    On a normally loaded system, you should be getting no more than 35-70 txs from each call to GetRequestable, and I think SetTimePoint should likewise be dealing with (far) fewer than 7000 txs in each call, so load should be about 100 to 1000 times less, I think.


    sipa commented at 6:31 pm on October 2, 2020:
    I’ll do some benchmarks.

    sipa commented at 8:15 am on October 3, 2020:

    In a TxRequestTracker with N peers and 5000 (unique) txids each, in a stable state (where GetRequestable returns nothing):

    • N=10, around 50 ns per GetRequestable.
    • N=20, 60 ns
    • N=50, 80 ns
    • N=100, 90 ns
    • N=200, 180 ns
    • N=500, 230 ns
    • N=1000, 240 ns
    • N=2000, 500 ns
    • N=20000, 1500 ns

    (these numbers grow much faster than what would be expected from O(log n) growth, but I suspect memory hierarchy effects play a role)

    FWIW, calling ReceivedInv 500000 times in a row takes around 1us per call.


    sipa commented at 9:12 am on October 4, 2020:

    Another benchmark: I tweaked the unit test:

    • Run all tests 12000 times
    • Sanity checks turned off
    • Randomly removed 90% of GetRequestable calls (the tests otherwise contain a far-from-representative number of them).

    … and then timed the actual execution of the queued actions, while gathering statistics about the number of calls and the Size() of the data structure:

    • Runtime: 7.38015 s
    • Size: 620626 +- 177327
    • ReceivedInv calls: 6494712
    • RequestedTx calls: 678552
    • ReceivedResponse calls: 2664704
    • DisconnectedPeer calls: 3490732
    • ForgetTxHash calls: 145404
    • GetRequestable calls: 3581653

    So the size is relatively close to 625000 (the limit at 5000 tx/peer, 125 peers). The scenarios themselves are not representative for this, as they have far fewer transactions per peer. However, I believe there is reason to assume that the number of tx/peer doesn’t matter (except for the peak latency of DisconnectedPeer, but averaged out, it’s still amortized).

    So this means that the average time per announcement (summed over all calls affecting that announcement) in these scenarios (which may be nonrepresentative) is around 1.1 us/announcement.


    jnewbery commented at 9:56 am on October 5, 2020:

    These seem like good numbers.

    I was going to suggest moving SetTimePoint() out of GetRequestable() and only calling it once per message handler thread loop (e.g. in a global tasks function like https://github.com/bitcoin/bitcoin/pull/19364/commits/0ea9abf9b4c3694dede390b759df01c1ce0d3166), since it doesn’t need to be called in the context of an individual peer, and it doesn’t need to be called frequently. However, it doesn’t seem like there’s a good reason to do that from a performance standpoint.


    ajtowns commented at 11:04 am on October 5, 2020:

    Moving SetTimePoint() out of GetRequestable() might have better worst-case behaviour in the event of arbitrarily non-monotonic clocks. In particular, currently you could have, say, two peers, peer=1 with no announcements, and peer=2 with N announcements all with a reqtime = t. If you call GetRequestable(peer=1, now=t+1) then you’ll get an empty vector as the result, but update all N announcements to CANDIDATE_BEST; but if time goes backwards and you then call GetRequestable(peer=2, now=t-1), you’ll update all the announcements back to CANDIDATE_DELAYED, again get an empty vector, and make no progress. If you did SetTimePoint() first, then called GetRequestable() for each peer without updating the timepoint, you’d either have now=t-1 and not do any work, or you’d have now=t+1 and would make forward progress. I think the constraint would be “call SetTimePoint, then for each peer call GetRequestable, and for each resulting tx, call RequestedTx for it” with the result being something along the lines of “amortized time complexity is O(log(max_concurrent_announcements))”.

    FWIW, I think something along those lines is worth considering as a post-merge TODO – perhaps as part of doing a fuzz tester for how net_processing uses txrequest or something, but don’t think there is a need to hold things up for it.


    sipa commented at 6:39 pm on October 5, 2020:

    Hmm, that’s interesting. Before the RequestedTx change I’d be hesitant about such a change, as it’d make specifying the conditions under which it can be called even more cumbersome. There isn’t much concern about that anymore, though. @ajtowns Right, or more generally: if your clock is arbitrarily jittery, increasing the frequency of SetTimePoint calls will result in up to proportionally more work.

    There may be some impact on how strong the “first non-delayed announcement” effect is, though (which was the reason for dropping the “first” marker rule): if SetTimePoint is only called once per loop, the request will go to any of the peers that announced a txhash in one loop, rather than the actual first one. This may need some investigation.

    So indeed, I’d keep this for a follow-up.


    jnewbery commented at 9:22 am on October 6, 2020:
    Marking as resolved.
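    A sketch of the follow-up shape being deferred here (hypothetical; SetTimePoint is an internal method at this point, so exposing it this way, and dropping GetRequestable’s time argument, are assumed API changes):

        // Advance the tracker's clock once per message-handler loop...
        m_txrequest.SetTimePoint(now);
        for (CNode* pto : nodes) {
            // ...then query each peer against that single, fixed time point.
            for (const GenTxid& gtxid : m_txrequest.GetRequestable(pto->GetId())) {
                // request gtxid from pto, as elsewhere in SendMessages()
            }
        }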
  130. in src/txrequest.cpp:64 in 9c311d0e20 outdated
    59+    /** For CANDIDATE_{DELAYED,BEST,READY} the reqtime; for REQUESTED the expiry. */
    60+    std::chrono::microseconds m_time;
    61+    /** What peer the request was from. */
    62+    const uint64_t m_peer;
    63+    /** What sequence number this announcement has. */
    64+    const uint64_t m_sequence : 59;
    


    jnewbery commented at 12:13 pm on October 2, 2020:
    If you don’t change the m_peer to be NodeId, consider making a typedef SeqNo for the sequence number, so that it’s obvious which types are using sequence numbers and which are using peer ids.

    sipa commented at 6:32 pm on October 2, 2020:
    Added a SequenceNumber and Priority type alias for uint64_t.
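    The aliases as added (a sketch of the declarations; the comments are mine):

        using SequenceNumber = uint64_t; //!< Announcement order; GetRequestable output is sorted by it
        using Priority = uint64_t;       //!< Output of the priority computer; only meaningful for CANDIDATE_READY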
  131. in src/txrequest.cpp:566 in 9c311d0e20 outdated
    561+    {
    562+        // Move time.
    563+        SetTimePoint(now);
    564+
    565+        // Find all CANDIDATE_BEST entries for this peer.
    566+        std::vector<std::pair<uint64_t, const Entry*>> selected;
    


    jnewbery commented at 12:17 pm on October 2, 2020:
    Is there a reason not to just construct the GenTxids directly into selected?

    ajtowns commented at 2:46 pm on October 2, 2020:
    It needs to be sorted by sequence before being returned to ensure txs are requested in order of announcement, and the sequence isn’t needed in the return value, so having a temporary vector’s sensible, I think. And sorting a vector of 16-byte seq/pointer pairs should be more cache efficient than for 48-byte seq/is_wtxid/uint256 tuples (or 40 bytes if you combined is_wtxid into seq).

    sipa commented at 6:33 pm on October 2, 2020:
    Yeah, the reason is that we want to sort by sequence number, but those aren’t included in the output itself, so we need some kind of proxy. It could be a list of (sequence, gtxid) pairs that are sorted, but that would still require an extraction step to convert it to just gtxids. This seems simplest.

    jnewbery commented at 9:17 am on October 5, 2020:
    Yeah, I understood the sorting step, and was just wondering why not create a vector of (seq, gtxid) pairs. As you point out, you’d still need an extraction (copy) step afterwards, so that seems less efficient.
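    Concretely, a sketch of the sort-then-extract tail of GetRequestable (field names approximate):

        // `selected` holds (sequence, announcement*) pairs; sorting the pairs orders
        // them by announcement sequence, and only the GenTxids are copied out.
        std::sort(selected.begin(), selected.end());
        std::vector<GenTxid> ret;
        ret.reserve(selected.size());
        for (const auto& item : selected) {
            ret.emplace_back(item.second->m_is_wtxid, item.second->m_txhash);
        }
        return ret;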
  132. in src/txrequest.cpp:56 in 9c311d0e20 outdated
    51+    /** An invalid State value that's larger than all valid ones. */
    52+    TOO_LARGE,
    53+};
    54+
    55+/** An announcement entry. This is the data we track for each txid or wtxid that is announced to us. */
    56+struct Entry {
    


    jnewbery commented at 12:24 pm on October 2, 2020:
    Are “entry” and “announcement” synonymous? If so, would it make sense to rename Entry to Announcement and drop the “entry” terminology?

    amitiuttarwar commented at 2:35 pm on October 2, 2020:
    +1 was wondering the same

    sipa commented at 6:35 pm on October 2, 2020:

    Done, renamed a few things:

    • Entry -> Announcement
    • Entry{TxHash,Peer,Time} -> By{TxHash,Peer,Time}View
    • Entry{TxHash,Peer,Time}Extractor -> By{TxHash,Peer,Time}ViewExtractor

    No more “entr” anywhere in txrequest (I did not make the same change in the fuzz test, though).

  133. jnewbery commented at 12:26 pm on October 2, 2020: member

    I’ve reviewed the implementation. Haven’t yet looked at the sanity checkers, tests or fuzzers.

    No real issues found, just a bunch of style suggestions.

  134. in src/txrequest.h:33 in 9c311d0e20 outdated
    27+ * - What the earliest permitted time is that that transaction can be requested from that peer (called "reqtime").
    28+ * - Whether it's from a "preferred" peer or not. Which announcements get this flag is determined by the caller, but
    29+ *   this is designed for outbound peers, or other peers that we have a higher level of trust in. Even when the
    30+ *   peers' preferredness changes, the preferred flag of existing announcements from that peer won't change.
    31+ * - Whether or not the transaction was requested already, and if so, when it times out (called "expiry").
    32+ * - Whether or not the transaction request failed already (timed out, or NOTFOUND was received).
    


    sr-gi commented at 1:35 pm on October 2, 2020:
    Is there any specific flag for this? I guess it is implicitly stored in m_state, but alongside any other state the transaction can be in.

    sipa commented at 6:36 pm on October 2, 2020:
    State::COMPLETED. That doesn’t just cover failure, but there is no observable difference between failed and otherwise completed announcements.
  135. in src/txrequest.cpp:597 in 9c311d0e20 outdated
    592+            entry.SetState(State::REQUESTED);
    593+            entry.m_time = expiry;
    594+        });
    595+
    596+        // Update the per-txhash data (of the last Entry for this txhash) to reflect that new ones are no longer
    597+        // eligible for the "first" marker.
    


    sr-gi commented at 1:35 pm on October 2, 2020:
    There is no longer a “first” marker for the entries, is there?

    sipa commented at 6:36 pm on October 2, 2020:
    Gone.
  136. sr-gi commented at 1:39 pm on October 2, 2020: none
    I still need to go way deeper into this. A couple of small comments so far.
  137. in src/test/txrequest_tests.cpp:63 in 9c311d0e20 outdated
    58+    std::string m_testname;
    59+
    60+public:
    61+    Scenario(Runner& runner, std::chrono::microseconds starttime) : m_runner(runner)
    62+    {
    63+        m_now = starttime + RandomTime() + RandomTime() + RandomTime();
    


    ariard commented at 3:29 pm on October 2, 2020:
    If you assume the returned 23-bit integers (RandomTime()) are statistically uniform across samples, I don’t understand why adding them increases randomness?

    sipa commented at 2:04 am on October 3, 2020:
    The sum of 3 uniformly random values is not uniformly random. I think the sum matches better with the distribution of timestamps that end up being generated during the scenarios (which are also sums of uniformly random values).
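    For intuition (standard probability, not from the thread): if X1, X2, X3 are i.i.d. uniform on [0, 1), their sum follows the Irwin–Hall distribution with density

        f(x) = x^2 / 2                 for 0 <= x <= 1
        f(x) = (-2x^2 + 6x - 3) / 2    for 1 <= x <= 2
        f(x) = (3 - x)^2 / 2           for 2 <= x <= 3

    which is bell-shaped with its peak at x = 3/2 rather than flat; rescaling to RandomTime()’s 23-bit range changes only the units, not the shape.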
  138. in src/test/txrequest_tests.cpp:511 in 9c311d0e20 outdated
    506+
    507+    Runner runner;
    508+    auto starttime = RandomTime(44);
    509+    // Construct many scenarios, and run (up to) 10 randomly-chosen tests consecutively in each.
    510+    while (builders.size()) {
    511+        Scenario scenario(runner, starttime);
    


    ariard commented at 3:39 pm on October 2, 2020:
    You may draw a new starttime for every new scenario, thus increasing the space of starting times covered by each TestInterleavedScenarios?

    sipa commented at 2:07 am on October 3, 2020:
    Some variation in start time was achieved in the Scenario constructor. I’ve moved it here.
  139. in src/test/txrequest_tests.cpp:490 in 9c311d0e20 outdated
    433+}
    434+
    435+/** Add to scenario a test thats verifies behavior related to both txid and wtxid with the same
    436+    hash being announced.
    437+*/
    438+void BuildWtxidTest(Scenario& scenario, int config)
    


    ariard commented at 4:31 pm on October 2, 2020:
    Does this test check that the non-preferred announcement is requested after expiration of the first one, to verify that wtxidness doesn’t interfere with promotion of the remaining CANDIDATEs to _BEST?

    sipa commented at 2:23 am on October 3, 2020:

    I assume not, as this test only looks at what is being requested.

    Can you write out the actual scenario you have in mind (or even better, write a commit that adds it)?


    sipa commented at 5:58 pm on October 6, 2020:
    Marking this as resolved, as @ariard wrote a test that was squashed in.
  140. in src/txrequest.cpp:282 in 9c311d0e20 outdated
    277+
    278+/** Actual implementation for TxRequestTracker's data structure. */
    279+class TxRequestTracker::Impl {
    280+    //! The current sequence number. Increases for every announcement. This is used to sort txhashes returned by
    281+    //! GetRequestable in announcement order.
    282+    uint64_t m_sequence{0};
    


    ariard commented at 4:50 pm on October 2, 2020:
    What about m_cur_sequence to dissociate it clearly from the per-Entry m_sequence? Also, the comment could be clearer that the requests are ordered per-peer, not globally. I had a doubt while reviewing BuildRequestOrderTest.

    sipa commented at 1:58 am on October 3, 2020:
    Done.
  141. in src/test/txrequest_tests.cpp:377 in 9c311d0e20 outdated
    316+    // We will have N peers announce the same transaction.
    317+    std::map<uint64_t, bool> preferred;
    318+    std::vector<uint64_t> pref_order, npref_order;
    319+    std::vector<uint64_t> pref_peers, npref_peers;
    320+    int num_pref = InsecureRandRange(peers + 1) ; // Some preferred, ...
    321+    int num_npref = peers - num_pref; // some not preferred.
    


    ariard commented at 4:54 pm on October 2, 2020:
    Maybe add a config where they’re all preferred/non-preferred? InsecureRandRange is really unlikely to return the min and max values when you have 7 peers?

    sipa commented at 1:58 am on October 3, 2020:
    Not really, one in 8. Given that every number of peers runs 30 times, I think that’s plenty.
  142. in src/test/txrequest_tests.cpp:350 in 9c311d0e20 outdated
    345+
    346+    // Determine the announcement order randomly.
    347+    std::vector<uint64_t> announce_order = request_order;
    348+    Shuffle(announce_order.begin(), announce_order.end(), g_insecure_rand_ctx);
    349+
    350+    // Find a gtxid that has a prioritization consistent with the required pref_order and npref_order.
    


    ariard commented at 5:17 pm on October 2, 2020:
    If I understand NewTxHash correctly, it will return a uint256 which is guaranteed to respect the priority order of peers for both the preferred and non-preferred classes? And those NewTxHash checks are needed because the peer identifier is part of the siphash message. It’s more that we find a txhash rather than a gtxid, as wtxidness shouldn’t interfere with priority.

    sipa commented at 1:57 am on October 3, 2020:
    Yeah, I’ve added some comments.
  143. in src/test/txrequest_tests.cpp:353 in 9c311d0e20 outdated
    348+    Shuffle(announce_order.begin(), announce_order.end(), g_insecure_rand_ctx);
    349+
    350+    // Find a gtxid that has a prioritization consistent with the required pref_order and npref_order.
    351+    auto gtxid = scenario.NewGTxid({pref_order, npref_order});
    352+
    353+    // Decide reqtimes in opposite order of the eventual request order. So every time time jumps
    


    ariard commented at 5:32 pm on October 2, 2020:

    “Decide reqtimes in opposite order of the expected request order, which is a function of the announcement order and peer preferredness”. Clearer to underscore there are two orders, and you’re deliberately tweaking the second one.

    Maybe, “The lowest priority peer will get the soonest reqtime. It will be the to-be-requested-from peer until the time (Scenario.m_now) is jumped above the reqtime of the next priority peer.”


    sipa commented at 2:23 am on October 3, 2020:
    I’ve changed the comments here a bit.
  144. in src/test/txrequest_tests.cpp:370 in 9c311d0e20 outdated
    365+    }
    366+    for (const auto peer : announce_order) {
    367+        scenario.Check(peer, {}, 1, 0, 0, "b1");
    368+    }
    369+
    370+    // Let time pass and observe the to-be-requested-from peer change.
    


    ariard commented at 5:44 pm on October 2, 2020:
    “We pin back current time under checked peer reqtime. We observe it’s not the current to-be-requested-from peer. We advance forward current time beyond checked peer reqtime. We observe it’s henceforth the new to-be-requested-from peer”

    sipa commented at 6:40 pm on October 5, 2020:
    I’ve changed some comments here, let me know if it’s better.
  145. in src/test/txrequest_tests.cpp:379 in 9c311d0e20 outdated
    374+        scenario.AdvanceTime(MICROSECOND);
    375+        scenario.Check(request_order[i], {gtxid}, 1, 0, 0, "b3");
    376+    }
    377+
    378+    // Peers now in random order go offline, or send NOTFOUNDs. Observe the to-be-requested-peer
    379+    // change whenever the previous best one disappears.
    


    ariard commented at 5:46 pm on October 2, 2020:
    “Observe the to-be-requested peer change for the remaining peer with the highest priority”

    sipa commented at 6:40 pm on October 5, 2020:
    I’ve changed some comments.
  146. in src/test/txrequest_tests.cpp:265 in 9c311d0e20 outdated
    260+    auto gtxid = prio1 ? scenario.NewGTxid({{peer1, peer2}}) : scenario.NewGTxid({{peer2, peer1}});
    261+    bool pref1 = config & 2, pref2 = config & 4;
    262+
    263+    scenario.ReceivedInv(peer1, gtxid, pref1, MIN_TIME);
    264+    scenario.Check(peer1, {gtxid}, 1, 0, 0, "p1");
    265+    if (InsecureRandBool()) scenario.AdvanceTime(RandomTime());
    


    ariard commented at 5:55 pm on October 2, 2020:
    This is just to test that advancing time doesn’t change the “requestability” of the transaction without a call to RequestedTx?

    sipa commented at 1:57 am on October 3, 2020:
    Yes.
  147. in src/test/txrequest_tests.cpp:297 in 9c311d0e20 outdated
    292+    if (config & 16) {
    293+        scenario.DisconnectedPeer(priopeer);
    294+    } else {
    295+        scenario.ReceivedResponse(priopeer, gtxid);
    296+    }
    297+    if (InsecureRandBool()) scenario.AdvanceTime(RandomTime());
    


    ariard commented at 6:04 pm on October 2, 2020:
    I think advancing time should be a config alternative of its own; otherwise, how can you dissociate the state transition REQUESTED -> COMPLETED triggered by ReceivedResponse/DisconnectedPeer from one triggered by GetRequestable?

    sipa commented at 1:57 am on October 3, 2020:
    Turned it into a config bit.
  148. in src/test/txrequest_tests.cpp:276 in 9c311d0e20 outdated
    215+        auto expiry = RandomTime();
    216+        scenario.Check(peer, {gtxid}, 1, 0, 0, "s5");
    217+        scenario.RequestedTx(peer, gtxid, scenario.Now() + expiry);
    218+        scenario.Check(peer, {}, 0, 1, 0, "s6");
    219+
    220+        if ((config >> 3) == 1) { // The request will time out
    


    ariard commented at 6:07 pm on October 2, 2020:
    I think using an AND here will get you higher coverage of this case?

    sipa commented at 1:39 am on October 3, 2020:
    Not sure what you mean here. There are four paths through the cascade of ifs here and each value of config >> 3 selects one of them.
  149. in src/test/txrequest_tests.cpp:261 in 9c311d0e20 outdated
    232+                return;
    233+            }
    234+        }
    235+    }
    236+
    237+    scenario.AdvanceTime(RandomTime());
    


    ariard commented at 6:08 pm on October 2, 2020:
    Isn’t advancing the time blurring the subsequent observation that the transaction state has moved, compared to only relying on DisconnectedPeer/ForgetTxHash?

    sipa commented at 8:11 pm on October 5, 2020:
    I’ve made it conditional with an if InsecureRandBool() around it.
  150. ariard commented at 6:10 pm on October 2, 2020: member
    Reviewed the unit tests; apart from a few questions, they seem to enforce what they announce to verify.
  151. sipa force-pushed on Oct 2, 2020
  152. sipa force-pushed on Oct 3, 2020
  153. sipa force-pushed on Oct 3, 2020
  154. in src/txrequest.cpp:602 in 6fa4512fb4 outdated
    597+            // returned by GetRequestable always correspond to CANDIDATE_BEST announcements).
    598+
    599+            it = m_index.get<ByPeer>().find(ByPeerView{peer, false, txhash});
    600+            if (it == m_index.get<ByPeer>().end() || (it->GetState() != State::CANDIDATE_DELAYED &&
    601+                it->GetState() != State::CANDIDATE_READY)) {
    602+                // The txhash was not tracked for this peer, so we have nothing to do. The caller should have called
    


    jnewbery commented at 9:37 am on October 5, 2020:
    I think we’ll also enter this branch if there’s an announcement for this peer/txhash that’s REQUESTED or COMPLETED. If that’s true, I think we should update the comment to “The txhash was not tracked for this peer or has already been requested from this peer…”

    sipa commented at 8:11 pm on October 5, 2020:
    Agreed. I’ve rewritten this a bit.
  155. in src/txrequest.cpp:339 in 6fa4512fb4 outdated
    334+
    335+            // Looking up the last ByTxHash announcement with the given txhash must return an Announcement with that
    336+            // txhash or the multi_index is very bad.
    337+            auto it_last = std::prev(m_index.get<ByTxHash>().lower_bound(
    338+                ByTxHashView{item.first, State::TOO_LARGE, 0}));
    339+            assert(it_last != m_index.get<ByTxHash>().end() && it_last->m_txhash == item.first);
    


    jnewbery commented at 10:20 am on October 5, 2020:
    0            assert(it_last != m_index.get<ByTxHash>().begin() && it_last->m_txhash == item.first);
    

    it_last is the return value of a std::prev() call, so I think by definition it can’t be the end iterator.

    Alternatively you could do something like:

    0            auto it_last = m_index.get<ByTxHash>().upper_bound(
    1                ByTxHashView{item.first, State::COMPLETED, <max uint256>});
    2            assert(it_last != m_index.get<ByTxHash>().end() && it_last->m_txhash == item.first);
    3            auto it_next = std::next(it_last);
    4            assert(it_next == m_index.get<ByTxHash>().end() || it_next->m_txhash != item.first);
    

    (which would remove the need for a dummy TOO_LARGE entry in the State enum).


    ajtowns commented at 12:33 pm on October 5, 2020:

    ByTxHashView takes the uint256 as the first param, the last one is just a uint64_t Priority, so replacing with COMPLETED, std::numeric_limits<Priority>::max() should be straightforward (1 would also be fine, since COMPLETED means priority is 0). Seems to pass the fuzz tester okay. Adding a static constexpr auto PRIORITY_MAX alias and using it for m_priority_best_candidate_ready as well may make sense.

    But… is this test even useful? it_last was needed in order to sanity check the first markers, but with them gone now…


    sipa commented at 8:12 pm on October 5, 2020:
    I’ve just deleted this (as well as the TOO_LARGE enum value). It was only needed when the SanityCheck code actually needed it_last for the per-txhash flags which are gone now.
  156. in src/net_processing.cpp:755 in 6fa4512fb4 outdated
    796-    return process_time;
    797-}
    798+} // namespace
    799 
    800-void RequestTx(CNodeState* state, const GenTxid& gtxid, std::chrono::microseconds current_time) EXCLUSIVE_LOCKS_REQUIRED(cs_main)
    801+void PeerManager::RequestTx(const CNode& node, const GenTxid& gtxid, std::chrono::microseconds current_time)
    


    jnewbery commented at 10:39 am on October 5, 2020:
    I think AddTxAnnouncement or similar would be a better name for this function. Calling this function does not imply that we’ll ever request the transaction from this peer.

    sipa commented at 8:12 pm on October 5, 2020:
    Ok, renamed.
  157. in src/net_processing.cpp:774 in 6fa4512fb4 outdated
    828+    //   - TXID_RELAY_DELAY for txid announcements while wtxid peers are available
    829+    //   - OVERLOADED_PEER_TX_DELAY for announcements from peers which have at least
    830+    //     MAX_PEER_TX_IN_FLIGHT requests in flight (and don't have PF_RELAY).
    831+    auto delay = std::chrono::microseconds{0};
    832+    bool preferred = state->fPreferredDownload;
    833+    bool overloaded = !node.HasPermission(PF_RELAY) && m_txrequest.CountInFlight(nodeid) >= MAX_PEER_TX_IN_FLIGHT;
    


    jnewbery commented at 10:43 am on October 5, 2020:

    Move this two lines down (so it’s immediately above the one place where overloaded is used).

    Could also mark these two bools as const, but meh.


    sipa commented at 8:12 pm on October 5, 2020:
    Done, also the meh.
  158. in src/net_processing.cpp:3694 in 6fa4512fb4 outdated
    3691-        if (vInv.size() <= MAX_PEER_TX_IN_FLIGHT + MAX_BLOCKS_IN_TRANSIT_PER_PEER) {
    3692+        if (vInv.size() <= MAX_PEER_TX_ANNOUNCEMENTS + MAX_BLOCKS_IN_TRANSIT_PER_PEER) {
    3693             for (CInv &inv : vInv) {
    3694                 if (inv.IsGenTxMsg()) {
    3695                     // If we receive a NOTFOUND message for a txid we requested, erase
    3696                     // it from our data structures for this peer.
    


    jnewbery commented at 10:46 am on October 5, 2020:
    This comment is now wrong.

    sipa commented at 8:13 pm on October 5, 2020:
    Fixed.
  159. in src/net_processing.cpp:3710 in 6fa4512fb4 outdated
    3684@@ -3772,22 +3685,14 @@ void PeerManager::ProcessMessage(CNode& pfrom, const std::string& msg_type, CDat
    3685     if (msg_type == NetMsgType::NOTFOUND) {
    3686         // Remove the NOTFOUND transactions from the peer
    3687         LOCK(cs_main);
    3688-        CNodeState *state = State(pfrom.GetId());
    3689         std::vector<CInv> vInv;
    3690         vRecv >> vInv;
    3691-        if (vInv.size() <= MAX_PEER_TX_IN_FLIGHT + MAX_BLOCKS_IN_TRANSIT_PER_PEER) {
    3692+        if (vInv.size() <= MAX_PEER_TX_ANNOUNCEMENTS + MAX_BLOCKS_IN_TRANSIT_PER_PEER) {
    


    jnewbery commented at 10:54 am on October 5, 2020:

    A peer can cause us to do up to 5016 calls into ReceivedResponse() per notfound message, where each ReceivedResponse() call results in two find() operations on an index that can hold up to 625k announcements. Is that likely to be costly?

    Previously, the limit on notfound processing was 116 find() calls into a map of max size 100.


    ajtowns commented at 5:08 pm on October 5, 2020:
    Could just send 44 NOTFOUND messages with 116 entries instead of one with 5016 for almost exactly the same overhead; and they should be plenty fast anyway, I think, especially compared to looking up the UTXO database when we receive a transaction. Since we’ve increased the in flight limit, I think it makes sense to increase the notfound limit correspondingly.
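
    To spell out the arithmetic: ceil(5016 / 116) = 44 messages, which together carry 44 × 116 = 5104 entries, i.e. within 2% of the single 5016-entry message.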

    sipa commented at 7:10 pm on October 5, 2020:

    Right, I don’t think this is a DoS vector as it’s fairly cheap per byte compared to many other requests.

    The only concern would be latency, e.g. might it cause a significant interruption of a net processing loop to process 5016 NOTFOUNDs at once? I think not, but if it does, we can use the split processing technique used for getdata and orphan processing, independent of what the limit here is.


    jnewbery commented at 9:19 am on October 6, 2020:
    Yes, seems reasonable. Perhaps at some point in the future we might also want to have a return value from ReceivedResponse() so net_processing can know whether the peer is sending spurious notfound entries.
  160. in src/net_processing.cpp:3686 in 6fa4512fb4 outdated
    3684@@ -3772,22 +3685,14 @@ void PeerManager::ProcessMessage(CNode& pfrom, const std::string& msg_type, CDat
    3685     if (msg_type == NetMsgType::NOTFOUND) {
    3686         // Remove the NOTFOUND transactions from the peer
    


    jnewbery commented at 10:56 am on October 5, 2020:
    Delete this comment.

    sipa commented at 8:13 pm on October 5, 2020:

    What a wasted opportunity to say “Remove the STALE comment from the pr” instead.

    Done.

  161. in src/net_processing.cpp:3687 in 6fa4512fb4 outdated
    3684@@ -3772,22 +3685,14 @@ void PeerManager::ProcessMessage(CNode& pfrom, const std::string& msg_type, CDat
    3685     if (msg_type == NetMsgType::NOTFOUND) {
    3686         // Remove the NOTFOUND transactions from the peer
    3687         LOCK(cs_main);
    


    jnewbery commented at 10:56 am on October 5, 2020:
    It doesn’t make much difference, but this cs_main scope can now move down into the if block.

    sipa commented at 8:14 pm on October 5, 2020:
    Done. I haven’t moved it into the loop as I expect common iterations of the loop to be faster or at least in the same order of magnitude as grabbing a lock.
  162. in src/txrequest.cpp:157 in a60ad12fb9 outdated
    152+    {
    153+        return ByPeerView{ann.m_peer, ann.GetState() == State::CANDIDATE_BEST, ann.m_txhash};
    154+    }
    155+};
    156+
    157+// The ByTxHash index is sorted by (txhash, state, priority [CANDIDATE_READY]; 0 [otherwise])
    


    ajtowns commented at 11:19 am on October 5, 2020:
    (txhash, state, priority) and “Note, priority == 0 whenever state != CANDIDATE_READY” below might be clearer.

    sipa commented at 8:15 pm on October 5, 2020:
    Done, that’s more readable indeed.
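
    Presumably ending up along the lines of:

        // The ByTxHash index is sorted by (txhash, state, priority).
        //
        // Note: priority == 0 whenever state != CANDIDATE_READY.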
  163. in src/txrequest.cpp:425 in a60ad12fb9 outdated
    290+
    291+    //! Change the state of an announcement to something non-IsSelected(). If it was IsSelected(), the next best
    292+    //! announcement will be marked CANDIDATE_BEST.
    293+    void ChangeAndReselect(Iter<ByTxHash> it, State new_state)
    294+    {
    295+        assert(it != m_index.get<ByTxHash>().end());
    


    ajtowns commented at 11:35 am on October 5, 2020:
    assert(new_state == COMPLETED || new_state == CANDIDATE_DELAYED); ? The existing assert(!it->IsSelected()); at the end would also allow new_state == CANDIDATE_READY which I don’t think would be handled correctly (in that if it were the first READY, it should be assigned BEST but will not be).

    sipa commented at 8:17 pm on October 5, 2020:

    Done.

    It’s borderline, I think: you could read the comments for this function as claiming nothing more than changing the state of the passed iterator to the passed value - the guarantees are only about other announcements. Still, better to be explicit, so I added an assert.

  164. in src/txrequest.cpp:51 in a60ad12fb9 outdated
    45+    /** A REQUESTED announcement. */
    46+    REQUESTED,
    47+    /** A CANDIDATE announcement that's not CANDIDATE_DELAYED or CANDIDATE_BEST. */
    48+    CANDIDATE_READY,
    49+    /** A COMPLETED announcement. */
    50+    COMPLETED,
    


    ajtowns commented at 12:40 pm on October 5, 2020:

    I think the constraints on the ordering here are:

    • COMPLETED comes last, so that looping it = Erase<ByTxHash>(it) starting from the only non-completed entry for a txhash will erase all the entries for that txhash
    • BEST and REQUESTED are next to each other so that it’s easy to enforce the “only one best or requestable per txhash”
    • BEST/REQUESTED comes immediately before/after READY so that it’s easy to compare a new best READY with the existing BEST

    But I think there’s no longer anything preventing changing the ordering to match the expected transitions, i.e. DELAYED, READY, BEST, REQUESTED, COMPLETED (and changing the best priority to be the highest value rather than the lowest). I’m presuming that’s due to no longer doing first markers, but maybe I’ve missed something?


    sipa commented at 8:18 pm on October 5, 2020:
    That would be a more logical ordering. Let me try what would be needed to make this work later.

    sipa commented at 6:57 pm on October 6, 2020:

    This was indeed not a problem at all. I’ve applied the following diff:

    diff --git a/src/test/fuzz/txrequest.cpp b/src/test/fuzz/txrequest.cpp
    index 0ff00d23e0..ff32de25eb 100644
    --- a/src/test/fuzz/txrequest.cpp
    +++ b/src/test/fuzz/txrequest.cpp
    @@ -129,7 +129,7 @@ class Tester
                 if (ann.m_state == State::REQUESTED) return -1;
                 // If it's a viable candidate, see if it has lower priority than the best one so far.
                 if (ann.m_state == State::CANDIDATE && ann.m_time <= m_now) {
    -                if (ret == -1 || ann.m_priority < ret_priority) {
    +                if (ret == -1 || ann.m_priority > ret_priority) {
                         std::tie(ret, ret_priority) = std::tie(peer, ann.m_priority);
                     }
                 }
    @@ -252,7 +252,7 @@ public:
                 }
                 // And delete txids with only COMPLETED announcements left.
                 Cleanup(txhash);
    -            // CANDIDATEs for which this announcement has the lowest priority get returned.
    +            // CANDIDATEs for which this announcement has the highest priority get returned.
                 const Announcement& ann = m_announcements[txhash][peer];
                 if (ann.m_state == State::CANDIDATE && GetSelected(txhash) == peer) {
                     result.emplace_back(ann.m_sequence, txhash, ann.m_is_wtxid);
    diff --git a/src/test/txrequest_tests.cpp b/src/test/txrequest_tests.cpp
    index a84dfe67f9..39a4f86ea8 100644
    --- a/src/test/txrequest_tests.cpp
    +++ b/src/test/txrequest_tests.cpp
    @@ -162,8 +162,8 @@ public:
         /** Generate a random txhash, whose priorities for certain peers are constrained.
          *
          * For example, NewTxHash({{p1,p2,p3},{p2,p4,p5}}) will generate a txhash T such that both:
    -     *  - priority(p1,T) < priority(p2,T) < priority(p3,T)
    -     *  - priority(p2,T) < priority(p4,T) < priority(p5,T)
    +     *  - priority(p1,T) > priority(p2,T) > priority(p3,T)
    +     *  - priority(p2,T) > priority(p4,T) > priority(p5,T)
          * where priority is the predicted internal TxRequestTracker's priority, assuming all announcements
          * are within the same preferredness class.
          */
    @@ -178,7 +178,7 @@ public:
                     for (size_t pos = 1; pos < order.size(); ++pos) {
                         uint64_t prio_prev = m_runner.txrequest.ComputePriority(ret, order[pos - 1], true);
                         uint64_t prio_cur = m_runner.txrequest.ComputePriority(ret, order[pos], true);
    -                    if (prio_prev >= prio_cur) {
    +                    if (prio_prev <= prio_cur) {
                             ok = false;
                             break;
                         }
    diff --git a/src/txrequest.cpp b/src/txrequest.cpp
    index d07a5203cd..a5455e2305 100644
    --- a/src/txrequest.cpp
    +++ b/src/txrequest.cpp
    @@ -38,14 +38,14 @@ namespace {
     enum class State : uint8_t {
         /** A CANDIDATE announcement whose reqtime is in the future. */
         CANDIDATE_DELAYED,
    +    /** A CANDIDATE announcement that's not CANDIDATE_DELAYED or CANDIDATE_BEST. */
    +    CANDIDATE_READY,
         /** The best CANDIDATE for a given txhash; only if there is no REQUESTED announcement already for that txhash.
    -     *  The CANDIDATE_BEST is the lowest-priority announcement among all CANDIDATE_READY (and _BEST) ones for that
    +     *  The CANDIDATE_BEST is the highest-priority announcement among all CANDIDATE_READY (and _BEST) ones for that
          *  txhash. */
         CANDIDATE_BEST,
         /** A REQUESTED announcement. */
         REQUESTED,
    -    /** A CANDIDATE announcement that's not CANDIDATE_DELAYED or CANDIDATE_BEST. */
    -    CANDIDATE_READY,
         /** A COMPLETED announcement. */
         COMPLETED,
     };
    @@ -106,7 +106,7 @@ using Priority = uint64_t;
     
     /** A functor with embedded salt that computes priority of an announcement.
      *
    - * Lower priorities are selected first.
    + * Higher priorities are selected first.
      */
     class PriorityComputer {
         const uint64_t m_k0, m_k1;
    @@ -118,7 +118,7 @@ public:
         Priority operator()(const uint256& txhash, NodeId peer, bool preferred) const
         {
             uint64_t low_bits = CSipHasher(m_k0, m_k1).Write(txhash.begin(), txhash.size()).Write(peer).Finalize() >> 1;
    -        return low_bits | uint64_t{!preferred} << 63;
    +        return low_bits | uint64_t{preferred} << 63;
         }
     
         Priority operator()(const Announcement& ann) const
    @@ -244,10 +244,10 @@ struct TxHashInfo
         size_t m_candidate_best = 0;
         //! Number of REQUESTED announcements for this txhash.
         size_t m_requested = 0;
    -    //! The priority of the CANDIDATE_BEST announcement if one exists, or min() otherwise.
    -    Priority m_priority_candidate_best = std::numeric_limits<Priority>::min();
    -    //! The lowest priority of all CANDIDATE_READY announcements (or max() if none exist).
    -    Priority m_priority_best_candidate_ready = std::numeric_limits<Priority>::max();
    +    //! The priority of the CANDIDATE_BEST announcement if one exists, or max() otherwise.
    +    Priority m_priority_candidate_best = std::numeric_limits<Priority>::max();
    +    //! The lowest priority of all CANDIDATE_READY announcements (or min() if none exist).
    +    Priority m_priority_best_candidate_ready = std::numeric_limits<Priority>::min();
         //! All peers we have an announcement for this txhash for.
         std::vector<NodeId> m_peers;
     };
    @@ -288,7 +288,7 @@ std::map<uint256, TxHashInfo> ComputeTxHashInfo(const Index& index, const Priori
                 info.m_priority_candidate_best = computer(ann);
             }
             if (ann.GetState() == State::CANDIDATE_READY) {
    -            info.m_priority_best_candidate_ready = std::min(info.m_priority_best_candidate_ready, computer(ann));
    +            info.m_priority_best_candidate_ready = std::max(info.m_priority_best_candidate_ready, computer(ann));
             }
             // Also keep track of which peers this txhash has an announcement for (so we can detect duplicates).
             info.m_peers.push_back(ann.m_peer);
    @@ -341,7 +341,7 @@ public:
                 // If there is both a CANDIDATE_READY and a CANDIDATE_BEST announcement, the CANDIDATE_BEST one must be
                 // at least as good (equal or lower priority) as the best CANDIDATE_READY.
                 if (info.m_candidate_ready && info.m_candidate_best) {
    -                assert(info.m_priority_candidate_best <= info.m_priority_best_candidate_ready);
    +                assert(info.m_priority_candidate_best >= info.m_priority_best_candidate_ready);
                 }
     
                 // No txhash can have been announced by the same peer twice.
    @@ -399,21 +399,21 @@ private:
             // Convert CANDIDATE_DELAYED to CANDIDATE_READY first.
             Modify<ByTxHash>(it, [](Announcement& ann){ ann.SetState(State::CANDIDATE_READY); });
             // The following code relies on the fact that the ByTxHash is sorted by txhash, and then by state (first
    -        // _DELAYED, then _BEST/REQUESTED, then _READY). Within the _READY announcements, the best one (lowest
    -        // priority) comes first. Thus, if an existing _BEST exists for the same txhash that this announcement may
    -        // be preferred over, it must immediately precede the newly created _READY.
    -        if (it == m_index.get<ByTxHash>().begin() || std::prev(it)->m_txhash != it->m_txhash ||
    -            std::prev(it)->GetState() == State::CANDIDATE_DELAYED) {
    +        // _DELAYED, then _READY, then _BEST/REQUESTED). Within the _READY announcements, the best one (highest
    +        // priority) comes last. Thus, if an existing _BEST exists for the same txhash that this announcement may
    +        // be preferred over, it must immediately follow the newly created _READY.
    +        auto it_next = std::next(it);
    +        if (it_next == m_index.get<ByTxHash>().end() || it_next->m_txhash != it->m_txhash ||
    +            it_next->GetState() == State::COMPLETED) {
                 // This is the new best CANDIDATE_READY, and there is no IsSelected() announcement for this txhash
                 // already.
                 Modify<ByTxHash>(it, [](Announcement& ann){ ann.SetState(State::CANDIDATE_BEST); });
    -        } else if (std::prev(it)->GetState() == State::CANDIDATE_BEST) {
    -            Priority priority_old = m_computer(*std::prev(it));
    +        } else if (it_next->GetState() == State::CANDIDATE_BEST) {
    +            Priority priority_old = m_computer(*it_next);
                 Priority priority_new = m_computer(*it);
    -            if (priority_new < priority_old) {
    +            if (priority_new > priority_old) {
                     // There is a CANDIDATE_BEST announcement already, but this one is better.
    -                auto new_ready_it = std::prev(it);
    -                Modify<ByTxHash>(new_ready_it, [](Announcement& ann){ ann.SetState(State::CANDIDATE_READY); });
    +                Modify<ByTxHash>(it_next, [](Announcement& ann){ ann.SetState(State::CANDIDATE_READY); });
                     Modify<ByTxHash>(it, [](Announcement& ann){ ann.SetState(State::CANDIDATE_BEST); });
                 }
             }
    @@ -425,14 +425,13 @@ private:
         {
             assert(new_state == State::COMPLETED || new_state == State::CANDIDATE_DELAYED);
             assert(it != m_index.get<ByTxHash>().end());
    -        if (it->IsSelected()) {
    -            auto it_next = std::next(it);
    -            // The next best CANDIDATE_READY, if any, immediately follows the REQUESTED or CANDIDATE_BEST
    +        if (it->IsSelected() && it != m_index.get<ByTxHash>().begin()) {
    +            auto it_prev = std::prev(it);
    +            // The next best CANDIDATE_READY, if any, immediately preceeds the REQUESTED or CANDIDATE_BEST
                 // announcement in the ByTxHash index.
    -            if (it_next != m_index.get<ByTxHash>().end() && it_next->m_txhash == it->m_txhash &&
    -                it_next->GetState() == State::CANDIDATE_READY) {
    +            if (it_prev->m_txhash == it->m_txhash && it_prev->GetState() == State::CANDIDATE_READY) {
                     // If one such CANDIDATE_READY exists (for this txhash), convert it to CANDIDATE_BEST.
    -                Modify<ByTxHash>(it_next, [](Announcement& ann){ ann.SetState(State::CANDIDATE_BEST); });
    +                Modify<ByTxHash>(it_prev, [](Announcement& ann){ ann.SetState(State::CANDIDATE_BEST); });
                 }
             }
             Modify<ByTxHash>(it, [new_state](Announcement& ann){ ann.SetState(new_state); });
    
  165. in src/txrequest.cpp:181 in a60ad12fb9 outdated
    176+        return ByTxHashView{ann.m_txhash, state, prio};
    177+    }
    178+};
    179+
    180+// The ByTime index is sorted by (0 [CANDIDATE_DELAYED,REQUESTED]; 1 [COMPLETED];
    181+// 2 [CANDIDATE_READY,CANDIDATE_BEST], time)
    


    ajtowns commented at 1:02 pm on October 5, 2020:

    Maybe

    enum class WaitState {
        WAITING,
        COMPLETED,
        SELECTABLE,
    };
    static inline WaitState GetWaitState(const Announcement& ann) {
        return ann.IsWaiting() ? WaitState::WAITING : ann.IsSelectable() ? WaitState::SELECTABLE : WaitState::COMPLETED;
    }
    

    and say sorted by (wait_state, time), and describe that it is easy to find all the WAITING announcements prior to a time point (for when the event they were waiting for has happened), and all the SELECTABLE announcements after a time point (for when time goes backwards)? (Alternatively, FUTURE_EVENT, NO_EVENT, and PAST_EVENT would be more generic wait states.)


    sipa commented at 8:18 pm on October 5, 2020:
    Nice, done.
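
    A minimal sketch of the reworked ByTime view, assuming the WaitState enum and GetWaitState() helper suggested above (names illustrative):

        // Sort the ByTime index by (wait_state, time).
        using ByTimeView = std::pair<WaitState, std::chrono::microseconds>;
        struct ByTimeViewExtractor
        {
            using result_type = ByTimeView;
            result_type operator()(const Announcement& ann) const
            {
                return ByTimeView{GetWaitState(ann), ann.m_time};
            }
        };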
  166. in src/txrequest.cpp:482 in a60ad12fb9 outdated
    348+
    349+        return true;
    350+    }
    351+
    352+    //! Make the data structure consistent with a given point in time:
    353+    //! - REQUESTED annoucements with expiry <= now are turned into COMPLETED.
    


    ajtowns commented at 1:18 pm on October 5, 2020:
    At some point in the future it would probably be good to expose expiring requests somehow – callers might want to deprioritise or disconnect such peers, for example.

    sipa commented at 8:18 pm on October 5, 2020:
    Agree, but that really sounds like a follow-up (or even a follow-up issue to first investigate the use of that).
  167. in src/txrequest.cpp:610 in a60ad12fb9 outdated
    457+            ++it_peer;
    458+        }
    459+
    460+        // Return them, sorted by sequence number.
    461+        std::sort(selected.begin(), selected.end());
    462+        std::vector<GenTxid> ret;
    


    ajtowns commented at 1:39 pm on October 5, 2020:
    ret.reserve(selected.size()) ?

    sipa commented at 8:18 pm on October 5, 2020:
    Done.
  168. in src/txrequest.cpp:670 in a60ad12fb9 outdated
    513+        // We need to search the ByPeer index for both (peer, false, txhash) and (peer, true, txhash).
    514+        auto it = m_index.get<ByPeer>().find(ByPeerView{peer, false, txhash});
    515+        if (it == m_index.get<ByPeer>().end()) {
    516+            it = m_index.get<ByPeer>().find(ByPeerView{peer, true, txhash});
    517+        }
    518+        if (it != m_index.get<ByPeer>().end()) MakeCompleted(m_index.project<ByTxHash>(it));
    


    ajtowns commented at 2:10 pm on October 5, 2020:
    In future, might be useful to return it->GetState() == REQUESTED ? it->m_time : min() here to allow the caller to estimate relay latency for a peer, in case they wanted to use that info to adjust that peer’s expiry delay.

    sipa commented at 8:19 pm on October 5, 2020:
    Yeah, future work…
  169. in src/txrequest.h:49 in a60ad12fb9 outdated
    44+ * - The same transaction is never requested twice from the same peer, unless the announcement was forgotten in
    45+ *   between, and re-announced. Announcements are forgotten only:
    46+ *   - If a peer goes offline, all its announcements are forgotten.
    47+ *   - If a transaction has been successfully received, or is otherwise no longer needed, the caller can call
    48+ *     ForgetTxHash, which removes all announcements across all peers with the specified txhash.
    49+ *   - If for a given txhash only already-requested announcements remain, they are all forgotten.
    


    ajtowns commented at 2:13 pm on October 5, 2020:
    Should be “already-failed” not “already-requested” per the terminology above (“Whether or not the transaction request failed already (timed out, or NOTFOUND was received)”)

    sipa commented at 8:19 pm on October 5, 2020:
    Fixed.
  170. in src/test/txrequest_tests.cpp:48 in 744eb10ecf outdated
    43+
    44+    /** Which txhashes have been assigned already (to prevent reuse). */
    45+    std::set<uint256> txhashset;
    46+};
    47+
    48+std::chrono::microseconds RandomTime(int bits=23) { return std::chrono::microseconds{1 + InsecureRandBits(bits)}; }
    


    ajtowns commented at 2:55 pm on October 5, 2020:
    Maybe RandomTime_15s() and RandomTime_1y() (assuming I’ve done the math right for bits=23 and bits=44 which are the only ways this is called) ?

    sipa commented at 8:19 pm on October 5, 2020:
    Done. Your math was almost right (it’s 8s and half a year).
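
    To spell out the corrected math: 2^23 µs = 8388608 µs ≈ 8.4 s (hence RandomTime8s), and 2^44 µs ≈ 1.76·10^13 µs ≈ 1.76·10^7 s ≈ 204 days, i.e. roughly half a year.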
  171. in src/test/txrequest_tests.cpp:65 in 744eb10ecf outdated
    49+
    50+/** A proxy for a Runner that helps build a sequence of consecutive test actions on a TxRequestTracker.
    51+ *
    52+ * Multiple Scenarios are used with a single Runner, resulting in interleaved actions.
    53+ */
    54+class Scenario
    


    ajtowns commented at 2:56 pm on October 5, 2020:

    I’m not really following the unit testing strategy here; in particular what all the interleaving ends up doing, what cases are really covered, and what the effect is of doing some things exhaustively and other things randomly. Feels like there’s a lot of novel scaffolding hiding what’s actually tested.

    It might be good to create some template magic for the Action/Runner/Scenario pattern, and use that more consistently? At the very least the versionbits tests feel to me like they suffer a bit from the same sort of problem?

    I think this test setup only simulates a monotonic clock (vs txrequest.cpp’s note that supporting time going backwards “makes it much easier to specify and test TxRequestTracker::Impl’s behaviour”)


    sipa commented at 8:27 pm on October 5, 2020:

    I’ve added this comment blob to explain it better:

    Each Scenario is a proxy through which actions for the (sequential) execution of various tests are added to a Runner. The actions from multiple scenarios are then run concurrently, resulting in these tests being performed against a TxRequestTracker in parallel. Every test has its own unique txhashes and NodeIds which are not reused in other tests, and thus they should be independent from each other. Running them in parallel however means that we verify the behavior (w.r.t. one test’s txhashes and NodeIds) even when the state of the data structure is more complicated due to the presence of other tests.

    Does that help?

    Abstracting out this scaffolding seems potentially useful, but for a follow-up.

    I think this test setup only simulates a monotonic clock (vs txrequest.cpp’s note that supporting time going backwards “makes it much easier to specify and test TxRequestTracker::Impl’s behaviour”)

    The tests are indeed restricted to a clock that is increasing monotonically between subsequent calls to GetRequestable. I had some infrastructure here earlier to allow a scenario to have both a “real clock” and a “jittered clock”, where the first one is used to order Actions, and the second one determines the timestamps passed to GetRequestable, but I feared the mental overhead of that isn’t worth it. Whether TxRequestTracker does something reasonable under insane call sequences is tested by the fuzzer already, so I think this one can be restricted to more normal situations, testing that what’s done isn’t just reasonable, but also matches higher-level expected behavior.

  172. in src/txrequest.cpp:235 in 744eb10ecf outdated
    230+    //! The priority of the CANDIDATE_BEST announcement if one exists, or 0 otherwise.
    231+    uint64_t m_priority_candidate_best = 0;
    232+    //! The lowest priority of all CANDIDATE_READY entries (or max() if none exist).
    233+    uint64_t m_priority_best_candidate_ready = std::numeric_limits<uint64_t>::max();
    234+    //! All peers we have an announcement for this txhash for.
    235+    std::vector<uint64_t> m_peers;
    


    ajtowns commented at 3:10 pm on October 5, 2020:
    NodeId and Priority instead of uint64_t and ::min() instead of m_priority_candidate_best = 0?

    sipa commented at 8:27 pm on October 5, 2020:
    Done.
  173. in src/test/fuzz/txrequest.cpp:102 in 9fb42eeb59 outdated
     97+
     98+    //! Information about all txhash/peer combination.
     99+    Announcement m_announcements[MAX_TXHASHES][MAX_PEERS];
    100+
    101+    //! The current time; can move forward and backward.
    102+    std::chrono::microseconds m_now{112223333};
    


    ajtowns commented at 3:22 pm on October 5, 2020:
    244466666 (pronounced “one 2, three 4, five 6”) or 1123581321 are also amusing. Super helpful review comments ‘r us!

    sipa commented at 8:27 pm on October 5, 2020:
    Major improvement, done!
  174. in src/test/fuzz/txrequest.cpp:111 in 9fb42eeb59 outdated
    106+    {
    107+        bool all_nothing = true;
    108+        for (int peer = 0; peer < MAX_PEERS; ++peer) {
    109+            const Announcement& ann = m_announcements[txhash][peer];
    110+            if (ann.m_state == State::CANDIDATE || ann.m_state == State::REQUESTED) return;
    111+            if (ann.m_state != State::NOTHING) all_nothing = false;
    


    ajtowns commented at 3:27 pm on October 5, 2020:

    Maybe:

    if (ann.m_state != NOTHING) {
        if (ann.m_state != COMPLETED) return;
        all_nothing = false;
    }
    

    Had to double take to realise that not being CANDIDATE, REQUESTED or NOTHING here meant it was definitely COMPLETED. No big deal.


    sipa commented at 8:28 pm on October 5, 2020:
    Done.
  175. in src/net_processing.cpp:760 in 89a01b8912 outdated
    814-    // Calculate the time to try requesting this transaction. Use
    815-    // fPreferredDownload as a proxy for outbound peers.
    816-    const auto process_time = CalculateTxGetDataTime(gtxid, current_time, !state->fPreferredDownload, !state->m_wtxid_relay && g_wtxid_relay_peers > 0);
    817-
    818-    peer_download_state.m_tx_process_time.emplace(process_time, gtxid);
    819+    auto state = State(nodeid);
    


    ajtowns commented at 4:27 pm on October 5, 2020:
    auto* state = ? Was wondering if we were returning a reference and making an unnecessary copy.

    sipa commented at 8:28 pm on October 5, 2020:
    Clearly this shouldn’t be an auto type if it’s confusing. Just made the type explicit.
  176. in src/net_processing.cpp:2901 in 89a01b8912 outdated
    2901-            nodestate->m_tx_download.m_tx_announced.erase(gtxid.GetHash());
    2902-            nodestate->m_tx_download.m_tx_in_flight.erase(gtxid.GetHash());
    2903-            EraseTxRequest(gtxid);
    2904-        }
    2905+        m_txrequest.ReceivedResponse(pfrom.GetId(), txid);
    2906+        m_txrequest.ReceivedResponse(pfrom.GetId(), wtxid);
    


    ajtowns commented at 4:34 pm on October 5, 2020:
    if (tx.HasWitness()) ReceivedResponse(wtxid); ? Or is the idea that txrequest is so cheap it’s better to let it check if the extra call is redundant?

    sipa commented at 8:29 pm on October 5, 2020:
    I don’t think it matters much as it’s fairly cheap, plus doesn’t change worst case behaviour (attacker can use witness-carrying transactions). Still, done.
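
    Presumably along the lines of:

        m_txrequest.ReceivedResponse(pfrom.GetId(), txid);
        if (tx.HasWitness()) m_txrequest.ReceivedResponse(pfrom.GetId(), wtxid);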
  177. in src/net_processing.cpp:857 in 89a01b8912 outdated
    847@@ -956,6 +848,7 @@ void PeerManager::FinalizeNode(NodeId nodeid, bool& fUpdateConnectionTime) {
    848         mapBlocksInFlight.erase(entry.hash);
    849     }
    850     EraseOrphansFor(nodeid);
    851+    m_txrequest.DisconnectedPeer(nodeid);
    


    ajtowns commented at 4:39 pm on October 5, 2020:
    Might be worth adding some sort of sanity check to see that there aren’t any code paths where we fail to call DisconnectedPeer and end up leaving CANDIDATE_BEST entries in the tracker that will never get cleared?

    sipa commented at 8:29 pm on October 5, 2020:
    Good idea, done.
  178. ajtowns approved
  179. ajtowns commented at 5:22 pm on October 5, 2020: member

    ACK 6fa4512fb450d709006fd6b242975fd3ce2f5340

    Some more comments, and I haven’t really reviewed the unit test adequately or the fuzz test thoroughly, but I’m happy with any further fixes/changes being post-merge at this point.

    My logging changes are at https://github.com/ajtowns/bitcoin/commits/202010-txrequest-logging . I use, uh,

    watch 'grep txrequest.requested ~btc/.bitcoin/debug.log | cut -d\  -f2,5,7,8 | sed "s/[[]0,/[-,/g;s/,0]/,-]/g;s/[[][0-9]*,/[Y,/g;s/,[0-9]*]/,Y]/g" | column -t | sort | (uniq -c;echo) | sed "s/^/111 -/;p;s/111/222/;p;s/222/333/" | LC_ALL=C sort -s -k1 | grep -v "^[12][12][12].*candidate=.-,-" | grep -v "^333.*candidate=.[-,]*Y" | grep -v '^222.*preferred=0' | grep -v '^111.*preferred=1' | sed "s/^... -//" | tac'
    

    for an overview.

    EDIT: oh, I didn’t review the “mockable time” commit here, since I assume the idea is to merge #20027 (and/or #20044) first?

  180. in src/txrequest.h:88 in a60ad12fb9 outdated
    82+ *   attacker can be the first to announce through a preferred connection in this scenario, which very likely means
    83+ *   they get the first request.
    84+ * - If all P preferred connections are to the attacker, and there are NP non-preferred connections of which NPh>=1
    85+ *   are honest, where we assume that the attacker can disconnect and reconnect those connections, the distribution
    86+ *   becomes k ~ P + NB(p=1-NP/NPh,r=1) (where NB stands for Negative Binomial distribution), which has mean
    87+ *   P-1+NP/NPh.
    


    ajtowns commented at 5:33 pm on October 5, 2020:
    Random idea that I don’t think we should do right now: I think we could limit this attack further if we added a bool m_horribly_delayed : 1; bit to each announcement. Change the ordering for choosing BEST to be (preferred, horribly_delayed, priority) and flip the horribly_delayed bit as part of SetTimePoint() when req_time + txrequest.m_horrible_delay_time < now, where m_horrible_delay_time is on the order of 10 to 60 minutes. That might also be roughly reasonable logic to do some debug logging so it’s possible to catch weird behaviour in the wild.

    sipa commented at 8:30 pm on October 5, 2020:
    Nice idea. I was thinking in a similar direction, but couldn’t figure out how to make it efficient. Updating the flag after the fact is a neat solution.
  181. sipa force-pushed on Oct 5, 2020
  182. in src/txrequest.h:175 in 8ef5953e47 outdated
    170+    void RequestedTx(NodeId peer, const uint256& txhash, std::chrono::microseconds expiry);
    171+
    172+    /** Converts any CANDIDATE or REQUESTED announcement to a COMPLETED one, if one exists.
    173+     *
    174+     * It should be called whenever a transaction or NOTFOUND was received from a peer. When a good transaction is
    175+     * received, ForgetTxHash should be called instead of (or in addition to) this operation.
    


    ariard commented at 11:53 pm on October 5, 2020:

    I think describing ForgetTxHash as something to call when a good transaction is received doesn’t match its usage.

    We call ForgetTxHash (8ef5953) in net_processing at transaction reception:

    • L2933, when a transaction has been accepted to the mempool
    • L2996, when an orphan transaction is added to the orphan pool
    • L3014, when an orphan transaction is added to the rejection filter because of an invalid parent
    • L3034, when a non-witness-stripped transaction has failed mempool acceptance and is added to the rejection filter

    In the 3 latter cases, the transaction can’t be considered mempool-valid. Beyond block connection, it might be better to say “ForgetTxHash should be called once the transaction has been evaluated at least once by the mempool”. What we do with this evaluation result is not a matter for the TxRequestTracker?


    sipa commented at 1:46 am on October 6, 2020:

    You’re right that the description doesn’t match the usage in net_processing here, but “has been evaluated by the mempool” is perhaps incorrect as well. If it’s seen in a block, for example, it can also be forgotten.

    Really the right criterion is: has it been made AlreadyHaveTx()==true. If we’d delete something that isn’t AlreadyHaveTx(), it means we risk fetching it again if it’s announced by another peer, despite already having seen it presumably. If we don’t delete something that is AlreadyHaveTx(), we’re risking fetching something from the same peer despite the same.

    I’ve adapted the comments a bit and tried to formulate it in a way that doesn’t depend on net_processing specifics.
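
    A sketch of the resulting call-site pattern (hypothetical; the exact integration is net_processing’s):

        // Once a transaction becomes AlreadyHaveTx()==true (e.g. accepted to the
        // mempool, added to a rejection filter, or seen in a block), drop all
        // tracking for it, across all peers and under both identifiers:
        m_txrequest.ForgetTxHash(tx.GetHash());
        m_txrequest.ForgetTxHash(tx.GetWitnessHash());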

  183. in src/test/fuzz/txrequest.cpp:305 in 8ef5953e47 outdated
    289+            assert(m_tracker.CountInFlight(peer) == inflight);
    290+            assert(m_tracker.CountCandidates(peer) == candidates);
    291+            total += tracked;
    292+        }
    293+        // Compare Size.
    294+        assert(m_tracker.Size() == total);
    


    ariard commented at 0:32 am on October 6, 2020:
    Announcement sizes are only compared at fuzzer input exhaustion, not after every ReceivedInv. Would it add value to check them after every ReceivedInv, to catch a Tester/TxRequestTracker divergence at that point? They may have re-converged by the end, and thus a divergence during the run wouldn’t be detected?

    sipa commented at 1:23 am on October 6, 2020:

    In that case, a shorter fuzzer input would have caught the issue instead, I think.

    Perhaps it’s worth seeing how much performance impact doing it all the time has; if it’s not too much maybe it’s worth changing.

  184. ariard commented at 0:59 am on October 6, 2020: member

    Reviewed the fuzzer. The naive implementation’s methods seem to enforce the requirements of the reference TxRequestTracker.

    For the unit test improvement mentioned here see https://github.com/ariard/bitcoin/commit/7293480a71b4d120a515ae8c4f8f6db1ffa69c06

    The new comments in unit tests look good.

  185. sipa force-pushed on Oct 6, 2020
  186. sipa commented at 2:36 am on October 6, 2020: member
    @ariard @jnewbery @ajtowns As the size of this PR’s page is growing, would you mind going over your older un-“resolved” comments and resolving those you feel have been addressed? For obvious fixes I’ve been doing that, but in cases where it’s not clear to me I’ve left them open. @ariard Nice, I’ve squashed your test into the relevant commit here (mentioning you in the commit message).
  187. in src/txrequest.cpp:71 in ee4f89a44b outdated
    66+    /** Whether the request is preferred. */
    67+    const bool m_preferred : 1;
    68+    /** Whether this is a wtxid request. */
    69+    const bool m_is_wtxid : 1;
    70+
    71+    /** What state this announcement is in. This is a uint8_t instead of a State to silence a GCC warning. */
    


    MarcoFalke commented at 6:51 am on October 6, 2020:

    In commit ee4f89a44b:

    There is no warning for me. If there is a bug or a false positive warning in an ancient gcc version, we often ignore that instead of “crippling” the code.

    Mind sharing what kind of warning this is, and what gcc version was used? Maybe even add it to that comment.


    sipa commented at 8:06 pm on October 6, 2020:
    It was in GCC 9.3, but I believe it was in a much older iteration of this code, where there was a switch on State, and GCC would always complain about unhandled cases. I see no more warning, so I’ve dropped this (along with GetState/SetState).
  188. in src/txrequest.cpp:299 in ee4f89a44b outdated
    231+    size_t m_total = 0; //!< Total number of announcements for this peer.
    232+    size_t m_completed = 0; //!< Number of COMPLETED announcements for this peer.
    233+    size_t m_requested = 0; //!< Number of REQUESTED announcements for this peer.
    234+};
    235+
    236+const uint256 UINT256_ZERO;
    


    MarcoFalke commented at 7:33 am on October 6, 2020:

    in commit ee4f89a44b

    Any reason to have ONE defined in uint256.h, but ZERO here?


    sipa commented at 8:06 pm on October 6, 2020:
    None at all. I’ve moved it to uint256.h.
  189. jnewbery commented at 9:26 am on October 6, 2020: member

    would you mind going over your older un-“resolved” comments and resolve if you feel they’ve been addressed?

    Done all mine. Thanks for being so responsive to review comments!

  190. in src/txrequest.cpp:452 in ee4f89a44b outdated
    334+        // If this announcement's predecessor exists, and belongs to the same txhash, it can't be COMPLETED either.
    335+        if (it != m_index.get<ByTxHash>().begin() && std::prev(it)->m_txhash == it->m_txhash) return false;
    336+
    337+        // If this announcement's successor exists, belongs to the same txhash, and isn't COMPLETED, fail.
    338+        if (std::next(it) != m_index.get<ByTxHash>().end() && std::next(it)->m_txhash == it->m_txhash &&
    339+            std::next(it)->GetState() != State::COMPLETED) return false;
    


    MarcoFalke commented at 10:51 am on October 6, 2020:

    in commit ee4f89a44b:

    The comment says “fail”, the code says “return false”, this seems inconsistent.

    This can’t happen in normal operation. If this happens, it would be a logic error, so assert seems appropriate? Or maybe even remove the dead code?

    Edit: According to the coverage report this is hit, so I might have to take another look here.


    jnewbery commented at 2:31 pm on October 6, 2020:
    ‘fail’ here means ’there exists another announcement for this txhash which isn’t COMPLETED’

    sipa commented at 8:09 pm on October 6, 2020:

    I’ve updated the comments a bit here.

    It’s indeed possible: e.g. if you have two CANDIDATE_DELAYED Announcements with the same txhash (and thus distinct peers), and the first of the two goes offline (DisconnectedPeer), then this function would be called to determine if that Announcement was the last non-COMPLETED one. std::next(it) isn’t end(), has the same m_txhash, and isn’t COMPLETED.
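
    As a concrete illustration of that scenario (hypothetical txhash T announced by peers 1 and 2, both with a reqtime still in the future):

        TxRequestTracker tracker;
        tracker.ReceivedInv(1, GenTxid{true, T}, false, reqtime); // CANDIDATE_DELAYED
        tracker.ReceivedInv(2, GenTxid{true, T}, false, reqtime); // CANDIDATE_DELAYED
        // DisconnectedPeer(1) must determine whether peer 1's announcement was the
        // last non-COMPLETED one for T. It was not: peer 2's announcement, its
        // successor in ByTxHash, is still CANDIDATE_DELAYED.
        tracker.DisconnectedPeer(1);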

  191. in src/txrequest.cpp:438 in ea1c69ac0f outdated
    434+                // If one such CANDIDATE_READY exists (for this txhash), convert it to CANDIDATE_BEST.
    435+                Modify<ByTxHash>(it_next, [](Announcement& ann){ ann.SetState(State::CANDIDATE_BEST); });
    436+            }
    437+        }
    438+        Modify<ByTxHash>(it, [new_state](Announcement& ann){ ann.SetState(new_state); });
    439+        assert(!it->IsSelected());
    


    jnewbery commented at 11:58 am on October 6, 2020:
    This assert seems redundant now that you have the assert(new_state == State::COMPLETED || new_state == State::CANDIDATE_DELAYED); at the top. It’s not doing any harm, but it seems unnecessary.

    sipa commented at 8:11 pm on October 6, 2020:
    Done.
  192. in src/txrequest.h:152 in ee4f89a44b outdated
    147+    /** Find the txids to request now from peer.
    148+     *
    149+     * It does the following:
    150+     *  - Convert all REQUESTED announcements (for all txhashes/peers) with (expiry <= now) to COMPLETED ones.
    151+     *  - Requestable announcements are selected: CANDIDATE announcements from the specified peer with
    152+     *    (reqtime <= now) for which no existing REQUESTED announcement with the same txhash exists, and for which
    


    MarcoFalke commented at 12:09 pm on October 6, 2020:

    in commit ee4f89a44b41a860f2bda98957db2d6871a8e8ea:

    The same peer can’t have duplicate hashes, so this can be clarified to say “different peer”

         *    (reqtime <= now) for which no existing REQUESTED announcement with the same txhash exists from a different peer, and for which
    

    sipa commented at 8:09 pm on October 6, 2020:
    Done.
  193. in src/test/txrequest_tests.cpp:93 in a9a0504f12 outdated
    88+            runner.txrequest.SanityCheck();
    89+        });
    90+    }
    91+
    92+    /** Schedule a ReceivedInv call at the Scheduler's current time. */
    93+    void ReceivedInv(NodeId peer, const GenTxid& gtxid, bool pref, std::chrono::microseconds reqtime = MIN_TIME)
    


    MarcoFalke commented at 1:05 pm on October 6, 2020:

    in commit a9a0504f12cb3c301bc56cc5f8f59ca57c1dc433:

    The default seems to be unused

        void ReceivedInv(NodeId peer, const GenTxid& gtxid, bool pref, std::chrono::microseconds reqtime)
    

    sipa commented at 8:09 pm on October 6, 2020:
    Done.
  194. in src/test/txrequest_tests.cpp:284 in a9a0504f12 outdated
    252+            scenario.Check(peer, {}, 0, 1, 0, "s7");
    253+            scenario.AdvanceTime(MICROSECOND);
    254+            scenario.Check(peer, {}, 0, 0, 0, "s8");
    255+            return;
    256+        } else {
    257+            scenario.AdvanceTime(std::chrono::microseconds{InsecureRandRange(expiry.count())});
    


    MarcoFalke commented at 1:12 pm on October 6, 2020:

    in commit a9a0504f12cb3c301bc56cc5f8f59ca57c1dc433:

    Can be written shorter

                scenario.AdvanceTime(GetRandMicros(expiry));
    

    sipa commented at 8:10 pm on October 6, 2020:
    That would use the real RNG rather than the faster test-only g_insecure_rand_ctx.
  195. in src/test/txrequest_tests.cpp:285 in a9a0504f12 outdated
    280+ */
    281+void BuildPriorityTest(Scenario& scenario, int config)
    282+{
    283+    scenario.SetTestName(strprintf("Priority(config=%i)", config));
    284+
    285+    // Three peers. They will announce in order.
    


    MarcoFalke commented at 1:39 pm on October 6, 2020:

    same commit:

        // Two peers. They will announce in order.
    

    Am I missing something here? There are only two (and a reference to either of them)


    sipa commented at 8:11 pm on October 6, 2020:

    Fixed.

    Leftover from an old version of the code. The situation with 3 peers (and more) is now covered by BuildBigPriorityTest.

  196. in src/test/txrequest_tests.cpp:308 in a9a0504f12 outdated
    303+        // - It is preferred and peer1 is not
    304+        (pref2 && !pref1) ||
    305+        // - They're in the same preference class,
    306+        //   and the randomized priority favors peer2 over peer1.
    307+        (pref1 == pref2 && !prio1);
    308+    uint64_t priopeer = stage2_prio ? peer2 : peer1, otherpeer = stage2_prio ? peer1 : peer2;
    


    MarcoFalke commented at 1:40 pm on October 6, 2020:

    same commit:

        NodeId priopeer = stage2_prio ? peer2 : peer1, otherpeer = stage2_prio ? peer1 : peer2;
    

    sipa commented at 8:11 pm on October 6, 2020:
    Done.
  197. MarcoFalke commented at 2:11 pm on October 6, 2020: member
    Started reviewing this today. Left some minor suggestions/questions for the early commits. There is a coverage report by DrahtBot in #19988 (comment) and it looks good.
  198. in src/txrequest.cpp:583 in ea1c69ac0f outdated
    579+    {
    580+        // Move time.
    581+        SetTimePoint(now);
    582+
    583+        // Find all CANDIDATE_BEST announcements for this peer.
    584+        std::vector<std::pair<SequenceNumber, const Announcement*>> selected;
    


    jnewbery commented at 2:51 pm on October 6, 2020:

    We can avoid using an awkward pair here by constructing a vector of pointers and then using a custom comparator function to sort:

    --- a/src/txrequest.cpp
    +++ b/src/txrequest.cpp
    @@ -101,6 +101,9 @@ struct Announcement {
             m_is_wtxid(gtxid.IsWtxid()), m_state(uint8_t(State::CANDIDATE_DELAYED)) {}
     };
     
    +bool seq_comparator(const Announcement* ann1, const Announcement* ann2) { return ann1->m_sequence < ann2->m_sequence; }
    +GenTxid gtxid_extractor(const Announcement* ann) { return GenTxid{ann->m_is_wtxid, ann->m_txhash}; }
    +
     //! Type alias for priorities.
     using Priority = uint64_t;
     
    @@ -581,21 +584,19 @@ public:
             SetTimePoint(now);
     
             // Find all CANDIDATE_BEST announcements for this peer.
    -        std::vector<std::pair<SequenceNumber, const Announcement*>> selected;
    +        std::vector<const Announcement*> selected;
             auto it_peer = m_index.get<ByPeer>().lower_bound(ByPeerView{peer, true, UINT256_ZERO});
             while (it_peer != m_index.get<ByPeer>().end() && it_peer->m_peer == peer &&
                 it_peer->GetState() == State::CANDIDATE_BEST) {
    -            selected.emplace_back(it_peer->m_sequence, &*it_peer);
    +            selected.emplace_back(&*it_peer);
                 ++it_peer;
             }
     
             // Return them, sorted by sequence number.
    -        std::sort(selected.begin(), selected.end());
    +        std::sort(selected.begin(), selected.end(), seq_comparator);
             std::vector<GenTxid> ret;
             ret.reserve(selected.size());
    -        for (const auto& item : selected) {
    -            ret.emplace_back(item.second->m_is_wtxid, item.second->m_txhash);
    -        }
    +        std::transform(selected.begin(), selected.end(), std::back_inserter(ret), gtxid_extractor);
             return ret;
         }
    

    I’m not sure if that’s more or less clear code, but thought I’d throw it out there as a suggestion.


    sipa commented at 8:12 pm on October 6, 2020:
    Done, with lambdas instead of separate functions (as they’re only used in one place).
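
    Presumably along these lines (a sketch of the lambda version):

        // Sort by sequence number.
        std::sort(selected.begin(), selected.end(), [](const Announcement* a, const Announcement* b) {
            return a->m_sequence < b->m_sequence;
        });
        // Convert to GenTxid and return.
        std::vector<GenTxid> ret;
        ret.reserve(selected.size());
        std::transform(selected.begin(), selected.end(), std::back_inserter(ret),
            [](const Announcement* ann) { return GenTxid{ann->m_is_wtxid, ann->m_txhash}; });
        return ret;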

    ajtowns commented at 8:51 pm on October 6, 2020:
    Hmm, in bad cases (where GetRequestable is returning a lot of txhashes, there are a lot of announcements, and the relevant announcements are fragmented) that would be a fair bit less cache friendly than the old code (sort would be re-accessing each entry an additional O(log(selected.size())) times). But in reality the 5000 ann limit means at worst it just moves from sorting an 80kB array in L1 to sorting a 40kB array based on 800kB of data in L2 cache, so it probably doesn’t matter. (Or rather, it’s probably not even worth gathering the performance data that would be needed to make a decision based on one being faster or not)

    sipa commented at 8:59 pm on October 6, 2020:

    Or rather, it’s probably not even worth gathering the performance data that would be needed to make a decision based on one being faster or not.

    Yeah.


    jnewbery commented at 9:59 pm on October 6, 2020:

    Done, with lambdas instead of separate functions (as they’re only used in one place).

    Ha. I went the other way - did this with lambdas first and then moved them to named functions because the lines were getting a bit long. Either way seems fine.

    it’s probably not even worth gathering the performance data that would be needed to make a decision based on one being faster or not

    Yeah, I can see this being incrementally worse performance in the worst case, but I think readability wins.

  199. jnewbery added this to the "Blockers" column in a project

  200. sipa force-pushed on Oct 6, 2020
  201. sipa force-pushed on Oct 6, 2020
  202. sipa closed this on Oct 6, 2020

  203. sipa reopened this on Oct 6, 2020

  204. in src/test/fuzz/txrequest.cpp:7 in 8d0dd46b4f outdated
     5+#include <primitives/transaction.h>
     6+#include <random.h>
     7+#include <txrequest.h>
     8+#include <test/fuzz/fuzz.h>
     9+#include <crypto/common.h>
    10+#include <crypto/siphash.h>
    


    hebasto commented at 6:00 am on October 7, 2020:
    8d0dd46b4fcdf4c664747e638eda046f8d51079c nit: Lexicographic order for headers?

    sipa commented at 1:32 am on October 8, 2020:
    Done.
  205. in src/test/fuzz/txrequest.cpp:22 in 8d0dd46b4f outdated
    17+namespace {
    18+
    19+constexpr int MAX_TXHASHES = 16;
    20+constexpr int MAX_PEERS = 16;
    21+
    22+//! Randomly generated GenTxids used in this test (length is MAX_TX).
    


    hebasto commented at 6:01 am on October 7, 2020:

    8d0dd46b4fcdf4c664747e638eda046f8d51079c

    //! Randomly generated GenTxids used in this test (length is MAX_TXHASHES).
    

    sipa commented at 1:32 am on October 8, 2020:
    Done.
  206. in src/test/fuzz/txrequest.cpp:38 in 8d0dd46b4f outdated
    33+        // Non-determinism hurts fuzzing.
    34+        FastRandomContext rng(true);
    35+        for (int txhash = 0; txhash < MAX_TXHASHES; txhash += 1) {
    36+            do {
    37+                TXHASHES[txhash] = rng.rand256();
    38+            } while (*(TXHASHES[txhash].begin() + 31) != txhash || *(TXHASHES[txhash].begin()) != txhash);
    


    hebasto commented at 6:04 am on October 7, 2020:

    8d0dd46b4fcdf4c664747e638eda046f8d51079c

    What is the reason for these conditions? Why wouldn’t any rand256() suit?


    sipa commented at 1:33 am on October 8, 2020:
    It does. This just helped debugging as it meant I could look at a hex txid and know which #txhash it was. Removed it.
  207. in src/test/fuzz/txrequest.cpp:46 in 8d0dd46b4f outdated
    43+        for (int i = 16; i < 128; ++i) {
    44+            DELAYS[i] = DELAYS[i - 1] + std::chrono::microseconds{1 + rng.randbits(((i - 10) * 2) / 9)};
    45+        }
    46+        for (int i = 128; i < 256; ++i) {
    47+            DELAYS[i] = -DELAYS[255 - i];
    48+        }
    


    hebasto commented at 6:09 am on October 7, 2020:
    8d0dd46b4fcdf4c664747e638eda046f8d51079c Could a single int i{0}; declared before all of the for loops be more readable and maintainable?

    sipa commented at 1:33 am on October 8, 2020:
    Done.
  208. in src/test/fuzz/txrequest.cpp:243 in 8d0dd46b4f outdated
    238+    }
    239+
    240+    void GetRequestable(int peer)
    241+    {
    242+        // Implement using naive structure:
    243+        std::vector<std::tuple<uint64_t, int, bool>> result; //!< list of (sequence number, txhash, is_wtxid) pairs.
    


    hebasto commented at 6:43 am on October 7, 2020:

    8d0dd46b4fcdf4c664747e638eda046f8d51079c

            std::vector<std::tuple<uint64_t, int, bool>> result; //!< list of (sequence number, txhash, is_wtxid) tuples.
    

    sipa commented at 1:33 am on October 8, 2020:
    Done.
  209. in src/test/fuzz/txrequest.cpp:313 in 8d0dd46b4f outdated
    308+    auto it = buffer.begin();
    309+    while (it != buffer.end()) {
    310+        int cmd = *(it++) % 11;
    311+        int peer, txidnum, delaynum;
    312+        switch (cmd) {
    313+        case 0: // Make time jump to the next event (m_time of PENDING or REQUESTED)
    


    hebasto commented at 6:44 am on October 7, 2020:

    8d0dd46b4fcdf4c664747e638eda046f8d51079c

    PENDING ?


    glozow commented at 1:51 pm on October 7, 2020:
    CANDIDATE

    sipa commented at 1:33 am on October 8, 2020:
    Done. Indeed, CANDIDATE.

    jnewbery commented at 9:32 am on October 8, 2020:
    You replaced the wrong word. It should be CANDIDATE or REQUESTED.

    sipa commented at 6:52 pm on October 8, 2020:

    Details.

    Fixed.

  210. in src/test/txrequest_tests.cpp:259 in ea7839c888 outdated
    254+            scenario.Check(peer, {}, 0, 0, 0, "s8");
    255+            return;
    256+        } else {
    257+            scenario.AdvanceTime(std::chrono::microseconds{InsecureRandRange(expiry.count())});
    258+            scenario.Check(peer, {}, 0, 1, 0, "s9");
    259+            if ((config >> 3) == 3) { // A reponse will arrive for the transaction
    


    hebasto commented at 6:47 am on October 7, 2020:

    ea7839c8889864be174b85a1f4de326f5e12e430, typo:

                if ((config >> 3) == 3) { // A response will arrive for the transaction
    

    sipa commented at 1:33 am on October 8, 2020:
    Done.
  211. in src/test/txrequest_tests.cpp:457 in ea7839c888 outdated
    452+
    453+    scenario.DisconnectedPeer(peer);
    454+    scenario.Check(peer, {}, 0, 0, 0, "o5");
    455+}
    456+
    457+/** Add to scenario a test thats verifies behavior related to both txid and wtxid with the same
    


    hebasto commented at 6:48 am on October 7, 2020:

    ea7839c8889864be174b85a1f4de326f5e12e430, typo:

    /** Add to scenario a test that verifies behavior related to both txid and wtxid with the same
    

    sipa commented at 1:33 am on October 8, 2020:
    Done.
  212. in src/txrequest.cpp:308 in d9dc98b73d outdated
    303+    {
    304+        assert(new_state == State::COMPLETED || new_state == State::CANDIDATE_DELAYED);
    305+        assert(it != m_index.get<ByTxHash>().end());
    306+        if (it->IsSelected() && it != m_index.get<ByTxHash>().begin()) {
    307+            auto it_prev = std::prev(it);
    308+            // The next best CANDIDATE_READY, if any, immediately preceeds the REQUESTED or CANDIDATE_BEST
    


    hebasto commented at 6:50 am on October 7, 2020:

    d9dc98b73d3d3b7979e76514e822e74814f8ff6d, typo:

                // The next best CANDIDATE_READY, if any, immediately precedes the REQUESTED or CANDIDATE_BEST
    

    sipa commented at 1:33 am on October 8, 2020:
    Done.
  213. hebasto commented at 6:51 am on October 7, 2020: member

    As suggested by the PR author, I started reviewing from the fuzz tests :)

    Concept ACK.

  214. vasild commented at 7:57 am on October 7, 2020: member

    Filtered code coverage report (files not modified by this PR are omitted, and unmodified lines in files that are otherwise modified are dimmed).

    List of modified and not covered lines.

  215. in test/functional/p2p_tx_download.py:128 in 4e3e2f6a29 outdated
    127         self.sync_mempools(timeout=timeout)
    128 
    129     def test_in_flight_max(self):
    130-        self.log.info("Test that we don't request more than {} transactions from any peer, every {} minutes".format(
    131-            MAX_GETDATA_IN_FLIGHT, TX_EXPIRY_INTERVAL / 60))
    132+        self.log.info("Test that we don't load peers with more than {} transactions immediately".format(MAX_GETDATA_IN_FLIGHT))
    


    jnewbery commented at 9:38 am on October 7, 2020:
    s/transactions/transaction requests/ ?

    sipa commented at 1:34 am on October 8, 2020:
    Done.
  216. in src/test/fuzz/txrequest.cpp:75 in 4e3e2f6a29 outdated
    70+
    71+    //! States for txid/peer combinations in the naive data structure.
    72+    enum class State {
    73+        NOTHING, //!< Absence of this txid/peer combination
    74+
    75+        // Note that this implementation does not distinguish between BEST/NEW/OTHER variants of CANDIDATE.
    


    jnewbery commented at 9:39 am on October 7, 2020:
    DELAYED/READY/BEST

    sipa commented at 1:34 am on October 8, 2020:
    Fixed.
  217. jnewbery commented at 9:51 am on October 7, 2020: member

    utACK 4e3e2f6a29bb2f530960d5ef0402dae9664c3a1a

    I’ve only reviewed the unit and fuzz tests lightly.

  218. in src/test/fuzz/txrequest.cpp:102 in 4e3e2f6a29 outdated
    94+        bool m_is_wtxid;
    95+        uint64_t m_priority; //!< Precomputed priority.
    96+    };
    97+
    98+    //! Information about all txhash/peer combination.
    99+    Announcement m_announcements[MAX_TXHASHES][MAX_PEERS];
    


    jnewbery commented at 9:58 am on October 7, 2020:
    More out of curiosity than a suggestion to change: is there any reason to use C arrays here and elsewhere rather than std::array? std::array would allow you to replace many of the hand-written loops below with STL algorithms (e.g. Cleanup() becomes a single std::any_of() call and a single std::transform() call).

    sipa commented at 1:34 am on October 8, 2020:
    All those things work with arrays as well?

    jnewbery commented at 9:26 am on October 8, 2020:

    Ah ok. I guess it’s more of a style question really. Is there ever a reason to prefer C arrays over std::array?

    Marking this as resolved since it’s not important.
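
    For illustration, a minimal sketch of the style being discussed (hypothetical, simplified types, not code from the PR; note that std::all_of/std::any_of also accept plain C-array iterators via std::begin/std::end, which is sipa's point above):

        #include <algorithm>
        #include <array>

        enum class State { NOTHING, COMPLETED, REQUESTED }; // simplified stand-in
        constexpr int MAX_PEERS = 16;                        // hypothetical bound

        // With std::array (or std::begin/std::end on a C array), a hand-written
        // scan collapses into a single algorithm call.
        bool AllCompletedOrNothing(const std::array<State, MAX_PEERS>& states)
        {
            return std::all_of(states.begin(), states.end(),
                [](State s) { return s == State::NOTHING || s == State::COMPLETED; });
        }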

  219. in src/test/fuzz/txrequest.cpp:118 in 4e3e2f6a29 outdated
    110+            if (ann.m_state != State::NOTHING) {
    111+                if (ann.m_state != State::COMPLETED) return;
    112+                all_nothing = false;
    113+            }
    114+        }
    115+        if (all_nothing) return;
    


    hebasto commented at 1:09 pm on October 7, 2020:

    I believe that all_nothing == true means that all of them have State::NOTHING. Furthermore, re-assigning State::NOTHING to them would effectively be a noop, right? So one could comment out this LOC. Or not?

    UPDATE: ignore it, sorry for noise.


    sipa commented at 1:31 am on October 8, 2020:
    No worries. It’s just avoiding the work in case there is nothing to do.
  220. in src/test/txrequest_tests.cpp:430 in ea7839c888 outdated
    421+        scenario.Check(peer, {}, 0, 0, 0, "b7");
    422+    }
    423+}
    424+
    425+/** Add to scenario a test with one peer announcing two transactions, to verify they are
    426+ *  fetched in announcement order. */
    


    MarcoFalke commented at 1:29 pm on October 7, 2020:

    in commit ea7839c888:

    Could mention that config is [0,4) like in the other tests.


    sipa commented at 8:23 pm on October 9, 2020:
    Done.
  221. in src/test/txrequest_tests.cpp:489 in ea7839c888 outdated
    454+    scenario.Check(peer, {}, 0, 0, 0, "o5");
    455+}
    456+
    457+/** Add to scenario a test thats verifies behavior related to both txid and wtxid with the same
    458+    hash being announced.
    459+*/
    


    MarcoFalke commented at 1:36 pm on October 7, 2020:
    same commit, config is also [0,4)

    sipa commented at 8:23 pm on October 9, 2020:
    Done.
  222. in src/test/txrequest_tests.cpp:333 in ea7839c888 outdated
    324+    if (config & 16) {
    325+        scenario.DisconnectedPeer(priopeer);
    326+    } else {
    327+        scenario.ReceivedResponse(priopeer, gtxid.GetHash());
    328+    }
    329+    if (config & 32) scenario.AdvanceTime(RandomTime8s());
    


    MarcoFalke commented at 1:55 pm on October 7, 2020:

    in commit ea7839c8889864be174b85a1f4de326f5e12e430:

    I am wondering why config is used here. Generally, it seems that config is used to enumerate observable behaviour changes, whereas randomness is used to augment the test with noise that shouldn’t change behavior. Is this correct? If yes, it could make sense to replace config here with randomness.


    sipa commented at 8:24 pm on October 9, 2020:
    Changed to a random one.
  223. in src/Makefile.test.include:1221 in 8d0dd46b4f outdated
    1215@@ -1215,6 +1216,12 @@ test_fuzz_txoutcompressor_deserialize_LDADD = $(FUZZ_SUITE_LD_COMMON)
    1216 test_fuzz_txoutcompressor_deserialize_LDFLAGS = $(FUZZ_SUITE_LDFLAGS_COMMON)
    1217 test_fuzz_txoutcompressor_deserialize_SOURCES = test/fuzz/deserialize.cpp
    1218 
    1219+test_fuzz_txrequest_CPPFLAGS = $(AM_CPPFLAGS) $(BITCOIN_INCLUDES)
    1220+test_fuzz_txrequest_CXXFLAGS = $(AM_CXXFLAGS) $(PIE_FLAGS)
    1221+test_fuzz_txrequest_LDADD = $(FUZZ_SUITE_LD_COMMON)
    1222+test_fuzz_txrequest_LDFLAGS = $(RELDFLAGS) $(AM_LDFLAGS) $(LIBTOOL_APP_LDFLAGS)
    


    MarcoFalke commented at 2:21 pm on October 7, 2020:

    in commit 8d0dd46b4fcdf4c664747e638eda046f8d51079c

    test_fuzz_txrequest_LDFLAGS = $(FUZZ_SUITE_LDFLAGS_COMMON)
    

    sipa commented at 8:24 pm on October 9, 2020:
    Done.
  224. in src/net_processing.cpp:3012 in 4e3e2f6a29 outdated
    2986@@ -3081,10 +2987,14 @@ void PeerManager::ProcessMessage(CNode& pfrom, const std::string& msg_type, CDat
    2987                     // protocol for getting all unconfirmed parents.
    2988                     const GenTxid gtxid{/* is_wtxid=*/false, parent_txid};
    2989                     pfrom.AddKnownTx(parent_txid);
    2990-                    if (!AlreadyHaveTx(gtxid, m_mempool)) RequestTx(State(pfrom.GetId()), gtxid, current_time);
    2991+                    if (!AlreadyHaveTx(gtxid, m_mempool)) AddTxAnnouncement(pfrom, gtxid, current_time);
    


    ariard commented at 3:34 pm on October 7, 2020:

    Note that the other call to AddTxAnnouncement (L2657) is conditional on !m_chainman.ActiveChainState().IsInitialBlockDownload().

    I think it’s not worthwhile to download orphan parents while still in IBD, as:

    • a) we may not have the UTXO(s) from which they spend and thus they will be considered again as orphans
    • b) we may not have already seen the block in which they’re already spent and thus will be useless when arrived at current network tip
    • c) even if we assume the UTXO stays unspent at the current network tip, we can’t be certain about the accuracy of its feerate

    That said, this behavior predates this patchset, so it might be better to address it in its own follow-up.


    sipa commented at 1:09 am on October 8, 2020:
    This is a good point, but perhaps best left for an orthogonal PR?
  225. in src/net_processing.cpp:3710 in 4e3e2f6a29 outdated
    3687-        LOCK(cs_main);
    3688-        CNodeState *state = State(pfrom.GetId());
    3689         std::vector<CInv> vInv;
    3690         vRecv >> vInv;
    3691-        if (vInv.size() <= MAX_PEER_TX_IN_FLIGHT + MAX_BLOCKS_IN_TRANSIT_PER_PEER) {
    3692+        if (vInv.size() <= MAX_PEER_TX_ANNOUNCEMENTS + MAX_BLOCKS_IN_TRANSIT_PER_PEER) {
    


    ariard commented at 4:51 pm on October 7, 2020:

    Should we ensure that an honest sender’s NOTFOUND stays strictly under MAX_PEER_TX_ANNOUNCEMENTS in ProcessGetData?

    Let’s say you have the following topology : Mallory <–> Alice <–> Bob.

    1. Mallory sends 5000 txn (MAX_PEER_TX_ANNOUNCEMENTS) to Alice. This set of txn conflicts with the rest of the network minus Alice and Bob. Bob can only learn them from Alice.
    2. Alice relays this set of 5000 txn towards Bob. It’s under MAX_INV_SZ=50_000, so Bob receives this inv (AddTxAnnouncement, src/net_processing.cpp L2658), increments Alice’s announcement counter to the max (ReceivedInv, src/txrequest.cpp, L549), and then requests them shortly (GetRequestable, src/net_processing.cpp, L4459) at the next message sending
    3. At the same time, Mallory replaces-by-fee this set of 5000 txn in Alice’s mempool
    4. When Bob’s GETDATA of MAX_GETDATA_SZ reaches Alice, she will process it (ProcessGetData, src/net_processing.cpp, L2689) and return a NOTFOUND to Bob
    5. If this NOTFOUND is bigger than MAX_PEER_TX_ANNOUNCEMENTS, it’s not going to be processed by Bob (ReceivedResponse, src/net_processing.cpp, L3694)
    6. Tx-relay is severed between Alice and Bob for a GETDATA_TX_INTERVAL cycle, until Alice’s REQUESTED are expired

    I think step 4) might fail, as the original set of transactions can still be in mapRelay. But I wonder if Mallory can also interfere with it by forcing Alice to hit Bob’s send buffer (in ProcessGetData, src/net_processing, L1674) by asking her to relay large transactions.

    I think this attack should also be bounded by INVENTORY_MAX_RECENT_RELAY, but I wonder if you can bypass it by playing with orphans. Mallory can force Alice to relay transactions known to be orphans by Bob’s mempool, thus triggering a call to AddTxAnnouncement (src/net_processing, L2991) on Bob’s side. This doesn’t strictly enforce Alice’s inv rate-limiting policy and will inflate Alice’s count of in-flight announcements towards Bob if the orphans have a lot of missing parents (my point around #19184 (review))

    Let me know if this scenario might hold; there are a lot of assumptions about message max sizes and timers.


    sipa commented at 1:17 am on October 8, 2020:

    I’m not sure there is really a problem here, because:

    • In step 2, that relay will take several minutes (we relay at most 35 tx invs per message, INVENTORY_BROADCAST_MAX, and around 1 inv message every 2s on outbound connections, so 17.5 tx/s), so announcements will come in gradually.
    • In step 5, Bob will expire requests after 1 min anyway, even without NOTFOUNDs.
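
    (For scale, under those numbers: draining the full MAX_PEER_TX_ANNOUNCEMENTS budget of 5000 announcements at 17.5 tx/s would take 5000 / 17.5 ≈ 286 s, i.e. close to five minutes of sustained relay.)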

    ariard commented at 1:51 pm on October 8, 2020:

    In step 2, that relay will take several minutes (we relay at most 35 tx invs per message, INVENTORY_BROADCAST_MAX, and around 1 inv message every 2s on outbound connections, so 17.5 tx/s), so announcements will come in gradually.

    Yes, but I’m wondering if you can bypass this by forcing the relay of orphans with a lot of missing parents, thus having the receiver mark them as announcements from the sender.

    • Alice announces 17 txn to Bob
    • Bob requests them all
    • Alice replies with the 17 txn
    • Bob receives the txn; all of them are orphans with 295 missing parents. Bob marks all parents via AddTxAnnouncement (src/net_processing.cpp, L2991) as coming from Alice
    • Alice can’t announce new transactions to Bob until her MAX_PEER_TX_ANNOUNCEMENTS budget dries up

    I’ll write a test to get certainty on behavior. Feel free to mark as resolved.

    In step 5, Bob will expire requests after 1 min anyway, even without NOTFOUNDs.

    Right, but that’s still 1 min of potential tx-relay severance…


    sipa commented at 5:38 pm on October 8, 2020:

    Yes, but I’m wondering if you can bypass this by forcing the relay of orphans with a lot of missing parents, thus having the receiver mark them as announcements from the sender.

    Mallory can’t cause Alice to relay orphans to Bob - they’re only relayed when they enter the mempool, which implies the parents are known.

    I’ll write a test to get certainty on behavior. Feel free to mark as resolved.

    Please do - though this sounds like something you’d need a functional test for, as it’s not purely TxRequestTracker logic.

    Right, but that’s still 1 min of potential tx-relay severance…

    Yeah, that would be a problem if an attacker can cause it with high probability, on many peers of a victim at once.


    ariard commented at 1:31 am on October 9, 2020:

    Mallory can’t cause Alice to relay orphans to Bob - they’re only relayed when they enter the mempool, which implies the parents are known

    What is an orphan for Bob might not be an orphan for Alice if their mempools are divergent? Mallory broadcasts tx123 to Alice and tx123’ to Bob, then sends children of tx123 to Alice, which will be relayed to Bob?

    fRejectedParents checks parents, so Mallory may need to add another layer of transactions, so it would be tx123 <- tx456 <- tx789. I think tx789 will be considered a valid orphan by Bob?


    sipa commented at 3:41 am on October 9, 2020:

    Ok, so Mallory also knows the victim specifically, and is able to connect directly to them.

    I believe you may be right that if Mallory has connections to both Alice and Bob, he may be able to induce a stream of orphans between them. If that stream is resolved slow enough (due to processing speed, bandwidth, …), it may be possible to temporarily reach the 5000 MAX_PEER_TX_ANNOUNCEMENTS limit. I think in this situation:

    • This may be costly, as the attacker has to produce huge amounts of actually valid transactions to get Alice to relay them to Bob - transactions which may confirm.
    • The limit would only be reached very temporarily, as new announcements get dropped once the limit is reached, and receiving a tx or NOTFOUND will decrease it again. So this attack would need to be maintained in a rapid fire to keep the tracked announcements at the bound.
    • In order to actually block Bob from receiving honest transactions, Mallory has to do this with a significant fraction of Bob’s peers - not just Alice.
    • This seems harder to pull off, and possibly less effective, than simply mounting an invblock attack on Bob directly (race the network, announce honest transactions very quickly to Bob, and don’t respond to requests).

    Does that match your understanding?


    ariard commented at 11:47 pm on October 10, 2020:

    This may be costly, as the attacker has to produce huge amounts of actually valid transactions to get Alice to relay them to Bob - transactions which may confirm.

    Assuming a mass-connecting malicious node, I think you can mempool-partition Alice and Bob from the rest of the honest network by announcing yet another conflicting spend tx123’’, thus not having to pay for the malicious stream of orphans.

    The limit would only be reached very temporarily, as new announcements get dropped once the limit is reached, and receiving a tx or NOTFOUND will decrease it again. So this attack would need to be maintained in a rapid fire to keep the tracked announcements at the bound.

    If you assume Mallory has a connection with Bob, can Mallory invblock the malicious orphan stream by announcing it at the same time? At least for an expiration period.

    In order to actually block Bob from receiving honest transactions, Mallory has to do this with a significant fraction of Bob’s peers - not just Alice.

    I was thinking the reverse: an attacker blocking Alice’s txn from propagating, e.g. in a scenario where the attacker expects Alice to broadcast a time-sensitive tx after reception of a given block. But I agree an attacker has to repeat the blocking with every one of Alice’s peers.

    Overall, I agree it sounds hard to pull off; an attacker already has to observe all of the victim’s tx-relay peers to effectively censor an expected broadcast. I think future p2p work like outbound tx-relay rotation should harden against this kind of attack scenario.

  226. in src/txrequest.cpp:33 in 4e3e2f6a29 outdated
    28+ * Also note that the sorting order of ByTxHashView relies on the specific order of values in this enum.
    29+ *
    30+ * Expected behaviour is:
    31+ *   - When first announced by a peer, the state is CANDIDATE_DELAYED until reqtime is reached.
    32+ *   - Announcements that have reached their reqtime but not been requested will be either CANDIDATE_READY or
    33+ *     CANDIDATE_BEST
    


    ariard commented at 5:54 pm on October 7, 2020:

    I didn’t figure this out during previous reviews, but this doesn’t underscore that a CANDIDATE_READY’s expiration time is never reached if it’s never promoted to REQUESTED or the expected transaction is never received. Just to prevent a future change introducing an expiration time for them, and thus a strong dependency order where malicious announcement(s) requested first could interfere with honest CANDIDATE_READY.

    “CANDIDATE_READY announcements may stay in this state until they’re promoted to REQUESTED or the announced transaction is no longer needed”


    sipa commented at 1:21 am on October 8, 2020:

    I can’t understand your first paragraph.

    Your suggested text would be incorrect, as they’re expected to first transition to CANDIDATE_BEST, before becoming REQUESTED. Feel free to suggest something that includes that.


    ariard commented at 1:59 pm on October 8, 2020:

    Just wanted to add a comment saying that a _READY’s expiration time is never reached if it doesn’t get promoted to _BEST or the announced transaction is no longer needed.

    Right, I forgot a step. Is “CANDIDATE_READY announcements don’t expire until they’re promoted to _BEST or the announced transaction is no longer needed” better?


    sipa commented at 6:52 pm on October 8, 2020:
    Ok, added some more comments.
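
    For reference, a condensed sketch of the lifecycle these comments describe (the state names are the PR's; the transition summary is a simplification, not the implementation):

        // States of a single announcement, in the order they normally progress:
        enum class State {
            CANDIDATE_DELAYED, // announced; waiting for its reqtime to be reached
            CANDIDATE_READY,   // reqtime passed, but not the best viable candidate
            CANDIDATE_BEST,    // best viable candidate; eligible for GetRequestable
            REQUESTED,         // getdata sent; waiting for a response or expiry
            COMPLETED,         // responded, expired, or otherwise no longer needed
        };
        // CANDIDATE_DELAYED -> CANDIDATE_READY : reqtime reached
        // CANDIDATE_READY   -> CANDIDATE_BEST  : best priority and no REQUESTED exists
        // CANDIDATE_BEST    -> REQUESTED       : RequestedTx() is called
        // REQUESTED         -> COMPLETED       : response received, or expiry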
  227. in src/txrequest.cpp:243 in 4e3e2f6a29 outdated
    238+    size_t m_candidate_best = 0;
    239+    //! Number of REQUESTED announcements for this txhash.
    240+    size_t m_requested = 0;
    241+    //! The priority of the CANDIDATE_BEST announcement if one exists, or max() otherwise.
    242+    Priority m_priority_candidate_best = std::numeric_limits<Priority>::max();
    243+    //! The lowest priority of all CANDIDATE_READY announcements (or min() if none exist).
    


    ariard commented at 5:54 pm on October 7, 2020:
    I think this is “higher priority” after the latest update.

    sipa commented at 1:35 am on October 8, 2020:
    Fixed.
  228. in src/txrequest.cpp:334 in 4e3e2f6a29 outdated
    329+            if (info.m_candidate_ready > 0) {
    330+                assert(info.m_candidate_best + info.m_requested == 1);
    331+            }
    332+
    333+            // If there is both a CANDIDATE_READY and a CANDIDATE_BEST announcement, the CANDIDATE_BEST one must be
    334+            // at least as good (equal or lower priority) as the best CANDIDATE_READY.
    


    ariard commented at 5:56 pm on October 7, 2020:
    Same: “higher priority”?

    sipa commented at 1:35 am on October 8, 2020:
    Fixed.
  229. ariard commented at 5:59 pm on October 7, 2020: member

    I’ve +1’ed the previously unresolved points here; it’s up to you to mark them as resolved.

    See #19988 (review); if you think it’s a valid concern, I think it’s better to address it in a follow-up, as the PR has already been reviewed a lot and introducing subtle changes at this point may break more than it fixes.

    I’m finishing a code-review pass, hopefully today.

  230. in src/net_processing.cpp:76 in 4e3e2f6a29 outdated
    70@@ -71,22 +71,22 @@ static constexpr std::chrono::minutes PING_INTERVAL{2};
    71 static const unsigned int MAX_LOCATOR_SZ = 101;
    72 /** The maximum number of entries in an 'inv' protocol message */
    73 static const unsigned int MAX_INV_SZ = 50000;
    74-/** Maximum number of in-flight transactions from a peer */
    75+/** Maximum number of in-flight transactions from a peer. It is not a hard limit, but the threshold
    76+ *  at which point the OVERLOADED_PEER_TX_DELAY kicks in. */
    77 static constexpr int32_t MAX_PEER_TX_IN_FLIGHT = 100;
    


    ariard commented at 11:42 pm on October 7, 2020:

    I think MAX_PEER_TX_IN_FLIGHT and its comments could be clearer by being called MAX_PEER_TX_REQUEST_IN_FLIGHT.

    Every time I re-read this variable I wonder whether it’s a superset of MAX_PEER_TX_ANNOUNCEMENTS (which it is not, of course) or a disjoint limit. The two variables could even convey their directionality, e.g. MAX_FROM_PEER_TX_ANNOUNCEMENTS/MAX_TO_PEER_TX_REQUEST.

    Please discard if you consider this as naming bikeshedding :)


    sipa commented at 1:23 am on October 8, 2020:
    I think “in flight” rather unambiguously means “requested but no response yet”.

    ariard commented at 1:54 pm on October 8, 2020:
    IMO it doesn’t convey directionality clearly enough, but yeah, bikeshedding.

    sipa commented at 6:53 pm on October 8, 2020:
    If it’s unclear to you, better to improve it. Renamed.
  231. in src/net_processing.cpp:875 in 4e3e2f6a29 outdated
    868@@ -973,6 +869,7 @@ void PeerManager::FinalizeNode(NodeId nodeid, bool& fUpdateConnectionTime) {
    869         assert(nPeersWithValidatedDownloads == 0);
    870         assert(g_outbound_peers_with_protect_from_disconnect == 0);
    871         assert(g_wtxid_relay_peers == 0);
    872+        assert(m_txrequest.Size() == 0);
    


    ariard commented at 11:48 pm on October 7, 2020:
    As another consistency check, at PeerManager::InitializeNode you could verify that m_txrequest.Count(nodeid) == 0?

    sipa commented at 1:35 am on October 8, 2020:
    Added.
  232. in src/txrequest.cpp:494 in 4e3e2f6a29 outdated
    481+        // Iterate over all CANDIDATE_DELAYED and REQUESTED from old to new, as long as they're in the past,
    482+        // and convert them to CANDIDATE_READY and COMPLETED respectively.
    483+        while (!m_index.empty()) {
    484+            auto it = m_index.get<ByTime>().begin();
    485+            if (it->m_state == State::CANDIDATE_DELAYED && it->m_time <= now) {
    486+                PromoteCandidateReady(m_index.project<ByTxHash>(it));
    


    ariard commented at 0:36 am on October 8, 2020:

    With this diff applied, the new unit tests stall for me; is this an expected property? It would fail CI anyway, so likely okay.

    diff --git a/src/txrequest.cpp b/src/txrequest.cpp
    index fb5f1e325..cf4d55713 100644
    --- a/src/txrequest.cpp
    +++ b/src/txrequest.cpp
    @@ -483,7 +483,7 @@ private:
             while (!m_index.empty()) {
                 auto it = m_index.get<ByTime>().begin();
                 if (it->m_state == State::CANDIDATE_DELAYED && it->m_time <= now) {
    -                PromoteCandidateReady(m_index.project<ByTxHash>(it));
    +                //PromoteCandidateReady(m_index.project<ByTxHash>(it));
                 } else if (it->m_state == State::REQUESTED && it->m_time <= now) {
                     MakeCompleted(m_index.project<ByTxHash>(it));
                 } else {
    

    sipa commented at 1:29 am on October 8, 2020:
    Yes, that’s expected. The loop is expected to make progress in every step, or break. With the change you’re making, it will infinitely spin when reaching a CANDIDATE_DELAYED.
  233. in src/txrequest.cpp:393 in 4e3e2f6a29 outdated
    382+    }
    383+
    384+    //! Convert a CANDIDATE_DELAYED announcement into a CANDIDATE_READY. If this makes it the new best
    385+    //! CANDIDATE_READY (and no REQUESTED exists) and better than the CANDIDATE_BEST (if any), it becomes the new
    386+    //! CANDIDATE_BEST.
    387+    void PromoteCandidateReady(Iter<ByTxHash> it)
    


    ariard commented at 1:10 am on October 8, 2020:
    I think this function is right, but I was confused for a while about the behavior if we could concurrently have two announcements, one _BEST and the other _REQUESTED. We can’t, but it would be better to document that as a friendly reminder.

    sipa commented at 1:30 am on October 8, 2020:
    It says “and no REQUESTED exists”, is that enough?

    ariard commented at 1:55 pm on October 8, 2020:
    Right, there is an “and” marking the conjunction.
  234. ariard commented at 1:12 am on October 8, 2020: member
    Almost over
  235. sipa force-pushed on Oct 8, 2020
  236. sipa commented at 1:38 am on October 8, 2020: member
    Addressed a bunch of review comments by @hebasto, @jnewbery, and @ariard. I’ve also added some extra unit tests to hopefully cover the txrequest.cpp lines @vasild pointed out were uncovered (but I haven’t verified if they’re covered now).
  237. sipa force-pushed on Oct 8, 2020
  238. sipa force-pushed on Oct 8, 2020
  239. naumenkogs commented at 8:06 am on October 8, 2020: member

    Light code review ACK. It’s a really big change, and I’m a bit worried we’d break some existing non-obvious behaviour… The test and fuzzing stuff adds some confidence, and also the existing tests pass, so that’s good.

    I’ll spend more time in the next couple days doing more review. Perhaps I’ll try to use this new test framework to break this approach.

  240. sipa force-pushed on Oct 8, 2020
  241. jnewbery commented at 9:37 am on October 8, 2020: member
    Code review ACK bf3f99291b
  242. MarcoFalke referenced this in commit 9dd4de2832 on Oct 8, 2020
  243. jonatack commented at 9:46 am on October 8, 2020: member
    Looks like there is much to catch up with since my first pass over the code. Will circle back to review this after #19953 in the next 3-4 days.
  244. vasild commented at 10:02 am on October 8, 2020: member

    (re-generated on bf3f99291)

    Filtered code coverage report (files not modified by this PR are omitted, and unmodified lines in otherwise-modified files are dimmed).

    List of modified and not covered lines.

  245. in src/txrequest.cpp:442 in bf3f99291b outdated
    437+
    438+        // This announcement has a predecessor that belongs to the same txhash. Due to ordering, and the
    439+        // fact that 'it' is not COMPLETED, its predecessor cannot be COMPLETED here.
    440+        if (it != m_index.get<ByTxHash>().begin() && std::prev(it)->m_txhash == it->m_txhash) return false;
    441+
    442+        // This announcement has predecessor that belongs to the same txhash, and is not COMPLETED.
    


    ariard commented at 2:15 pm on October 8, 2020:
    You mean “successor” instead of “predecessor”, given std::next() usage ?

    sipa commented at 6:53 pm on October 8, 2020:
    Indeed, done.
  246. ariard commented at 2:44 pm on October 8, 2020: member
    Code Review ACK bf3f992
  247. in src/txrequest.cpp:241 in bf3f99291b outdated
    235+    //! Number of CANDIDATE_READY announcements for this txhash.
    236+    size_t m_candidate_ready = 0;
    237+    //! Number of CANDIDATE_BEST announcements for this txhash (at most one).
    238+    size_t m_candidate_best = 0;
    239+    //! Number of REQUESTED announcements for this txhash.
    240+    size_t m_requested = 0;
    


    hebasto commented at 4:33 pm on October 8, 2020:
        //! Number of CANDIDATE_BEST announcements for this txhash (at most one).
        size_t m_candidate_best = 0;
        //! Number of REQUESTED announcements for this txhash (at most one).
        size_t m_requested = 0;
    

    This is checked in https://github.com/bitcoin/bitcoin/blob/bf3f99291b8f207a809d6fd3309ef99eb3711388/src/txrequest.cpp#L324-L325


    sipa commented at 6:53 pm on October 8, 2020:
    Done.
  248. in src/txrequest.h:103 in bf3f99291b outdated
     98+    class Impl;
     99+    const std::unique_ptr<Impl> m_impl;
    100+
    101+public:
    102+    //! Construct a TxRequestTracker.
    103+    TxRequestTracker(bool deterministic = false);
    


    hebasto commented at 5:29 pm on October 8, 2020:
        explicit TxRequestTracker(bool deterministic = false);
    

    sipa commented at 6:53 pm on October 8, 2020:
    Good idea, done.
  249. sipa force-pushed on Oct 8, 2020
  250. sipa commented at 6:54 pm on October 8, 2020: member
    Rebased on the now-merged #20027, addressed a few more comments, and added yet another unit test to increase coverage.
  251. laanwj added this to the milestone 0.21.0 on Oct 8, 2020
  252. jnewbery commented at 8:27 am on October 9, 2020: member
    Code review ACK f7f75da6dc
  253. in src/net_processing.cpp:4593 in 3227f15f9d outdated
    4440-        // Eventually we should consider disconnecting peers, but this is
    4441-        // conservative.
    4442-        if (state.m_tx_download.m_check_expiry_timer <= current_time) {
    4443-            for (auto it=state.m_tx_download.m_tx_in_flight.begin(); it != state.m_tx_download.m_tx_in_flight.end();) {
    4444-                if (it->second <= current_time - TX_EXPIRY_INTERVAL) {
    4445-                    LogPrint(BCLog::NET, "timeout of inflight tx %s from peer=%d\n", it->first.ToString(), pto->GetId());
    


    MarcoFalke commented at 10:54 am on October 9, 2020:

    in commit 3227f15f9d :

    This seems to indicate peer misbehaviour (to some extent), so I’d prefer if the log under the net category was kept.

    I am aware that this requires code changes more than just adding back the LogPrint, but I think it might be worth the additional code to keep this debug log statement.


    sipa commented at 8:25 pm on October 9, 2020:
    Added a commit at the end that re-introduces a way to account for expirations. It’s nontrivial, so if other reviewers prefer doing it separately, that’s ok by me.

    sdaftuar commented at 4:49 pm on October 13, 2020:

    Just wanted to note that this expiry timer in the old code is checking for no response (and logging it) after a much longer time period (10 minutes or more) than in the new code, and the purpose was to re-enable transaction relay with a peer that we might have stopped downloading from (which is not possible in the new code).

    I’m happy to have this new code that logs, as it should be helpful for debugging p2p behavior in the future and doesn’t appear too costly, but just wanted to mention as an FYI that I don’t think removing this was much of a regression either.

  254. in test/functional/p2p_tx_download.py:149 in 3227f15f9d outdated
    149+        for i in range(MAX_GETDATA_IN_FLIGHT, len(txids)):
    150+            p.send_message(msg_inv([CInv(t=MSG_WTX, h=txids[i])]))
    151+        p.sync_with_ping()
    152+        self.log.info("No more than {} requests should be seen within {} seconds after announcement".format(MAX_GETDATA_IN_FLIGHT, INBOUND_PEER_TX_DELAY + OVERLOADED_PEER_DELAY - 1))
    153+        self.nodes[0].setmocktime(mock_time + INBOUND_PEER_TX_DELAY + OVERLOADED_PEER_DELAY - 1)
    154+        time.sleep(2) # give some time to get requests for the last 2 INVs
    


    MarcoFalke commented at 11:54 am on October 9, 2020:

    in commit 3227f15f9d:

    Seems confusing to mix mocktime, but then wait for wall clock time. Is there any reason the process message loop will do anything different if called for two seconds in a loop but with fixed time? I think running it once is enough. You can do that with p.sync_with_ping().


    sipa commented at 8:25 pm on October 9, 2020:
    Ha, of course. Fixed.
  255. in test/functional/p2p_tx_download.py:144 in 3227f15f9d outdated
    153+        self.nodes[0].setmocktime(mock_time + INBOUND_PEER_TX_DELAY + OVERLOADED_PEER_DELAY - 1)
    154+        time.sleep(2) # give some time to get requests for the last 2 INVs
    155         with p2p_lock:
    156             assert_equal(p.tx_getdata_count, MAX_GETDATA_IN_FLIGHT)
    157-
    158-        self.log.info("Now check that if we send a NOTFOUND for a transaction, we'll get one more request")
    


    MarcoFalke commented at 11:55 am on October 9, 2020:

    Generally, it seems that the code has strong unit and fuzz coverage for the internal logic, but the net_processing “glue code” is relatively untested when it comes to edge cases. I’ve written some edge-case tests. Feel free to steal or ignore:

    diff --git a/test/functional/p2p_tx_download.py b/test/functional/p2p_tx_download.py
    index 01b0f0efa0..9cb92b77ae 100755
    --- a/test/functional/p2p_tx_download.py
    +++ b/test/functional/p2p_tx_download.py
    @@ -46,6 +46,7 @@ INBOUND_PEER_TX_DELAY = 2  # seconds
     TXID_RELAY_DELAY = 2 # seconds
     OVERLOADED_PEER_DELAY = 2 # seconds
     MAX_GETDATA_IN_FLIGHT = 100
    +MAX_PEER_TX_ANNOUNCEMENTS = 5000
     
     # Python test constants
     NUM_INBOUND = 10
    @@ -153,14 +154,80 @@ class TxDownloadTest(BitcoinTestFramework):
             self.nodes[0].setmocktime(mock_time + INBOUND_PEER_TX_DELAY + OVERLOADED_PEER_DELAY)
             p.wait_until(lambda: p.tx_getdata_count == len(txids))
     
    +    def test_expiry_fallback(self):
    +        self.log.info('Check that expiry will select another peer for download')
    +        WTXID = 0xffaa
    +        peer1 = self.nodes[0].add_p2p_connection(TestP2PConn())
    +        peer2 = self.nodes[0].add_p2p_connection(TestP2PConn())
    +        for p in [peer1, peer2]:
    +            p.send_message(msg_inv([CInv(t=MSG_WTX, h=WTXID)]))
    +        # One of the peers is asked for the tx
    +        peer2.wait_until(lambda: sum(p.tx_getdata_count for p in [peer1, peer2]) == 1)
    +        peer_expiry, peer_fallback = (peer1, peer2) if peer1.tx_getdata_count == 1 else (peer2, peer1)
    +        with p2p_lock:
    +            assert_equal(peer_fallback.tx_getdata_count, 0)
    +        self.nodes[0].setmocktime(int(time.time()) + GETDATA_TX_INTERVAL + 1)  # Wait for request to peer_expiry to expire
    +        peer_fallback.sync_with_ping()
    +        with p2p_lock:
    +            assert_equal(peer_fallback.tx_getdata_count, 1)
    +        self.restart_node(0)  # reset mocktime
    +
    +    def test_notfound_fallback(self):
    +        self.log.info('Check that notfounds will select another peer for download immediately')
    +        WTXID = 0xffdd
    +        peer1 = self.nodes[0].add_p2p_connection(TestP2PConn())
    +        peer2 = self.nodes[0].add_p2p_connection(TestP2PConn())
    +        for p in [peer1, peer2]:
    +            p.send_message(msg_inv([CInv(t=MSG_WTX, h=WTXID)]))
    +        # One of the peers is asked for the tx
    +        peer2.wait_until(lambda: sum(p.tx_getdata_count for p in [peer1, peer2]) == 1)
    +        peer_notfound, peer_fallback = (peer1, peer2) if peer1.tx_getdata_count == 1 else (peer2, peer1)
    +        with p2p_lock:
    +            assert_equal(peer_fallback.tx_getdata_count, 0)
    +        peer_notfound.send_and_ping(msg_notfound(vec=[CInv(MSG_WTX, WTXID)]))  # Send notfound, so that fallback peer is selected
    +        peer_fallback.sync_with_ping()
    +        with p2p_lock:
    +            assert_equal(peer_fallback.tx_getdata_count, 1)
    +
    +    def test_preferred_inv(self):
    +        self.log.info('Check that invs from preferred peers are downloaded immediately')
    +        self.restart_node(0, extra_args=['-whitelist=noban@127.0.0.1'])
    +        peer = self.nodes[0].add_p2p_connection(TestP2PConn())
    +        peer.send_message(msg_inv([CInv(t=MSG_WTX, h=0xff00ff00)]))
    +        peer.sync_with_ping()
    +        with p2p_lock:
    +            assert_equal(peer.tx_getdata_count, 1)
    +
    +    def test_large_inv_batch(self):
    +        self.log.info('Test how large inv batches are handled with relay permission')
    +        self.restart_node(0, extra_args=['-whitelist=relay@127.0.0.1'])
    +        peer = self.nodes[0].add_p2p_connection(TestP2PConn())
    +        peer.send_message(msg_inv([CInv(t=MSG_WTX, h=wtxid) for wtxid in range(MAX_PEER_TX_ANNOUNCEMENTS + 1)]))
    +        peer.wait_until(lambda: peer.tx_getdata_count == MAX_PEER_TX_ANNOUNCEMENTS + 1)
    +
    +        self.log.info('Test how large inv batches are handled without relay permission')
    +        self.restart_node(0)
    +        peer = self.nodes[0].add_p2p_connection(TestP2PConn())
    +        peer.send_message(msg_inv([CInv(t=MSG_WTX, h=wtxid) for wtxid in range(MAX_PEER_TX_ANNOUNCEMENTS + 1)]))
    +        peer.wait_until(lambda: peer.tx_getdata_count == MAX_PEER_TX_ANNOUNCEMENTS)
    +        peer.sync_with_ping()
    +        with p2p_lock:
    +            assert_equal(peer.tx_getdata_count, MAX_PEER_TX_ANNOUNCEMENTS)
    +
         def test_spurious_notfound(self):
             self.log.info('Check that spurious notfound is ignored')
             self.nodes[0].p2ps[0].send_message(msg_notfound(vec=[CInv(MSG_TX, 1)]))
     
         def run_test(self):
    +        # Run tests without mocktime that only need one peer-connection first, to avoid restarting the nodes
    +        self.test_expiry_fallback()
    +        self.test_notfound_fallback()
    +        self.test_preferred_inv()
    +        self.test_large_inv_batch()
    +        self.test_spurious_notfound()
             # Run each test against new bitcoind instances, as setting mocktimes has long-term effects on when
             # the next trickle relay event happens.
    -        for test in [self.test_spurious_notfound, self.test_in_flight_max, self.test_inv_block, self.test_tx_requests]:
    +        for test in [self.test_in_flight_max, self.test_inv_block, self.test_tx_requests]:
             self.stop_nodes()
             self.start_nodes()
             self.connect_nodes(1, 0)
    93             self.connect_nodes(1, 0)
    

    sipa commented at 8:26 pm on October 9, 2020:
    This is great, I’ve squashed these tests in.
  256. in src/txrequest.h:87 in 1950db7598 outdated
    82+ *   P/(Ph+1) (where NHG stands for Negative Hypergeometric distribution). The "1 +" is due to the fact that the
    83+ *   attacker can be the first to announce through a preferred connection in this scenario, which very likely means
    84+ *   they get the first request.
    85+ * - If all P preferred connections are to the attacker, and there are NP non-preferred connections of which NPh>=1
    86+ *   are honest, where we assume that the attacker can disconnect and reconnect those connections, the distribution
    87+ *   becomes k ~ P + NB(p=1-NP/NPh,r=1) (where NB stands for Negative Binomial distribution), which has mean
    


    sdaftuar commented at 12:49 pm on October 9, 2020:
    Shouldn’t this say: k ~ P + NB(p=1-NPh/NP,r=1)?

    sipa commented at 8:26 pm on October 9, 2020:
    Indeed, fixed.
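
    For anyone verifying the corrected parameterization: assuming the convention that NB(p, r) counts failures before the r-th success, with p the failure probability (conventions differ, so this is an assumption), the mean follows directly:

        E[k] = P + \frac{p}{1-p}
             = P + \frac{1 - NPh/NP}{NPh/NP}
             = P + \frac{NP}{NPh} - 1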
  257. in src/txrequest.cpp:130 in 1950db7598 outdated
    125+
    126+// Definitions for the 3 indexes used in the main data structure.
    127+//
    128+// Each index has a By* type to identify it, a By*View data type to represent the view of announcement it is sorted
    129+// by, and an By*ViewExtractor type to convert an announcement into the By*View type.
    130+// See https://www.boost.org/doc/libs/1_54_0/libs/multi_index/doc/reference/key_extraction.html#key_extractors
    


    hebasto commented at 1:42 pm on October 9, 2020:

    1950db7598a426232aeb5b6e1114a1f7e1ab35a1, nit:

    // See https://www.boost.org/doc/libs/1_58_0/libs/multi_index/doc/reference/key_extraction.html#key_extractors
    

    sipa commented at 8:23 pm on October 9, 2020:
    Done.
  258. hebasto approved
  259. hebasto commented at 1:43 pm on October 9, 2020: member

    ACK f7f75da6dcd3e83eec21e71120965208df1c02b1, tested on Linux Mint 20 (x86_64), running the node for 15 hours flawlessly, txs fill its mempool as expected.

    I did not verify the probabilities in the attack scenarios.

    Nit: s/cs_main/::cs_main/

  260. in src/txrequest.cpp:504 in 1950db7598 outdated
    377+                break;
    378+            }
    379+        }
    380+
    381+        while (!m_index.empty()) {
    382+            // If time went backwards, we may need to demote CANDIDATE_BEST and CANDIDATE_READY announcements back
    


    sdaftuar commented at 2:48 pm on October 9, 2020:
    I have long wondered whether we should just prevent time from going backwards in GetTime(), by enforcing that it can only go forward. Even if we didn’t do that globally in bitcoind, I think we could consider doing that for SetTimePoint()…?

    sipa commented at 8:29 pm on October 9, 2020:

    Yeah, that works. My earlier argument was that it wasn’t much code to deal with time going backwards, and it’s elegant that these time-based operations work symmetrically. I’ve since added a test for time-going-backward behavior, so perhaps it isn’t quite true anymore.

    I’ve added a new commit at the end that removes the time-going-backward-makes-candidates-unrequestable-again behavior by introducing a “max time seen” clock inside TxRequestTracker. I think it removes surprisingly little, but it’s something.
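
    A minimal sketch of that “max time seen” clock (hypothetical names; the PR’s actual code differs in detail):

        #include <algorithm>
        #include <chrono>

        // Clamp the caller-supplied time so the tracker's clock never moves
        // backwards; demotions on backward time jumps then cannot occur.
        class MaxTimeSeenClock {
            std::chrono::microseconds m_max_seen{0};
        public:
            std::chrono::microseconds Update(std::chrono::microseconds now)
            {
                m_max_seen = std::max(m_max_seen, now);
                return m_max_seen; // use this, not 'now', for state transitions
            }
        };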

  261. in src/txrequest.cpp:117 in 1950db7598 outdated
    112+        m_k1{deterministic ? 0 : GetRand(0xFFFFFFFFFFFFFFFF)} {}
    113+
    114+    Priority operator()(const uint256& txhash, NodeId peer, bool preferred) const
    115+    {
    116+        uint64_t low_bits = CSipHasher(m_k0, m_k1).Write(txhash.begin(), txhash.size()).Write(peer).Finalize() >> 1;
    117+        return low_bits | uint64_t{preferred} << 63;
    


    sdaftuar commented at 3:19 pm on October 9, 2020:

    I was concerned whether converting the bool to a uint64_t was guaranteed to work as expected (ie map true to uint64_t(1) on all platforms/all compilers), so I tried to check. From https://en.cppreference.com/w/cpp/language/implicit_conversion:

    If the source type is bool, the value false is converted to zero and the value true is converted to the value one of the destination type (note that if the destination type is int, this is an integer promotion, not an integer conversion).

    So I think this is correct, but figured I’d mention it here in case anyone else was wondering, or in case I am misinterpreting this reference!


    laanwj commented at 8:15 pm on October 9, 2020:
    Yes, converting bool to int is (luckily!) fully defined behavior in C++!

    sipa commented at 8:30 pm on October 9, 2020:
    That’s what I was assuming - booleans are effectively integers that can only take on value 0 and 1. Good to verify in any case.
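
    The guarantee can even be pinned down at compile time; a small self-contained illustration (not from the PR):

        #include <cstdint>

        static_assert(uint64_t{true} == 1, "bool true converts to exactly 1");
        static_assert(uint64_t{false} == 0, "bool false converts to exactly 0");
        // Hence uint64_t{preferred} << 63 sets only the top bit when preferred:
        static_assert((uint64_t{true} << 63) == 0x8000000000000000ULL, "top bit only");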
  262. MarcoFalke approved
  263. MarcoFalke commented at 3:23 pm on October 9, 2020: member

    Concept ACK f7f75da6dcd3e83eec21e71120965208df1c02b1 🍰

    Signature:

    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA512
    
    Concept ACK f7f75da6dcd3e83eec21e71120965208df1c02b1 🍰
    -----BEGIN PGP SIGNATURE-----
    
    iQGzBAEBCgAdFiEE+rVPoUahrI9sLGYTzit1aX5ppUgFAlwqrYAACgkQzit1aX5p
    pUgYvQwAsC6kCzTXSLPPxf2JWq/Gw9EpFclLj0LgVJjbtjgfPpGmmDBvpMuAWF4M
    Ekg+w/g1Yr7CgUsSEFhekC4LKCMzRmou3MSc7yQkt1qAQUzoQjlmWGxevKv/RacQ
    zdA/PzuMYt4EbjSgvzS6e1xjDbug8DFdSCLrZROOlNUETo+VYganEk8zJ7FzvTQC
    IfESxxd0Lu60YPpw6Sd4+D1YMSN9eNNetGZDJNWX3Lu6npLIzdKMwcdgdLnOSoYw
    y311fAMacQqEsHmPgQace10vzHV2ITJ51+TmNXz1QB0Iz0HHyHNDXYporZakoijG
    68tFmKKwRJ7PrGzweTdgg75TGg8o+z19SxcbW+NjPX3yDau4x0CRW0nba65bwuub
    JrXVVAsN/Z5XMhOl9KvxNhRs2lWbNwKzvXJsNUBF17YayBvWQaJrZJyob/9H74V7
    quQ1eXpQ7krjnvxW36kcMu8LQ/3oyvHCFdUo2GXDKgxG0p5mSBG/QtUOuDSPpJSG
    IfHeKBzi
    =R1ga
    -----END PGP SIGNATURE-----
    

    Timestamp of file with hash 5d10ad8b06999fc808347d5413aa3e41727bceb170f317e82824b8b729fd37f9 -

  264. sdaftuar commented at 5:55 pm on October 9, 2020: member
    Haven’t gotten very far on my review; will be picking this up again in a few days so just posting a few minor observations for now.
  265. sipa force-pushed on Oct 9, 2020
  266. sipa force-pushed on Oct 9, 2020
  267. sipa force-pushed on Oct 9, 2020
  268. sipa force-pushed on Oct 9, 2020
  269. in src/primitives/transaction.h:403 in ff290ca931 outdated
    398@@ -399,8 +399,8 @@ template <typename Tx> static inline CTransactionRef MakeTransactionRef(Tx&& txI
    399 /** A generic txid reference (txid or wtxid). */
    400 class GenTxid
    401 {
    402-    const bool m_is_wtxid;
    403-    const uint256 m_hash;
    404+    bool m_is_wtxid;
    405+    uint256 m_hash;
    


    MarcoFalke commented at 9:15 am on October 10, 2020:

    in commit ff290ca931f7131b3c1f7b7ac62b7a58ecc3c794:

    Any reason for this change? gcc and clang (c++17) compile fine for me without it.

    If you want to modify this class in this pull, you might add a ToString() const method that returns IsWtxid() ? "wtx " : "tx " + GetHash().ToString() and use it where appropriate.


    sipa commented at 3:58 pm on October 10, 2020:
    The std::sort (in unit tests) on pairs of NodeId, GenTxid failed without GenTxid having an assignment operator for me.

    MarcoFalke commented at 5:39 pm on October 10, 2020:
    Thanks. Didn’t compile the fuzz tests.

    hebasto commented at 6:26 pm on October 10, 2020:

    Both gcc 9.3.0 and clang 10.0.0 compile fine for me without this change.

    Thanks. Didn’t compile the fuzz tests.

    Indeed.

  270. in src/test/fuzz/txrequest.cpp:146 in 47fdac617a outdated
    142@@ -147,6 +143,7 @@ class Tester
    143 
    144     void AdvanceTime(std::chrono::microseconds offset)
    145     {
    146+        assert(offset.count() >= 0);
    


    MarcoFalke commented at 9:23 am on October 10, 2020:
    Does that mean Bitcoin Core will crash when the user adjusts the time?

    sipa commented at 3:59 pm on October 10, 2020:
    No, this is in the fuzz test. I wanted to verify that the test wasn’t wasting time by trying situations in which time goes backwards (which are now pointless).

    MarcoFalke commented at 5:38 pm on October 10, 2020:
    Thanks. Didn’t realize this was in the fuzz test. :see_no_evil:
  271. MarcoFalke commented at 9:25 am on October 10, 2020: member

    Concept ACK 47fdac617abbadebfa3d8381eecef8f1b4a5c31e 🐋

    Signature:

    -----BEGIN PGP SIGNED MESSAGE-----
    Hash: SHA512
    
    Concept ACK 47fdac617abbadebfa3d8381eecef8f1b4a5c31e 🐋
    -----BEGIN PGP SIGNATURE-----
    
    iQGzBAEBCgAdFiEE+rVPoUahrI9sLGYTzit1aX5ppUgFAlwqrYAACgkQzit1aX5p
    pUhFLgwAqf4Kzcm2Dut69+/taVCzflql+a8TArMeOtzf2lhRQEZqvcptCTrTCspj
    rI0xfgSWYicryPmf12VhJWvgShfzJfIK6R+u+3GeYoDlWGpMGXlbk5Ty+LUkIWuY
    Cqtst/spsNv17za0HWnU2T0qaiYa69Ily+EFLUVWowFVjguTTBTSJHLSBa4TA1Kz
    FuwISbcP2vCVX28S/kIBrdYL6ObcDV1xWBcb3gqkgHpjpQqoc21FkTu9oxydvag4
    SUIe6m8UhAcit3BMGSzs8OwXxP8N9Mdz4WdL7refsyNPOBpNLo3XY1xMQKmPGLR6
    WhEAuDRUThQqV5OBim7tnKURNFMtXo9ST6uspJekKG6iJlX1cLVDuCXdnLgKGPcu
    eP2/ix+o1/rUIwkW6WG13xIoLIesEA6ye6QCY64LevrYSjCmnGNy+eUEnRajTYAu
    UVfqoNbp/Wyfx0Ca9/phJK17jgv7La3T0MTtQb6AUgBAxxwd1ea5ByKBDWycvI6u
    S6BFeUTd
    =HDBF
    -----END PGP SIGNATURE-----
    

    Timestamp of file with hash 532dba4663e25a6d664a80d843eb1bc34f822be591e3963f888946feaeecbc58 -

  272. in src/test/fuzz/txrequest.cpp:288 in 47fdac617a outdated
    280+        }
    281+    }
    282+
    283+    void Check()
    284+    {
    285+        // Compare CountTracked and CountLoad with naive structure.
    


    hebasto commented at 6:31 pm on October 10, 2020:
        // Compare Size(), Count(), CountInFlight() and CountCandidates() with naive structure.
    

    sipa commented at 8:18 am on October 12, 2020:
    Done.
  273. in src/test/fuzz/txrequest.cpp:66 in 47fdac617a outdated
    57+ * The output of GetRequestable is compared with the expected value
    58+ * as well.
    59+ *
    60+ * Check() calls the TxRequestTracker's sanity check, plus compares the
    61+ * output of the constant accessors (Size(), CountLoad(), CountTracked())
    62+ * with expected values.
    


    hebasto commented at 6:33 pm on October 10, 2020:
     * Check() calls the TxRequestTracker's sanity check, plus compares the
     * output of the constant accessors (Size(), Count(), CountInFlight() and
     * CountCandidates()) with expected values.
    

    sipa commented at 8:17 am on October 12, 2020:
    Done.
  274. hebasto approved
  275. hebasto commented at 7:06 pm on October 10, 2020: member
    re-ACK 47fdac617abbadebfa3d8381eecef8f1b4a5c31e; with the recently added commits and -debug=net I can observe “timeout of inflight tx…” messages in the log.
  276. in src/txrequest.cpp:449 in 0e9ac0841d outdated
    325+        // This announcement has a predecessor that belongs to the same txhash. Due to ordering, and the
    326+        // fact that 'it' is not COMPLETED, its predecessor cannot be COMPLETED here.
    327+        if (it != m_index.get<ByTxHash>().begin() && std::prev(it)->m_txhash == it->m_txhash) return false;
    328+
    329+        // This announcement has a successor that belongs to the same txhash, and is not COMPLETED.
    330+        if (std::next(it) != m_index.get<ByTxHash>().end() && std::next(it)->m_txhash == it->m_txhash &&
    


    naumenkogs commented at 7:53 am on October 12, 2020:

    But won’t this miss the case where the COMPLETED announcement is not the immediate next one? {…, it-1, it, it+1, COMPLETED}.

    It seems like it actually works properly because it is called for CANDIDATE_BEST (implicit rule). Perhaps add this assert then?


    sipa commented at 8:09 am on October 12, 2020:

    {…. it-1, it, it+1, COMPLETED}.

    In that case it is not the only non-COMPLETED Announcement for this txhash (it-1 and it+1 are also non-COMPLETED), so false is the correct return value.


    naumenkogs commented at 8:13 am on October 12, 2020:
    Right, thanks!
  277. sipa force-pushed on Oct 12, 2020
  278. hebasto approved
  279. hebasto commented at 8:43 am on October 12, 2020: member
    re-ACK 93d6a00a2da5eace70afd2e794a772a7c9be541a, only suggested changes in comments since my previous review, and rebased.
  280. in src/txrequest.cpp:499 in 700a337daa outdated
    372+            if (it->m_state == State::CANDIDATE_DELAYED && it->m_time <= now) {
    373+                PromoteCandidateReady(m_index.project<ByTxHash>(it));
    374+            } else if (it->m_state == State::REQUESTED && it->m_time <= now) {
    375+                MakeCompleted(m_index.project<ByTxHash>(it));
    376+            } else {
    377+                break;
    


    naumenkogs commented at 9:30 am on October 12, 2020:

    So, let’s say we start iterating, and the very first one is CANDIDATE_READY, COMPLETED, or CANDIDATE_BEST with it->m_time <= now. Then it’s possible that we have CANDIDATE_DELAYED and REQUESTED entries which should be updated, but we won’t reach them because we quit.

    Is it not possible for some reason?


    sipa commented at 5:13 pm on October 12, 2020:
    This is iterating the ByTime index, which is sorted by (state is CANDIDATE_DELAYED or REQUESTED, time). So in that index, all CANDIDATE_DELAYED and REQUESTED Announcements come before any others. If another one is reached, it means there are no CANDIDATE_DELAYED or REQUESTED left.

    naumenkogs commented at 7:00 am on October 13, 2020:
    Oh yeah, it works since it’s sorted by state first. I just assumed it was sorted by time only.
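
    A simplified sketch of that ordering (hypothetical, condensed types; the real code uses a boost::multi_index key extractor):

        #include <chrono>
        #include <utility>

        enum class State { CANDIDATE_DELAYED, CANDIDATE_READY, CANDIDATE_BEST, REQUESTED, COMPLETED };
        struct Announcement { State m_state; std::chrono::microseconds m_time; };

        // ByTime sorts on (no-pending-event, time): every CANDIDATE_DELAYED or
        // REQUESTED announcement sorts before all others, so iteration from
        // begin() may stop at the first entry in any other state.
        std::pair<bool, std::chrono::microseconds> ByTimeView(const Announcement& ann)
        {
            const bool has_event = ann.m_state == State::CANDIDATE_DELAYED ||
                                   ann.m_state == State::REQUESTED;
            return {!has_event, ann.m_time}; // false sorts first
        }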
  281. in src/txrequest.cpp:413 in 700a337daa outdated
    408+    void DisconnectedPeer(NodeId peer)
    409+    {
    410+        auto& index = m_index.get<ByPeer>();
    411+        auto it = index.lower_bound(ByPeerView{peer, false, uint256::ZERO});
    412+        while (it != index.end() && it->m_peer == peer) {
    413+            // Check what to continue with after this iteration. Note that 'it' may change position, and
    


    naumenkogs commented at 9:40 am on October 12, 2020:
    Is this safe? I guess as long as the change of position is moderate (happening within a peer). And perhaps it can only go backward? I’m wondering if we may get stuck in an endless loop here somehow…

    sipa commented at 7:19 pm on October 12, 2020:
    Yes, it’s safe. The key insight is that no other Announcement objects from the same peer are ever affected by this call (std::next(it) may be affected, but only if it’s from a different peer). I’ve elaborated the comments to hopefully clarify this.

    naumenkogs commented at 7:04 am on October 13, 2020:
    Thanks!
  282. in src/txrequest.cpp:303 in 700a337daa outdated
    230+
    231+}  // namespace
    232+
    233+/** Actual implementation for TxRequestTracker's data structure. */
    234+class TxRequestTracker::Impl {
    235+    //! The current sequence number. Increases for every announcement. This is used to sort txhashes returned by
    


    naumenkogs commented at 9:47 am on October 12, 2020:
    Why is this useful? I guess handling dependencies? Worth documenting I think.

    ariard commented at 3:17 pm on October 12, 2020:
    Yes, see #19184 (review). It’s not documented on the current tip. I thought it was addressed on a previous tip but can’t find where.

    sipa commented at 7:19 pm on October 12, 2020:
    I’ve added a comment about this in txrequest.h (but not here; it’s just following the specification defined there).

    naumenkogs commented at 7:10 am on October 13, 2020:
    Thanks!
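
    A condensed illustration (hypothetical types) of what the sequence number buys: GetRequestable returns announcements in announcement order, so a parent announced before its child is also requested before it:

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        struct Selected { uint64_t m_sequence; /* txhash, peer, ... */ };

        // Earlier-announced transactions are requested first.
        void SortForRequest(std::vector<Selected>& selected)
        {
            std::sort(selected.begin(), selected.end(),
                [](const Selected& a, const Selected& b) { return a.m_sequence < b.m_sequence; });
        }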
  283. in src/txrequest.cpp:502 in 700a337daa outdated
    497+                // txhash wasn't tracked at all (and the caller should have called ReceivedInv), or it was already
    498+                // requested and/or completed for other reasons and this is just a superfluous RequestedTx call.
    499+                return;
    500+            }
    501+
    502+            // Look for an existing CANDIDATE_BEST or REQUESTED.
    


    naumenkogs commented at 9:52 am on October 12, 2020:
    Shouldn’t this be outside the if (it == m_index.get<ByPeer>().end()) { condition and happen unconditionally?

    sipa commented at 7:21 pm on October 12, 2020:
    That’s not necessary, as it’s only possible in case no CANDIDATE_BEST entry for the specified (peer, txhash) was found (if there was one, then no other REQUESTED or CANDIDATE_BEST for the same txhash by another peer can exist, due to the invariant that there is at most one of those per txhash). I’ve added more comments to hopefully clarify.

    naumenkogs commented at 7:19 am on October 13, 2020:
    Right.
  284. in src/txrequest.cpp:652 in 700a337daa outdated
    511+                    // _READY, so pick that to avoid extra work in SetTimePoint().
    512+                    Modify<ByTxHash>(it_old, [](Announcement& ann) { ann.m_state = State::CANDIDATE_READY; });
    513+                } else if (it_old->m_state == State::REQUESTED) {
    514+                    // As we're no longer waiting for a response to the previous REQUESTED announcement, convert it
    515+                    // to COMPLETED. This also helps guaranteeing progress.
    516+                    Modify<ByTxHash>(it_old, [](Announcement& ann) { ann.m_state = State::COMPLETED; });
    


    naumenkogs commented at 9:56 am on October 12, 2020:
    I guess this ideally should happen atomically with setting it to REQUESTED? Otherwise, some call in-between these two actions may, for example, make another request for the same tx? (as they see there is no currently REQUESTED)

    sipa commented at 5:59 pm on October 12, 2020:
    TxRequestTracker is thread-agnostic so that’s not allowed. The caller needs external synchronization if multiple threads are going to be accessing it.

    naumenkogs commented at 7:21 am on October 13, 2020:
    I see.
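
    To make the contract concrete: the tracker does no locking of its own, so a caller-owned mutex must cover every call (net_processing uses its existing lock for this). A generic, purely illustrative wrapper:

        #include <mutex>

        template <typename Tracker>
        class Synchronized {
            std::mutex m_mutex;
            Tracker m_tracker;
        public:
            // Run fn with exclusive access to the wrapped tracker.
            template <typename Fn>
            auto WithLock(Fn&& fn)
            {
                std::lock_guard<std::mutex> lock{m_mutex};
                return fn(m_tracker);
            }
        };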
  285. in src/txrequest.cpp:665 in 700a337daa outdated
    524+        });
    525+    }
    526+
    527+    void ReceivedResponse(NodeId peer, const uint256& txhash)
    528+    {
    529+        // We need to search the ByPeer index for both (peer, false, txhash) and (peer, true, txhash).
    


    naumenkogs commented at 10:03 am on October 12, 2020:
    I guess this implementation should assume that either false or true exists? Otherwise I’m not sure what happens. Maybe worth documenting or adding an explicit assert?

    sipa commented at 7:22 pm on October 12, 2020:
    ReceivedResponse is supposed to have no effect in case no CANDIDATE or REQUESTED announcement with the specified peer/txhash combination exists. I’ve updated the comment in txrequest.h to clarify.

    naumenkogs commented at 7:24 am on October 13, 2020:
    Sorry, I probably meant that both cannot exist at the same time? Otherwise this call can only delete one. But since peer+tx_hash is supposed to be a unique tuple, I guess it is safe. Feel free to close this one.
  286. naumenkogs commented at 10:41 am on October 12, 2020: member

    utACK 93d6a00a2da5eace70afd2e794a772a7c9be541a basic scenarios modulo my comments above. I’m still planning to review couple corner cases: tx-dependencies, notfounds, reorgs.

    I don’t have a strong opinion on keeping time-going-backwards in the codebase.

  287. in test/functional/p2p_tx_download.py:165 in 93d6a00a2d outdated
    169+        peer1 = self.nodes[0].add_p2p_connection(TestP2PConn())
    170+        peer2 = self.nodes[0].add_p2p_connection(TestP2PConn())
    171+        for p in [peer1, peer2]:
    172+            p.send_message(msg_inv([CInv(t=MSG_WTX, h=WTXID)]))
    173+        # One of the peers is asked for the tx
    174+        peer2.wait_until(lambda: sum(p.tx_getdata_count for p in [peer1, peer2]) == 1)
    


    jnewbery commented at 12:13 pm on October 12, 2020:
    it’s a bit gross that you’re calling into a getter lambda for peer1 through a peer2 method, but I guess it works, and now that wait_until() can’t take a lock parameter I can’t think of a cleaner way to do this.

    MarcoFalke commented at 2:17 pm on October 12, 2020:
    It would be possible to use self.wait_until and take the lock inside the lambda. Though, the lambda would no longer fit on one line then, which I didn’t like.

    jnewbery commented at 3:59 pm on October 12, 2020:
    yeah, I think this is ok, just not very aesthetically pleasing!
  288. in test/functional/p2p_tx_download.py:166 in 93d6a00a2d outdated
    170+        peer2 = self.nodes[0].add_p2p_connection(TestP2PConn())
    171+        for p in [peer1, peer2]:
    172+            p.send_message(msg_inv([CInv(t=MSG_WTX, h=WTXID)]))
    173+        # One of the peers is asked for the tx
    174+        peer2.wait_until(lambda: sum(p.tx_getdata_count for p in [peer1, peer2]) == 1)
    175+        peer_expiry, peer_fallback = (peer1, peer2) if peer1.tx_getdata_count == 1 else (peer2, peer1)
    


    jnewbery commented at 12:14 pm on October 12, 2020:
    This should be under the p2p_lock scope, since you’re accessing peer1.tx_getdata_count

    MarcoFalke commented at 2:17 pm on October 12, 2020:
    Good catch

    sipa commented at 7:22 pm on October 12, 2020:
    Fixed, here and in other places.
  289. in test/functional/p2p_tx_download.py:172 in 93d6a00a2d outdated
    183-        self.nodes[0].setmocktime(0)
    184+            assert_equal(peer_fallback.tx_getdata_count, 0)
    185+        self.nodes[0].setmocktime(int(time.time()) + GETDATA_TX_INTERVAL + 1)  # Wait for request to peer_expiry to expire
    186+        peer_fallback.sync_with_ping()
    187+        with p2p_lock:
    188+            assert_equal(peer_fallback.tx_getdata_count, 1)
    


    jnewbery commented at 12:36 pm on October 12, 2020:

    Could this be racy? The pong from peer_fallback will be sent at almost the exact same time as the getdata, and maybe before. If we take the p2p_lock as soon as we receive the pong, then the getdata might not have arrived yet.

    Here’s the log from a successful run

    0node0 2020-10-12T12:31:39.447800Z (mocktime: 2020-10-12T12:32:40Z) [msghand] timeout of inflight wtx 000000000000000000000000000000000000000000000000000000000000ffaa from peer=2 
    1 node0 2020-10-12T12:31:39.447862Z (mocktime: 2020-10-12T12:32:40Z) [msghand] received: ping (8 bytes) peer=1 
    2 node0 2020-10-12T12:31:39.447891Z (mocktime: 2020-10-12T12:32:40Z) [msghand] sending pong (8 bytes) peer=1 
    3 test  2020-10-12T12:31:39.448000Z TestFramework.p2p (DEBUG): Received message from 127.0.0.1:13915: msg_pong(nonce=00000002) 
    4 test  2020-10-12T12:31:39.448000Z TestFramework.p2p (DEBUG): Received message from 127.0.0.1:13915: msg_getdata(inv=[CInv(type=WTX hash=000000000000000000000000000000000000000000000000000000000000ffaa)]) 
    5 node0 2020-10-12T12:31:39.448005Z (mocktime: 2020-10-12T12:32:40Z) [msghand] Requesting wtx 000000000000000000000000000000000000000000000000000000000000ffaa peer=1 
    6 node0 2020-10-12T12:31:39.448028Z (mocktime: 2020-10-12T12:32:40Z) [msghand] sending getdata (37 bytes) peer=1 
    

    If there’s any delay between those two ‘Received message’ lines, then this test could fail this assert.

    Perhaps replace this with a wait_until()


    MarcoFalke commented at 2:22 pm on October 12, 2020:

    I didn’t use a wait_until because I wanted to show that the fallback is selected “immediately” after expiry. When this is switched to a wait_until, it should probably use a short delay. An alternative would be to sync with two pings? :grimacing:

    Note to myself: There might be other places in the functional tests where this could be racy, though at least no others in this test script?


    jnewbery commented at 4:00 pm on October 12, 2020:
    Yeah, I think a wait_until with 1 second is fine. On a p2p network, 1s seems close enough to immediate.

    sipa commented at 7:23 pm on October 12, 2020:
    I’ve made some changes to this effect. Please have a look, I’m not super familiar with the p2p functionality in the python tests.

    jnewbery commented at 9:40 pm on October 12, 2020:
    The assert_equal(peer_fallback.tx_getdata_count, 1) calls after the wait_until() calls aren’t necessary (since if you’ve waited until something is true, then you don’t need to assert it’s true), but they’re not doing any harm.

    sipa commented at 10:13 pm on October 12, 2020:
    The wait_until tests for (>= 1), while the assertion tests for (== 1). So it would help catch the case where it erroneously jumps instantly to 2 or more.

    jnewbery commented at 8:47 am on October 13, 2020:

    Right, so it’s asserting that a getdata doesn’t contain multiple transaction CInvs. That doesn’t seem relevant to the test scenario (functionally, I think we only care that the transaction in question is requested).

    Marking as resolved since I don’t think it’s important, just not necessary.

  290. in src/txrequest.cpp:485 in 93d6a00a2d outdated
    478+    }
    479+
    480+    //! Move our clock forward and make the data structure consistent with the new time.
    481+    //! - REQUESTED announcements with expiry <= now are turned into COMPLETED.
    482+    //! - CANDIDATE_DELAYED announcements with reqtime <= now are turned into CANDIDATE_{READY,BEST}.
    483+    void SetTimePoint(std::chrono::microseconds now, std::vector<std::pair<NodeId, GenTxid>>* expired)
    


    jnewbery commented at 12:58 pm on October 12, 2020:
    This new function signature where we’re returning pairs of <NodeId, GenTxid> makes me more convinced that there should be a follow-up PR to pull this function out of GetRequestable() and call it outside the context of a single peer (https://github.com/bitcoin/bitcoin/pull/19988/files#r499479782).

    sipa commented at 5:40 pm on October 12, 2020:

    I agree, and considered that, but it’s a pain to make that satisfy the “any order of calls has well-specified behavior” property that was introduced a while ago. The problem is that ReceivedInv and RequestedTx cause the “PostGetRequestable” invariants to be violated until the next SetTimePoint. That’s currently not a problem, as the only way to observe that is through GetRequestable, which calls SetTimePoint. But if SetTimePoint is made a separate public function, we’d need to either:

    • Add a rule that GetRequestable can only be called after SetTimePoint, with no mutators in between - effectively forcing the API to do the same thing it’s doing now, but with the burden on the caller.
    • Add more state, remembering whether anything may have broken the invariants, and refusing GetRequestable in that condition.
    • Introduce even more structure, and precompute the results of all possible GetRequestable calls (across all peers), caching them inside TxRequestTracker to be fetched/deleted whenever GetRequestable is called. This would significantly worsen worst-case memory usage, I think.
    • Add the ability to install a callback function for expirations, instead of returning them from GetRequestable.

    So all things considered, I think this is a decent approach. The callback idea seems reasonable, and perhaps can be extended to more than just expirations, though if that’s what we pick, I’d like to keep it as a follow-up.


    jnewbery commented at 9:35 pm on October 12, 2020:

    ReceivedInv and RequestedTx cause the “PostGetRequestable” invariants to be violated until the next SetTimePoint.

    How about calling PromoteCandidateReady() from ReceivedInv() if reqtime is before m_time and calling MakeCompleted() from RequestedTx() if expiry is before m_time? That means that the invariants in PostGetRequestableSanityCheck() are actually invariant all the time. Now that PostGetRequestableSanityCheck() doesn’t take a time argument, it’s only acting on internal TxRequestTracker data, so as long as all public methods maintain the invariants, it should be safe to call any time (and perhaps could be combined into SanityCheck())


    jnewbery commented at 9:44 pm on October 12, 2020:

    The callback idea seems reasonable, and perhaps can be extended to more than just expirations

    Yes, this could potentially be useful if there are other events that we want to expose to net_processing.

    I’d like to keep it as a follow-up.

    I think whatever we do should be kept as a follow-up. Time to freeze this PR and get it merged :)


    sipa commented at 10:08 pm on October 12, 2020:
    Agree.

    ajtowns commented at 10:17 pm on October 12, 2020:
    We could use clang’s lock annotations for the additional rule so the extra burden is on the compiler rather than the caller? Have SetTimePoint return an RAII mutex-like thing, make GetRequestable require the mutex-like is held, and make the other functions require that it not be held. cf #18017

    jnewbery commented at 8:51 am on October 13, 2020:
    I don’t like the idea of exporting more state and conditions to the caller, but we can discuss in the follow-up PR. Marking this resolved.
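    As a rough illustration of the eager-normalization idea discussed above (a toy model with hypothetical names, not the merged code): mutators immediately resolve an already-past reqtime/expiry against the tracker’s current time, so the post-GetRequestable invariants would hold after every public call.

        #include <chrono>
        #include <cstddef>
        #include <vector>

        using Micros = std::chrono::microseconds;

        enum class State { CANDIDATE_DELAYED, CANDIDATE_READY, REQUESTED, COMPLETED };

        struct Ann { State state; Micros time; };

        struct ToyTracker {
            Micros m_time{0};
            std::vector<Ann> m_anns;

            void ReceivedInv(Micros reqtime)
            {
                Ann ann{State::CANDIDATE_DELAYED, reqtime};
                // Eager promotion: a reqtime already in the past means the
                // announcement is never observably in the DELAYED state.
                if (reqtime <= m_time) ann.state = State::CANDIDATE_READY;
                m_anns.push_back(ann);
            }

            void RequestedTx(std::size_t i, Micros expiry)
            {
                m_anns[i] = {State::REQUESTED, expiry};
                // Eager completion: an already-expired request is COMPLETED at
                // once, so no REQUESTED entry ever carries an expiry <= m_time.
                if (expiry <= m_time) m_anns[i].state = State::COMPLETED;
            }
        };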
  291. in src/txrequest.cpp:183 in 93d6a00a2d outdated
    176+    FUTURE_EVENT,
    177+    //! Used for announcements whose timestamp is not relevant.
    178+    NO_EVENT,
    179+};
    180+
    181+WaitState GetWaitState(const Announcement& ann)
    


    jnewbery commented at 1:02 pm on October 12, 2020:
    The WaitState enum and this function seem superfluous now, since you can just replace the enum with a bool and use IsWaiting().

    sipa commented at 7:23 pm on October 12, 2020:
    Gone.
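    For concreteness, a minimal sketch of what that simplification might look like (the state names follow the PR, but this is an assumed shape rather than the exact merged code):

        #include <cstdint>

        enum class State : uint8_t {
            CANDIDATE_DELAYED, CANDIDATE_READY, CANDIDATE_BEST, REQUESTED, COMPLETED
        };

        struct Announcement {
            State m_state;
            // Replaces the three-valued WaitState: the announcements whose
            // timestamp is a future event are exactly DELAYED and REQUESTED.
            bool IsWaiting() const
            {
                return m_state == State::REQUESTED || m_state == State::CANDIDATE_DELAYED;
            }
        };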
  292. in src/txrequest.cpp:61 in 93d6a00a2d outdated
    56+
    57+/** An announcement. This is the data we track for each txid or wtxid that is announced to us by each peer. */
    58+struct Announcement {
    59+    /** Txid or wtxid that was announced. */
    60+    const uint256 m_txhash;
    61+    /** For CANDIDATE_{DELAYED,BEST,READY} the reqtime; for REQUESTED the expiry. */
    


    jnewbery commented at 1:04 pm on October 12, 2020:
    I think we can just say “For CANDIDATE_DELAYED the reqtime; for REQUESTED the expiry; no meaning for CANDIDATE_READY, CANDIDATE_BEST and COMPLETED”

    sipa commented at 7:23 pm on October 12, 2020:
    Done.
  293. in src/txrequest.cpp:362 in 93d6a00a2d outdated
    356+                assert(ann.m_time > m_now);
    357+            } else if (ann.IsSelectable()) {
    358+                // CANDIDATE_READY and CANDIDATE_BEST cannot have a time in the future, as it would imply m_now
    359+                // going backwards.
    360+                assert(ann.m_time <= m_now);
    361+            }
    


    jnewbery commented at 1:06 pm on October 12, 2020:
    Just drop this now that m_time has no meaning for CANDIDATE_READY and CANDIDATE_BEST.

    sipa commented at 7:27 pm on October 12, 2020:
    Done.
  294. jnewbery commented at 1:10 pm on October 12, 2020: member

    No strong opinion about whether we should allow time to go backwards or not. If we keep the “Remove support for time going backwards” commit, there are a couple of additional small changes that I’d like to see.

    I’ve also left some comments on the new functional tests. I only left them on the first sub-test but they also apply to the other new sub-tests.

  295. in test/functional/p2p_tx_download.py:184 in 93d6a00a2d outdated
    195+        peer2 = self.nodes[0].add_p2p_connection(TestP2PConn())
    196+        for p in [peer1, peer2]:
    197+            p.send_message(msg_inv([CInv(t=MSG_WTX, h=WTXID)]))
    198+        # One of the peers is asked for the tx
    199+        peer2.wait_until(lambda: sum(p.tx_getdata_count for p in [peer1, peer2]) == 1)
    200+        peer_notfound, peer_fallback = (peer1, peer2) if peer1.tx_getdata_count == 1 else (peer2, peer1)
    


    MarcoFalke commented at 2:18 pm on October 12, 2020:
    Same here

    sipa commented at 7:27 pm on October 12, 2020:
    Done.
  296. MarcoFalke commented at 2:24 pm on October 12, 2020: member
    The test issues are my fault, because I wrote them
  297. in src/test/txrequest_tests.cpp:614 in 93d6a00a2d outdated
    609+    if (InsecureRandBool()) scenario.AdvanceTime(RandomTime8s());
    610+    scenario.ReceivedInv(peer1, gtxid2, true, MIN_TIME);
    611+    scenario.Check(peer1, {}, 1, 0, 1, "q16");
    612+    scenario.Check(peer2, {gtxid2, gtxid1}, 2, 0, 0, "q17");
    613+
    614+    // And request it from peer1 (weird).
    


    ariard commented at 3:53 pm on October 12, 2020:
    “Weird as peer2 should have the preference on gtxid2 request”.

    sipa commented at 7:28 pm on October 12, 2020:
    Done.
  298. in src/test/txrequest_tests.cpp:671 in 93d6a00a2d outdated
    615+    if (InsecureRandBool()) scenario.AdvanceTime(RandomTime8s());
    616+    scenario.RequestedTx(peer1, gtxid2.GetHash(), MAX_TIME);
    617+    scenario.Check(peer1, {}, 0, 1, 1, "q18");
    618+    scenario.Check(peer2, {gtxid1}, 2, 0, 0, "q19");
    619+
    620+    // If peer2 now (normally) requests gtxid2, the existing request by peer1 becomes COMPLETED.
    


    ariard commented at 3:56 pm on October 12, 2020:
    Shouldn’t this say “If gtxid2 is requested from peer2”.

    sipa commented at 6:56 pm on October 12, 2020:
    What is the difference?

    ariard commented at 8:36 pm on October 12, 2020:
    If I understand the test correctly, we (the unit test) request from peer2, but peer2 is never a requester.

    sipa commented at 10:09 pm on October 12, 2020:
    Ok? I don’t see the difference between the two sentences, except subject and object are swapped.

    naumenkogs commented at 7:28 am on October 13, 2020:
    The difference is the role of peer2. Currently, peer2 is a requester. Antoine suggests that peer2 is a non-requester (it is a responder).

    jnewbery commented at 8:13 am on October 14, 2020:

    I think @ariard is correct here. See the comment above the previous code block:

    // and request it from peer1 ...

    The unit test is simulating that the node has requested the transaction from peer 1.

    Here:

    // If peer2 now (normally) requests gtxid2, the existing request by peer1 becomes COMPLETED.

    is incorrect. The unit test is simulating that the node has requested the transaction from peer 2, not that peer 2 has requested the transaction from us.

    0    // If gtxid2 is now requested from peer2, the existing request to peer1 becomes COMPLETED.
    
  299. in test/functional/p2p_tx_download.py:246 in 93d6a00a2d outdated
    238+        # Run tests without mocktime that only need one peer-connection first, to avoid restarting the nodes
    239+        self.test_expiry_fallback()
    240+        self.test_notfound_fallback()
    241+        self.test_preferred_inv()
    242+        self.test_large_inv_batch()
    243+        self.test_spurious_notfound()
    


    ariard commented at 5:07 pm on October 12, 2020:

    AFAICT, fallback in case of requestee disconnection isn’t covered:

     0diff --git a/test/functional/p2p_tx_download.py b/test/functional/p2p_tx_download.py
     1index 71877a6ee..92b327b36 100755
     2--- a/test/functional/p2p_tx_download.py
     3+++ b/test/functional/p2p_tx_download.py
     4@@ -172,6 +172,24 @@ class TxDownloadTest(BitcoinTestFramework):
     5             assert_equal(peer_fallback.tx_getdata_count, 1)
     6         self.restart_node(0)  # reset mocktime
     7 
     8+    def test_disconnect_fallback(self):
     9+        self.log.info('Check that disconnect will select another peer for download')
    10+        WTXID = 0xffbb
    11+        peer1 = self.nodes[0].add_p2p_connection(TestP2PConn())
    12+        peer2 = self.nodes[0].add_p2p_connection(TestP2PConn())
    13+        for p in [peer1, peer2]:
    14+            p.send_message(msg_inv([CInv(t=MSG_WTX, h=WTXID)]))
    15+        # One of the peers is asked for the tx
    16+        peer2.wait_until(lambda: sum(p.tx_getdata_count for p in [peer1, peer2]) == 1)
    17+        peer_disconnect, peer_fallback = (peer1, peer2) if peer1.tx_getdata_count == 1 else (peer2, peer1)
    18+        with p2p_lock:
    19+            assert_equal(peer_fallback.tx_getdata_count, 0)
    20+        peer_disconnect.peer_disconnect()
    21+        peer_disconnect.wait_for_disconnect()
    22+        peer_fallback.sync_with_ping()
    23+        with p2p_lock:
    24+            assert_equal(peer_fallback.tx_getdata_count, 1)
    25+
    26     def test_notfound_fallback(self):
    27         self.log.info('Check that notfounds will select another peer for download immediately')
    28         WTXID = 0xffdd
    29@@ -221,6 +239,7 @@ class TxDownloadTest(BitcoinTestFramework):
    30     def run_test(self):
    31         # Run tests without mocktime that only need one peer-connection first, to avoid restarting the nodes
    32         self.test_expiry_fallback()
    33+        self.test_disconnect_fallback()
    34         self.test_notfound_fallback()
    35         self.test_preferred_inv()
    36         self.test_large_inv_batch()
    

    This test should fail with the following commit, which explicitly disables the reselect operation: https://github.com/bitcoin/bitcoin/commit/a3b7b951070b139599375846fd873c5dd798b762


    sipa commented at 7:28 pm on October 12, 2020:
    Added, thanks!
  300. in src/txrequest.h:163 in 93d6a00a2d outdated
    155+     *    announcements with the same txhash (subject to preferredness rules, and tiebreaking using a deterministic
    156+     *    salted hash of peer and txhash).
    157+     *  - The selected announcements are converted to GenTxids using their is_wtxid flag, and returned in
    158+     *    announcement order (even if multiple were added at the same time).
    159+     *
    160+     * 'now' cannot go backwards. In every call, the highest value for 'now' ever passed is used, rather than the
    


    ariard commented at 5:25 pm on October 12, 2020:
    If the clock keeps going back and forth but never crosses announcements’ m_time, are _DELAYED announcements going to stay stalled forever? Maybe worth documenting, to be sure we all have the same understanding of the new behavior.

    sipa commented at 6:15 pm on October 12, 2020:
    That seems absolutely trivial. If your clock makes no long-term progress then obviously no time-based events can be guaranteed to happen.

    ajtowns commented at 9:37 pm on October 12, 2020:

    This seems like it has slightly worse behaviour if time jumps backwards significantly (by more than the request/expiry delays). In that case:

    • any txs that are currently REQUESTED and get a NOTFOUND will result in all the CANDIDATE_READY alternatives being immediately requested (expiry will be set to the past so will expire immediately)
    • any new txs that come in will get immediately promoted from DELAYED to READY/BEST (reqtime in the past) and then be REQUESTED immediately, and then be immediately expired so any additional announcements that come in prior to the tx being delivered will also result in a request, etc.

    I think if a bunch of systems have time problems simultaneously (bad daylight savings transition or ntp problems?) that could trigger some network spikes as many nodes request txs from many other nodes.

    Making it the caller’s responsibility to use a steady clock seems better than having txrequest try to fake it, IMO anyway.

    Switching from chrono::duration to a chrono::time_point would allow you to test for tp::clock::is_steady and avoid actually doing any of the time-goes-backward code paths when you’re using a clock where that’s impossible, and also call tp::clock::now() directly rather than passing it in as a parameter (in which case the caller could cache old values and do them out of order etc to introduce bugs).

    Would deferring the number-go-up requirement for time to a later PR make sense?


    sipa commented at 10:09 pm on October 12, 2020:
    Ok, removed the last commit, and moved it to https://github.com/sipa/bitcoin/commits/202010_txrequest_rand_wtxid_monotime if we want it still.
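    A sketch of the clock-templating idea from this thread (a hypothetical interface, not the merged one; the PR as merged kept plain std::chrono::microseconds and dropped the monotonicity commit): templating on the clock lets a steady clock compile away the time-goes-backwards handling entirely.

        #include <chrono>

        template <typename Clock>
        class ToyTracker {
            typename Clock::time_point m_now{};
        public:
            void SetTimePoint(typename Clock::time_point now)
            {
                if constexpr (Clock::is_steady) {
                    m_now = now; // monotone by construction; no backwards handling needed
                } else {
                    if (now > m_now) m_now = now; // clamp: never let time move backwards
                }
            }
        };

        // Usage: ToyTracker<std::chrono::steady_clock> takes the first branch at
        // compile time; a wall-clock instantiation keeps the clamp.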
  301. ariard commented at 5:31 pm on October 12, 2020: member

    Code Review ACK 93d6a00. I’m ~0 on “Remove support for time going backwards”; what did it provide exactly in terms of robustness? Canceling _READY/_BEST and thus avoiding issuing a GETDATA?

    See the new functional test covering disconnection. I think TXID_RELAY_DELAY doesn’t have a functional test; I’ll try to add one.

  302. Add txrequest module
    This adds a new module (unused for now) which defines TxRequestTracker, a data
    structure that maintains all information about transaction requests, and coordinates
    requests.
    da3b8fde03
  303. Add txrequest unit tests
    Add unit tests for TxRequestTracker. Several scenarios are tested,
    randomly interleaved with each other.
    
    Includes a test by Antoine Riard (ariard).
    3c7fe0e5a0
  304. Add txrequest fuzz tests
    This adds a fuzz test that implements a naive reimplementation of
    TxRequestTracker (with up to 16 fixed peers and 16 fixed txhashes),
    and compares the real implementation against it.
    5b03121d60
  305. Change transaction request logic to use txrequest
    This removes most transaction request logic from net_processing, and
    replaces it with calls to a global TxRequestTracker object.
    
    The major changes are:
    
    * Announcements from outbound (and whitelisted) peers are now always
      preferred over those from inbound peers. This used to be the case for the
      first request (by delaying the first request from inbound peers), and
      a bias afterwards. The 2s delay for requests from inbound peers still exists,
      but after that, if viable outbound peers remain for any given transaction,
      they will always be tried first.
    
    * No more hard cap of 100 in flight transactions per peer, as there is less
      need for it (memory usage is linear in the number of announcements, but
      independent from the number in flight, and CPU usage isn't affected by it).
      Furthermore, if only one peer announces a transaction, and it has over 100
      in flight and requestable already, we still want to request it from them.
      The cap is replaced with an additional 2s delay (possibly combined with the
      existing 2s delays for inbound connections, and for txid peers when wtxid
      peers are available).
    
    Includes functional tests written by Marco Falke and Antoine Riard.
    242d16477d
  306. Reduce MAX_PEER_TX_ANNOUNCEMENTS for non-PF_RELAY peers
    Maintaining up to 100000 INVs per peer is excessive, as that is far more
    than fits in a typical mempool.
    
    Also disable the "overload" penalty for PF_RELAY peers.
    de11b0a4ef
  307. Expedite removal of tx requests that are no longer needed
    Whenever a transaction is added to the mempool or orphan pool, both
    its txid and wtxid are considered AlreadyHave, and thus will eventually
    be removed from m_txrequest.
    
    The same is true for hashes added to the reject filter, but note that sometimes
    only the wtxid is added (in which case only the wtxid can be removed from
    m_txrequest).
    173a1d2d3f
  308. Make txid delay penalty also apply to fetches of orphan's parents cc16fff3e4
  309. Delete limitedmap as it is unused now 86f50ed10f
  310. Report and verify expirations fd9a0060f0
  311. sipa force-pushed on Oct 12, 2020
  312. sipa commented at 7:32 pm on October 12, 2020: member

    @ariard The only advantages to dropping the time-going-backwards support are IMO:

    • Slightly easier to review because fewer situations to take into account
    • Better worst-case amortized complexity under insane clock situations (as now m_state can only move forward, and is thus restricted to 1 creation, 1 deletion, 4 state transitions, and once being returned by GetRequestable).
  313. sipa force-pushed on Oct 12, 2020
  314. sipa commented at 10:12 pm on October 12, 2020: member
    I dropped the last commit, so time can go backwards again. It seems it’s not a clear-cut improvement, so it’s probably better to deal with it in a follow-up. The old commit is here https://github.com/sipa/bitcoin/commits/202010_txrequest_rand_wtxid_monotime in case we want it in the future.
  315. in src/test/fuzz/txrequest.cpp:288 in fd9a0060f0
    283+        }
    284+    }
    285+
    286+    void Check()
    287+    {
    288+        // Compare CountTracked and CountLoad with naive structure.
    


    jnewbery commented at 9:06 am on October 13, 2020:

    You’ve regressed this comment in the latest force push. It should say:

    // Compare Size(), Count(), CountInFlight(), and CountCandidates() with naive structure.

  316. jnewbery commented at 9:11 am on October 13, 2020: member

    utACK fd9a0060f028a4c01bd88f58777dea34bdcbafd1

    If you do retouch this branch, the commit log for “Report and verify expirations” could be improved.

  317. jonatack commented at 9:52 am on October 13, 2020: member
    WIP light ACK fd9a0060f028a4c01bd88f58777dea34bdcbafd1: I have read the code, verified that each commit is hygienic (e.g. debug build clean and tests green), and have been running a node on and off with this branch while grepping the net debug log. I am still unpacking the discussion hidden by GitHub, fetching it via the API and connecting the dots, storing notes and suggestions in a local branch; at this point none are blockers.
  318. ryanofsky commented at 2:05 pm on October 13, 2020: member

    Light code review ACK fd9a0060f028a4c01bd88f58777dea34bdcbafd1, looking at txrequest implementation, unit test implementation, and net_processing integration, just trying to understand how it works and looking for anything potentially confusing in the implementation. Didn’t look at functional tests or catch up on review discussion. Just a sanity check review focused on:

    • da3b8fde03f2e8060bb7ff3bff17175dab85f0cd Add txrequest module (1/9)
    • 3c7fe0e5a0ee1abf4dc263ae5310e68253c866e1 Add txrequest unit tests (2/9)
    • 5b03121d60527a193a84c339151481f9c9c1962b Add txrequest fuzz tests (3/9)
    • 242d16477df1a024c7126bad23dde39cad217eca Change transaction request logic to use txrequest (4/9)
    • de11b0a4eff20da3e3ca52dc90948b5253d329c5 Reduce MAX_PEER_TX_ANNOUNCEMENTS for non-PF_RELAY peers (5/9)
    • 173a1d2d3f824b83777ac713e89bee69fd87692d Expedite removal of tx requests that are no longer needed (6/9)
    • cc16fff3e476a9378d2176b3c1b83ad12b1b052a Make txid delay penalty also apply to fetches of orphan’s parents (7/9)
    • 86f50ed10f66b5535f0162cf0026456a9e3f8963 Delete limitedmap as it is unused now (8/9)
    • fd9a0060f028a4c01bd88f58777dea34bdcbafd1 Report and verify expirations (9/9)
  319. in src/net_processing.cpp:3714 in fd9a0060f0
    3723-                        // message.
    3724-                        continue;
    3725-                    }
    3726-                    state->m_tx_download.m_tx_in_flight.erase(in_flight_it);
    3727-                    state->m_tx_download.m_tx_announced.erase(inv.hash);
    3728+                    // If we receive a NOTFOUND message for a tx we requested, mark the announcement for it as
    


    sdaftuar commented at 4:13 pm on October 13, 2020:
    It seems we no longer care whether the transaction was requested in order to mark the announcement as COMPLETED. I wondered if this might cause any problems (DoS or otherwise), but I think this is preferable to the current behavior. Perhaps we should update the comment to omit the phrase “for a tx we requested”?
  320. sipa commented at 5:30 pm on October 13, 2020: member
    I’m going to leave further comment-only nits for a follow-up now.
  321. sdaftuar commented at 6:53 pm on October 13, 2020: member
    Code review partial-ack; specifically, I’ve read through everything other than the test and fuzz-test changes, and all looks right to me so far.
  322. ariard commented at 1:18 am on October 14, 2020: member

    Code Review ACK fd9a006. I’ve reviewed the new TxRequestTracker, its integration in net_processing, and the unit/functional/fuzzing test coverage. I looked more for soundness of the new specification than for functional consistency with the old transaction request logic.

    Diff with my last ACK is dropping “Remove support for time going backwards”, plus new unit/functional test comments.

  323. in src/txrequest.cpp:412 in da3b8fde03 outdated
    407+
    408+    void DisconnectedPeer(NodeId peer)
    409+    {
    410+        auto& index = m_index.get<ByPeer>();
    411+        auto it = index.lower_bound(ByPeerView{peer, false, uint256::ZERO});
    412+        while (it != index.end() && it->m_peer == peer) {
    


    ajtowns commented at 4:07 am on October 14, 2020:
    0if (it->m_peer != peer) return; // no tracked announcements
    1// invariant from this point: it == end or it->m_peer == peer
    2while (it != index.end()) {
    3    auto it_next = std::next(it);
    4    if (it_next != index.end() && it_next->m_peer != peer) it_next = index.end();
    5    // do things with it, but don't affect any other announcement from peer, so it_next remains valid
    6    it = it_next;
    7}
    

    would be slightly clearer to me for what it’s worth – it’s what I’m translating the code to in my head to be convinced it’s correct.

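    A self-contained, runnable version of that pattern for reference (std::map standing in for the PR’s multi_index ByPeer index, with a simplified key shape): capture the successor iterator before the current element may be erased.

        #include <cstdint>
        #include <iostream>
        #include <iterator>
        #include <map>
        #include <string>
        #include <utility>

        using NodeId = int64_t;

        int main()
        {
            // Keyed by (peer, txhash); DisconnectedPeer(2) must drop exactly peer 2's entries.
            std::map<std::pair<NodeId, std::string>, int> index{
                {{1, "aa"}, 0}, {{2, "bb"}, 0}, {{2, "cc"}, 0}, {{3, "dd"}, 0}};

            const NodeId peer = 2;
            auto it = index.lower_bound({peer, ""});
            while (it != index.end() && it->first.first == peer) {
                auto it_next = std::next(it); // remains valid across erase(it)
                index.erase(it);              // stand-in for the per-announcement handling
                it = it_next;
            }
            for (const auto& entry : index) {
                std::cout << entry.first.first << ' ' << entry.first.second << '\n';
            }
        }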
  324. MarcoFalke commented at 7:46 am on October 14, 2020: member

    Approach ACK fd9a0060f028a4c01bd88f58777dea34bdcbafd1 🏹

    Signature:

     0-----BEGIN PGP SIGNED MESSAGE-----
     1Hash: SHA512
     2
     3Approach ACK fd9a0060f028a4c01bd88f58777dea34bdcbafd1 🏹
     4-----BEGIN PGP SIGNATURE-----
     5
     6iQGzBAEBCgAdFiEE+rVPoUahrI9sLGYTzit1aX5ppUgFAlwqrYAACgkQzit1aX5p
     7pUhjIwv9EYiV1H9OmL1ZUSW0nLll5hHVgDQvvRx/4wd3a038VVKDP61IcBXd6oQn
     8035vbqlVbfgd5lVVUPFmuuLAt0g1vq0IZxWf+EU0ySjL4e76TntgvU2yE6q5PZDY
     9wnr7FX5uzP/AEIsfJr4uBuWVg5EmQXrLK2zt8RlgCeyiLKUW/CqzcjDHCU/rVYMV
    10kSNwb0pDRF0yyZbK2RtzhkliQM7a0fj56WCheRFQNtbJg+a99TU62vnAJWy/JWpd
    11pLHi3j8gShPUoKYlnok3xCgNehKHki3EDC8fotQbfC1Jcwg1UryLdA25H7H7bvWI
    1261UzERLFU6+hYRElKFQvrmB2wr3++RwMiS+LM3Y+Qw9Lb4PTnMZ7qvk3Bv4TyeeH
    13Hqpvzq9QFsr0CqWeNHBVHuOGDYhhX1KakKn2qONA491p1I4lSx6/A85jj73l4VsO
    14zoMPrRxuuXlccRM9NQjH/3hBDV8MPjtlac+r8EhE0K5lp2xhsMaczV6nCx+D0tfb
    15qAVwV0Xk
    16=WbEq
    17-----END PGP SIGNATURE-----
    

    Timestamp of file with hash f487101a5eb46060cd0a56b0091e376a7a553c1643ccb2255f894b27be16adfa -

  325. naumenkogs commented at 9:10 am on October 14, 2020: member

    Code Review ACK fd9a006. I’ve reviewed everything, mostly to see how this stuff works at the lower level (less documentation-wise, more implementation-wise), and to try breaking it with unexpected sequences of events.

    Comparison to the current master code is still hard, but maybe we shouldn’t even try: the only advantage the current code has is that it has stood the test of time. This PR, on the other hand, has much better test coverage, and many people have looked at it already.

  326. laanwj commented at 4:37 pm on October 14, 2020: member
    code review ACK fd9a0060f028a4c01bd88f58777dea34bdcbafd1
  327. laanwj merged this on Oct 14, 2020
  328. laanwj closed this on Oct 14, 2020

  329. ajtowns commented at 11:27 pm on October 14, 2020: member
    Post merge ACK fd9a0060f028a4c01bd88f58777dea34bdcbafd1
  330. fanquake removed this from the "Blockers" column in a project

  331. sidhujag referenced this in commit 7c94367064 on Oct 16, 2020
  332. ariard referenced this in commit c55ce67f2c on Oct 22, 2020
  333. ariard referenced this in commit bc4a230087 on Nov 2, 2020
  334. laanwj referenced this in commit 5b6f970e3f on Dec 16, 2020
  335. sidhujag referenced this in commit 74cd6ddc31 on Dec 17, 2020
  336. mjdietzx referenced this in commit ad6a547df5 on Dec 26, 2020
  337. deadalnix referenced this in commit 07731aaf05 on May 20, 2021
  338. Fabcien referenced this in commit 47c2219049 on May 20, 2021
  339. Fabcien referenced this in commit ececd2475c on May 20, 2021
  340. Fabcien referenced this in commit 984c943817 on May 20, 2021
  341. Fabcien referenced this in commit 530648e3d4 on May 20, 2021
  342. Fabcien referenced this in commit d15c768f7c on May 20, 2021
  343. Fabcien referenced this in commit 7f677c72ca on May 20, 2021
  344. Fabcien referenced this in commit 18e8a1f0c9 on May 20, 2021
  345. fanquake referenced this in commit 6bc1eca01b on Jun 16, 2021
  346. DrahtBot locked this on Feb 15, 2022
