The application layer is any data that is transmitted within P2P message payloads, and the processing of that data. Examples include tx inventory, addr gossiping, and ping/pong processing.
`CNode` currently contains many data and function members that are concerned with the application layer. These should be moved into net processing, so that `CNode` is only concerned with the network layer (sending/receiving bytes, keeping statistics, eviction logic, etc).
One blocker to moving these is that the existing peer data structure in net processing is `CNodeState`, which is guarded by cs_main. Moving all application layer data into `CNodeState` would expand where we need to take and hold cs_main locks. Instead, we should create a new data structure in net processing called `Peer`, which doesn't require a cs_main lock, and move the application layer data there.
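To make that concrete, here is a minimal sketch of what such a `Peer` struct could look like. The field names are illustrative only, loosely following the direction of the branches below rather than the final merged code:

```cpp
/** Application-layer state for a peer, guarded by its own locks rather than cs_main.
 *  Illustrative sketch only; the real fields are moved over incrementally in the PRs below. */
struct Peer {
    /** Id of the corresponding CNode */
    const NodeId m_id;

    /** Protects the misbehavior fields below */
    Mutex m_misbehavior_mutex;
    /** Accumulated misbehavior score for this peer */
    int m_misbehavior_score GUARDED_BY(m_misbehavior_mutex){0};
    /** Whether this peer should be discouraged and disconnected */
    bool m_should_discourage GUARDED_BY(m_misbehavior_mutex){false};

    explicit Peer(NodeId id) : m_id(id) {}
};
```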
https://github.com/jnewbery/bitcoin/tree/2020-06-cnode-comments is a move/comment only branch that re-orders the `CNode` data members into logical groups and adds comments for each member, including TODOs for members that should be moved to net processing. The branch isn't intended for merging, but is a guide for what I think needs to change in `CNode`.
https://github.com/jnewbery/bitcoin/tree/2020-06-cs-main-split is a branch that implements `Peer` and starts moving application layer data into that structure. I intend to peel off commits from that branch into separate PRs. That branch also starts moving data that doesn't require the cs_main lock from `CNodeState` into `Peer`. Longer term, I believe almost all `CNodeState` data can be moved into `Peer`, greatly reducing the scope in which cs_main locks are held in net processing.
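As a hedged sketch of the payoff, per-peer application data could then be looked up under a dedicated mutex instead of cs_main. The names `m_peer_mutex`, `m_peer_map` and `GetPeerRef` here are illustrative:

```cpp
/** Protects m_peer_map; deliberately not cs_main. */
Mutex m_peer_mutex;
/** Application-layer state for all connected peers, keyed by node id. */
std::map<NodeId, std::shared_ptr<Peer>> m_peer_map GUARDED_BY(m_peer_mutex);

/** Return a shared pointer to the Peer, or nullptr if the peer is unknown.
 *  The shared_ptr keeps the Peer alive even if it is removed from the map on
 *  another thread, so message handlers don't need cs_main to use the data. */
std::shared_ptr<Peer> GetPeerRef(NodeId id)
{
    LOCK(m_peer_mutex);
    auto it = m_peer_map.find(id);
    return it != m_peer_map.end() ? it->second : nullptr;
}
```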
Any help reviewing or implementing these changes would be very much appreciated!
PRs:
- #19219 Replace automatic bans with discouragement filter
- #19472 Reduce cs_main scope in MaybeDiscourageAndDisconnect()
- #19583 clean up Misbehaving()
- #19607 Add Peer struct for per-peer data in net processing
- #19910 Move peer_map to PeerManager
- #20624 Remove nStartingHeight check from block relay
- #19829 Move block inventory state to net_processing
- #20651 Make p2p recv buffer timeout 20 minutes for all peers
- #20811 Move net_processing implementation details out of header
- #20927 Clean up InactivityCheck()
- #20721 Move ping data to net_processing
- #21187 Only call PushAddress() from net_processing
- #21236 Extract addr send functionality into MaybeSendAddr()
- #21186 Move addr data into net_processing
- #21162 Move RelayTransaction() into PeerManager
- #21160 Move tx data into net_processing
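For reference, here is the annotated `CNode` class, showing the grouped data members and the TODOs for what should move to net processing: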
/** Information about a peer */
class CNode
{
    friend class CConnman;
    friend struct ConnmanTestMsg;

public:
    /** A semaphore limits the number of outbound and manual peers. This
     * CNode holds the grant until the connection is closed, at which point
     * it's released to allow another connection. */
    CSemaphoreGrant grantOutbound;
    /** Reference count to prevent the CNode from being deleted while there
     * are still references to it being held.
     * TODO: replace with std::shared_ptr */
    std::atomic<int> nRefCount{0};

    /** Socket mutex */
    RecursiveMutex cs_hSocket;
    /** Socket */
    SOCKET hSocket GUARDED_BY(cs_hSocket);

    /** Send buffer mutex */
    RecursiveMutex cs_vSend;
    /** Send buffer */
    std::deque<std::vector<unsigned char>> vSendMsg GUARDED_BY(cs_vSend);
    /** Total size of all vSendMsg entries */
    size_t nSendSize{0};
    /** Offset inside the first vSendMsg already sent */
    size_t nSendOffset{0};
    /** Total bytes sent to this peer */
    uint64_t nSendBytes GUARDED_BY(cs_vSend){0};
    /** Whether the send buffer is full and we should pause sending
     * data to this peer. */
    std::atomic_bool fPauseSend{false};

    /** Send processing mutex. Ensures that we don't enter SendMessages()
     * for this peer on multiple threads */
    RecursiveMutex cs_sendProcessing;

    /** Receive buffer mutex */
    RecursiveMutex cs_vProcessMsg;
    /** Buffer of deserialized net messages */
    std::list<CNetMessage> vProcessMsg GUARDED_BY(cs_vProcessMsg);
    /** Total size of all vProcessMsg entries */
    size_t nProcessQueueSize GUARDED_BY(cs_vProcessMsg){0};
    /** Whether the receive buffer is full and we should pause receiving
     * data from this peer. */
    std::atomic_bool fPauseRecv{false};

    /** Receive buffer statistics mutex */
    RecursiveMutex cs_vRecv;
    /** Total bytes received from this peer */
    uint64_t nRecvBytes GUARDED_BY(cs_vRecv){0};

    /** Address of this peer */
    const CAddress addr;
    /** Bind address of our side of the connection */
    const CAddress addrBind;
    /** Mutex guarding the cleanSubVer field.
     * TODO: replace with atomic */
    RecursiveMutex cs_SubVer;
    /** Sanitized string of the user agent byte array we read from the wire.
     * This cleaned string can safely be logged or displayed. */
    std::string cleanSubVer GUARDED_BY(cs_SubVer){};
    /** Unused in actual processing, only present for backward compatibility at RPC/QT level */
    bool m_legacyWhitelisted{false};

    /** If this peer is being used as a short lived feeler. */
    bool fFeeler{false};
    /** If this peer is being used to fetch addresses and then disconnect */
    bool fOneShot{false};
    /** If this peer is a manual connection added by command-line argument or RPC */
    bool m_manual_connection{false};
    /** If the connection with this peer was initiated by the peer */
    const bool fInbound;

    /** If the version-verack handshake has successfully completed. */
    std::atomic_bool fSuccessfullyConnected{false};
    /** Setting fDisconnect to true will cause the node to be disconnected the
     * next time DisconnectNodes() runs */
    std::atomic_bool fDisconnect{false};

    /** If this peer is a light client (doesn't serve blocks).
     * TODO: move this application layer data to net processing. */
    bool fClient{false};
    /** If this peer is 'limited' (can only serve recent blocks).
     * TODO: move this application layer data to net processing. */
    bool m_limited_node{false};

    /** Whether this peer is preferred for eviction */
    bool m_prefer_evict{false};
    /** The time of the last message sent to this peer. Used in inactivity checks */
    std::atomic<int64_t> nLastSend{0};
    /** The time of the last message received from this peer. Used in inactivity checks */
    std::atomic<int64_t> nLastRecv{0};
    /** Which netgroup this peer is in. Used in eviction logic */
    const uint64_t nKeyedNetGroup;
    /** Last time we accepted a block from this peer. Used in eviction logic */
    std::atomic<int64_t> nLastBlockTime{0};
    /** Last time we accepted a transaction from this peer. Used in eviction logic */
    std::atomic<int64_t> nLastTXTime{0};
    /** Best measured round-trip time for this peer. Used in eviction logic */
    std::atomic<int64_t> nMinPingUsecTime{std::numeric_limits<int64_t>::max()};

    /** The time that the connection with this node was established. Used in eviction logic */
    const int64_t nTimeConnected;
    /** The difference between the peer's clock and our own. Only used in logging */
    std::atomic<int64_t> nTimeOffset{0};

    /** The P2P version announced by the peer in its version message.
     * TODO: this is only used in the application layer. Move to net processing */
    std::atomic<int> nRecvVersion{INIT_PROTO_VERSION};
    /** The P2P version announced by the peer in its version message.
     * TODO: This seems to be largely a duplicate of nRecvVersion. Remove. */
    std::atomic<int> nVersion{0};
    /** The supported services announced by the peer in its version message.
     * TODO: Move this application layer data to net processing. */
    std::atomic<ServiceFlags> nServices{NODE_NONE};

    /** Addresses to send to this peer.
     * TODO: move this application layer data to net processing. */
    std::vector<CAddress> vAddrToSend;
    /** Probabilistic filter of addresses that this peer already knows.
     * TODO: move this application layer data to net processing. */
    const std::unique_ptr<CRollingBloomFilter> m_addr_known;
    /** Whether a GETADDR request is pending from this node.
     * TODO: move this application layer data to net processing. */
    bool fGetAddr{false};
    /** Timestamp after which we should send the next addr message to this peer.
     * TODO: move this application layer data to net processing. */
    std::chrono::microseconds m_next_addr_send GUARDED_BY(cs_sendProcessing){0};
    /** Timestamp after which we should advertise our local address to this peer.
     * TODO: move this application layer data to net processing. */
    std::chrono::microseconds m_next_local_addr_send GUARDED_BY(cs_sendProcessing){0};
    /** If we've sent an initial ADDR message to this peer.
     * TODO: move this application layer data to net processing. */
    bool fSentAddr{false};

    /** Block inventory mutex.
     * TODO: move this application layer data to net processing. */
    RecursiveMutex cs_inventory;
    /** List of block ids we still have to announce.
     * There is no final sorting before sending, as they are always sent immediately
     * and in the order requested.
     * TODO: move this application layer data to net processing. */
    std::vector<uint256> vInventoryBlockToSend GUARDED_BY(cs_inventory);
    /** List of block hashes to relay in headers messages.
     * TODO: move this application layer data to net processing. */
    std::vector<uint256> vBlockHashesToAnnounce GUARDED_BY(cs_inventory);
    /** When the peer requests this block, we send an inv that
     * triggers the peer to send a getblocks to fetch the next batch of
     * inventory. Only used for peers that don't do headers-first syncing.
     * TODO: move this application layer data to net processing. */
    uint256 hashContinue;
    /** This peer's height, as announced in its version message.
     * TODO: move this application layer data to net processing. */
    std::atomic<int> nStartingHeight{-1};

    struct TxRelay {
        /** bloom filter mutex */
        mutable RecursiveMutex cs_filter;
        /** We use fRelayTxes for two purposes -
         * a) it allows us to not relay tx invs before receiving the peer's version message
         * b) the peer may tell us in its version message that we should not relay tx invs
         * unless it loads a bloom filter. */
        bool fRelayTxes GUARDED_BY(cs_filter){false};
        /** BIP 37 bloom filter */
        std::unique_ptr<CBloomFilter> pfilter PT_GUARDED_BY(cs_filter) GUARDED_BY(cs_filter){nullptr};

        /** Transaction relay mutex */
        mutable RecursiveMutex cs_tx_inventory;
        /** Probabilistic filter of txids that the peer already knows */
        CRollingBloomFilter filterInventoryKnown GUARDED_BY(cs_tx_inventory){50000, 0.000001};
        /** Set of transaction ids we still have to announce.
         * They are sorted by the mempool before relay, so the order is not important. */
        std::set<uint256> setInventoryTxToSend;
        /** Timestamp after which we should send the next transaction INV message to this peer */
        std::chrono::microseconds nNextInvSend{0};

        /** If the peer has a pending BIP 35 MEMPOOL request to us */
        bool fSendMempool GUARDED_BY(cs_tx_inventory){false};
        /** Last time a MEMPOOL request was serviced. */
        std::atomic<std::chrono::seconds> m_last_mempool_req{std::chrono::seconds{0}};

        /** Feefilter mutex */
        RecursiveMutex cs_feeFilter;
        /** Minimum fee rate with which to filter inv's to this node */
        CAmount minFeeFilter GUARDED_BY(cs_feeFilter){0};
        /** Last feefilter value we sent to the peer */
        CAmount lastSentFeeFilter{0};
        /** Timestamp after which we should send the next FEEFILTER message to this peer */
        int64_t nextSendTimeFeeFilter{0};
    };

    /** Transaction relay data for this peer. If m_tx_relay == nullptr then we don't
     * relay transactions with this peer.
     * TODO: move this application layer data to net processing. */
    std::unique_ptr<TxRelay> m_tx_relay;

    /** List of inv items requested by this peer in a getdata message.
     * TODO: move this application layer data to net processing. */
    std::deque<CInv> vRecvGetData;

    /** The pong reply we're expecting, or 0 if no pong expected.
     * TODO: move this application layer data to net processing. */
    std::atomic<uint64_t> nPingNonceSent{0};
    /** Time (in usec) the last ping was sent, or 0 if no ping was ever sent.
     * TODO: move this application layer data to net processing. */
    std::atomic<int64_t> nPingUsecStart{0};
    /** Last measured ping round-trip time.
     * TODO: move this application layer data to net processing. */
    std::atomic<int64_t> nPingUsecTime{0};
    /** Whether a ping request is pending to this peer.
     * TODO: move this application layer data to net processing. */
    std::atomic<bool> fPingQueued{false};

    /** Orphan transactions to reconsider after the parent was accepted.
     * TODO: move this application layer data to a global in net processing. */
    std::set<uint256> orphan_work_set;

private:
    /** Unique numeric identifier for this node */
    const NodeId id;
    /** Node name mutex
     * TODO: replace with atomic */
    mutable RecursiveMutex cs_addrName;
    /** Node name */
    std::string addrName GUARDED_BY(cs_addrName);
    /** This node's permission flags. */
    NetPermissionFlags m_permissionFlags{ PF_NONE };
    /** addrLocal mutex
     * TODO: replace with atomic */
    mutable RecursiveMutex cs_addrLocal;
    /** Our address, as reported by the peer */
    CService addrLocal GUARDED_BY(cs_addrLocal);

    /** Random nonce sent in our VERSION message to detect connecting to ourselves.
     * TODO: move this application layer data to net processing */
    const uint64_t nLocalHostNonce;
    /** Services offered to this peer.
     *
     * This is supplied by the parent CConnman during peer connection
     * (CConnman::ConnectNode()) from its attribute of the same name.
     *
     * This is const because there is no protocol defined for renegotiating
     * services initially offered to a peer. The set of local services we
     * offer should not change after initialization.
     *
     * An interesting example of this is NODE_NETWORK and initial block
     * download: a node which starts up from scratch doesn't have any blocks
     * to serve, but still advertises NODE_NETWORK because it will eventually
     * fulfill this role after IBD completes. P2P code is written in such a
     * way that it can gracefully handle peers who don't make good on their
     * service advertisements.
     *
     * TODO: move this application layer data to net processing. */
    const ServiceFlags nLocalServices;
    /** Our starting height that we advertised to this node in our VERSION message.
     * TODO: this value is not used after sending the version message. We can remove this field. */
    const int nMyStartingHeight;
    /** The version that we advertised to the peer in our VERSION message.
     * TODO: move this application layer data to net processing */
    int nSendVersion{0};

    /** Deserializer for messages received over the network. This is a derived
     * class of TransportDeserializer based on the P2P version used with this
     * peer. */
    std::unique_ptr<TransportDeserializer> m_deserializer;
    /** Serializer for messages sent over the network. This is a derived
     * class of TransportSerializer based on the P2P version used with this
     * peer. */
    std::unique_ptr<TransportSerializer> m_serializer;

    /** Temporary buffer used by the SocketHandler thread for received messages,
     * before they're pushed onto the vProcessMsg buffer. */
    std::list<CNetMessage> vRecvMsg;

    /** Statistics of bytes sent to this peer, broken out by message type */
    mapMsgCmdSize mapSendBytesPerMsgCmd GUARDED_BY(cs_vSend);
    /** Statistics of bytes received from this peer, broken out by message type */
    mapMsgCmdSize mapRecvBytesPerMsgCmd GUARDED_BY(cs_vRecv);

public:
    CNode(NodeId id, ServiceFlags nLocalServicesIn, int nMyStartingHeightIn, SOCKET hSocketIn,
          const CAddress &addrIn, uint64_t nKeyedNetGroupIn, uint64_t nLocalHostNonceIn,
          const CAddress &addrBindIn, const std::string &addrNameIn = "",
          bool fInboundIn = false, bool block_relay_only = false);
    ~CNode();
    CNode(const CNode&) = delete;
    CNode& operator=(const CNode&) = delete;

    NodeId GetId() const {
        return id;
    }

    /** TODO: move this application layer function to net processing */
    uint64_t GetLocalNonce() const {return nLocalHostNonce;}

    /** TODO: move this application layer function to net processing */
    int GetMyStartingHeight() const {return nMyStartingHeight;}

    /** TODO: move this application layer function to net processing */
    ServiceFlags GetLocalServices() const { return nLocalServices; }

    /** TODO: move these application layer functions to net processing */
    void SetRecvVersion(int nVersionIn) { nRecvVersion = nVersionIn; }
    int GetRecvVersion() const { return nRecvVersion; }
    void SetSendVersion(int nVersionIn);
    int GetSendVersion() const;

    /** TODO: move this application layer function to net processing */
    bool IsAddrRelayPeer() const { return m_addr_known != nullptr; }

    /** TODO: Replace with std::shared_ptr refcounts */
    int GetRefCount() const
    {
        assert(nRefCount >= 0);
        return nRefCount;
    }

    CNode* AddRef()
    {
        nRefCount++;
        return this;
    }

    void Release()
    {
        nRefCount--;
    }

    bool ReceiveMsgBytes(const char *pch, unsigned int nBytes, bool& complete);

    CService GetAddrLocal() const;
    //! May not be called more than once
    void SetAddrLocal(const CService& addrLocalIn);

    std::string GetAddrName() const;
    //! Sets the addrName only if it was not previously set
    void MaybeSetAddrName(const std::string& addrNameIn);

    bool HasPermission(NetPermissionFlags permission) const {
        return NetPermissions::HasFlag(m_permissionFlags, permission);
    }

    /** TODO: move this application layer function to net processing */
    void AddAddressKnown(const CAddress& _addr)
    {
        assert(m_addr_known);
        m_addr_known->insert(_addr.GetKey());
    }

    /** TODO: move this application layer function to net processing */
    void PushAddress(const CAddress& _addr, FastRandomContext &insecure_rand)
    {
        // Known checking here is only to save space from duplicates.
        // SendMessages will filter it again for knowns that were added
        // after addresses were pushed.
        assert(m_addr_known);
        if (_addr.IsValid() && !m_addr_known->contains(_addr.GetKey())) {
            if (vAddrToSend.size() >= MAX_ADDR_TO_SEND) {
                vAddrToSend[insecure_rand.randrange(vAddrToSend.size())] = _addr;
            } else {
                vAddrToSend.push_back(_addr);
            }
        }
    }

    /** TODO: move this application layer function to net processing */
    void AddInventoryKnown(const CInv& inv)
    {
        if (m_tx_relay != nullptr) {
            LOCK(m_tx_relay->cs_tx_inventory);
            m_tx_relay->filterInventoryKnown.insert(inv.hash);
        }
    }

    /** TODO: move this application layer function to net processing */
    void PushTxInventory(const uint256& hash)
    {
        if (m_tx_relay == nullptr) return;
        LOCK(m_tx_relay->cs_tx_inventory);
        if (!m_tx_relay->filterInventoryKnown.contains(hash)) {
            m_tx_relay->setInventoryTxToSend.insert(hash);
        }
    }

    /** TODO: move this application layer function to net processing */
    void PushBlockInventory(const uint256& hash)
    {
        LOCK(cs_inventory);
        vInventoryBlockToSend.push_back(hash);
    }

    /** TODO: move this application layer function to net processing */
    void PushBlockHash(const uint256 &hash)
    {
        LOCK(cs_inventory);
        vBlockHashesToAnnounce.push_back(hash);
    }

    void CloseSocketDisconnect();

    void copyStats(CNodeStats &stats, const std::vector<bool> &m_asmap);
};