See individual commits. Benefits:
- Allows us to begin moving stuff out of CNode and into CNodeState (after #10652 and follow-ups)
- Drops boost dependency and overhead
- Drops global signal registration
- Friendlier backtraces
I tested something functionally similar to this before (eliminating the signals with a direct function call) and saw a measurable performance improvement.
How did you test that? It’d be good to benchmark whether this gives us a performance improvement.
1995@@ -1996,15 +1996,16 @@ void CConnman::ThreadMessageHandler()
1996 continue;
1997
1998 // Receive messages
Receive messages
Consider moving this comment into the if block (the block covers both receiving and sending messages; having the comment outside the block suggests it's just for receiving messages).
422@@ -422,18 +423,18 @@ struct CombinerAll
423 };
424
425 // Signals for message handling
Interface for message handling
48+ bool ProcessMessages(CNode* pfrom, std::atomic<bool>& interrupt) override;
49+ /**
50+ * Send queued protocol messages to a given node.
51+ *
52+ * @param[in] pto The node which we are sending messages to.
53+ * @param[in] connman The connection manager for that node.
Tested ACK f785e8479a8ffca6c89d71f9dd6f1c1b455c2671. A few nitpicky comments inline, plus one more: https://github.com/bitcoin/bitcoin/blob/master/src/net_processing.cpp#L126 appears to be a comment that has become stranded from its code and is now nonsensical after this PR. Consider removing it entirely.
I’ve run this over the functional test suite and it seemed to run slightly faster, though within expected variance. Do you have any suggestions for more focused performance testing?
2011+ fMoreWork |= (fMoreNodeWork && !pnode->fPauseSend);
2012+ if (flagInterruptMsgProc)
2013+ return;
2014+ // Send messages
2015+ {
2016+ LOCK(pnode->cs_sendProcessing);
2212@@ -2212,6 +2213,7 @@ CConnman::CConnman(uint64_t nSeed0In, uint64_t nSeed1In) : nSeed0(nSeed0In), nSe
2213 nMaxAddnode = 0;
2214 nBestHeight = 0;
2215 clientInterface = NULL;
2216+ messageInterface = NULL;
1963@@ -1966,7 +1964,9 @@ bool CConnman::OpenNetworkConnection(const CAddress& addrConnect, bool fCountFai
1964 if (fAddnode)
1965 pnode->fAddnode = true;
1966
1967- GetNodeSignals().InitializeNode(pnode, *this);
1968+ if (messageInterface) {
1969+ messageInterface->InitializeNode(pnode);
430- boost::signals2::signal<bool (CNode*, CConnman&, std::atomic<bool>&), CombinerAll> ProcessMessages;
431- boost::signals2::signal<bool (CNode*, CConnman&, std::atomic<bool>&), CombinerAll> SendMessages;
432- boost::signals2::signal<void (CNode*, CConnman&)> InitializeNode;
433- boost::signals2::signal<void (NodeId, bool&)> FinalizeNode;
434+protected:
435+ NetEventsInterface() = default;
432- boost::signals2::signal<void (CNode*, CConnman&)> InitializeNode;
433- boost::signals2::signal<void (NodeId, bool&)> FinalizeNode;
434+protected:
435+ NetEventsInterface() = default;
436+public:
437+ virtual ~NetEventsInterface() = default;
I must be missing your point. This is just a safety precaution in case anything ever wants to hold an abstract pointer. This may be the case if, for example, we create a vector of message processors in order to run them against each other.
I can remove it if you’d like, but I’m not understanding the objection to being cautious.
368@@ -417,20 +369,20 @@ void MaybeSetPeerAsAnnouncingHeaderAndIDs(NodeId nodeid, CConnman& connman) {
369 return;
370 }
371 }
372- connman.ForNode(nodeid, [&connman](CNode* pfrom){
373+ connman->ForNode(nodeid, [&connman](CNode* pfrom){
88@@ -89,10 +89,6 @@ std::string strSubVersion;
89
90 limitedmap<uint256, int64_t> mapAlreadyAskedFor(MAX_INV_SZ);
91
92-// Signals for message handling
160 nSendBufferMaxSize = connOptions.nSendBufferMaxSize;
161 nReceiveFloodSize = connOptions.nReceiveFloodSize;
162 nMaxOutboundTimeframe = connOptions.nMaxOutboundTimeframe;
163 nMaxOutboundLimit = connOptions.nMaxOutboundLimit;
164 vWhitelistedRange = connOptions.vWhitelistedRange;
165+
There are a few too many edge cases here to make this a scripted diff.
The following commits will move a few functions into PeerLogicValidation, where
the local connman instance can be used. This change prepares for that usage.
Drop boost signals in favor of a stateful class. This will allow the message
processing loop to actually move to net_processing in a future step.
The copy in PeerLogicValidation can be used instead.
438@@ -436,19 +439,16 @@ struct CombinerAll
439 }
440 };
441
442-// Signals for message handling
443-struct CNodeSignals
444+// Interface for message handling
Ideally we’d use a doxygen-compliant comment here, so that it shows the class description
```cpp
/**
 * Interface for message handling
 */
```
26-void RegisterNodeSignals(CNodeSignals& nodeSignals);
27-/** Unregister a network node */
28-void UnregisterNodeSignals(CNodeSignals& nodeSignals);
29-
30-class PeerLogicValidation : public CValidationInterface {
31+class PeerLogicValidation : public CValidationInterface, public NetEventsInterface {
I don’t anticipate per-connection, but definitely per-connman. I have immediate plans to add different handlers and run them against each other for testing.
Making this final is a good idea, but I’d rather not keep squashing more changes in. Let’s do that in a follow-up that also renames PeerLogicValidation to something that better reflects what it’s doing.
This should avoid either attempting to use an invalid reference/pointer to the
other.
194@@ -195,11 +195,10 @@ void Shutdown()
195 #endif
196 MapPort(false);
197 UnregisterValidationInterface(peerLogic.get());
198- peerLogic.reset();
199 g_connman.reset();
200+ peerLogic.reset();