Add per message stats to getnettotals rpc #26337

vasild opened this issue on October 19, 2022
  1. vasild commented at 12:24 pm on October 19, 2022: contributor

    Is your feature request related to a problem? Please describe.

    It is not possible to get per-message stats accumulated since the node started. This would make analyzing network traffic in detail easier.

    Describe the solution you’d like

    Extend getnettotals with data similar to what getpeerinfo reports per peer.

    Describe alternatives you’ve considered

    Looking at getpeerinfo gives some approximation, but it only covers currently connected peers and the stats are lost when a peer disconnects. It is possible to derive/interpolate from that plus the time each peer has been connected, but that is cumbersome and prone to errors from temporary traffic glitches (if the traffic is not uniform over time). E.g. if a peer has been connected for 10 seconds and transferred 5 bytes of INV messages while bitcoind has been up for 1000 seconds, then an estimated 500 * number_of_peers bytes have been transferred for INV messages since startup, assuming the number of connected peers does not change.
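
    For illustration, here is roughly what that interpolation looks like with today's RPCs. This is only a rough Python sketch: it assumes a bitcoin-cli binary on PATH, and the scaling is only meaningful if traffic is uniform over time, which is exactly the weakness described above.

      import json
      import subprocess
      import time

      def cli(*args):
          """Call bitcoin-cli and parse the JSON reply."""
          return json.loads(subprocess.check_output(["bitcoin-cli", *args]))

      uptime = cli("uptime")  # seconds since the node started
      totals = {}
      for peer in cli("getpeerinfo"):
          connected_for = max(time.time() - peer["conntime"], 1)
          for msg, received in peer["bytesrecv_per_msg"].items():
              # naive extrapolation: assume this peer's rate was constant and that a
              # similar peer occupied this slot for the node's whole uptime
              totals[msg] = totals.get(msg, 0) + received * uptime / connected_for

      print(json.dumps({k: int(v) for k, v in sorted(totals.items())}, indent=2))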

    Additional context

    Suggested additions to getnettotals:

     0{
     1  "totalbytesrecv": 304173500,
     2  "totalbytessent": 291919674,
     3+ "bytessent_per_msg": {
     4+   "addrv2": 8215,
     5+   "feefilter": 32,
     6+   "getaddr": 24,
     7+   "getdata": 9542,
     8+   "getheaders": 1053,
     9+   "headers": 1696,
    10+   "inv": 1536387,
    11+   "notfound": 61,
    12+   "ping": 3584,
    13+   "pong": 3616,
    14+   "sendaddrv2": 24,
    15+   "sendcmpct": 33,
    16+   "sendheaders": 24,
    17+   "tx": 1446525,
    18+   "verack": 24,
    19+   "version": 127,
    20+   "wtxidrelay": 24
    21+ },
    22+ "bytesrecv_per_msg": {
    23+   "addrv2": 52611,
    24+   "feefilter": 32,
    25+   "getdata": 15413,
    26+   "getheaders": 1053,
    27+   "headers": 3604,
    28+   "inv": 444758,
    29+   "notfound": 183,
    30+   "ping": 3616,
    31+   "pong": 3584,
    32+   "sendaddrv2": 24,
    33+   "sendcmpct": 66,
    34+   "sendheaders": 24,
    35+   "tx": 506127,
    36+   "verack": 24,
    37+   "version": 126,
    38+   "wtxidrelay": 24
    39+ },
    40  "timemillis": 1666181842031,
    41  "uploadtarget": {
    42    "timeframe": 86400,
    43    "target": 0,
    44    "target_reached": false,
    45    "serve_historical_blocks": true,
    46    "bytes_left_in_cycle": 0,
    47    "time_left_in_cycle": 0
    48  }
    49}
    
  2. vasild added the label Feature on Oct 19, 2022
  3. satsie commented at 2:12 am on November 3, 2022: contributor
    If no one else is taking a look at this, I’d like to take this one.
  4. vasild commented at 7:45 am on November 3, 2022: contributor
    @satsie, I am not currently. Would be happy to review.
  5. satsie commented at 8:39 pm on November 9, 2022: contributor

    @vasild I’ve been working on this and noticed message types missing from the suggested RPC response you provided. Specifically, these message types are not there:

    • addr
    • merkleblock
    • getblocks
    • block
    • mempool
    • filterload
    • filteradd
    • filterclear
    • cmpctblock
    • getblocktxn
    • blocktxn
    • getcfilters
    • cfilter
    • getcfheaders
    • cfheaders
    • getcfcheckpt
    • sendtxrcncl

    Have they intentionally been left out? I’m very new to the code base so I wasn’t able to identify if there is something special or different about these types.

  6. sipa commented at 10:36 pm on November 9, 2022: member

    @satsie I assume @vasild is just giving an example, and there can always be messages that are never actually used. Presumably he means that messages that were never seen could be omitted from the RPC output, rather than be listed with value 0.

    In any case, Concept ACK about having such statistics. I’d go even further and also add:

    • Broken down per connection type (as well as a total which has them all summed up like suggested above).
    • Also have statistics for counts of messages (not just how many bytes they were).
  7. vasild commented at 11:34 am on November 10, 2022: contributor

    I did not intentionally leave them out, sorry! I just copy pasted an output from my screen. I missed the fact that if some value is 0 it would not be in the output. I guess some global analogue of CNode::mapSendBytesPerMsgType and CNode::mapRecvBytesPerMsgType should do it. Every time the numbers in those CNode maps are incremented, also increment the newly added, global ones. I guess the new maps should be added to the CConnman class.
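
    A toy Python model of that bookkeeping, purely to illustrate the idea; the class names below are stand-ins, and the real change would be to the C++ accounting around CNode and CConnman:

      from collections import defaultdict

      class Connman:
          """Stand-in for CConnman: node-wide counters that survive peer disconnects."""
          def __init__(self):
              self.recv_bytes_per_msg = defaultdict(int)

      class Node:
          """Stand-in for CNode: per-peer counters, lost when the peer disconnects."""
          def __init__(self, connman):
              self.connman = connman
              self.recv_bytes_per_msg = defaultdict(int)  # ~ CNode::mapRecvBytesPerMsgType

          def record_recv(self, msg_type, num_bytes):
              # bump the per-peer counter and, in the same place, the global one
              self.recv_bytes_per_msg[msg_type] += num_bytes
              self.connman.recv_bytes_per_msg[msg_type] += num_bytes

      connman = Connman()
      peer = Node(connman)
      peer.record_recv("inv", 61)
      print(dict(connman.recv_bytes_per_msg))  # {'inv': 61}, and it survives the peer going away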

    Broken down per connection type

    Aha! I was thinking about the same, and also per network type, but couldn’t figure out a nice way to visualize that. So we would have something like the following (using only some values for clarity):

     0"bytessent_per_msg": {
     1    "outbound_full_relay": {
     2        "addr": 555,
     3        "inv": 666,
     4        "tx": 777
     5    },
     6    "inbound": {
     7        "addr": 555,
     8        "inv": 666,
     9        "tx": 777
    10    },
    11    "block_relay": {
    12        "addr": 555,
    13        "inv": 666,
    14        "tx": 777
    15    }
    16},
    17"bytesrecv_per_msg": {
    18    "outbound_full_relay": {
    19        "addr": 555,
    20        "inv": 666,
    21        "tx": 777
    22    },
    23    "inbound": {
    24        "addr": 555,
    25        "inv": 666,
    26        "tx": 777
    27    },
    28    "block_relay": {
    29        "addr": 555,
    30        "inv": 666,
    31        "tx": 777
    32    }
    33}
    

    @sipa, is this what you suggested? If broken down also per network:

      0"bytessent_per_msg": {
      1    "outbound_full_relay": {
      2        "ipv4": {
      3            "addr": 555,
      4            "inv": 666,
      5            "tx": 777
      6        },
      7        "ipv6": {
      8            "addr": 555,
      9            "inv": 666,
     10            "tx": 777
     11        },
     12        "tor": {
     13            "addr": 555,
     14            "inv": 666,
     15            "tx": 777
     16        }
     17    },
     18    "inbound": {
     19        "ipv4": {
     20            "addr": 555,
     21            "inv": 666,
     22            "tx": 777
     23        },
     24        "ipv6": {
     25            "addr": 555,
     26            "inv": 666,
     27            "tx": 777
     28        },
     29        "tor": {
     30            "addr": 555,
     31            "inv": 666,
     32            "tx": 777
     33        }
     34    },
     35    "block_relay": {
     36        "ipv4": {
     37            "addr": 555,
     38            "inv": 666,
     39            "tx": 777
     40        },
     41        "ipv6": {
     42            "addr": 555,
     43            "inv": 666,
     44            "tx": 777
     45        },
     46        "tor": {
     47            "addr": 555,
     48            "inv": 666,
     49            "tx": 777
     50        }
     51    }
     52},
     53"bytesrecv_per_msg": {
     54    "outbound_full_relay": {
     55        "ipv4": {
     56            "addr": 555,
     57            "inv": 666,
     58            "tx": 777
     59        },
     60        "ipv6": {
     61            "addr": 555,
     62            "inv": 666,
     63            "tx": 777
     64        },
     65        "tor": {
     66            "addr": 555,
     67            "inv": 666,
     68            "tx": 777
     69        }
     70    },
     71    "inbound": {
     72        "ipv4": {
     73            "addr": 555,
     74            "inv": 666,
     75            "tx": 777
     76        },
     77        "ipv6": {
     78            "addr": 555,
     79            "inv": 666,
     80            "tx": 777
     81        },
     82        "tor": {
     83            "addr": 555,
     84            "inv": 666,
     85            "tx": 777
     86        }
     87    },
     88    "block_relay": {
     89        "ipv4": {
     90            "addr": 555,
     91            "inv": 666,
     92            "tx": 777
     93        },
     94        "ipv6": {
     95            "addr": 555,
     96            "inv": 666,
     97            "tx": 777
     98        },
     99        "tor": {
    100            "addr": 555,
    101            "inv": 666,
    102            "tx": 777
    103        }
    104    }
    105}
    

    I think that the information would be useful but is the presentation too ugly/cumbersome?

    Also have statistics for counts of messages

    Yes, please! :) I guess then e.g. "addr": 555 would become "addr": { "count": 5, "bytes": 555 } or "addr": [5, 555].

  8. satsie commented at 12:34 pm on November 10, 2022: contributor

    Thank you both for the clarification! And thank you @vasild for the sample JSON for what a response might look like with connection types included. It does look busy, but what you’ve laid out makes sense to me.

    I’m not quite to the point where I’m ready to add connection type. I’m still working on the regular breakdown by message type (need to update/write a test). The good news is my implementation is right in line with what you said about new maps in CConnman that are similar to CNode::mapSendBytesPerMsgType and CNode::mapRecvBytesPerMsgType :)

    Adding message count also sounds like a great improvement. I like the "addr": { "count": 5, "bytes": 555 } proposal a lot.

    I’ll be back when I have actual code to show!

  9. sipa commented at 3:06 pm on November 10, 2022: member

    @vasild Yeah, I think a breakdown by network is useful too. This does make me wonder whether all combinations should be provided:

    • number of messages sent in total
    • number of messages sent, per message type
    • number of messages sent, per network
    • number of messages sent, per connection type
    • number of messages sent, per message type and network
    • number of messages sent, per message type and connection type
    • number of messages sent, per network and connection type
    • number of messages sent, per message type, network, and connection type
    • … all the above repeated for bytes instead of messages, and for received instead of sent

    That’s 32 distinct sets of data (each keyed by whatever they’re broken down by), which is probably excessive. But only giving the most-broken-down version is far less usable (as I expect most users to be more interested in more-aggregated results than broken-down ones).
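
    For what it's worth, the 32 is just three independent on/off breakdowns (message type, network, connection type) times two units (messages vs bytes) times two directions; a throwaway Python enumeration to make the counting explicit:

      from itertools import combinations

      dims = ["msgtype", "network", "conntype"]
      # every subset of the breakdown dimensions, including "no breakdown" (the plain total)
      breakdowns = [c for r in range(len(dims) + 1) for c in combinations(dims, r)]

      count = len(breakdowns) * len(["msgs", "bytes"]) * len(["sent", "recv"])
      print(len(breakdowns), count)  # 8 32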

  10. ajtowns commented at 4:23 am on November 11, 2022: contributor
    Might be better to provide a new getnetworkmsgstats rpc that you invoke with up to three parameters: whether to split up per msgtype/conntype/network; whether to do msgs or bytes (or average bytes/msg?); whether to do received or sent (or combined?). Then just leave getnettotals for the actual totals.
  11. satsie commented at 8:43 pm on November 15, 2022: contributor

    @ajtowns I like your proposal for a new getnetworkmsgstats RPC, and it seems like an intriguing design challenge that I’d love to dive into more.

    To your last comment about leaving getnettotals for the actual totals, are you proposing to leave getnettotals alone entirely? To refrain from making any changes to it in favor of a new getnetworkmsgstats RPC?

    I like the symmetry of adding the breakdown by msgtype in getnettotals because it aligns with what getpeerinfo is returning. It also seems useful to me for node operators to have some kind of breakdown that they can get “for free” (i.e. without the need to make a second RPC call), which may spark curiosity and lead them to use a future getnetworkmsgstats.

  12. ajtowns commented at 1:34 pm on November 16, 2022: contributor
    @satsie Don’t really have an opinion. What I had in mind was not changing what getnettotals reported, and just letting you get all the combinations from getnetmsgstats, but having the totals match the per-active-peer report from getpeerinfo makes sense too.
  13. amitiuttarwar commented at 7:32 pm on November 30, 2022: contributor
    • concept ACK to adding per message stats to the RPC interface
    • I like the idea of introducing getnetmsgstats to allow users to retrieve stats broken down along different categories (message type, connection type, network) & (sent vs received)
  14. satsie commented at 4:35 am on December 1, 2022: contributor

    Hi all! I spent some time thinking about this. I actually implemented the breakdown of bytes by message type in getnettotals as the issue description suggests, but ultimately decided against making a PR for it.

    Why I think these stats don’t belong in getnettotals

    getnettotals implies that the response will be totals only.

    Why I think these stats could maybe go in getnetworkinfo

    Doing so would mirror the getpeerinfo RPC. Since these two calls are so similarly named, it makes sense for their responses to align where possible.

    Why I think these stats actually belong in a totally new RPC

    Based on aj’s comment, and the ones leading up to it, there is support for a new RPC to get network message stats, which can handle a number of different breakdowns. From a usability perspective, this seems preferable to the 32 distinct sets of data that sipa mentioned :) As a side note, if a new RPC is going to be the permanent solution, the question of adding one very specific type of breakdown to an existing RPC like getnetworkinfo doesn’t necessarily need to be solved now.

    It’s also been brought to my attention that RPCs have a high maintenance cost. If a message type breakdown becomes available in two places (getnetworkinfo and a new RPC), it not only adds complexity, but it would also be very difficult to deprecate it from one of the calls later.

    Lastly, a sticking point for me had originally been this idea of getting some kind of message stats for free with an RPC that you may already be making. The need to specify parameters, something that you’d have to do with a new get network message stats RPC, was friction I didn’t want to add (I know it’s small but developers are lazy!). Then I realized having a default set of parameters on a new RPC provided the same easy “run this one command” developer experience.

    Conclusion

    At the end of the day, the only argument I had for adding message stats to an existing RPC was to make the responses of getnetworkinfo and getpeerinfo more symmetric. By itself, that doesn’t seem like enough to decide against a new RPC. These stats can also be added to getnetworkinfo at a later time if it seems like the right thing to do.

    However I acknowledge that there is a lot I don’t know! If there are any holes in my reasoning, I want to talk about them.

    Proposal for a new RPC

    For the reasons above, I’ve decided to focus on a new RPC. Here, I present the design for feedback:

    New RPC name: getnetworkmsgstats
    Notes: I considered abbreviating “network” to “net” (aka getnetmsgstats) but saw three RPCs use “network” (getnetworkhashps, getnetworkinfo, and setnetworkactive) as opposed to the one that used “net” (getnettotals).

    Arguments in

    Argument 1: breakdown (is there a better name for this?)
    Type: string, optional, default=msgtype
    How the results should be broken down. Possible options are 'msgtype' (message type), 'conntype' (connection type), or 'network'.

    Argument 2: direction (again, is there a better name for this?)
    Type: string, optional, default=received
    Whether the results should be for received or sent messages. Possible options are 'received' and 'sent'.

    As you’ll see below, I’ve decided to always return the message count and number of bytes. Admittedly, this decision is arbitrary. It makes naming easier, particularly because I am struggling with the return fields.

    Should they be generic enough to support any kind of response? Or should they be specific to what the user is requesting? For example, the call could just return a number like "getaddr": 5, or the fields could be more specific, like "getaddr_bytes": 555 / "getaddr_count": 5. I know that the user should remember what arguments they invoked the call with, but I can see someone copy-pasting a response and sending it over to a friend without specifying the units (bytes or count). If a number is going to appear somewhere, I tend to prefer having the units close by.

    On the flip side, it’s very hard to develop against APIs that don’t have reliable return fields. It’s much easier when you know exactly what you are getting back, regardless of how a call is being made.

    Example responses

    Example 1: bitcoin-cli getnetworkmsgstats (no arguments: defaults to received messages broken down by message type)

     0 {
     1    "addr": {
     2        "count": 5,
     3        "bytes": 555
     4    },
     5    "block": {
     6        "count": 3,
     7        "bytes": 333
     8    },
     9    "inv": {
    10        "count": 7,
    11        "bytes": 777
    12    }
    13
    14}
    

    Example 2: bitcoin-cli getnetworkmsgstats "conntype" "sent" (sent messages broken down by connection type)

     0{
     1    "inbound": {
     2        "count": 5,
     3        "bytes": 555
     4    },
     5    "outbound-full-relay": {
     6        "count": 3,
     7        "bytes": 333
     8    },
     9    "block-relay-only": {
    10        "count": 7,
    11        "bytes": 777
    12    }
    13
    14}
    

    Outstanding questions

    I’m having the most difficulty with where to draw the line for what should be a parameter, and what should always be returned. This leads to related questions on how return fields are named. Any opinions on removing the “direction” argument (received/sent) and always returning stats for both received and sent messages (wrap the top level objects in my examples into messages_received and messages_sent objects)?

    Any guidance and feedback is much appreciated, and many thanks to @amitiuttarwar for helping me get this far!

  15. vasild commented at 9:59 am on December 1, 2022: contributor

    Seems like the most flexible option would be to have one argument, breakdown or splitby, whose value is a comma-separated list of direction, msgtype, conntype, network. I think we can always/unconditionally provide { "count": 5, "bytes": 55 } at the inner level.

    So:

    • direction would split by sent, recv
    • msgtype would split by tx, inv, addrv2, etc
    • conntype would split by inbound, full-outbound, etc
    • network would split by ipv4, tor, etc

    For example

    • getnewrpcname direction:
    0"sent": {
    1    "count": 5,
    2    "bytes": 55
    3},
    4"recv": {
    5    "count": 5,
    6    "bytes": 55
    7}
    
    • getnewrpcname network:
    0"ipv4": {
    1    "count": 5,
    2    "bytes": 55
    3},
    4"i2p": {
    5    "count": 5,
    6    "bytes": 55
    7}
    
    • getnewrpcname msgtype,conntype:
     0"tx": {
     1    "inbound": {
     2        "count": 5,
     3        "bytes": 55
     4    },
     5    "full-outbound": {
     6        "count": 5,
     7        "bytes": 55
     8    }
     9},
    10"inv": {
    11    "inbound": {
    12        "count": 5,
    13        "bytes": 55
    14    },
    15    "full-outbound": {
    16        "count": 5,
    17        "bytes": 55
    18    }
    19}
    
    • getnewrpcname "":
    0"count": 5,
    1"bytes": 55
    
  16. ajtowns commented at 2:31 pm on December 3, 2022: contributor

    Notes: I considered abbreviating “network” to “net” (aka getnetmsgstats) but saw three RPCs use “network” (getnetworkhashps, getnetworkinfo, and setnetworkactive) as opposed to the one that used “net” (getnettotals)

    Hmm; I think for getnetworkhashps “network” refers to mainnet vs testnet vs regtest, which isn’t really comparable. Could also consider getpeermsgstats, but not sure how much sense that makes, since the stats aggregate over things that are no longer peers.

    I’m having the most difficulty with where to draw the line for what should be a parameter, and what should always be returned. This leads to related questions on how return fields are named. Any opinions on removing the “direction” argument (received/sent) and always returning stats for both received and sent messages

    I think it’d be better to return both sent/recv with a single call; otherwise you get slight discrepancies because messages that happen between the two RPC calls are counted in one but not the other. Something like:

     0{
     1  "bytes_sent_per_conntype_msg": {
     2    "full-outbound": {
     3      "ping": 123456,
     4      "pong": 123440,
     5    }
     6  },
     7  "msgs_sent_per_conntype_msg": {
     8    "full-outbound": {
     9      "ping": 7716,
    10      "pong": 7715
    11    }
    12  }
    13}
    

    That more or less matches the bytesrecv_per_msg etc fields that getpeerinfo uses, and avoids having many count and bytes fields, which just seem like noise to me. (Dropping the “per*” part entirely might be better, though; result["msgs_sent"]["full-outbound"]["pong"] is already pretty self-documenting)

    You could perhaps have “direction” and “msg_bytes” be optional filters. If they’re enabled, then you split by them; if they’re not enabled, then you respectively combine sent and received and just give a total, and only deal in bytes. So:

     0$ bitcoin-cli getnetworkmsgstats '["direction", "conntype", "msg"]'
     1{
     2  "sent": {
     3    "full-outbound": {
     4      "ping": 123456
     5    }
     6  },
     7  "recv": {
     8    "full-outbound": {
     9      "ping": 123440,
    10    }
    11  }
    12}
    13$ bitcoin-cli getnetworkmsgstats '["msg_bytes", "conntype"]'
    14{
    15  "msgs": {
    16    "full-outbound": 15431
    17  },
    18  "bytes": {
    19    "full-outbound": 246896
    20  }
    21}
    22$ bitcoin-cli getnetworkmsgstats '["msg_bytes", "direction", "network", "conntype", "msg"]' |
    23     jq '.msgs.recv.ipv4.["full-outbound"].ping'
    247715
    

    Only annoying thing there is that you’d perhaps want direction to be on by default, so you only have to specify it on the rare occasions when you don’t care…

  17. satsie commented at 5:12 pm on December 8, 2022: contributor

    Thank you vasild and ajtowns! I really appreciate your feedback and the guidance on this. Wanted to reply sooner but have been tied up with a few things. @vasild - Does the ordering of the splitby parameters matter? For example, would the following command be valid?

    getnewrpcname conntype direction

    If ordering matters, I believe that makes 16 possible ways to invoke the new RPC, right? An on/off option for each of the 4 dimensions (direction, msgtype, conntype, network) = 2 * 2 * 2 * 2 = 2^4 = 16

    As far as implementation goes, would it make sense to store the data at the most granular level (direction x msgtype x connectiontype x network), and then, if the user doesn’t care about certain dimensions, do some aggregation?

    For example, if the middle dimensions msgtype and connection type were dropped and the user only wanted direction x network, the code would “roll up” and aggregate the results across the dimensions the user doesn’t care about.

    If this is the most granular breakdown for direction x msgtype x conntype x network,

     0sent: {
     1  addr: {
     2    inbound: {
     3      ipv4: {
     4        count: 2,
     5        bytes: 22
     6      },
     7      tor: {
     8      	count: 3,
     9      	bytes: 33
    10      }
    11    },
    12    block_relay: {
    13      ipv4: {
    14        count: 2,
    15        bytes: 22
    16      },
    17      tor: {
    18      	count: 3,
    19      	bytes: 33
    20      }
    21    }
    22  },
    23  block: {
    24    inbound: {
    25      ipv4: {
    26        count: 2,
    27        bytes: 22
    28      },
    29      tor: {
    30      	count: 3,
    31      	bytes: 33
    32      }
    33    },
    34    block_relay: {
    35      ipv4: {
    36        count: 2,
    37        bytes: 22
    38      },
    39      tor: {
    40      	count: 3,
    41      	bytes: 33
    42      }
    43    }
    44  }
    45},
    46(results for the received direction omitted for simplicity. It can be thought of as being very similar to the results for the sent direction)
    

    Step 1 would be to eliminate the connection type breakdown by aggregating across this dimension:

     0sent: {
     1  addr: {
     2    ipv4: {
     3      count: 4,
     4      bytes: 44
     5    },
     6    tor: {
     7     count: 6,
     8     bytes: 66
     9    }
    10  },
    11  block: {
    12    ipv4: {
    13      count: 4,
    14      bytes: 44
    15    },
    16    tor: {
    17      count: 6,
    18      bytes: 66
    19    }
    20  }
    21}
    

    Then, step 2 would be to eliminate the msg type breakdown by aggregating again, but this time over msg type:

    0sent: {
    1  ipv4: {
    2    count: 8,
    3    bytes: 88
    4  },
    5  tor: {
    6   count: 12,
    7   bytes: 132
    8  }
    9}
    

    This feels like the naive way to do it, but I can’t think of a better way to always be able to serve up this data without storing everything at the most granular level. What do you think?


    @ajtowns - Thanks for the context on the naming! Personally I prefer getnetmsgstats because it’s shorter, and offers up information that would be complementary to getnettotals. On the flip side I am also hesitant to break any naming conventions that may already be in place, so I want to make sure I have an accurate lay of the land.

    I’m cool with returning both sent/recv in a single call, especially since that’s what getpeerinfo is doing. I like how the fields you proposed match getpeerinfo’s bytesrecv_per_msg. I’m also open to dropping the count and byte fields as you have done in the example. I know it’s a trade off, but I am trying to avoid returning a response with too many levels of nested data.

    I also agree with potentially dropping the “per*” part.

    I think the second example you shared with the optional filters is similar to where vasild was going with his comment. I’d like to ask a question I’m also asking him: does the ordering of the filters matter? Per the final example you provided:

    $ bitcoin-cli getnetworkmsgstats '["msg_bytes", "direction", "network", "conntype", "msg"]'

    Would the following command, with the filters moved around, also be valid?

    $ bitcoin-cli getnetworkmsgstats '["network", "msg_bytes", "conntype", "msg", "direction"]'

    In my above response to vasild I go into an implementation detail about always storing the results at the most granular level, and then doing some aggregation to deliver what the user requested. I believe this is in line with your comment,

    if they’re not enabled, then you respectively combine sent and received and just give a total,

    but I wanted to work through an example to make sure I understood.


    Thank you again for your help! I have some assumptions I first want to make sure are correct, but I am circling in on a design that makes sense. Excited for this to continue to evolve.

  18. ajtowns commented at 8:39 pm on December 8, 2022: contributor

    As far as implementation goes, would it make sense to store the data at the most granular level (direction x msgtype x connectiontype x network), and then, if the user doesn’t care about certain dimensions, do some aggregation?

    Yes, I think that’s the most sensible approach.

    Allowing the order to vary would be nice, but it’s probably tricky to implement, and not that crucial. Here’s an example of how you could do that in python, in case it might be helpful:

     0import random
     1
     2def setup():
     3    x = {}
     4    for net in ["ipv4", "tor"]:
     5         for conntype in ["outbound", "inbound", "blockonly"]:
     6             for direc in ["send", "recv"]:
     7                 for msg in ["ping", "pong", "inv", "addrv2"]:
     8                     msgs = random.randrange(1, 1000)
     9                     b = random.randrange(msgs*16, msgs*5000)
    10                     x[net,conntype,direc,msg,"count"] = msgs
    11                     x[net,conntype,direc,msg,"bytes"] = b
    12    # a map of a tuple of strings is fine for a demo in python;
    13    # should be enums, and maybe just a fixed size array in C++?
    14    return x
    15
    16def aggregate(x, *keys):
    17     assert len(keys) > 0
    18     result = {}
    19     for path, v in x.items():
    20         # ignore msg counts if we're not splitting up by msg_bytes
    21         if 4 not in keys and path[4] != "bytes": continue
    22
    23         # find the right place to add this item
    24         r = result
    25         for k in keys[:-1]:
    26             p = path[k]
    27             if p not in r: r[p] = {}
    28             r = r[p]
    29         p = path[keys[-1]]
    30         if p not in r: r[p] = 0
    31
    32         # add it
    33         r[p] += v
    34
    35     return result
    36
    37x=setup()
    38
    39sorted(x.items())[0]
    40(('ipv4', 'blockonly', 'recv', 'addrv2', 'bytes'), 428731)
    41
    42# 0=net, 1=conntype, 2=direction, 3=msg, 4=msg_bytes
    43aggregate(x, 0,4)
    44{'ipv4': {'count': 11879, 'bytes': 29856223},
    45 'tor': {'count': 11945, 'bytes': 28160822}}
    46
    47aggregate(x, 4,0)
    48{'count': {'ipv4': 11879, 'tor': 11945},
    49 'bytes': {'ipv4': 29856223, 'tor': 28160822}}
    50
    51aggregate(x, 4)
    52{'count': 23824, 'bytes': 58017045}
    53
    54aggregate(x, 0)
    55{'ipv4': 29856223, 'tor': 28160822}
    56
    57aggregate(x, 2,3)
    58{'send': {'ping': 6945670, 'pong': 8965830, 'inv': 4387195, 'addrv2': 3964518},
    59 'recv': {'ping': 7467989, 'pong': 13483640, 'inv': 6491104, 'addrv2': 6311099}}
    60
    61aggregate(x, 4,1,2)
    62{'count': {'outbound': {'send': 3514, 'recv': 3461},
    63  'inbound': {'send': 4504, 'recv': 4921},
    64  'blockonly': {'send': 2931, 'recv': 4493}},
    65 'bytes': {'outbound': {'send': 7440743, 'recv': 6742217},
    66  'inbound': {'send': 8067908, 'recv': 15541030},
    67  'blockonly': {'send': 8754562, 'recv': 11470585}}}
    
  19. vasild commented at 9:34 am on December 9, 2022: contributor

    Does the ordering of the splitby parameters matter?

    No idea :) Maybe whatever is easier to implement and results in less code.

    As far as implementation goes, would it make sense to store the data at the most granular level … This feels like the naive way to do it, but I can’t think of a better way to always be able to serve up this data without storing everything at the most granular level. What do you think?

    Yes, I don’t see how else it could be. If we “group by” already in the C++ code, then it would not be possible to split the data further later if requested.

    I do not object to dropping the direction option value and always splitting by it, i.e. always returning sent and recv.

    The msg_bytes option value seems a bit misaligned with the others. For example if split by msgtype produces tx=2, inv=5, addrv2=1 then not splitting by msgtype would produce a single number which is the sum of all (8). But the proposed msg_bytes would return both count and bytes if enabled and only bytes if disabled. I find this confusing. To follow the others’ logic it would have to sum the count and the bytes if disabled, which is meaningless.

    I think it would be best if it is clearly labeled whether the number is bytes or count, either at the leaf level or somewhere higher. E.g. .ipv4.inv.count or .count.ipv4.inv. The extra nesting seems ok to me.

  20. satsie commented at 10:12 pm on February 27, 2023: contributor

    Thank you everyone for your input on the concept and design for this issue! I’d like to present the work I have so far for feedback: https://github.com/satsie/bitcoin/pull/4

    The PR description has some sample request/responses and summarizes the path I took to get here. I also have some open comments on areas I want to take a closer look at, or am hoping for input on before I put an official PR up to Bitcoin Core.

    Huge thank you to Amiti, Vasil, and AJ for already chiming in on previous iterations of the code :heart:. Since I last commented here, I made a few changes to the design of the new RPC. Here’s what changed:

    • Call the new RPC getnetmsgstats (instead of getnetworkmsgstats)
    • Always return the sent and received directions, and do so at the highest level of the response object. Direction is not available as a filter. This is to reduce possible discrepancies that may come out of separate calls for sent and received stats.
    • The msg_count and total_bytes fields are always returned. This is not available as a filter.
    • Arguments: just one optional array of one or more filters.

    Here’s an updated summary of the design:

    getnetmsgstats RPC

    Returns the message count and total number of bytes for sent and received network traffic. Results may optionally be broken down by message type, connection type, and/or network.

    Arguments

    filters: an optional array of one or more filters. Valid options are: msgtype, conntype, and network. If no filters are specified, totals are returned.

    Examples

     0 ./src/bitcoin-cli getnetmsgstats
     1{
     2  "sent": {
     3    "total": {
     4      "msg_count": 175,
     5      "total_bytes": 34109
     6    }
     7  },
     8  "received": {
     9    "total": {
    10      "msg_count": 280,
    11      "total_bytes": 2144499
    12    }
    13  }
    14}
    
      0./src/bitcoin-cli getnetmsgstats '["conntype", "network", "msgtype"]'
      1{
      2  "sent": {
      3    "block-relay-only": {
      4      "ipv4": {
      5        "getheaders": {
      6          "msg_count": 5,
      7          "total_bytes": 5265
      8        },
      9        "headers": {
     10          "msg_count": 4,
     11          "total_bytes": 424
     12        },
     13        "inv": {
     14          "msg_count": 1,
     15          "total_bytes": 61
     16        },
     17        "ping": {
     18          "msg_count": 6,
     19          "total_bytes": 192
     20        },
     21        "pong": {
     22          "msg_count": 4,
     23          "total_bytes": 128
     24        },
     25        "sendaddrv2": {
     26          "msg_count": 4,
     27          "total_bytes": 96
     28        },
     29        "sendcmpct": {
     30          "msg_count": 5,
     31          "total_bytes": 165
     32        },
     33        "sendheaders": {
     34          "msg_count": 2,
     35          "total_bytes": 48
     36        },
     37        "verack": {
     38          "msg_count": 5,
     39          "total_bytes": 120
     40        },
     41        "version": {
     42          "msg_count": 5,
     43          "total_bytes": 635
     44        },
     45        "wtxidrelay": {
     46          "msg_count": 4,
     47          "total_bytes": 96
     48        }
     49      }
     50    },
     51    "outbound-full-relay": {
     52      "ipv4": {
     53        "addr": {
     54          "msg_count": 6,
     55          "total_bytes": 360
     56        },
     57        "addrv2": {
     58          "msg_count": 15,
     59          "total_bytes": 868
     60        },
     61        "feefilter": {
     62          "msg_count": 9,
     63          "total_bytes": 288
     64        },
     65        "getaddr": {
     66          "msg_count": 9,
     67          "total_bytes": 216
     68        },
     69        "getblocktxn": {
     70          "msg_count": 2,
     71          "total_bytes": 4027
     72        },
     73        "getdata": {
     74          "msg_count": 210,
     75          "total_bytes": 42978
     76        },
     77        "getheaders": {
     78          "msg_count": 9,
     79          "total_bytes": 9477
     80        },
     81        "headers": {
     82          "msg_count": 11,
     83          "total_bytes": 1004
     84        },
     85        "inv": {
     86          "msg_count": 431,
     87          "total_bytes": 175079
     88        },
     89        "ping": {
     90          "msg_count": 17,
     91          "total_bytes": 544
     92        },
     93        "pong": {
     94          "msg_count": 17,
     95          "total_bytes": 544
     96        },
     97        "sendaddrv2": {
     98          "msg_count": 7,
     99          "total_bytes": 168
    100        },
    101        "sendcmpct": {
    102          "msg_count": 11,
    103          "total_bytes": 363
    104        },
    105        "sendheaders": {
    106          "msg_count": 8,
    107          "total_bytes": 192
    108        },
    109        "tx": {
    110          "msg_count": 59,
    111          "total_bytes": 41631
    112        },
    113        "verack": {
    114          "msg_count": 9,
    115          "total_bytes": 216
    116        },
    117        "version": {
    118          "msg_count": 10,
    119          "total_bytes": 1270
    120        },
    121        "wtxidrelay": {
    122          "msg_count": 7,
    123          "total_bytes": 168
    124        }
    125      }
    126    }
    127  },
    128  "received": {
    129    "block-relay-only": {
    130      "ipv4": {
    131        "feefilter": {
    132          "msg_count": 2,
    133          "total_bytes": 64
    134        },
    135        "getheaders": {
    136          "msg_count": 3,
    137          "total_bytes": 3159
    138        },
    139        "headers": {
    140          "msg_count": 3,
    141          "total_bytes": 237
    142        },
    143        "inv": {
    144          "msg_count": 1,
    145          "total_bytes": 133
    146        },
    147        "ping": {
    148          "msg_count": 4,
    149          "total_bytes": 128
    150        },
    151        "pong": {
    152          "msg_count": 4,
    153          "total_bytes": 128
    154        },
    155        "sendaddrv2": {
    156          "msg_count": 3,
    157          "total_bytes": 72
    158        },
    159        "sendcmpct": {
    160          "msg_count": 5,
    161          "total_bytes": 165
    162        },
    163        "sendheaders": {
    164          "msg_count": 3,
    165          "total_bytes": 72
    166        },
    167        "verack": {
    168          "msg_count": 5,
    169          "total_bytes": 120
    170        },
    171        "version": {
    172          "msg_count": 5,
    173          "total_bytes": 631
    174        },
    175        "wtxidrelay": {
    176          "msg_count": 3,
    177          "total_bytes": 72
    178        }
    179      }
    180    },
    181    "outbound-full-relay": {
    182      "ipv4": {
    183        "addr": {
    184          "msg_count": 5,
    185          "total_bytes": 60219
    186        },
    187        "addrv2": {
    188          "msg_count": 23,
    189          "total_bytes": 110809
    190        },
    191        "blocktxn": {
    192          "msg_count": 2,
    193          "total_bytes": 4600726
    194        },
    195        "cmpctblock": {
    196          "msg_count": 3,
    197          "total_bytes": 37593
    198        },
    199        "feefilter": {
    200          "msg_count": 9,
    201          "total_bytes": 288
    202        },
    203        "getdata": {
    204          "msg_count": 42,
    205          "total_bytes": 3174
    206        },
    207        "getheaders": {
    208          "msg_count": 9,
    209          "total_bytes": 9477
    210        },
    211        "headers": {
    212          "msg_count": 14,
    213          "total_bytes": 1484
    214        },
    215        "inv": {
    216          "msg_count": 243,
    217          "total_bytes": 109647
    218        },
    219        "notfound": {
    220          "msg_count": 1,
    221          "total_bytes": 277
    222        },
    223        "ping": {
    224          "msg_count": 17,
    225          "total_bytes": 544
    226        },
    227        "pong": {
    228          "msg_count": 17,
    229          "total_bytes": 544
    230        },
    231        "sendaddrv2": {
    232          "msg_count": 8,
    233          "total_bytes": 192
    234        },
    235        "sendcmpct": {
    236          "msg_count": 16,
    237          "total_bytes": 528
    238        },
    239        "sendheaders": {
    240          "msg_count": 9,
    241          "total_bytes": 216
    242        },
    243        "tx": {
    244          "msg_count": 1039,
    245          "total_bytes": 1225891
    246        },
    247        "verack": {
    248          "msg_count": 10,
    249          "total_bytes": 240
    250        },
    251        "version": {
    252          "msg_count": 10,
    253          "total_bytes": 1276
    254        },
    255        "wtxidrelay": {
    256          "msg_count": 8,
    257          "total_bytes": 192
    258        }
    259      }
    260    }
    261  }
    262}
    

    So that is where things are at! Appreciate any and all feedback on the new PR and thank you everyone for the time you’ve already put into helping guide this.
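
    One way to sanity-check the draft RPC against the existing getnettotals would be to sum the leaves of the breakdown. This is only a rough Python sketch against the proposed (unmerged) getnetmsgstats shape shown above; sum_leaves is a throwaway helper, and bitcoin-cli is assumed to be on PATH:

      import json
      import subprocess

      def cli(*args):
          return json.loads(subprocess.check_output(["bitcoin-cli", *args]))

      def sum_leaves(obj, field):
          """Recurse through the nested breakdown and add up every `field` leaf."""
          if field in obj:
              return obj[field]
          return sum(sum_leaves(child, field) for child in obj.values())

      stats = cli("getnetmsgstats", '["conntype", "network", "msgtype"]')  # proposed RPC, not in Bitcoin Core
      totals = cli("getnettotals")

      # totals are read second, so they should be at least as large, assuming both
      # count the same raw bytes (headers included) for every message type
      assert sum_leaves(stats["sent"], "total_bytes") <= totals["totalbytessent"]
      assert sum_leaves(stats["received"], "total_bytes") <= totals["totalbytesrecv"]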

  21. satsie commented at 9:08 pm on April 17, 2023: contributor

    Hi, I’m back again :laughing:

    Here is the latest code I have: https://github.com/satsie/bitcoin/pull/5

    It incorporates a bunch of feedback I got on the code I previously posted. The changes are mostly under the hood and have all been documented in the new PR’s description for anyone who is curious. Design-wise, the only noteworthy thing is that ‘direction’ is now available as an “aggregate type” (the term I am now using instead of “filter”). So now, results can be aggregated by:

    • direction
    • network
    • connection type
    • message type

    Direction will no longer be returned by default.

    As mentioned, the RPC really hasn’t changed, but attaching an updated summary of the latest so folks don’t have to look through previous comments.

    Here’s an updated summary of the design:

    getnetmsgstats RPC

    Returns the message count and total number of bytes for sent and received network traffic. Results may optionally be broken down by message type, connection type, and/or network.

    Arguments

    aggregate_by: an optional array of one or more aggregate types. Valid options are: direction, network, message_type, and connection_type. If no aggregate types are specified, totals are returned.

    Examples

    0 ./src/bitcoin-cli getnetmsgstats
    1{
    2  "totals": {
    3    "message_count": 1905,
    4    "byte_count": 818941962
    5  }
    6}
    
      0./src/bitcoin-cli getnetmsgstats '["conntype","msgtype"]'
      1{
      2  "block-relay-only": {
      3    "addrv2": {
      4      "message_count": 1,
      5      "byte_count": 40
      6    },
      7    "block": {
      8      "message_count": 6,
      9      "byte_count": 9899426
     10    },
     11    "cmpctblock": {
     12      "message_count": 1,
     13      "byte_count": 16615
     14    },
     15    "feefilter": {
     16      "message_count": 1,
     17      "byte_count": 32
     18    },
     19    "getdata": {
     20      "message_count": 1,
     21      "byte_count": 241
     22    },
     23    "getheaders": {
     24      "message_count": 4,
     25      "byte_count": 4212
     26    },
     27    "headers": {
     28      "message_count": 10,
     29      "byte_count": 1303
     30    },
     31    "inv": {
     32      "message_count": 6,
     33      "byte_count": 366
     34    },
     35    "ping": {
     36      "message_count": 4,
     37      "byte_count": 128
     38    },
     39    "pong": {
     40      "message_count": 4,
     41      "byte_count": 128
     42    },
     43    "sendaddrv2": {
     44      "message_count": 4,
     45      "byte_count": 96
     46    },
     47    "sendcmpct": {
     48      "message_count": 6,
     49      "byte_count": 198
     50    },
     51    "sendheaders": {
     52      "message_count": 4,
     53      "byte_count": 96
     54    },
     55    "verack": {
     56      "message_count": 4,
     57      "byte_count": 96
     58    },
     59    "version": {
     60      "message_count": 4,
     61      "byte_count": 507
     62    },
     63    "wtxidrelay": {
     64      "message_count": 4,
     65      "byte_count": 96
     66    }
     67  },
     68  "outbound-full-relay": {
     69    "addr": {
     70      "message_count": 6,
     71      "byte_count": 30302
     72    },
     73    "addrv2": {
     74      "message_count": 10,
     75      "byte_count": 76016
     76    },
     77    "blocktxn": {
     78      "message_count": 1,
     79      "byte_count": 1288086
     80    },
     81    "cmpctblock": {
     82      "message_count": 1,
     83      "byte_count": 16615
     84    },
     85    "feefilter": {
     86      "message_count": 15,
     87      "byte_count": 480
     88    },
     89    "getaddr": {
     90      "message_count": 8,
     91      "byte_count": 192
     92    },
     93    "getblocktxn": {
     94      "message_count": 1,
     95      "byte_count": 2515
     96    },
     97    "getdata": {
     98      "message_count": 79,
     99      "byte_count": 16951
    100    },
    101    "getheaders": {
    102      "message_count": 15,
    103      "byte_count": 15795
    104    },
    105    "headers": {
    106      "message_count": 20,
    107      "byte_count": 2039
    108    },
    109    "inv": {
    110      "message_count": 134,
    111      "byte_count": 58826
    112    },
    113    "notfound": {
    114      "message_count": 7,
    115      "byte_count": 787
    116    },
    117    "other": {
    118      "message_count": 6,
    119      "byte_count": 438
    120    },
    121    "ping": {
    122      "message_count": 15,
    123      "byte_count": 480
    124    },
    125    "pong": {
    126      "message_count": 14,
    127      "byte_count": 448
    128    },
    129    "sendaddrv2": {
    130      "message_count": 10,
    131      "byte_count": 240
    132    },
    133    "sendcmpct": {
    134      "message_count": 19,
    135      "byte_count": 627
    136    },
    137    "sendheaders": {
    138      "message_count": 14,
    139      "byte_count": 336
    140    },
    141    "tx": {
    142      "message_count": 398,
    143      "byte_count": 211333
    144    },
    145    "verack": {
    146      "message_count": 16,
    147      "byte_count": 384
    148    },
    149    "version": {
    150      "message_count": 17,
    151      "byte_count": 2151
    152    },
    153    "wtxidrelay": {
    154      "message_count": 10,
    155      "byte_count": 240
    156    }
    157  }
    158}
    
      0./src/bitcoin-cli getnetmsgstats '["network", "direction", "connection_type", "message_type"]'
      1{
      2  "ipv4": {
      3    "received": {
      4      "block-relay-only": {
      5        "addrv2": {
      6          "message_count": 5,
      7          "byte_count": 227
      8        },
      9        "block": {
     10          "message_count": 6,
     11          "byte_count": 9899426
     12        },
     13        "cmpctblock": {
     14          "message_count": 2,
     15          "byte_count": 25184
     16        },
     17        "feefilter": {
     18          "message_count": 1,
     19          "byte_count": 32
     20        },
     21        "getheaders": {
     22          "message_count": 2,
     23          "byte_count": 2106
     24        },
     25        "headers": {
     26          "message_count": 6,
     27          "byte_count": 1041
     28        },
     29        "inv": {
     30          "message_count": 3,
     31          "byte_count": 183
     32        },
     33        "ping": {
     34          "message_count": 6,
     35          "byte_count": 192
     36        },
     37        "pong": {
     38          "message_count": 6,
     39          "byte_count": 192
     40        },
     41        "sendaddrv2": {
     42          "message_count": 2,
     43          "byte_count": 48
     44        },
     45        "sendcmpct": {
     46          "message_count": 3,
     47          "byte_count": 99
     48        },
     49        "sendheaders": {
     50          "message_count": 2,
     51          "byte_count": 48
     52        },
     53        "verack": {
     54          "message_count": 2,
     55          "byte_count": 48
     56        },
     57        "version": {
     58          "message_count": 2,
     59          "byte_count": 253
     60        },
     61        "wtxidrelay": {
     62          "message_count": 2,
     63          "byte_count": 48
     64        }
     65      },
     66      "outbound-full-relay": {
     67        "addr": {
     68          "message_count": 4,
     69          "byte_count": 30222
     70        },
     71        "addrv2": {
     72          "message_count": 26,
     73          "byte_count": 148422
     74        },
     75        "blocktxn": {
     76          "message_count": 2,
     77          "byte_count": 3752987
     78        },
     79        "cmpctblock": {
     80          "message_count": 2,
     81          "byte_count": 25184
     82        },
     83        "feefilter": {
     84          "message_count": 11,
     85          "byte_count": 352
     86        },
     87        "getdata": {
     88          "message_count": 24,
     89          "byte_count": 2184
     90        },
     91        "getheaders": {
     92          "message_count": 11,
     93          "byte_count": 11583
     94        },
     95        "headers": {
     96          "message_count": 20,
     97          "byte_count": 2120
     98        },
     99        "inv": {
    100          "message_count": 275,
    101          "byte_count": 116207
    102        },
    103        "notfound": {
    104          "message_count": 9,
    105          "byte_count": 981
    106        },
    107        "other": {
    108          "message_count": 44,
    109          "byte_count": 3430
    110        },
    111        "ping": {
    112          "message_count": 20,
    113          "byte_count": 640
    114        },
    115        "pong": {
    116          "message_count": 20,
    117          "byte_count": 640
    118        },
    119        "sendaddrv2": {
    120          "message_count": 9,
    121          "byte_count": 216
    122        },
    123        "sendcmpct": {
    124          "message_count": 18,
    125          "byte_count": 594
    126        },
    127        "sendheaders": {
    128          "message_count": 11,
    129          "byte_count": 264
    130        },
    131        "tx": {
    132          "message_count": 1161,
    133          "byte_count": 596142
    134        },
    135        "verack": {
    136          "message_count": 12,
    137          "byte_count": 288
    138        },
    139        "version": {
    140          "message_count": 12,
    141          "byte_count": 1536
    142        },
    143        "wtxidrelay": {
    144          "message_count": 9,
    145          "byte_count": 216
    146        }
    147      }
    148    },
    149    "sent": {
    150      "block-relay-only": {
    151        "getdata": {
    152          "message_count": 1,
    153          "byte_count": 241
    154        },
    155        "getheaders": {
    156          "message_count": 2,
    157          "byte_count": 2106
    158        },
    159        "headers": {
    160          "message_count": 6,
    161          "byte_count": 474
    162        },
    163        "inv": {
    164          "message_count": 3,
    165          "byte_count": 183
    166        },
    167        "ping": {
    168          "message_count": 6,
    169          "byte_count": 192
    170        },
    171        "pong": {
    172          "message_count": 6,
    173          "byte_count": 192
    174        },
    175        "sendaddrv2": {
    176          "message_count": 2,
    177          "byte_count": 48
    178        },
    179        "sendcmpct": {
    180          "message_count": 3,
    181          "byte_count": 99
    182        },
    183        "sendheaders": {
    184          "message_count": 2,
    185          "byte_count": 48
    186        },
    187        "verack": {
    188          "message_count": 2,
    189          "byte_count": 48
    190        },
    191        "version": {
    192          "message_count": 2,
    193          "byte_count": 254
    194        },
    195        "wtxidrelay": {
    196          "message_count": 2,
    197          "byte_count": 48
    198        }
    199      },
    200      "outbound-full-relay": {
    201        "addr": {
    202          "message_count": 4,
    203          "byte_count": 250
    204        },
    205        "addrv2": {
    206          "message_count": 19,
    207          "byte_count": 938
    208        },
    209        "feefilter": {
    210          "message_count": 12,
    211          "byte_count": 384
    212        },
    213        "getaddr": {
    214          "message_count": 12,
    215          "byte_count": 288
    216        },
    217        "getblocktxn": {
    218          "message_count": 2,
    219          "byte_count": 3883
    220        },
    221        "getdata": {
    222          "message_count": 249,
    223          "byte_count": 48813
    224        },
    225        "getheaders": {
    226          "message_count": 12,
    227          "byte_count": 12636
    228        },
    229        "headers": {
    230          "message_count": 13,
    231          "byte_count": 1297
    232        },
    233        "inv": {
    234          "message_count": 464,
    235          "byte_count": 166868
    236        },
    237        "ping": {
    238          "message_count": 21,
    239          "byte_count": 672
    240        },
    241        "pong": {
    242          "message_count": 20,
    243          "byte_count": 640
    244        },
    245        "sendaddrv2": {
    246          "message_count": 9,
    247          "byte_count": 216
    248        },
    249        "sendcmpct": {
    250          "message_count": 13,
    251          "byte_count": 429
    252        },
    253        "sendheaders": {
    254          "message_count": 11,
    255          "byte_count": 264
    256        },
    257        "tx": {
    258          "message_count": 44,
    259          "byte_count": 18966
    260        },
    261        "verack": {
    262          "message_count": 12,
    263          "byte_count": 288
    264        },
    265        "version": {
    266          "message_count": 13,
    267          "byte_count": 1651
    268        },
    269        "wtxidrelay": {
    270          "message_count": 9,
    271          "byte_count": 216
    272        }
    273      }
    274    }
    275  }
    276}
    
  22. vasild commented at 8:44 am on April 18, 2023: contributor

    @satsie, thanks, it all looks good except the semantics of the argument (I have not looked at the code yet).

    To me “aggregate by” means to “sum by” those fields, similarly to the GROUP BY clause in SQL. So, if I aggregate by connection_type and message_type, I would expect to see only network and direction in the output. I do not have a strong opinion on whether the argument should act that way or in the inverse way, but the name of the argument should be chosen in such a way as to avoid confusion (both readings are sketched after the list below). Some ideas:

    • to aggregate over (i.e. hide) the listed fields: group_by, aggregate_by, sum_by, omit, hide
    • to keep only the listed fields (aggregating over the non-listed ones): show_only
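
    To make the two readings concrete, a tiny illustration (hypothetical shapes and numbers, not taken from the draft PR) of what aggregate_by=['connection_type'] could mean under each interpretation:

      # Reading 1 (what the draft PR implements): the listed dimension stays as keys,
      # everything else is summed away.
      keep_listed = {
          "inbound": {"message_count": 10, "byte_count": 1000},
          "outbound-full-relay": {"message_count": 20, "byte_count": 2000},
      }

      # Reading 2 ("aggregate by" = sum over those fields): the listed dimension is the
      # one that disappears, and the remaining dimensions stay as keys (only direction
      # and network are shown here, for brevity).
      sum_over_listed = {
          "sent": {"ipv4": {"message_count": 15, "byte_count": 1500}},
          "received": {"ipv4": {"message_count": 15, "byte_count": 1500}},
      }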
  23. satsie commented at 6:08 pm on April 25, 2023: contributor

    Thanks @vasild! I think show_only is really clear and straightforward. I will update my code to use that instead.

    In the process of updating, I noticed the phrase “aggregated by” used in the context of the getpeerinfo RPC, where it breaks results down by message type:

    0                    {RPCResult::Type::OBJ_DYN, "bytessent_per_msg", "",
    1                    {
    2                        {RPCResult::Type::NUM, "msg", "The total bytes sent aggregated by message type\n"
    3                                                      "When a message type is not listed in this json object, the bytes sent are 0.\n"
    4                                                      "Only known message types can appear as keys in the object."}
    5                    }},
    6                    {RPCResult::Type::OBJ_DYN, "bytesrecv_per_msg", "",
    7                    {
    8                        {RPCResult::Type::NUM, "msg", "The total bytes received aggregated by message type\n"
    

    Its usage is not in alignment with your suggestion that “aggregate by” should be for fields that do not appear in the results, but luckily it’s just part of RPC documentation :)

