net: redundant connections with a single peer #22559

tryphe opened this issue on July 26, 2021
  1. tryphe commented at 9:35 pm on July 26, 2021: contributor

    To reproduce: see #22559 (comment) (general I2P setup, in+out) and #22559 (comment) (multiple in, multiple out).

    I’ve observed my node keeping multiple connections to the same peer in various ways, due to long I2P connection negotiation times; it’s not uncommon for connecting to a peer to take 10 to 20 seconds. If additional connections to the same peer are made during negotiation, all of the connections remain open.

    The bug should occur on any network. This is an I2P-only node, which I’m sure increases the chances of hitting the bug because my peers.dat is tiny. Note that I do not have any addnode= entries in my config or anything like that.

    1 inbound + 1 outbound: I connected to a peer. 18 seconds later, it connected to me. These connections persisted for over 12 hours. Both peers are running on a very recent 22.99.0 master branch.

    2 inbound or 2 outbound: (2 can be any number with the right timing) A node connected to me twice at around the same time. These connections persisted for over 12 hours. This peer was running the 22.0.0 release.

    getpeerinfo data for 1 inbound + 1 outbound peers:

    {
      "id": 680,
      "addr": "jz3s4eurm5vzjresf4mwo7oni4bk36daolwxh4iqtewakylgkxmq.b32.i2p:0",
      "addrbind": "bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p:0",
      "network": "i2p",
      "services": "0000000000000409",
      "servicesnames": [
        "NETWORK",
        "WITNESS",
        "NETWORK_LIMITED"
      ],
      "relaytxes": true,
      "lastsend": 1627290126,
      "lastrecv": 1627290124,
      "last_transaction": 1627290120,
      "last_block": 0,
      "bytessent": 3930317,
      "bytesrecv": 13011805,
      "conntime": 1627229534,
      "timeoffset": -2,
      "pingtime": 3.264304,
      "minping": 0.719468,
      "version": 70016,
      "subver": "/Satoshi:22.99.0(@dunxen)/",
      "inbound": false,
      "bip152_hb_to": false,
      "bip152_hb_from": false,
      "startingheight": 692610,
      "synced_headers": 692727,
      "synced_blocks": 692727,
      "inflight": [
      ],
      "addr_processed": 7049,
      "addr_rate_limited": 1244,
      "permissions": [
      ],
      "minfeefilter": 0.00001000,
      "bytessent_per_msg": {
        "addrv2": 121698,
        "feefilter": 32,
        "getaddr": 24,
        "getdata": 442191,
        "getheaders": 1053,
        "headers": 8480,
        "inv": 3307633,
        "notfound": 316,
        "ping": 16128,
        "pong": 16128,
        "sendaddrv2": 24,
        "sendcmpct": 66,
        "sendheaders": 24,
        "tx": 16345,
        "verack": 24,
        "version": 127,
        "wtxidrelay": 24
      },
      "bytesrecv_per_msg": {
        "addrv2": 167167,
        "feefilter": 32,
        "getdata": 2090,
        "getheaders": 1053,
        "headers": 12190,
        "inv": 3572569,
        "notfound": 122,
        "ping": 16128,
        "pong": 16128,
        "sendaddrv2": 24,
        "sendcmpct": 66,
        "sendheaders": 24,
        "tx": 9208916,
        "verack": 24,
        "version": 136,
        "wtxidrelay": 24
      },
      "connection_type": "outbound-full-relay"
    },
    {
      "id": 682,
      "addr": "jz3s4eurm5vzjresf4mwo7oni4bk36daolwxh4iqtewakylgkxmq.b32.i2p:0",
      "addrbind": "bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p:0",
      "network": "i2p",
      "services": "0000000000000409",
      "servicesnames": [
        "NETWORK",
        "WITNESS",
        "NETWORK_LIMITED"
      ],
      "relaytxes": true,
      "lastsend": 1627290124,
      "lastrecv": 1627290123,
      "last_transaction": 1627290013,
      "last_block": 0,
      "bytessent": 1585430,
      "bytesrecv": 7794448,
      "conntime": 1627229552,
      "timeoffset": -13,
      "pingtime": 1.396263,
      "minping": 0.759804,
      "version": 70016,
      "subver": "/Satoshi:22.99.0(@dunxen)/",
      "inbound": true,
      "bip152_hb_to": false,
      "bip152_hb_from": false,
      "startingheight": 692610,
      "synced_headers": 692727,
      "synced_blocks": 692727,
      "inflight": [
      ],
      "addr_processed": 6055,
      "addr_rate_limited": 1188,
      "permissions": [
      ],
      "minfeefilter": 0.00001000,
      "bytessent_per_msg": {
        "addrv2": 124133,
        "feefilter": 32,
        "getdata": 90391,
        "getheaders": 1053,
        "headers": 8455,
        "inv": 1201793,
        "ping": 16160,
        "pong": 16160,
        "sendaddrv2": 24,
        "sendcmpct": 66,
        "sendheaders": 24,
        "tx": 126964,
        "verack": 24,
        "version": 127,
        "wtxidrelay": 24
      },
      "bytesrecv_per_msg": {
        "addrv2": 145913,
        "cmpctblock": 2422,
        "feefilter": 32,
        "getaddr": 24,
        "getdata": 14266,
        "getheaders": 1053,
        "headers": 12084,
        "inv": 5364780,
        "notfound": 61,
        "ping": 16160,
        "pong": 16160,
        "sendaddrv2": 24,
        "sendcmpct": 66,
        "sendheaders": 24,
        "tx": 2221195,
        "verack": 24,
        "version": 136,
        "wtxidrelay": 24
      },
      "connection_type": "inbound"
    }
    

    getpeerinfo data for 2 inbound peers:

    {
      "id": 1326,
      "addr": "acgncqkgqekcxaagpes6ubfiuhg54ijklwcupbnitte5svh3a3bq.b32.i2p:0",
      "addrbind": "bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p:0",
      "network": "i2p",
      "services": "0000000000000409",
      "servicesnames": [
        "NETWORK",
        "WITNESS",
        "NETWORK_LIMITED"
      ],
      "relaytxes": true,
      "lastsend": 1629449575,
      "lastrecv": 1629449571,
      "last_transaction": 1629449402,
      "last_block": 0,
      "bytessent": 4675731,
      "bytesrecv": 5393457,
      "conntime": 1629383715,
      "timeoffset": -3,
      "pingtime": 2.009341,
      "minping": 0.5858950000000001,
      "version": 70016,
      "subver": "/Satoshi:22.0.0(MN@ca)/",
      "inbound": true,
      "bip152_hb_to": false,
      "bip152_hb_from": false,
      "startingheight": 696531,
      "synced_headers": 696664,
      "synced_blocks": 696664,
      "inflight": [
      ],
      "addr_relay_enabled": true,
      "addr_processed": 6062,
      "addr_rate_limited": 0,
      "permissions": [
      ],
      "minfeefilter": 0.00001000,
      "bytessent_per_msg": {
        "addrv2": 149637,
        "cmpctblock": 5710,
        "feefilter": 32,
        "getdata": 2941,
        "getheaders": 1053,
        "headers": 9752,
        "inv": 3847162,
        "notfound": 3173,
        "ping": 17568,
        "pong": 17312,
        "sendaddrv2": 24,
        "sendcmpct": 66,
        "sendheaders": 24,
        "tx": 621102,
        "verack": 24,
        "version": 127,
        "wtxidrelay": 24
      },
      "bytesrecv_per_msg": {
        "addrv2": 130686,
        "feefilter": 32,
        "getaddr": 24,
        "getdata": 51317,
        "getheaders": 1053,
        "headers": 10519,
        "inv": 5142494,
        "ping": 17312,
        "pong": 17568,
        "sendaddrv2": 24,
        "sendcmpct": 66,
        "sendheaders": 24,
        "tx": 22157,
        "verack": 24,
        "version": 133,
        "wtxidrelay": 24
      },
      "connection_type": "inbound"
    },
    {
      "id": 1327,
      "addr": "acgncqkgqekcxaagpes6ubfiuhg54ijklwcupbnitte5svh3a3bq.b32.i2p:0",
      "addrbind": "bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p:0",
      "network": "i2p",
      "services": "0000000000000409",
      "servicesnames": [
        "NETWORK",
        "WITNESS",
        "NETWORK_LIMITED"
      ],
      "relaytxes": true,
      "lastsend": 1629449578,
      "lastrecv": 1629449580,
      "last_transaction": 1629448430,
      "last_block": 0,
      "bytessent": 4869327,
      "bytesrecv": 5433036,
      "conntime": 1629383742,
      "timeoffset": -21,
      "pingtime": 1.979714,
      "minping": 0.69269,
      "version": 70016,
      "subver": "/Satoshi:22.0.0(MN@ca)/",
      "inbound": true,
      "bip152_hb_to": false,
      "bip152_hb_from": false,
      "startingheight": 696531,
      "synced_headers": 696664,
      "synced_blocks": 696664,
      "inflight": [
      ],
      "addr_relay_enabled": true,
      "addr_processed": 6188,
      "addr_rate_limited": 0,
      "permissions": [
      ],
      "minfeefilter": 0.00001000,
      "bytessent_per_msg": {
        "addrv2": 148858,
        "feefilter": 32,
        "getdata": 1932,
        "getheaders": 1053,
        "headers": 9858,
        "inv": 3872869,
        "notfound": 2259,
        "ping": 17568,
        "pong": 17312,
        "sendaddrv2": 24,
        "sendcmpct": 66,
        "sendheaders": 24,
        "tx": 797297,
        "verack": 24,
        "version": 127,
        "wtxidrelay": 24
      },
      "bytesrecv_per_msg": {
        "addrv2": 133478,
        "feefilter": 32,
        "getaddr": 24,
        "getdata": 68654,
        "getheaders": 1053,
        "headers": 10625,
        "inv": 5171958,
        "ping": 17312,
        "pong": 17568,
        "sendaddrv2": 24,
        "sendcmpct": 66,
        "sendheaders": 24,
        "tx": 12037,
        "verack": 24,
        "version": 133,
        "wtxidrelay": 24
      },
      "connection_type": "inbound"
    }
    
  2. tryphe added the label Bug on Jul 26, 2021
  3. ghost commented at 10:07 pm on July 26, 2021: none
    Can you share the result of bitcoin-cli -netinfo and the relevant options from bitcoin.conf? Does this happen only with I2P peers?
  4. tryphe commented at 0:24 am on July 27, 2021: contributor

    Can you share the result of bitcoin-cli -netinfo and the relevant options from bitcoin.conf? Does this happen only with I2P peers?

    Sure! {moved the extra info about being an I2P-only node to the original post}

    Edit: I misread your initial request, but here’s the -netinfo output:

    Bitcoin Core v22.99.0-8193294caba0-dirty - 70016/Satoshi:22.99.0/

            ipv4    ipv6   onion     i2p   total   block  manual
    in         0       0       0       3       3
    out        0       0       0      11      11       0       2
    total      0       0       0      14      14

    Local addresses
    bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p     port      0    score      4
    

    getnetworkinfo:

    {
      "version": 229900,
      "subversion": "/Satoshi:22.99.0/",
      "protocolversion": 70016,
      "localservices": "0000000000000409",
      "localservicesnames": [
        "NETWORK",
        "WITNESS",
        "NETWORK_LIMITED"
      ],
      "localrelay": true,
      "timeoffset": -6,
      "networkactive": true,
      "connections": 10,
      "connections_in": 4,
      "connections_out": 6,
      "networks": [
        {
          "name": "ipv4",
          "limited": true,
          "reachable": false,
          "proxy": "",
          "proxy_randomize_credentials": false
        },
        {
          "name": "ipv6",
          "limited": true,
          "reachable": false,
          "proxy": "",
          "proxy_randomize_credentials": false
        },
        {
          "name": "onion",
          "limited": true,
          "reachable": false,
          "proxy": "",
          "proxy_randomize_credentials": false
        },
        {
          "name": "i2p",
          "limited": false,
          "reachable": true,
          "proxy": "127.0.0.1:7656",
          "proxy_randomize_credentials": false
        }
      ],
      "relayfee": 0.00001000,
      "incrementalfee": 0.00001000,
      "localaddresses": [
        {
          "address": "bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p",
          "port": 0,
          "score": 4
        }
      ],
      "warnings": "This is a pre-release test build - use at your own risk - do not use for mining or merchant applications"
    }
    

    bitcoin.conf:

    listen=1
    port=8333
    bind=127.0.0.1
    onlynet=i2p
    i2pacceptincoming=1
    i2psam=127.0.0.1:7656
    

    (note: removed the port= line and the bug still occurs)

  5. tryphe renamed this:
    net: redundant connections are made with a single peer
    net: redundant connections with a single peer
    on Jul 27, 2021
  6. jonatack commented at 3:46 am on July 27, 2021: member
    Thanks for reporting and for testing an I2P service. Are you or the inbound peer running a version before #22112? I see you’re setting port to 8333. (This might be a duplicate of #21389.)
  7. tryphe commented at 3:49 am on July 27, 2021: contributor

    Thanks for reporting. Are you or the inbound peer running a version before #22112? I see you’re setting port to 8333. (This might be a duplicate of #21389.)

    Negative. The port=0 change was merged on 7/13, and the version was bumped to 0.22 on 7/20. I just happened to leave my config the same.

  8. tryphe commented at 3:53 am on July 27, 2021: contributor
    As per getnetworkinfo, my I2P port is forced to 0, so I don’t think this has anything to do with the bug.
  9. tryphe commented at 3:55 am on July 27, 2021: contributor

    I tried to replicate the bug by waiting for a random I2P peer to make an inbound connection. Afterwards, I called addconnection abc.b32.i2p outbound-full-relay. I commented out these lines to skip the regtest check.

    But I can’t reproduce the bug from the OP; the call fails, with debug.log showing: Failed to open new connection, already connected

    I think one or both of the following things are happening here:

    1. Some environmental factors are at play. I did not connect to the peer as quickly as when the bug was produced, and ping times on I2P are often 5 to 20 seconds; I’m not sure if this could cause the issue. I often see duplicate inbound/outbound connections to the same peer at the same time, but the later connection almost always closes within a few seconds, except in the case of the OP and one other time that I’ve observed. Perhaps the bug happens when we connect simultaneously, but only under certain conditions?
    2. Maybe the call stack that fails during my test is not the same one hit in real use. We call FindNode in many places and in various fashions (sometimes directly, sometimes through AlreadyConnectedToAddress, for example), and it does not fail or succeed verbosely, which may hide an edge case somewhere.
  10. jonatack commented at 4:01 am on July 27, 2021: member
    Are both peers you? (an inbound peer running an earlier version could still double connect to you.)
  11. tryphe commented at 4:03 am on July 27, 2021: contributor

    Are both peers you? (an inbound peer running an earlier version could still double connect to you.)

    Nope, although they do have an @dunxen in their User Agent. But I’m unsure who that is. We are both running the i2p-port=0 branch.

  12. jonatack commented at 4:29 am on July 27, 2021: member
    Just restarted a clearnet/onion/i2p node, and both peers were briefly double-connected to me as double inbound peers, but only for a few seconds; they’re now down to one connection each. @duncandean, at what commit is your I2P service running? (cli -version or -netinfo will tell you)
  13. tryphe commented at 4:45 am on July 27, 2021: contributor

    Just restarted a clearnet/onion/i2p node and both peers were briefly double-connected to me as double inbound peers, but only for a few seconds and now down to one connection each.

    Sounds about right; I’ve observed this as well. I only spotted the bug because I frequently look at my peer list to see how quickly people are onboarding to I2P. But fwiw, my node was up for about a week before I observed the 12-hour-long connections in the OP.

    I’m going to clone my I2P machine to see if I can reproduce the bug by connecting to myself.

  14. dunxen commented at 5:35 am on July 27, 2021: contributor

    @duncandean, at what commit is your I2P service running? (cli -version or -netinfo will tell you)

    I’m at fd557ceb.

    Also, I do have an addnode=bitcorn... in my config.

  15. tryphe commented at 5:52 am on July 27, 2021: contributor

    @duncandean, at what commit is your I2P service running? (cli -version or -netinfo will tell you)

    I’m at fd557ce.

    Also, I do have an addnode=bitcorn... in my config.

    Ahh, thanks for clarifying!

    I wonder why my node automatically maintained a connection to you, though. I don’t use any addnodes.

  16. jonatack commented at 7:15 am on July 27, 2021: member
    @tryphe you will probably see the address if you run rpc getnodeaddresses 0 “i2p”, indicating it’s in your addrman. Then look at how many I2P peers your node knows (cli -addrinfo gives the totals).
  17. tryphe commented at 7:54 am on July 27, 2021: contributor

    @tryphe you will probably see the address if you run rpc getnodeaddresses 0 “i2p”, indicating it’s in your addrman. Then look at how many I2P peers your node knows (cli -addrinfo gives the totals).

    18 I2P peers currently.

  18. tryphe commented at 8:09 am on July 27, 2021: contributor

    I was able to reproduce the behavior of the bug with 2 machines on mainnet. To reproduce:

    Optional steps on installing if you don’t have an I2P node:
    a. stackexchange howto for I2P bitcoin
    b. Configure and confirm your i2pd mixnet (external) port and SAM (local loopback) port in /etc/i2pd/i2pd.conf
    c. Restart i2pd: sudo systemctl restart i2pd
    d. Ensure the SAM/mixnet ports are listening locally: ss -lt. You should see 127.0.0.1:sam_port (default 7656) and 0.0.0.0:mixnet_port (random port)
    e. Open a hole in your firewall to your mixnet ports. One mixnet port for each machine.

    1. Obtain two I2P bitcoin nodes and ensure the I2P mixnet port is externally accessible on each machine.

    2. Start bitcoind or bitcoin-qt. Run getnetworkinfo until the service is established and you see your abcdef.b32.i2p address near the bottom of the output. Takes a few minutes.

    3. On machine A, enter addnode machine_b_address.b32.i2p onetry, and vice versa on machine B.

    4. Run the commands at the same time.

    If you are fast enough, the machines will maintain two connections, and the bug occurs. If you are too slow, normal expected behavior will occur, and each subsequent connection will close.

    Note: this also works with addconnection instead of addnode, which has the same behavior bug-wise but tags the connection differently, i.e. “Full Relay” vs “Manual”.

  19. tryphe commented at 9:19 am on July 27, 2021: contributor
    I didn’t test it on a normal IP connection, but I’m going to assume the latency has something to do with the bug. Otherwise it would have already been reported… maybe. If anyone can try to reproduce it on non-I2P with/without some latency, that would be sweet!
  20. tryphe commented at 10:47 am on July 27, 2021: contributor
    Doh, this seems like a duplicate of #21389. Just realized, sorry. Will follow up there if there’s a good solution found.
  21. tryphe closed this on Jul 27, 2021

  22. tryphe commented at 10:53 am on July 27, 2021: contributor

    @jonatack I think we should re-open the original issue instead of keeping this one open. Regarding this comment, it should be fixed, but isn’t.

    But feel free to re-open this one if you want.

    Note: I removed the port= line from my config before and after testing, so it seems unrelated to that.

  23. jonatack commented at 11:43 am on July 27, 2021: member

    Note: I removed the port= line from my config before and after testing, so it seems unrelated to that.

    I didn’t try setting port=8333 while testing #22112, so thanks for checking and reporting on that. I have watch -netinfo 4 running all the time and haven’t seen any persistent double connections lately but will keep an eye out. Good to finally know who the bitcorn...b32.i2p address is!

  24. vasild commented at 1:05 pm on July 27, 2021: member

    I think this is a different issue than #21389.

    The latter allowed A to open more than one outgoing connection to B on different ports. That is I2P-specific, because ports are irrelevant in I2P (when using SAM 3.1), so connecting to B:port1 and B:port2 would, for sure, be connecting to the same peer.

    This issue looks to be about A opening a connection to B while, at about the same time, B opens a connection to A. I think, with the right timing, this can also happen on other networks, i.e. it is not I2P-specific. I have not confirmed that with an experiment.

  25. tryphe commented at 8:36 pm on July 27, 2021: contributor
    Thanks vasild! That makes sense. If one socket is already open, it works as intended. I’ll reopen this.
  26. tryphe reopened this on Jul 27, 2021

  27. tryphe commented at 10:35 pm on July 27, 2021: contributor

    @jonatack I tried your patch (it looks like the comment is deleted now) with some added verbosity, and both nodes disconnect each other instead of maintaining redundant connections.

    This looks like a timing/observation problem. From the perspective of nodes A and B, each of them tried to connect first. If they disconnect on accept, both connections are dropped. If we make them disconnect after accept, I assume both will disconnect unless there is some sort of agreement on who disconnects. Something like if (my_address < peer_address) { disconnect(); }, that way only one side disconnects and no communication is required.

    But putting in this kind of logic would require the other peer to run an updated binary. Maybe, if the peer is the side with the lower address but doesn’t disconnect within a certain period, we disconnect one of the redundant sockets ourselves.
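
    For illustration, a minimal, self-contained sketch of that tie-break idea (hypothetical code, not anything that exists in the tree): both sides evaluate the same comparison with the roles swapped, so exactly one of them elects to drop the redundant socket, with no extra protocol messages needed.

    // Hypothetical illustration of a deterministic tie-break for crossed connections.
    #include <iostream>
    #include <string>

    // Returns true if *we* should close our redundant connection to peer_addr,
    // false if we expect the peer to close theirs. Both sides evaluate the same
    // rule with the arguments swapped, so exactly one side disconnects.
    bool ShouldDisconnectDuplicate(const std::string& my_addr, const std::string& peer_addr)
    {
        return my_addr < peer_addr; // lexicographic comparison of the address strings
    }

    int main()
    {
        const std::string a = "bitcornrd36coazsbzsz4pdebyzvaplmsalq4kpoljmn6cg6x5zq.b32.i2p";
        const std::string b = "jz3s4eurm5vzjresf4mwo7oni4bk36daolwxh4iqtewakylgkxmq.b32.i2p";
        std::cout << std::boolalpha;
        std::cout << "A disconnects: " << ShouldDisconnectDuplicate(a, b) << '\n'; // true
        std::cout << "B disconnects: " << ShouldDisconnectDuplicate(b, a) << '\n'; // false
    }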

    Or maybe it can be done with just some FindNode() logic and an edge case is missing somewhere. I’m not sure.

  28. tryphe commented at 4:46 am on August 2, 2021: contributor

    I observed the bug happening again today while restarting my node on the current master branch with a different 22.99.0 peer.

    Maybe we can add a task scheduler routine to check for duplicate connections? It seems like we can’t easily patch this into the existing connection logic, so that seems like the most obvious thing to do.
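
    To make the idea concrete, here is a rough, self-contained sketch of what such a periodic sweep could do (hypothetical code, not a patch against CConnman; the ToyNode type and function names are made up): group connections by address, keep the oldest one per address, and mark the rest for disconnection.

    // Hypothetical periodic sweep that flags redundant connections to the same address.
    #include <cstdint>
    #include <map>
    #include <string>
    #include <vector>

    struct ToyNode {
        std::string addr;   // e.g. "....b32.i2p:0"
        int64_t conntime;   // unix time the connection was established
        bool marked_for_disconnect = false;
    };

    // Keep the earliest connection per address, mark the rest for disconnection.
    void SweepDuplicateConnections(std::vector<ToyNode>& nodes)
    {
        std::map<std::string, ToyNode*> oldest;
        for (ToyNode& node : nodes) {
            auto [it, inserted] = oldest.try_emplace(node.addr, &node);
            if (inserted) continue; // first connection seen for this address
            if (node.conntime < it->second->conntime) {
                it->second->marked_for_disconnect = true; // previously kept one is newer
                it->second = &node;
            } else {
                node.marked_for_disconnect = true;
            }
        }
    }

    int main()
    {
        std::vector<ToyNode> nodes{
            {"peer1.b32.i2p:0", 100}, {"peer1.b32.i2p:0", 160}, {"peer2.b32.i2p:0", 120}};
        SweepDuplicateConnections(nodes);
        // nodes[1] is now marked for disconnect; the older duplicate and peer2 are kept.
    }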

  29. jonatack commented at 10:01 am on August 2, 2021: member
    Was the double connection in+in or in+out? An I2P peer?
  30. tryphe commented at 10:26 pm on August 2, 2021: contributor

    Was the double connection in+in or in+out? An I2P peer?

    In + out, I2P

  31. xanoni commented at 10:32 am on August 14, 2021: none

    I have a (hopefully straightforward) question that’s not exactly the same as what the OP mentioned, but is somewhat related:

    Would bitcoind ever connect to (and stay connected to) the same node more than once if that node was using some combination of Tor, I2P, and clearnet? Or would it detect this reliably and only keep one of the connections?

  32. vasild commented at 12:48 pm on August 20, 2021: member

    Would bitcoind ever connect to (and stay connected to) the same node more than once if that node was using some combination of Tor, I2P, and clearnet? Or would it detect this reliably and only keep one of the connections?

    There is no such detection. So we can connect to e.g. 2.39.173.126:8333 and 2g5qfdkn2vvcbqhzcyvyiitg4ceukybxklraxjnu7atlhd22gdwywaid.onion:8333 and remain connected even if that is the same (multi-homed) node.

    There is only a check that we do not keep a connection to ourselves. It works roughly like this: when A connects to B, A assigns a “random” nonce to that connection and sends it as part of the VERSION message to B. When B receives the VERSION message, it checks whether the nonce in it equals any of the nonces of its own outgoing connections; if it does, then B closes the just-accepted connection, thinking that it has connected to itself:

    https://github.com/bitcoin/bitcoin/blob/192a959b65660ffacedb5a5eb2a0d26736c636d7/src/net_processing.cpp#L2535-L2541
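
    For readers following along, here is a tiny, self-contained model of that nonce check (hypothetical code; the real logic is in the linked net.cpp/net_processing.cpp, and the ToyConnman names here are made up): the node remembers the nonces it sent on its own outbound connections and drops any inbound connection whose VERSION carries one of them. Note this only catches connecting to ourselves; it does not detect the same remote peer reached over two different networks or addresses.

    // Hypothetical model of nonce-based self-connection detection.
    #include <cstdint>
    #include <random>
    #include <unordered_set>

    struct ToyConnman {
        std::unordered_set<uint64_t> outbound_nonces; // nonces we sent in VERSION on outbound connections

        uint64_t OpenOutbound()
        {
            static std::mt19937_64 rng{std::random_device{}()};
            const uint64_t nonce = rng();
            outbound_nonces.insert(nonce);
            return nonce; // sent to the peer in our VERSION message
        }

        // Called when an inbound VERSION arrives: if its nonce matches one of ours,
        // we accidentally connected to ourselves and should drop the inbound socket.
        bool IsSelfConnection(uint64_t remote_nonce) const
        {
            return outbound_nonces.count(remote_nonce) > 0;
        }
    };

    int main()
    {
        ToyConnman connman;
        const uint64_t nonce = connman.OpenOutbound();
        // If that VERSION loops back to us, the check fires:
        return connman.IsSelfConnection(nonce) ? 0 : 1;
    }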

  33. xanoni commented at 9:35 pm on August 20, 2021: none

    There is no such detection.

    Thank you. How difficult would it be to detect this, if desired? Is there some type of fingerprint/key/identifier that could be matched? Or would this require a whole new mechanism to be designed & implemented? [1]

    Also, would it be possible for an attacker to just create 13,337 Hidden Services (e.g., all .onion’s or a mix of I2P and Tor) that all connect to the same bitcoind? Or does bitcoind only behave predictably with one address per network? I assume independent of the answer, an attacker could modify bitcoind to handle a large number of addresses predictably.

    (One could also reframe the above as: can I protect my node from Sybil-attacks by creating an undisclosed number of extra hidden services / I2P destinations?)

    [1] If yes, it could potentially be similar to what JoinMarket is doing with its Fidelity Bonds. (Assuming one wants protection from malicious Sybil activity … otherwise I guess there would be a non-deposit solution.) However, I am aware that this would be a very major change and is probably not needed.

  34. tryphe commented at 9:36 pm on August 22, 2021: contributor

    I noticed 2 inbound connections to me from a 22.0.0 peer, which lasted many hours, so I updated the OP with the relevant peer info. I knew in+out was possible, but not in+in, which seems more dangerous: with careful timing, an I2P peer could maliciously fill as many of my inbound slots as they want.

    Although I’m not sure whether this is conceptually more dangerous than someone just creating a bunch of different I2P addresses and connecting to me normally, which is already possible without the bug.

  35. vasild commented at 2:59 pm on August 23, 2021: member

    I confirm the 2x inbound case: just execute the command below twice in quick succession (in different terminals, so that you start connecting to the same peer at the same time):

    bitcoin-cli addnode 4hllr6w55mbtemb3ebvlzl4zj6qke4si7zcob5qdyg63mjgq624a.b32.i2p:0 onetry
    

    The relevant code is:

    https://github.com/bitcoin/bitcoin/blob/f6f7a12462b381945b1cb9bcb94b129d8fb7e289/src/net.cpp#L2214-L2233

    This is the code that opens a new connection: on line 2216 or 2219 it checks whether we are already connected to the peer (this looks into vNodes), and it only adds the peer to vNodes later, on line 2232. So, if two threads come here and both pass the “not already connected to that peer” check, they will both connect and both add the peer to vNodes.

    This is not I2P-specific, but if ConnectNode() (line 2222) is slow, as it is on I2P, the race is more likely to occur.
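
    A toy model of that window (hypothetical, simplified code, not the actual OpenNetworkConnection()/ConnectNode()): the “already connected?” check and the insertion into the node list are separated by a slow connect, so two threads can both pass the check. Re-checking under the same lock right before inserting is one way to close the gap, at the cost of having to throw away the just-opened socket on the losing side.

    // Toy model of the check-then-act race around a slow connect.
    #include <algorithm>
    #include <mutex>
    #include <string>
    #include <vector>

    std::mutex g_nodes_mutex;
    std::vector<std::string> g_nodes; // stand-in for vNodes, keyed by peer address here

    static bool AlreadyConnected(const std::string& addr)
    {
        return std::find(g_nodes.begin(), g_nodes.end(), addr) != g_nodes.end();
    }

    // Imagine 10-20 seconds of I2P session negotiation happening in here.
    static bool SlowConnect(const std::string& /*addr*/) { return true; }

    bool OpenConnection(const std::string& addr)
    {
        {
            std::lock_guard<std::mutex> lock(g_nodes_mutex);
            if (AlreadyConnected(addr)) return false; // the existing check happens here...
        }

        if (!SlowConnect(addr)) return false; // ...but the connect is slow, so another thread
                                              // (or the inbound path) can pass the same check meanwhile.

        std::lock_guard<std::mutex> lock(g_nodes_mutex);
        if (AlreadyConnected(addr)) {
            // Re-checking before inserting closes the window; the losing side would
            // also have to close the socket it just opened.
            return false;
        }
        g_nodes.push_back(addr);
        return true;
    }

    int main()
    {
        OpenConnection("4hllr6w55mbtemb3ebvlzl4zj6qke4si7zcob5qdyg63mjgq624a.b32.i2p:0");
    }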

  36. tryphe commented at 1:50 am on August 24, 2021: contributor

    Thanks @vasild! I can confirm this. I initially tried this with a bunch of peers that weren’t myself, to no avail. It seems like my latency was too low to reproduce the bug after being connected to the mixnet for a while. But I tried again with a fresh VM and a fresh I2P address and was easily able to open 4 connections to my main node, which remained open.

    $ ./bitcoin-cli getpeerinfo | grep \"addr\":
        "addr": "yadcavfqg6urtrpss2zxylxumw6sbp5tgk6hon7uxosjippeulwa.b32.i2p:0",
        "addr": "yadcavfqg6urtrpss2zxylxumw6sbp5tgk6hon7uxosjippeulwa.b32.i2p:0",
        "addr": "yadcavfqg6urtrpss2zxylxumw6sbp5tgk6hon7uxosjippeulwa.b32.i2p:0",
        "addr": "yadcavfqg6urtrpss2zxylxumw6sbp5tgk6hon7uxosjippeulwa.b32.i2p:0",
    
  37. tryphe commented at 2:04 am on August 24, 2021: contributor

    If we modify OpenNetworkConnection() from vasild’s comment above to remove the outbound race condition, there is still an inbound race condition in this call to AcceptConnection() via CConnman::SocketHandler(): https://github.com/bitcoin/bitcoin/blob/602c8eb8f0fb1a09ba44b8a744c4aaf3f8a6a78a/src/net.cpp#L1528

    The new connection is accepted, but the node is not added to vNodes until the end of the nested call CreateNodeFromAcceptedSocket(): https://github.com/bitcoin/bitcoin/blob/602c8eb8f0fb1a09ba44b8a744c4aaf3f8a6a78a/src/net.cpp#L1205-L1208 pnode is only created and added to vNodes at the end of that function, once we know hSocket is valid. But intuitively it doesn’t seem to me like the rest of that code should block for very long, at least not long enough for multiple other connections to occur. Strange?
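
    For the inbound side, a rough sketch of what a duplicate check at registration time could look like (hypothetical code, not the actual AcceptConnection()/CreateNodeFromAcceptedSocket() logic; the ToyPeer type and RegisterInbound name are made up): once the accepted peer’s address is known, refuse to register it if we already track a connection to the same address. Note that naively dropping duplicates on both sides can tear down both connections, as observed above for the crossed in+out case, so a tie-break like the one sketched earlier would still be needed.

    // Hypothetical duplicate check when registering an accepted inbound connection.
    #include <algorithm>
    #include <mutex>
    #include <string>
    #include <vector>

    struct ToyPeer {
        std::string addr;
        bool inbound;
    };

    std::mutex g_peers_mutex;
    std::vector<ToyPeer> g_peers;

    // Returns false (reject) if we already have any connection, inbound or outbound,
    // to the same address; the caller would then close the freshly accepted socket.
    bool RegisterInbound(const std::string& addr)
    {
        std::lock_guard<std::mutex> lock(g_peers_mutex);
        const bool duplicate = std::any_of(g_peers.begin(), g_peers.end(),
                                           [&](const ToyPeer& p) { return p.addr == addr; });
        if (duplicate) return false;
        g_peers.push_back({addr, /*inbound=*/true});
        return true;
    }

    int main()
    {
        // Pretend we already have an outbound connection to this peer...
        g_peers.push_back({"acgncqkgqekcxaagpes6ubfiuhg54ijklwcupbnitte5svh3a3bq.b32.i2p:0", /*inbound=*/false});
        // ...then the same peer connects to us: the duplicate registration is refused.
        const bool accepted = RegisterInbound("acgncqkgqekcxaagpes6ubfiuhg54ijklwcupbnitte5svh3a3bq.b32.i2p:0");
        return accepted ? 1 : 0; // returns 0 here
    }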

    We should look into fixing both the inbound and the outbound race, so that implementations with or without the outbound race won’t be able to create duplicate connections to nodes that have been patched for this bug.

