Faster tests with topological mempool rpc sync 🚀 #15858

pull MarcoFalke wants to merge 2 commits into bitcoin:master from MarcoFalke:1904-qaSync changing 14 files +66 −4
  1. MarcoFalke commented at 8:25 pm on April 19, 2019: member

    Most of the tests spend time waiting for the Poisson tx send to time out. However, tests that don’t test p2p behaviour don’t need to.

    Speed them up by syncing transactions over RPC (a usage sketch follows the timing numbers below).

    Overall, the tests run three times as fast for me now:

    • Before:
    ALL                                   | ✓ Passed  | 1395 s (accumulated)
    Runtime: 368 s
    
    • After:
    ALL                                   | ✓ Passed  | 940 s (accumulated)
    Runtime: 120 s
    
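    A minimal sketch of what the opt-in looks like at the util level, assuming the use_rpc_sync keyword added in the commit below (the nodes list stands in for a test's node handles):

    from test_framework.util import sync_mempools

    # nodes is a list of TestNode handles, as in any functional test
    txid = nodes[0].sendtoaddress(nodes[1].getnewaddress(), 1)
    # Submit the tx to the other nodes over RPC instead of waiting for the
    # Poisson-delayed p2p relay
    sync_mempools(nodes, use_rpc_sync=True)
    assert txid in nodes[1].getrawmempool()
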
  2. test: Use RPC mempool sync to speed up tests fad26ed5fa
  3. jnewbery commented at 8:33 pm on April 19, 2019: member

    Nice! This has been on my list for a long time. Thanks for doing it!

    🚀 concept ACK

  4. DrahtBot added the label Tests on Apr 19, 2019
  5. laanwj commented at 12:20 pm on April 20, 2019: member

    This is an impressive improvement in speed! ACK fad26ed5fa5bf9fc0cf60e84a135e7d03c934000

    A test run here:

    ALL                                   | ✓ Passed  | 1788 s (accumulated)
    Runtime: 472 s
    
    ALL                                   | ✓ Passed  | 1516 s (accumulated)
    Runtime: 388 s
    
  6. in test/functional/test_framework/util.py:417 in fad26ed5fa outdated
    413+                    # from the pool
    414+                except JSONRPCException:
    415+                    # This transaction violates policy (e.g. RBF policy). The
    416+                    # mempools should still converge when the high-fee
    417+                    # replacement is synced in a later call
    418+                    pass
    


    promag commented at 11:02 pm on April 21, 2019:
    Could assert error “insufficient fee”?
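    A sketch of the stricter handler promag is suggesting, in the context of the hunk above (txid, raw_tx, and rpc_target are placeholders for the surrounding loop variables; the same check appears in the suggestion quoted further down):

    try:
        assert_equal(txid, rpc_target.sendrawtransaction(raw_tx))
    except JSONRPCException as e:
        # Tolerate only the RBF policy rejection; anything unexpected
        # should still fail the test
        assert 'insufficient fee' in e.error['message']
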
  7. in test/functional/test_framework/util.py:435 in fad26ed5fa outdated
    432+                for tx in pool_remote:
    433+                    pool_remote[tx].ancestors = {a: pool_remote[a] for a in pool_remote[tx].ancestors}
    434+
    435+                # Push this pool to all targets
    436+                for i_target, rpc_target in enumerate(rpc_connections):
    437+                    missing_txids = pool[i_remote].difference(pool[i_target])
    


    promag commented at 11:04 pm on April 21, 2019:
    Could skip when i_remote == i_target?

    MarcoFalke commented at 8:15 pm on April 22, 2019:
    I opted for code brevity. Calculating the empty set as a difference of two equal sets should take negligible time compared to an RPC call.
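    A two-line illustration of the point: the difference of a set with itself is empty, so the send loop for a node syncing against itself is a no-op.

    pool_local = {'txid_a', 'txid_b'}
    missing_txids = pool_local.difference(pool_local)
    assert missing_txids == set()  # nothing to send when source == target
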
  8. in test/functional/test_framework/util.py:424 in fad26ed5fa outdated
    421     while time.time() <= stop_time:
    422         pool = [set(r.getrawmempool()) for r in rpc_connections]
    423-        if pool.count(pool[0]) == len(rpc_connections):
    424+        sync_done = pool.count(pool[0]) == len(rpc_connections)
    425+        if use_rpc_sync and not sync_done:
    426+            for i_remote, rpc_remote in enumerate(rpc_connections):
    


    promag commented at 11:14 pm on April 21, 2019:
    A brief description of the following algorithm would be handy.
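    A rough outline of the algorithm under discussion (the full version is quoted in jnewbery's suggestion in comment 13 below; the RBF exception handling is omitted here for brevity):

    def rpc_sync_outline(rpc_connections):
        # Snapshot every node's mempool txid set
        pools = [set(r.getrawmempool()) for r in rpc_connections]
        for i_from, rpc_from in enumerate(rpc_connections):
            # Fetch the source node's raw txs plus their in-mempool parents
            mempool_from = rpc_from.getrawmempool(verbose=True)
            raw = {txid: rpc_from.getrawtransaction(txid) for txid in mempool_from}
            for i_to, rpc_to in enumerate(rpc_connections):
                def send(txid):
                    for parent in mempool_from[txid]['depends']:
                        send(parent)  # submit parents first (topological order)
                    if txid not in pools[i_to]:
                        rpc_to.sendrawtransaction(raw[txid])
                        pools[i_to].add(txid)
                # Send every tx the target is missing
                for txid in set(mempool_from) - pools[i_to]:
                    send(txid)
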
  9. promag commented at 11:28 pm on April 21, 2019: member
    Concept ACK.
  10. in test/functional/test_framework/util.py:393 in fad26ed5fa outdated
    389@@ -390,15 +390,63 @@ def sync_blocks(rpc_connections, *, wait=1, timeout=60):
    390         time.sleep(wait)
    391     raise AssertionError("Block sync timed out:{}".format("".join("\n  {!r}".format(b) for b in best_hash)))
    392 
    393-def sync_mempools(rpc_connections, *, wait=1, timeout=60, flush_scheduler=True):
    394+def sync_mempools(rpc_connections, *, wait=1, timeout=60, use_rpc_sync=False, flush_scheduler=True):
    


    jonatack commented at 9:04 am on April 22, 2019:
    Adding code documentation in the docstring here would be great, detailing the purpose and use of use_rpc_sync.
  11. jonatack commented at 9:12 am on April 22, 2019: member
    Concept ACK. Consider describing this argument’s purpose and use in the sync_mempools function docstring, in example_test.py, and possibly in rpc_deprecated.py.
  12. MarcoFalke commented at 12:35 pm on April 22, 2019: member
    Added a commit to extend the docstring, as requested by @promag and @jonatack
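    The exact wording of that commit is not mirrored here; a docstring along the lines requested could read (sketch only, based on the suggestion quoted in the next comment):

    def sync_mempools(rpc_connections, *, wait=1, timeout=60, use_rpc_sync=False, flush_scheduler=True):
        """Wait until everybody has the same transactions in their memory pools.

        If use_rpc_sync is set, don't wait for p2p tx relay: fetch each node's
        raw transactions over RPC and submit any missing ones to the other
        nodes directly, ancestors first. Much faster for tests that don't
        exercise p2p transaction relay.
        """
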
  13. in test/functional/test_framework/util.py:423 in fa1680d748 outdated
    421     stop_time = time.time() + timeout
    422     while time.time() <= stop_time:
    423         pool = [set(r.getrawmempool()) for r in rpc_connections]
    424-        if pool.count(pool[0]) == len(rpc_connections):
    425+        sync_done = pool.count(pool[0]) == len(rpc_connections)
    426+        if use_rpc_sync and not sync_done:
    


    jnewbery commented at 7:39 pm on April 22, 2019:

    It seems slightly odd to me that this code, which we only ever expect to run once, should be in a while loop. I think it’d be clearer to split it out into its own function. Something like:

    def sync_mempools(rpc_connections, *, wait=1, timeout=60, use_rpc_sync=False, flush_scheduler=True):
        """
        Wait until everybody has the same transactions in their memory
        pools. If use_rpc_sync is set, sync all transactions right away.
        """
        if use_rpc_sync:
            force_sync_mempools(rpc_connections)
        else:
            stop_time = time.time() + timeout
            while time.time() <= stop_time:
                pool = [set(r.getrawmempool()) for r in rpc_connections]
                if pool.count(pool[0]) == len(rpc_connections):
                    break
                time.sleep(wait)
            else:
                raise AssertionError("Mempool sync timed out:{}".format("".join("\n  {!r}".format(m) for m in pool)))

        if flush_scheduler:
            for r in rpc_connections:
                r.syncwithvalidationinterfacequeue()

    def force_sync_mempools(rpc_connections):

        class TxInfo:
            def __init__(self, *, raw, ancestors):
                self.raw = raw
                self.ancestors = ancestors

        def topo_send(txs, rpc, pool_add):
            for i in txs:
                topo_send(txs[i].ancestors, rpc, pool_add)
                if i not in pool_add:
                    try:
                        assert_equal(i, rpc.sendrawtransaction(txs[i].raw))
                        pool_add.add(i)
                        # Note that conflicted txs (due to RBF) are not removed
                        # from the pool
                    except JSONRPCException as e:
                        # This transaction violates policy (e.g. RBF policy). The
                        # mempools should still converge when the high-fee
                        # replacement is synced in a later call
                        assert 'insufficient fee' in e.error['message']

        pool = [set(r.getrawmempool()) for r in rpc_connections]
        # Iterate over all nodes, get their raw mempool and send the
        # missing txs to all other nodes
        for i_remote, rpc_remote in enumerate(rpc_connections):
            pool_remote = {
                txid: TxInfo(raw=rpc_remote.getrawtransaction(txid), ancestors=info['depends'])
                for txid, info in rpc_remote.getrawmempool(verbose=True).items()
            }
            # Create "recursive pools" for ancestors
            for tx in pool_remote:
                pool_remote[tx].ancestors = {a: pool_remote[a] for a in pool_remote[tx].ancestors}

            # Push this pool to all targets
            for i_target, rpc_target in enumerate(rpc_connections):
                missing_txids = pool[i_remote].difference(pool[i_target])
                # Send missing txs
                topo_send(
                    txs={txid: pool_remote[txid]
                         for txid in pool_remote if txid in missing_txids},
                    rpc=rpc_target,
                    pool_add=pool[i_target],
                )
        # If the sync fails there is a logic error in the sync or test code
        pool = [set(r.getrawmempool()) for r in rpc_connections]
        assert pool.count(pool[0]) == len(rpc_connections)

    MarcoFalke commented at 8:08 pm on April 22, 2019:
    I’d rather not use a while-else, whose behaviour I have to look up every time I encounter it. I opted for a smaller diff and added an assert, so the diff should be easier to review and the assert makes it clear that the sync code only runs once.

    MarcoFalke commented at 8:17 pm on April 22, 2019:
    Also rewriting the function would invalidate previous review

    jnewbery commented at 9:25 pm on April 22, 2019:

    I’m surprised you think Python’s while-else syntax is confusing. I don’t think I’ve ever had a problem with it: https://stackoverflow.com/a/3295949. I certainly think it’s clearer than the existing code, which has a return buried in a conditional in the while loop, but hidden under the flush_scheduler cleanup code.

    I’m pretty sure you’ve been pushed into this weird construction (execute-once code within a while loop) because of the flush_scheduler cleanup code being inside the loop. Restructuring the function so the loop breaks and then does the cleanup allows the RPC sync to be separated from the while loop, which seems clearer to me.


    MarcoFalke commented at 10:06 pm on April 22, 2019:
    Unlike if-else, where only one branch is executed, a while-else (or for-else) generally executes both branches. To make it even more confusing, it won’t execute the else branch when you break.

    MarcoFalke commented at 10:09 pm on April 22, 2019:
    Anyway, I am happy to review a follow-up pull that switches both sync_ helpers to the while-else syntax. But, I’d rather not do it here.
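    For readers unfamiliar with the construct being debated: the else clause of a while/for loop runs only if the loop finishes without hitting a break. A small illustration:

    for attempt in range(3):
        if attempt == 1:
            break
    else:
        raise AssertionError("not reached: the loop broke out early")

    for attempt in range(3):
        pass
    else:
        print("reached: the loop ran to completion without a break")
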
  14. in test/functional/test_framework/util.py:426 in fa1680d748 outdated
    424-        if pool.count(pool[0]) == len(rpc_connections):
    425+        sync_done = pool.count(pool[0]) == len(rpc_connections)
    426+        if use_rpc_sync and not sync_done:
    427+            # Iterate over all nodes, get their raw mempool and send the
    428+            # missing txs to all other nodes
    429+            for i_remote, rpc_remote in enumerate(rpc_connections):
    


    jnewbery commented at 7:40 pm on April 22, 2019:
    nit: remote and target don’t seem like very descriptive names to me. from and to seem more intuitive as the nodes you’re syncing from and to.

    MarcoFalke commented at 8:13 pm on April 22, 2019:
    Renamed them to from and to
  15. in test/functional/test_framework/util.py:431 in fa1680d748 outdated
    429+            for i_remote, rpc_remote in enumerate(rpc_connections):
    430+                pool_remote = {
    431+                    txid: TxInfo(raw=rpc_remote.getrawtransaction(txid), ancestors=info['depends'])
    432+                    for txid, info in rpc_remote.getrawmempool(verbose=True).items()
    433+                }
    434+                # Create "recursive pools" for ancestors
    


    jnewbery commented at 7:41 pm on April 22, 2019:
    nit: can you make this comment more explicit? What is a “recursive pool”?

    MarcoFalke commented at 8:13 pm on April 22, 2019:
    Removed mention of “recursive”
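    For context, the structure being discussed (with “recursive” dropped from the comment) ends up looking roughly like this: each entry’s ancestors dict points at the TxInfo objects of its in-mempool parents, so topo_send can recurse parent-first. A sketch with made-up txids and raw hex:

    class TxInfo:
        def __init__(self, *, raw, ancestors):
            self.raw = raw
            self.ancestors = ancestors

    parent = TxInfo(raw='raw_parent_hex', ancestors={})
    child = TxInfo(raw='raw_child_hex', ancestors={'parent_txid': parent})
    assert child.ancestors['parent_txid'] is parent  # recursion bottoms out at {}
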
  16. test: Add documentation to sync_mempools::use_rpc_sync faac5a8ff6
  17. MarcoFalke force-pushed on Apr 22, 2019
  18. jnewbery commented at 9:26 pm on April 22, 2019: member
    Aesthetic disagreements aside, this is a great improvement. utACK faac5a8ff6c9966daa880bc8c02951fea804589d
  19. in test/functional/test_framework/test_framework.py:98 in faac5a8ff6
    94@@ -95,6 +95,7 @@ def __init__(self):
    95         self.nodes = []
    96         self.network_thread = None
    97         self.rpc_timeout = 60  # Wait for up to 60 seconds for the RPC server to respond
    98+        self.use_rpc_sync = False
    


    kostyantyn commented at 2:38 pm on April 23, 2019:
    Curious, as the improvement is mostly gained by bypassing the Poisson feature, why not disable it via a parameter, e.g. -enablepoisson=0, allowed only in regtest? In that case, nodes behave closer to a real scenario, we get cleaner logs since txs are exchanged via P2P only (no race condition), it would work even if sync_mempools hasn’t been called, and there would be no need to maintain a custom tx exchange function.

    MarcoFalke commented at 9:02 pm on April 23, 2019:

    @kostyantyn Good point! Done in

    • net: Send txs without delay on regtest #15881
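    With the framework default shown in the diff above, an individual test could presumably opt in per-test. A hypothetical sketch, assuming the framework passes self.use_rpc_sync through to sync_mempools (the test class itself is made up):

    from test_framework.test_framework import BitcoinTestFramework

    class ExampleWalletTest(BitcoinTestFramework):
        def set_test_params(self):
            self.num_nodes = 2
            self.use_rpc_sync = True  # this test doesn't exercise p2p tx relay

        def run_test(self):
            self.nodes[0].generate(101)
            self.nodes[0].sendtoaddress(self.nodes[1].getnewaddress(), 1)
            self.sync_all()

    if __name__ == '__main__':
        ExampleWalletTest().main()
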
  20. MarcoFalke closed this on Apr 23, 2019

  21. MarcoFalke deleted the branch on Apr 23, 2019
  22. MarcoFalke restored the branch on Apr 24, 2019
  23. MarcoFalke commented at 3:20 pm on April 24, 2019: member
    Going back to this approach, since modifying bitcoind is interpreted as too controversial
  24. MarcoFalke reopened this on Apr 24, 2019

  25. MarcoFalke commented at 3:20 pm on April 24, 2019: member
    Unless there are objections this will be merged on Monday
  26. promag commented at 3:26 pm on April 24, 2019: member
    Could it make sense to skip this optimization in one of the Travis jobs?
  27. DrahtBot commented at 4:28 pm on April 25, 2019: member

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Conflicts

    Reviewers, this pull request conflicts with the following ones:

    • #15911 (Use wallet RBF default for walletcreatefundedpsbt by Sjors)
    • #15891 (test: Require standard txs in regtest by default by MarcoFalke)

    If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

  28. jonatack commented at 10:55 pm on April 29, 2019: member

    Functional tests on master at 8da1aa4 with Linux Debian 4.19.28-2 (2019-03-15) x86_64

    ALL | ✓ Passed | 1824 s (accumulated) Runtime: 466 s

    ALL | ✓ Passed | 1867 s (accumulated) Runtime: 476 s

    ALL | ✓ Passed | 1856 s (accumulated) Runtime: 473 s

    ALL | ✓ Passed | 1774 s (accumulated) Runtime: 453 s


    Functional tests with this PR at https://github.com/bitcoin/bitcoin/pull/15858/commits/faac5a8ff6c9966daa880bc8c02951fea804589d rebased on master at 8da1aa4

    ALL | ✓ Passed | 1372 s (accumulated) Runtime: 352 s

    ALL | ✓ Passed | 1388 s (accumulated) Runtime: 357 s

    ALL | ✓ Passed | 1472 s (accumulated) Runtime: 377 s

    ALL | ✓ Passed | 1449 s (accumulated) Runtime: 373 s

  29. MarcoFalke closed this on Apr 30, 2019

  30. MarcoFalke deleted the branch on Apr 30, 2019
  31. laanwj commented at 8:19 pm on April 30, 2019: member

    Going back to this approach, since modifying bitcoind is interpreted as too controversial

    I prefer this approach as well. I think the only use for this, ever, is the tests, so I’d prefer not to change bitcoind with a special case for it.

  32. DrahtBot locked this on Dec 16, 2021
