test: check custom descendant limit in mempool_packages.py #17461

pull theStack wants to merge 1 commit into bitcoin:master from theStack:20191112-test-check_custom_descendant_limit_in_mempool-packages, changing 1 file +29 −3
  1. theStack commented at 3:31 am on November 13, 2019: member

    This is a follow-up PR to #17435, testing the custom descendant limit passed via the argument -limitdescendantcount. It was trickier than expected, mainly because we can't know for sure at which point node1 has received all the transactions broadcast from node0 (for the ancestor test this wasn't a problem, since the txs were immediately available through invalidateblock). A simple sync_mempools() doesn't work here, since the mempool contents are not equal due to the different ancestor/descendant limits. Hence I came up with a "hacky manual sync":

    1. wait until the mempool has the expected tx count (see conditions below)
    2. after that, wait some time and make sure that the mempool contents haven't changed in the meantime
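    The two-step "hacky manual sync" above could be sketched roughly like this (the function name and node interface are illustrative, not the test's actual code):

```python
import time

def hacky_sync(node, expected_count, settle_time=5.0, timeout=60.0):
    """Hypothetical sketch: wait for an expected mempool size, then
    verify the mempool contents stay stable for a while."""
    deadline = time.time() + timeout
    # step 1: wait until the mempool reaches the expected tx count
    while len(node.getrawmempool()) != expected_count:
        if time.time() > deadline:
            raise AssertionError("timed out waiting for mempool sync")
        time.sleep(0.05)
    # step 2: wait some time and make sure the contents did not change
    snapshot = sorted(node.getrawmempool())
    time.sleep(settle_time)
    assert sorted(node.getrawmempool()) == snapshot
```

    As the rest of the thread shows, this settle-time approach was later dropped in favor of whitelisting plus a plain size check.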

    Similar to the ancestor test, we check for four conditions overall:

    • the # of txs in the node1 mempool is equal to the descendant limit (plus 1 for the parent tx, plus the # txs from the previous ancestor test which are still in) (done by the hacky sync above)
    • all txs in node1 mempool are a subset of txs in node0 mempool
    • part of the constructed descendant-chain (the first ones up to the limit) is contained in node1 mempool
    • the remaining part of the constructed descendant-chain (all after the first ones up to the limit) is not contained in node1 mempool
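    The four conditions could be sketched as assertions along these lines (the helper name, the mempool/chain arguments, and the *_CUSTOM limit values are illustrative, not the test's actual variables):

```python
MAX_ANCESTORS_CUSTOM = 5
MAX_DESCENDANTS_CUSTOM = 10

def check_descendant_limit(mempool0, mempool1, chain):
    """Hypothetical sketch of the four checks; `chain` is the
    constructed descendant chain, in order."""
    # 1) node1 mempool size: descendant limit + 1 for the parent tx
    #    + the txs left over from the earlier ancestor test
    assert len(mempool1) == MAX_DESCENDANTS_CUSTOM + 1 + MAX_ANCESTORS_CUSTOM
    # 2) all txs in node1's mempool are a subset of node0's mempool
    assert set(mempool1).issubset(set(mempool0))
    # 3) the first descendants, up to the limit, are in node1's mempool
    for txid in chain[:MAX_DESCENDANTS_CUSTOM]:
        assert txid in mempool1
    # 4) the remaining descendants are *not* in node1's mempool
    for txid in chain[MAX_DESCENDANTS_CUSTOM:]:
        assert txid not in mempool1
```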
  2. theStack force-pushed on Nov 13, 2019
  3. theStack renamed this:
    test: check custom descedant limit in mempool_packages.py
    test: check custom descendant limit in mempool_packages.py
    on Nov 13, 2019
  4. theStack force-pushed on Nov 13, 2019
  5. fanquake added the label Tests on Nov 13, 2019
  6. fanquake requested review from JeremyRubin on Feb 24, 2020
  7. JeremyRubin commented at 6:21 pm on February 24, 2020: contributor

    Review notes:

    1. Don’t love self.assert_mempool_unchanged(self.nodes[1], waitingtime=5). Long timeouts which return early are OK, but shorter timeouts which always wait for the whole timeout are riskier IMO, especially when we don’t control the runtime.
    2. The test doesn’t check that the descendant limit is actually enforced, e.g., that descendant MAX_CUSTOM_DESCENDANTS+1 is not in the mempool.
    3. The Lightning carve out should also get a test, but doesn’t need to be here.
  8. theStack commented at 9:35 am on February 25, 2020: member
    @JeremyRubin: Thanks for your review notes! I fully agree with your points 1 and 2 (can’t say anything about point 3 yet); they both stem from the fact that I can’t find a concrete way to express the event of “node1 has received all txs from node0”. As long as I don’t know that all txs from node0 have been broadcast, it is impossible to check that something is not in the mempool on node1 (a tx could still arrive in the future). That’s why I worked with just waiting some time, which is of course ugly. Is there any other way? Can I somehow ask node0 if it has broadcast to all of its peers?
  9. JeremyRubin commented at 8:35 pm on February 26, 2020: contributor
    You could expose the CNodeState via an RPC enabled just for testing. That’s probably the most robust for what you want to test.
  10. MarcoFalke commented at 10:01 pm on February 26, 2020: member
    It is possible to set the poisson delay to 0 with -whitelist=noban@127.0.0.1. Not sure if that solves your problem and it might even interact with other functionality of the test.
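    The suggested configuration could look something like this in a functional test's node arguments (a hedged sketch: the custom limit values here are hypothetical, not taken from the actual test):

```python
# node0 whitelists localhost, so txs are relayed to peers immediately
# instead of after the usual poisson-distributed delay; node1 gets the
# custom ancestor/descendant limits under test.
extra_args = [
    ["-whitelist=noban@127.0.0.1"],
    ["-limitancestorcount=5", "-limitdescendantcount=10"],
]
```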
  11. theStack force-pushed on Feb 26, 2020
  12. theStack commented at 10:30 pm on February 26, 2020: member

    Turns out I was overcomplicating things when working on this originally, as the solution is really as simple as just activating immediate tx relay through -whitelist=127.0.0.1. Funnily enough, the very minute I was trying this out, @MarcoFalke suggested the same (with noban@, though). :smile: As far as I can see, the other functionality of the test is not affected by tx relay/timing, so using this parameter for node0 should be fine. @JeremyRubin: Thanks for your suggestion; the idea of exposing parts of CNodeState could be useful in the future.

    Rebased and force-pushed with the simpler solution. Also edited the PR description to strikethrough the original complicated parts (can also be completely removed if that is preferred).

  13. JeremyRubin commented at 10:54 pm on February 26, 2020: contributor

    Also endorse using a whitelist – I wasn’t sure if you wanted all of the whitelist behaviors, such as re-relay, but if it works for this test then that’s perfect.

    You may still need some sync to wait until the mempool is the expected size (e.g.,

    import time

    start = time.time()
    while len(self.nodes[1].getrawmempool()) != MAX_ANCESTORS_CUSTOM + 1 + MAX_DESCENDANTS_CUSTOM:
        if time.time() > start + 60:
            raise AssertionError("test timed out on mempool sync")
        time.sleep(0.01)


    ). While this is still somewhat racy – in most cases the relay should happen immediately – it’s possible the mempool hasn’t synced fast enough.

    This kind of waiting is more acceptable, as it’s just a test timeout that fires when we’re pretty sure something is wrong, and most of the time we quit the loop after one iteration.

  14. test: check custom descendant limit in mempool_packages.py
    To test the custom descendant limit on node1 (passed by the argument
    -limitdescendantcount), we check for four conditions:
        -> the # of txs in the node1 mempool is equal to the limit
           (plus 1 for the parent tx, plus the # txs from the previous ancestor
            test which are still in)
        -> all txs in node1 mempool are a subset of txs in node0 mempool
        -> part of the constructed descendant-chain (the first ones up to the
           limit) are contained in node1 mempool
        -> the remaining part of the constructed descendant-chain (all after the
           first ones up to the limit) is *not* contained in node1 mempool
    b902bd66b0
  15. theStack force-pushed on Feb 26, 2020
  16. theStack commented at 11:25 pm on February 26, 2020: member
    @JeremyRubin: Agreed, it makes absolute sense to be cautious and wait for the (unlikely, but possible) case that node1 has not yet synced. Luckily there already exists a function wait_until in the test framework util that contains pretty much exactly your suggested waiting loop, with the condition passed in the form of a lambda function, leading to:

        wait_until(lambda: len(self.nodes[1].getrawmempool(False)) == MAX_ANCESTORS_CUSTOM + 1 + MAX_DESCENDANTS_CUSTOM)

    Force-pushed using this approach, with the timeout lowered to 10 seconds (instead of the default 60 seconds).
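    A minimal standalone version of such a wait_until helper could look like this (an illustrative sketch in the spirit of the framework's utility, not its actual implementation, which lives in test_framework/util.py):

```python
import time

def wait_until(predicate, timeout=60, poll=0.05):
    """Poll `predicate` until it returns True or `timeout` seconds
    elapse; raise on timeout instead of silently continuing."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return
        time.sleep(poll)
    raise AssertionError("wait_until() timed out")
```

    The key property JeremyRubin asked for holds: the loop returns early as soon as the condition is met, and the full timeout is only paid in the failure case.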
  17. JeremyRubin commented at 4:23 am on February 27, 2020: contributor

    Excellent. utACK b902bd6

    failures look unrelated?

  18. JeremyRubin added this to the "General Testing" column in a project

  19. theStack commented at 12:24 pm on February 27, 2020: member

    failures look unrelated?

    Yes, the Travis failure is unrelated; a unit test is causing the trouble (named mockforward, see https://travis-ci.org/bitcoin/bitcoin/jobs/655587226#L3004)

    Where can I find more information about the “Lightning carve out” you mentioned, which also needs a test? Does it have to do with this: https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2018-November/016518.html

  20. JeremyRubin approved
  21. JeremyRubin commented at 6:40 pm on February 27, 2020: contributor

    You can read more about it here: https://github.com/bitcoin/bitcoin/blob/4502ed7cd1b3f0478dd2fdf048ad36cd87a8e2df/src/validation.cpp#L751

    (also adding github approval flag)

  22. MarcoFalke commented at 7:26 pm on February 27, 2020: member
    For that there are some tests in test/functional/mempool_package_onemore.py
  23. MarcoFalke merged this on Feb 27, 2020
  24. MarcoFalke closed this on Feb 27, 2020

  25. instagibbs commented at 8:13 pm on February 27, 2020: member
    post-merge concept ACK, thanks for putting in the test writing effort!
  26. sidhujag referenced this in commit 8dcdc4b2c3 on Feb 28, 2020
  27. sidhujag referenced this in commit 7b4ba9c5e1 on Nov 10, 2020
  28. theStack deleted the branch on Dec 1, 2020
  29. Fabcien referenced this in commit 0ea7b89e0a on Jan 8, 2021
  30. DrahtBot locked this on Aug 18, 2022

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2024-12-05 00:12 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me