Add test and docs for getblockfrompeer with pruning #23813

pull fjahr wants to merge 2 commits into bitcoin:master from fjahr:2021-12-getblockfrompeer-follow-up, changing 2 files (+46 −6)
  1. fjahr commented at 11:55 am on December 18, 2021: contributor

    These are additions to getblockfrompeer that I already suggested on the original PR.

    The two commits do the following:

    1. Add a test for getblockfrompeer usage on pruned nodes. This is important because many use cases for getblockfrompeer arise in the context of a pruned node (a condensed sketch of the flow follows below).
    2. Add information on how long users of pruned nodes can expect the fetched block to remain available after they have used the RPC. I think the behavior is not very intuitive, and I would not be surprised if users expect the block to be available indefinitely.
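
    A condensed sketch of that flow, in the functional-test framework style (pruned_block stands for the hash of a block the pruned node has already pruned, and check_for_block is the test’s existing helper):

        pruned_node = self.nodes[2]  # node started with pruning enabled

        # The block data is gone from the pruned node; only the header remains.
        assert_raises_rpc_error(-1, "Block not available (pruned data)",
                                pruned_node.getblock, pruned_block)

        # Ask a peer that still has the block to send it again.
        peer_id = pruned_node.getpeerinfo()[0]["id"]
        result = pruned_node.getblockfrompeer(pruned_block, peer_id)
        assert_equal(result, {})  # empty object: the request was scheduled

        # Once the peer responds, getblock works again -- until a later prune removes it.
        self.wait_until(lambda: self.check_for_block(node=2, hash=pruned_block), timeout=1)
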
  2. laanwj added the label Docs on Dec 18, 2021
  3. laanwj added the label RPC/REST/ZMQ on Dec 18, 2021
  4. DrahtBot commented at 4:24 pm on December 18, 2021: contributor

    The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

    Reviews

    See the guideline for information on the review process.

    Type Reviewers
    ACK stratospher, Sjors, brunoerg, MarcoFalke
    Concept ACK theStack
    Stale ACK jonatack

    If your review is incorrectly listed, please react with 👎 to this comment and the bot will ignore it on the next update.

    Conflicts

    No conflicts as of last run.

  5. theStack commented at 5:31 pm on December 18, 2021: contributor
    Concept ACK
  6. Sjors commented at 2:39 am on December 20, 2021: member
    Can you rebase this on #23706? Otherwise it might get confusing…
  7. fjahr force-pushed on Dec 20, 2021
  8. fjahr commented at 9:16 pm on December 20, 2021: contributor

    Can you rebase this on #23706? Otherwise it might get confusing…

    Done

  9. in test/functional/rpc_getblockfrompeer.py:90 in 5ed8de7a2a outdated
    86+        peers = self.nodes[2].getpeerinfo()
    87+        assert_equal(len(peers), 1)
    88+        peer_2_peer_0_id = peers[0]["id"]
    89+        result = self.nodes[2].getblockfrompeer(pruned_block, peer_2_peer_0_id)
    90+        self.wait_until(lambda: self.check_for_block(self.nodes[2], pruned_block), timeout=1)
    91+        assert(not "warnings" in result)
    


    Sjors commented at 2:51 am on December 21, 2021:
    I dropped the warnings field in 6f2133d3e6a9df23ba7922f51bdc17acd049e2e6, so this assert can’t fail anymore…

    fjahr commented at 8:45 pm on December 21, 2021:
    fixed
  10. fjahr force-pushed on Dec 21, 2021
  11. brunoerg approved
  12. brunoerg commented at 11:05 am on December 22, 2021: contributor

    tACK 809b2afa02e5ab8a839306562e82434c218f7632

  13. DrahtBot added the label Needs rebase on Jan 25, 2022
  14. fjahr force-pushed on Jan 25, 2022
  15. fjahr commented at 7:56 pm on January 25, 2022: contributor
    Rebased since #23706 was merged
  16. DrahtBot removed the label Needs rebase on Jan 25, 2022
  17. maflcko renamed this:
    Add test and docs for getblockfrompeer
    Add test and docs for getblockfrompeer with pruning
    on Jan 26, 2022
  18. in src/rpc/blockchain.cpp:797 in 11009c3a60 outdated
    792@@ -793,7 +793,8 @@ static RPCHelpMan getblockfrompeer()
    793         "\nAttempt to fetch block from a given peer.\n"
    794         "\nWe must have the header for this block, e.g. using submitheader.\n"
    795         "Subsequent calls for the same block and a new peer will cause the response from the previous peer to be ignored.\n"
    796-        "\nReturns an empty JSON object if the request was successfully scheduled.",
    797+        "\nReturns an empty JSON object if the request was successfully scheduled."
    798+        "\nNote: On a pruned node this block might be pruned again as soon as the pruneheight surpasses the blockheight at the time of fetching the block.\n",
    


    Sjors commented at 1:17 pm on January 26, 2022:

    What block does “the block” refer to? The most recent block that’s being fetched as part of the regular activity?

    Maybe this can be a bit shorter: “Previously pruned blocks are only preserved until the next pruning event.” (afaik this is true as long as the user doesn’t change -prune)


    fjahr commented at 7:11 pm on January 29, 2022:

    “block” both times refers to the block being fetched with getblockfrompeer. I see how it was not super clear so I revised the wording a bit.

    I think your shorter version is not necessarily correct. The block may or may not be pruned at the next pruning event: it depends on the requested prune height and on whether the block sits in a file that is otherwise ready to be pruned. As far as I understand, the deciding factor is the highest block height in the same blk file. Since the fetched block is usually appended to the file that holds the current tip, the next pruning event will often not allow that file to be pruned (a small sketch of how to check availability from the RPC side follows at the end of this thread). The expanded test that you requested actually shows this well, so I structured it in a way that makes this behavior explicit.


    Sjors commented at 4:00 pm on January 30, 2022:
    Ah I see. Then I think there should be a warning that this can temporarily cause pruning to be deferred? I.e. disk space may exceed -prune? Is there an upper limit of required extra free disk space that we can suggest?

    fjahr commented at 5:32 pm on January 30, 2022:
    Yes, if the node has not synced up to the fetched block yet, that can happen. That is the issue I am addressing with #23927. I could amend the comment here now and then change it again if that PR is merged, but I would rather keep it separate since I also don’t know which one might be merged first. If the fix in #23927 does not get in, I would change that PR to at least document the issue better for the user instead.

    jonatack commented at 10:48 pm on January 30, 2022:

    Not sure, but without the spaces (“pruneheight”, “blockheight”) these look like RPC fields or inputs, yet they aren’t part of this RPC:

    0        "\nNote: On a pruned node this block might be pruned again as soon as the prune height surpasses the block height at the time of fetching the block.\n",
    

    fjahr commented at 0:26 am on February 6, 2022:
    done
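
    One simple way for a caller to find out whether the fetched block is still on disk is to request it again. A minimal helper in the style of the test’s own check_for_block (JSONRPCException is the test framework’s RPC error type; the helper name is illustrative):

        from test_framework.authproxy import JSONRPCException

        def block_still_available(node, block_hash):
            """Return True while the block can be served, False once it has been pruned (again)."""
            try:
                node.getblock(block_hash)
                return True
            except JSONRPCException:
                # getblock raises "Block not available (pruned data)" once the block is pruned
                return False

    Whether and when this flips back to False after a getblockfrompeer call depends, as described above, on when the blk*.dat file holding the re-fetched block becomes prunable, not directly on the block’s own height.
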
  19. Sjors commented at 1:20 pm on January 26, 2022: member

    6946327b191a0b573658e098e1d470d8dd7dae37 looks good.

    Do you want to expand the test to show how the fetched block disappears again at the next prune event?

  20. fjahr commented at 7:12 pm on January 29, 2022: contributor

    Do you want to expand the test to show how the fetched block disappears again at the next prune event?

    Expanded the test. See my comment for some more related info.

  21. in test/functional/rpc_getblockfrompeer.py:28 in 6946327b19 outdated
    26 
    27-    def check_for_block(self, hash):
    28+    def check_for_block(self, node, hash):
    29         try:
    30-            self.nodes[0].getblock(hash)
    31+            node.getblock(hash)
    


    jonatack commented at 10:52 pm on January 30, 2022:

    Maybe faster/cleaner to pass only the node number instead of whole node objects (also, use named args)

     0     def check_for_block(self, node, hash):
     1         try:
     2-            self.nodes[0].getblock(hash)
     3+            self.nodes[node].getblock(hash)
     4             return True
     5@@ -65,7 +65,7 @@ class GetBlockFromPeerTest(BitcoinTestFramework):
     6 
     7         self.log.info("Successful fetch")
     8         result = self.nodes[0].getblockfrompeer(short_tip, peer_0_peer_1_id)
     9-        self.wait_until(lambda: self.check_for_block(self.nodes[0], short_tip), timeout=1)
    10+        self.wait_until(lambda: self.check_for_block(node=0, hash=short_tip), timeout=1)
    11         assert_equal(result, {})
    12 
    13@@ -86,7 +86,7 @@ class GetBlockFromPeerTest(BitcoinTestFramework):
    14         peer_2_peer_0_id = peers[0]["id"]
    15         result = self.nodes[2].getblockfrompeer(pruned_block, peer_2_peer_0_id)
    16-        self.wait_until(lambda: self.check_for_block(self.nodes[2], pruned_block), timeout=1)
    17+        self.wait_until(lambda: self.check_for_block(node=2, hash=pruned_block), timeout=1)
    18         assert_equal(result, {})
    

    fjahr commented at 0:26 am on February 6, 2022:
    done
  22. jonatack commented at 10:54 pm on January 30, 2022: contributor

    ACK 6946327b191a0b573658e098e1d470d8dd7dae37

    Couple of nits, happy to re-ACK if taken.

  23. DrahtBot added the label Needs rebase on Jan 31, 2022
  24. fjahr force-pushed on Feb 6, 2022
  25. fjahr commented at 0:27 am on February 6, 2022: contributor
    Rebased and took @jonatack’s suggestions.
  26. DrahtBot removed the label Needs rebase on Feb 6, 2022
  27. in test/functional/rpc_getblockfrompeer.py:45 in 476f63a081 outdated
    41@@ -37,7 +42,7 @@ def run_test(self):
    42 
    43         self.log.info("Connect nodes to sync headers")
    44         self.connect_nodes(0, 1)
    45-        self.sync_blocks()
    46+        self.sync_blocks(self.nodes[0:1])
    


    theStack commented at 2:43 pm on March 1, 2022:

    This is currently a no-op from a sync perspective, as the passed list only contains one element (x[0:1] is the same as [x[0]]). Probably you meant

    0        self.sync_blocks(self.nodes[0:2])
    

    ? Another alternative would be self.sync_blocks([self.nodes[0], self.nodes[1]]).
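
    For reference, the slice semantics in plain Python:

        nodes = ["n0", "n1", "n2"]
        nodes[0:1]  # ['n0']        -> a single node, so there is nothing to sync against
        nodes[0:2]  # ['n0', 'n1']  -> nodes 0 and 1, which is what the test intends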


    fjahr commented at 5:01 pm on March 5, 2022:
    Indeed. Fixed, thanks!
  28. fjahr force-pushed on Mar 5, 2022
  29. DrahtBot added the label Needs rebase on Jun 1, 2022
  30. fjahr force-pushed on Jun 5, 2022
  31. fjahr commented at 11:32 pm on June 5, 2022: contributor
    Rebased
  32. DrahtBot removed the label Needs rebase on Jun 6, 2022
  33. Sjors commented at 9:51 am on July 18, 2022: member
    Somewhat ominous CI failure.
  34. fjahr force-pushed on Jul 22, 2022
  35. fjahr commented at 4:22 pm on July 22, 2022: contributor

    Somewhat ominous CI failure.

    Seems to have been the CI choking, but after re-running I also saw that there was a silent merge conflict, so I rebased and fixed that.

  36. in test/functional/rpc_getblockfrompeer.py:133 in da1bd8e31d outdated
     98+        assert_raises_rpc_error(-1, "Block not available (pruned data)", self.nodes[2].getblock, pruned_block)
     99+
    100+        self.log.info("Fetch pruned block")
    101+        peers = self.nodes[2].getpeerinfo()
    102+        assert_equal(len(peers), 1)
    103+        peer_2_peer_0_id = peers[0]["id"]
    


    Sjors commented at 7:27 am on September 13, 2022:
    nit: node_2_peer_0_id or pruned_node_peer_0_id

    fjahr commented at 7:02 pm on December 22, 2022:
    done
  37. in test/functional/rpc_getblockfrompeer.py:124 in da1bd8e31d outdated
    89 
    90+        self.log.info("Connect pruned node")
    91+        # We need to generate more blocks to be able to prune
    92+        self.connect_nodes(0, 2)
    93+        self.generate(self.nodes[0], 400)
    94+        pruneheight = self.nodes[2].pruneblockchain(300)
    


    Sjors commented at 7:29 am on September 13, 2022:
    If you have to retouch: calling this pruned_node = self.nodes[2] makes the rest of the test a bit easier to read.

    fjahr commented at 7:02 pm on December 22, 2022:
    done
  38. Sjors approved
  39. Sjors commented at 7:30 am on September 13, 2022: member
    utACK da1bd8e31dc0ccb297c23fce68a68b6d1a51b508
  40. DrahtBot added the label Needs rebase on Oct 26, 2022
  41. fjahr force-pushed on Nov 13, 2022
  42. fjahr commented at 7:21 pm on November 13, 2022: contributor
    rebased
  43. DrahtBot removed the label Needs rebase on Nov 13, 2022
  44. in src/rpc/blockchain.cpp:432 in 23ea3e01ca outdated
    427@@ -428,7 +428,8 @@ static RPCHelpMan getblockfrompeer()
    428         "getblockfrompeer",
    429         "Attempt to fetch block from a given peer.\n\n"
    430         "We must have the header for this block, e.g. using submitheader.\n"
    431-        "Subsequent calls for the same block and a new peer will cause the response from the previous peer to be ignored.\n\n"
    432+        "Subsequent calls for the same block and a new peer will cause the response from the previous peer to be ignored.\n"
    433+        "Note: On a pruned node this block can be pruned again as soon as the prune height surpasses the block height at the time it was fetched.\n\n"
    


    luke-jr commented at 0:04 am on November 15, 2022:

    If a user calls getblockfrompeer 200 times or so, it’s quite possible they will fill a block file and the fetched blocks will get pruned before the current height is pruned.

    The only way to guarantee any survival of a fetched block would be something like #19463


    fjahr commented at 0:35 am on November 20, 2022:
    Hm, that’s correct. How would you like: “Note: There is no guarantee the block will be available at a specific time in the future as it may be gone with the next pruning event”?

    luke-jr commented at 10:15 pm on November 20, 2022:

    Maybe a bit long. How about “Note: The block could be re-pruned as soon as it is received.”

    Though if the peer simply never responds, it’s basically the same outcome?


    fjahr commented at 12:29 pm on November 27, 2022:

    Thanks, I took your suggested text.

    And it’s true, it’s the same outcome if the block is never received. But I don’t think we have to worry about that here: the wording above says “Attempt to fetch…”, which implies the operation might fail, and there should be logging in place as well.

  45. luke-jr changes_requested
  46. fjahr force-pushed on Nov 27, 2022
  47. DrahtBot added the label Needs rebase on Dec 5, 2022
  48. rpc: Add note on guarantees to getblockfrompeer cd761e6b2c
  49. fjahr force-pushed on Dec 19, 2022
  50. DrahtBot removed the label Needs rebase on Dec 20, 2022
  51. test: Add test for getblockfrompeer on pruned nodes fe329dc936
  52. fjahr force-pushed on Dec 22, 2022
  53. fjahr commented at 7:02 pm on December 22, 2022: contributor
    Failed to address @Sjors’ comments on the previous rebase, done now.
  54. fanquake requested review from Sjors on Jan 12, 2023
  55. stratospher commented at 6:11 pm on March 8, 2023: contributor

    ACK fe329dc.

    Went through the test and observed the following (a condensed sketch of the same sequence follows the list):

    • 200 + 4 + 400 = 1..604 blocks present in blk0.dat, blk1.dat, blk2.dat
    • pruneblockchain(300) removes blk0.dat and now 249..604 blocks in blk1.dat, blk2.dat
    • then a previously pruned block is stored in blk2.dat (using getblockfrompeer RPC)
    • 604 + 250 = 249..854 blocks in blk1.dat, blk2.dat, blk3.dat (+ prev pruned block in blk2.dat)
    • pruneblockchain(700) removes blk1.dat. we still have blk2.dat, blk3.dat and so prev pruned block also.
    • 854 + 250 = 500..1104 blocks in blk2.dat, blk3.dat, blk4.dat (+ prev pruned block in blk2.dat)
    • pruneblockchain(1000) removes blk2.dat and prev pruned block is no longer there
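
    A condensed sketch of that sequence in test-framework terms (heights and blk*.dat file names as in the walkthrough above; pruned_block is, as above, the hash of the block fetched with getblockfrompeer; this assumes the pruned node runs with pruning enabled and small block files, e.g. -fastprune, so that a few hundred regtest blocks span several files):

        pruned_node = self.nodes[2]

        self.generate(self.nodes[0], 400)             # tip around height 604
        pruned_node.pruneblockchain(300)              # blk0.dat removed, pruned_block gone

        peer_id = pruned_node.getpeerinfo()[0]["id"]
        pruned_node.getblockfrompeer(pruned_block, peer_id)  # re-stored in the file holding the tip

        self.generate(self.nodes[0], 250)             # tip around 854
        pruned_node.pruneblockchain(700)              # blk1.dat removed, fetched block still there

        self.generate(self.nodes[0], 250)             # tip around 1104
        pruned_node.pruneblockchain(1000)             # blk2.dat removed, fetched block pruned again
        assert_raises_rpc_error(-1, "Block not available (pruned data)",
                                pruned_node.getblock, pruned_block)
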
  56. DrahtBot requested review from brunoerg on Mar 8, 2023
  57. DrahtBot requested review from jonatack on Mar 8, 2023
  58. Sjors commented at 5:10 pm on March 9, 2023: member
    re-utACK fe329dc936d1e02da406345e4223e11d1fa6fb38
  59. DrahtBot removed review request from Sjors on Mar 9, 2023
  60. brunoerg approved
  61. brunoerg commented at 12:13 pm on March 10, 2023: contributor
    re-ACK fe329dc936d1e02da406345e4223e11d1fa6fb38
  62. maflcko commented at 12:32 pm on March 10, 2023: member

    review ACK fe329dc936d1e02da406345e4223e11d1fa6fb38 🍉

    Signature:

    0untrusted comment: signature from minisign secret key on empty file; verify via: minisign -Vm "${path_to_any_empty_file}" -P RWTRmVTMeKV5noAMqVlsMugDDCyyTSbA3Re5AkUrhvLVln0tSaFWglOw -x "${path_to_this_whole_four_line_signature_blob}"
    1RUTRmVTMeKV5npGrKx1nqXCw5zeVHdtdYURB/KlyA/LMFgpNCs+SkW9a8N95d+U4AP1RJMi+krxU1A3Yux4bpwZNLvVBKy0wLgM=
    2trusted comment: review ACK fe329dc936d1e02da406345e4223e11d1fa6fb38 🍉
    3u3CI1IY3BfxGVCvywNIp4h0mrUP/3/VdSmvbxT7Lm+Zq1O8vpmSbEHTjPOu+KM/2fiovMCWbbc44ZC0cYHe+BA==
    
  63. fanquake merged this on Mar 10, 2023
  64. fanquake closed this on Mar 10, 2023

  65. sidhujag referenced this in commit e39dbe978e on Mar 10, 2023
  66. bitcoin locked this on Mar 9, 2024
