RFC: keepheight RPC for pruned nodes #13943

issue Sjors opened this issue on August 11, 2018
  1. Sjors commented at 12:52 PM on August 11, 2018: member

    Originally suggested by @jb55 here on the c-lightning repo.

    A prune event can cause data to disappear that an RPC client still needs.

    As long as the client doesn't fall too far behind this is unlikely to happen, because we always keep the most recent 550MB around. But even then it's nice to have certainty, and applications do sometimes crash, or get DOS'ed.

    A bigger problem are wallets, which can now be dynamically loaded and unloaded. In addition, multiprocess #10102 would allow use cases like leaving the daemon running and only loading the wallet when the GUI starts, a much snappier user experience than we have now where the user always has to wait at launch. But with a pruned node, this can lead to rescan failing because it fell too far behind, forcing the user to re-download the entire chain.

    So it would be nice if RPC clients (as well as a wallet process) could tell the node not to prune below a certain height. That creates three new problems:

    1. what do you do when storage reaches the prune= target?
    2. what if the client forgets to cancel this request?
    3. do we want to bother tracking this per RPC user, and how do you combine these wishes?

    One approach is to have a second parameter that specifies the max additional storage allowed before pruning kicks in regardless:

    • keepheight <height> <maxspace> with current_height and 300 MB (~ 1 day) being the default

    Multiple RPC users could call this, and we could track the lowest requested height and the highest maxspace. These automatically expire once there's more than maxspace block data on top of height.
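    The combining rule above could be sketched roughly as follows. This is a hypothetical illustration, not Bitcoin Core code: the `KeepHeight` class, the `AVG_BLOCK_SIZE_MB` constant, and the MB-based expiry estimate are all invented here for clarity.

    ```python
    # Hypothetical sketch of the proposed keepheight combining rule:
    # keep the lowest requested height and the highest maxspace, and let
    # the hold expire once more than maxspace of block data has
    # accumulated on top of that height.

    AVG_BLOCK_SIZE_MB = 1.2  # assumed average block size, illustration only


    class KeepHeight:
        def __init__(self):
            self.height = None    # lowest height any client asked us to keep
            self.maxspace_mb = 0  # highest maxspace any client asked for

        def request(self, height, maxspace_mb=300):
            # 300 MB is roughly one day of blocks, per the proposal above
            if self.height is None or height < self.height:
                self.height = height
            self.maxspace_mb = max(self.maxspace_mb, maxspace_mb)

        def prune_floor(self, tip_height):
            """Lowest height pruning must not remove, or None if expired."""
            if self.height is None:
                return None
            space_on_top = (tip_height - self.height) * AVG_BLOCK_SIZE_MB
            if space_on_top > self.maxspace_mb:
                # more than maxspace of blocks above the hold: it expires
                self.height = None
                self.maxspace_mb = 0
                return None
            return self.height
    ```

    With two clients requesting `(600000, 300)` and `(599900, 500)`, the node would keep everything from 599900 up until roughly 500 MB of blocks have been mined on top, then resume normal pruning automatically.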

    This is enough for the use case of a client like c-lightning that's usually up to date, but might be away for a day due to a crash. However they might as well just recommend prune=850 for the same effect, so I'm not sure how useful this is. Although it automatically expires, there's no way for a client to clean this up.

    Another approach is to allow any RPC client to set whatever value they like, as well as read the current value. That means clients can clean up, but they have no way of knowing if there are other clients, so it seems brittle.

    Keeping track of this setting per RPC user, and then honoring the "max", seems the most sensible.

    Instead of a maxspace parameter, the node could also pause syncing when it's about to run out of space (<maxpause> hours). However this seems impractical if e.g. a user opens a wallet once and then deletes it.

    So perhaps there's no point in any of this and instead we should just refetch a range of historical blocks if they're needed, something #10794 can do.

  2. NicolasDorier commented at 4:55 AM on August 15, 2018: contributor

    Instead of a new method, it would be better to extend the existing pruneblockchain <height> with an optional argument.

  3. fanquake added the label Brainstorming on Sep 6, 2018
  4. Sjors commented at 2:14 PM on February 19, 2019: member

    The proposed Lightning 1.1 specification adds channel_announcement_plus_proof (channel_announce + tx + merkle), i.e. a Merkle proof with each announcement. Those Merkle proofs can be verified beyond the prune window, so it seems this idea is no longer necessary.

  5. Sjors closed this on Feb 19, 2019

  6. DrahtBot locked this on Dec 16, 2021

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-04-14 09:15 UTC
