JSON RPC Batch Requests Can Consistently OOM Kill Bitcoin Process #14376

etscrivner opened this issue on October 3, 2018
  1. etscrivner commented at 3:56 am on October 3, 2018: contributor

    Description

    It is possible to consistently cause a bitcoind process to be OOM killed on Linux (and likely terminated on other operating systems) by sending excessively large batched JSON RPC requests. There is currently no configurable way to limit batch sizes, so this can be done to any bitcoind instance exposing the JSON RPC interface.

    Expected Behavior

    The server responds with a “batch too large” error or equivalent whenever a configured batch size limit is exceeded.

    Actual Behavior

    Bitcoind attempts to satisfy the request until it exhausts all available memory and is then OOM killed by the Linux kernel.

    Reproduction

    This issue can be easily reproduced under the following conditions:

    • Create a fresh Bitcoin Docker image using Ubuntu Linux.
    • Run the image with -p 8332:8332 -p 8333:8333 -m=100m to restrict the available memory to 100MB and expose the RPC and P2P ports.
    • Run bitcoind with the following flags: -testnet -rpcuser=<user> -rpcpassword=<pass> -rpcbind=0.0.0.0 -rpcallowip=0.0.0.0/0
    • Wait for the first ~25,000 testnet blocks to sync.
    • Execute a JSON RPC batch request calling getblockhash for the first 20,000 block heights.
    • Execute a JSON RPC batch request calling getblock(<hash>, 2) for each of those 20,000 hashes, requesting all 20,000 blocks and their transactions in one batch response (see the sketch after this list).
    • The bitcoind process eventually exhausts memory and is terminated.
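
    For concreteness, here is a minimal reproduction sketch (not from the original report) that builds the first batch of 20,000 getblockhash calls and POSTs it as a single JSON RPC request. It assumes libcurl (build with g++ repro.cpp -lcurl); the URL and credentials are placeholders for the node configured above, and the second, fatal step is the same loop emitting {"jsonrpc":"1.0","id":i,"method":"getblock","params":["<hash>",2]} entries instead:

    #include <curl/curl.h>
    #include <cstdio>
    #include <string>

    int main()
    {
        // One batch = a JSON array of request objects, sent in a single POST.
        std::string batch = "[";
        for (int height = 0; height < 20000; ++height) {
            if (height > 0) batch += ",";
            batch += "{\"jsonrpc\":\"1.0\",\"id\":" + std::to_string(height) +
                     ",\"method\":\"getblockhash\",\"params\":[" +
                     std::to_string(height) + "]}";
        }
        batch += "]";

        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (!curl) return 1;
        curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:8332/"); // placeholder endpoint
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:pass");          // placeholder credentials
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, batch.c_str());
        if (curl_easy_perform(curl) != CURLE_OK) {
            // On the getblock batch, the socket typically EOFs here when
            // bitcoind is OOM killed mid-response.
            std::fprintf(stderr, "request failed\n");
        }
        curl_easy_cleanup(curl);
        curl_global_cleanup();
        return 0;
    }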

    The bitcoind process will be OOM killed by the Linux kernel:

    [screenshot: 2018-10-02, 8:23:31 PM]

    The socket on the RPC request will EOF:

    [screenshot: 2018-10-02, 8:32:25 PM]

    Looking in the system logs you will see something like:

    Out of memory: kill process 1984 (bitcoind) score 910 or sacrifice child
    <snip>
    bitcoin-httpwor invoked oom-killer: gfp_mask=0x14201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD), nodemask=(null), order=0, oom_score_adj=0
    

    Machine Type

    This reproduces on AWS instance types running Gentoo or Ubuntu Linux, and should be reproducible on any Linux system with the OOM killer enabled.

  2. fanquake added the label RPC/REST/ZMQ on Oct 3, 2018
  3. MarcoFalke commented at 4:01 am on October 3, 2018: member
    The RPC interface is not hardened against attackers and should only be called from internal, well-behaved code, not exposed to the outside directly.
  4. MarcoFalke added the label Resource usage on Oct 3, 2018
  5. MarcoFalke added the label Brainstorming on Oct 3, 2018
  6. etscrivner commented at 4:06 am on October 3, 2018: contributor
    @MarcoFalke Absolutely agree that RPC interfaces are quite risky to expose publicly, but one can never guarantee that the code interfacing with a node, even if trusted, is “well-behaving”. This issue was discovered when a trusted internal network user accidentally ran a script overnight requesting overly large batches of blocks from a Bitcoin node.
  7. sipa commented at 5:09 am on October 3, 2018: member

    Agree with @MarcoFalke that this is not a critical issue as RPC clients are assumed to be trusted.

    That doesn’t mean we shouldn’t fix this issue, though.

  8. MarcoFalke commented at 6:01 am on October 3, 2018: member
    Sure, if there is a way to prevent the crash, let’s do it, but I am not sure we can efficiently estimate the memory usage of each request.
  9. jonasschnelli commented at 9:08 am on October 3, 2018: contributor

    Agree with @sipa and @MarcoFalke. There is no need for too much hand-holding on the RPC interface. It’s assumed authenticated/trusted… you can “shutdown” bitcoind more efficiently with a stop command than by sending 20k getblockhash requests in a single batch. :-)

    Of course we could make it more sane, as @sipa said… but I also don’t see a pressing need for this.

  10. etscrivner commented at 4:02 pm on October 3, 2018: contributor

    Yeah, I don’t think this is urgent by any means, but I believe it could be improved without too much work. I’m happy to take a crack at a fix but wanted to leave room for some comments first.

    One simple proposal I’ve considered that I’d be interested in feedback on:

    PROPOSAL Add a new configuration option -rpcmaxbatchsize=<numrequests>:

    • Takes the maximum number of requests allowed in a single RPC batch request.
    • Default: unlimited (-1), the current behavior.
    • When 0: either disable batching entirely or trigger a configuration error (no preference here).
    • When less than -1: trigger a configuration error.

    When rpcmaxbatchsize is configured AND # of requests in batch > rpcmaxbatchsize:

    • Throw a JSONRPCError with code RPC_INVALID_PARAMETER and message “Number of requests in batch exceeds rpcmaxbatchsize”.

    Enforcing this would be as simple as adding a guard clause at the top of JSONRPCExecBatch that checks vReq.size().
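
    A rough sketch of that guard, to make the proposal concrete. Note that -rpcmaxbatchsize and DEFAULT_RPC_MAX_BATCH_SIZE are the proposed (not yet existing) option and default, while gArgs.GetArg, JSONRPCError, and RPC_INVALID_PARAMETER are existing Bitcoin Core facilities; the exact placement inside JSONRPCExecBatch is assumed:

    // Proposed default: -1 = unlimited, i.e. today's behavior.
    static const int64_t DEFAULT_RPC_MAX_BATCH_SIZE{-1};

    // At the top of JSONRPCExecBatch, before any element of vReq is dispatched:
    const int64_t max_batch{gArgs.GetArg("-rpcmaxbatchsize", DEFAULT_RPC_MAX_BATCH_SIZE)};
    if (max_batch >= 0 && vReq.size() > static_cast<size_t>(max_batch)) {
        throw JSONRPCError(RPC_INVALID_PARAMETER,
                           "Number of requests in batch exceeds rpcmaxbatchsize");
    }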

    UPSIDES

    • Simple to implement and enforce.
    • Provides some degree of crash safety in the presence of malfunctioning or misconfigured software interacting with a node over RPC.

    DOWNSIDES

    • Because the enforcement is on batch request size and not on memory used, this would disallow some large batch requests that would otherwise be able to succeed. However, the equivalent information can always just be aggregated from multiple smaller batch requests.
    • Requires node administrators to do some back-of-the-envelope math to configure a reasonable rpcmaxbatchsize based on available system memory and the number of RPC threads.
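
    As a rough illustration of that math (numbers invented for the example): with the default of 4 RPC worker threads and verbose getblock replies averaging ~1 MB, keeping worst-case concurrent response buffers under ~2 GB would suggest -rpcmaxbatchsize ≈ 2048 MB / 4 threads / 1 MB ≈ 512.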
  11. promag commented at 0:39 am on October 12, 2018: member
    The OOM happens because the batch is processed as a whole: the JSON batch is parsed in one go and the full response is written in one go. Instead, should it be “streamed”?
  12. etscrivner commented at 6:48 pm on October 12, 2018: contributor

    @promag AFAICT there are several ways to tackle this, with varying levels of complexity and therefore cost:

    1. Use a simple cap on the number of requests per batch (see my proposal above).
    2. Try to calculate exact memory usage and return an out-of-memory error when a limit would be exceeded while answering a query.
    3. Figure out how to use chunked transfer encoding, which is possible with libevent but entails some significant changes to the Bitcoin HTTP server (sketched below).

    I’d be interested in additional ideas as well as what folks think about the trade-offs inherent in each.
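
    For what it’s worth, option 3 might look roughly like the sketch below, using the chunked-reply API of libevent (which the Bitcoin HTTP server is built on). This is only an illustration under assumptions: ExecOne is a hypothetical stand-in for dispatching a single sub-request, and wiring this through the actual HTTP server is exactly the “significant changes” mentioned above:

    #include <event2/buffer.h>
    #include <event2/http.h>
    #include <string>
    #include <vector>

    // Hypothetical stand-in: execute one sub-request, return its JSON reply.
    std::string ExecOne(const std::string& request_json)
    {
        return "{\"result\":null,\"error\":null,\"id\":null}"; // dummy reply
    }

    // Stream a batch reply element by element instead of accumulating the
    // whole response in memory. evhttp_send_reply_start() switches to chunked
    // transfer encoding; each evhttp_send_reply_chunk() drains the buffer, so
    // peak memory is bounded by the largest single reply, not the whole batch.
    void StreamBatchReply(evhttp_request* req, const std::vector<std::string>& batch)
    {
        evhttp_send_reply_start(req, HTTP_OK, "OK");
        evbuffer* buf = evbuffer_new();
        for (size_t i = 0; i < batch.size(); ++i) {
            const std::string piece = (i == 0 ? "[" : ",") + ExecOne(batch[i]);
            evbuffer_add(buf, piece.data(), piece.size());
            evhttp_send_reply_chunk(req, buf);
        }
        evbuffer_add(buf, "]", 1);
        evhttp_send_reply_chunk(req, buf);
        evbuffer_free(buf);
        evhttp_send_reply_end(req);
    }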

  13. meshcollider commented at 10:22 am on October 14, 2018: contributor
    Possibly related #11322 and #11368?
  14. promag commented at 7:00 am on October 18, 2018: member

    @etscrivner of course, and to support streaming it would require changes to how the JSON is parsed.

    Regarding #14376 (comment):

    1. Requires parsing the whole JSON request.
    2. I don’t know how you could do that, and it has the same problem as 1.
    3. Maybe not worth the effort.

    Maybe with multi-process separation, with RPC workers as separate processes, we could avoid OOMing the main process.

    IMO we should not limit the batch size; it defeats the purpose of batches. If the node can handle the same requests consecutively, it should also handle them in a batch.

  15. etscrivner commented at 6:30 pm on October 18, 2018: contributor

    @promag Thanks for the thoughtful response here.

    One small point of clarification: as I see it, the issue isn’t really with JSON request parsing (the requests here are very small, see the original bug report). It is instead the size of the response objects. Essentially, it is possible to make bitcoind consume 10+ GB of memory from ~1-2MB of request JSON. I take your point that the requests themselves could also be excessively large and cause the same issue, but this seems less likely given the trusted multi-user configuration described below. It is this non-linear relationship between request size and response size that makes the problem easy to trigger accidentally.

    Many businesses deploy bitcoin nodes in a multi-user configuration - where one or several full nodes are accessed by trusted individuals across many departments. This simplifies deployment and maintenance of the nodes and also lowers costs by trading off trust for convenience.

    Capping the batch size helps indirectly prevent this explosion of memory usage when users of a shared full node accidentally attempt to run request batches that are simply too large.

    P.S. Whoops, accidentally closed this issue.

  16. etscrivner closed this on Oct 18, 2018

  17. DrahtBot locked this on Sep 8, 2021
