consider adding a new interface RawTxFeed on Mining IPC #34030

plebhash opened this issue on December 8, 2025
  1. plebhash commented at 9:31 am on December 8, 2025: none

    Sv2 Job Declarator Server (a.k.a. JDS) needs to keep an in-memory representation of the mempool so it can validate the custom jobs it receives via the DeclareMiningJob message

    currently, SRI JDS polls some RPCs against Bitcoin Core, but polling is rather suboptimal

    on https://github.com/stratum-mining/sv2-apps/issues/26 we considered switching to an approach where we subscribe for the rawtx ZMQ feed… JDS would have some task that’s solely dedicated to consuming this feed

    however, I hear ZMQ is not super reliable and this might be a good opportunity to leverage IPC, so I’m creating this issue to brainstorm with @ryanofsky @Sjors @ismaelsadeeq whether this would make sense

    in order to provide something similar to a rawtx ZMQ feed, I can imagine the following:

    • new createRawTxFeed method under interface Mining, returning a RawTxFeed as a result
    • new interface RawTxFeed with a waitNext method

    (although I probably have blindspots and core devs might be able to imagine better approaches)
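    To make the proposed shape concrete, here is a toy, in-process stand-in for the createRawTxFeed / waitNext idea sketched above. The names come from the proposal in this issue, not from any existing Bitcoin Core API; a real implementation would live behind Cap'n Proto IPC, not a Python queue.

```python
# Toy stand-in for the *proposed* Mining.createRawTxFeed / RawTxFeed.waitNext
# shape. These names are from the proposal above; no such IPC API exists yet.
import queue


class RawTxFeed:
    """The node pushes raw txs in; the client (JDS) blocks on wait_next()."""

    def __init__(self):
        self._q = queue.Queue()

    def push(self, rawtx: bytes):
        # node side: called for every tx accepted to the mempool
        self._q.put(rawtx)

    def wait_next(self) -> bytes:
        # client side: blocks until the next tx is available
        return self._q.get()


def create_raw_tx_feed() -> RawTxFeed:
    # stand-in for the proposed createRawTxFeed method on interface Mining
    return RawTxFeed()
```

    A JDS task dedicated to consuming the feed would then just loop on wait_next() and insert each tx into its local mempool view.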

    getTransactionsByWitnessId (to be potentially introduced via #34020) would be useful as a secondary mechanism, to be leveraged if JDS receives a DeclareMiningJob containing some wtxid that hasn’t arrived on the feed yet, which could happen for two different reasons:

    • this specific wtxid is on the back of the queue while the chained calls to RawTxFeed.waitNext didn’t fully catch up yet, in which case getTransactionsByWitnessId will accelerate the process
    • Bitcoin Core simply hasn’t seen this tx yet, in which case getTransactionsByWitnessId will return an error and JDS has no option but to send ProvideMissingTransactions to JDC
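    The two fallback branches in the bullets above can be sketched as follows. resolve_wtxid and node_lookup are illustrative names: node_lookup plays the role of the proposed getTransactionsByWitnessId, and a None result stands in for its error case.

```python
# Hedged sketch of the two fallback branches described above (names made up).
def resolve_wtxid(wtxid, local_mempool, node_lookup):
    if wtxid in local_mempool:
        return local_mempool[wtxid]  # fast path: the feed already delivered it
    tx = node_lookup(wtxid)          # feed hasn't caught up yet: ask the node
    if tx is not None:
        return tx                    # accelerated via getTransactionsByWitnessId
    # the node has never seen this tx either: the caller must fall back to
    # sending ProvideMissingTransactions to JDC
    return None
```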
  2. fanquake added the label interfaces on Dec 8, 2025
  3. Sjors commented at 9:57 am on December 8, 2025: member

    JDS polls some RPCs against Bitcoin Core

    Can you enumerate which RPC calls you’re polling? And briefly explain why.

    For streaming changes to the mempool, we’d probably want to introduce a whole new Mempool interface, since there are other use cases (wallets, lightning nodes, block explorers). And such an interface might as well have methods to submit new transactions (packages).

    We haven’t used IPC for ZMQ-style streaming yet. In the Mining interface you call a method and get a result, immediately or after a delay. So this might be quite involved.

    Calling a waitNext method and getting a response for every single new mempool transaction is simpler. But it would be very inefficient when there are dozens of new transactions per second. They could be collected in batches though.
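    A minimal sketch of that batching idea: block for the first tx, then drain whatever else is already queued, so one waitNext round-trip can return many txs instead of one. (The queue here is just a stand-in for whatever buffer the node keeps per feed.)

```python
# Batched variant of waitNext: block for the first tx, then drain the rest.
import queue


def wait_next_batch(q: queue.Queue) -> list:
    batch = [q.get()]                    # block until at least one tx arrives
    while True:
        try:
            batch.append(q.get_nowait())  # drain the backlog without blocking
        except queue.Empty:
            return batch
```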

    You could also use a (package) fee rate filter so the JDS only collects transactions that are likely to be mined, fetching specific missing txs just in time via getTransactionsByWitnessId().

  4. plebhash commented at 10:18 am on December 8, 2025: none

    Can you enumerate which RPC calls you’re polling?

    getrawmempool is the only polled RPC, called at a user-defined period to check whether there are new txs JDS needs to be aware of

    • health is called once at startup
    • getrawtransaction is called (multiple times, one for each new tx) after getrawmempool on each polling cycle
    • submitblock is called ad-hoc after JDC sends PushSolution
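    One cycle of the polling pattern enumerated above can be sketched like this. `rpc` is a stand-in for a JSON-RPC client callable, not SRI's actual code; only the getrawmempool / getrawtransaction call shape mirrors the description.

```python
# Rough sketch of one polling cycle: getrawmempool, then one
# getrawtransaction per txid not seen before. `rpc` is a hypothetical
# JSON-RPC client callable.
def poll_cycle(rpc, known_txs: dict) -> list:
    """Returns the txids that were new this cycle; fills known_txs with
    their raw tx data."""
    new = [txid for txid in rpc("getrawmempool") if txid not in known_txs]
    for txid in new:
        known_txs[txid] = rpc("getrawtransaction", txid)
    return new
```

    The feed-based approach in this issue would replace exactly this loop: instead of asking "what's new?" every period, the node pushes new txs as they arrive.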
  5. plebhash commented at 2:14 pm on December 8, 2025: none

    They could be collected in batches though.

    yeah waitNext probably shouldn’t return one single tx at a time, batching makes more sense

    overall the main point is to be able to have something that allows Bitcoin Core to push mempool updates ASAP, instead of forcing the client to have to poll for it

  6. plebhash commented at 2:29 pm on December 8, 2025: none

    fetching specific missing txs just in time via getTransactionsByWitnessId()

    I feel this should be left as a last-resort kind of thing

    whatever feed JDS gets from Bitcoin Core (be it updates with single or batched txs), it should contain full txdata instead of only wtxids

    then it’s up to JDS implementation to aim to optimize which txs it could selectively drop to avoid unbounded memory consumption

    main point being:

    txdata should be made available sooner, rather than selectively fetched (with extra round-trips on hot Job Declaration paths) later

  7. ryanofsky commented at 2:57 pm on December 8, 2025: contributor

    We haven’t used IPC for ZMQ-style streaming yet. In the Mining interface you call a method and get a result, immediately or after a delay. So this might be quite involved.

    This is true for the mining interface, but the other interfaces in #29409 and #10102 do use streamed notifications. The wallet uses Chain.handleNotifications to start receiving notifications about transactions and blocks:

    https://github.com/bitcoin/bitcoin/blob/d9efd1e49d1df154970b6a60229eedde3ba7cffe/src/ipc/capnp/chain.capnp#L58

    providing a ChainNotifications instance to see transactions added / removed from the mempool:

    https://github.com/bitcoin/bitcoin/blob/d9efd1e49d1df154970b6a60229eedde3ba7cffe/src/ipc/capnp/chain.capnp#L74-L75

    I will say I don’t understand enough of the context behind this to know if this design is ideal, and am curious how the JDS mempool will be managed. Is it supposed to be a superset of the Bitcoin Core mempool plus any transactions that pool participants sent? Does that also mean it will keep transactions that get dropped from the Bitcoin Core mempool, and maybe doesn’t need transactionRemovedFromMempool information?

  8. Sjors commented at 3:25 pm on December 8, 2025: member

    A pool’s JDS processes proposed templates from multiple miners, which will have divergent mempools. So until a block is mined, some templates may still have transactions that the pool itself threw out of its mempool.

    In order for the JDS to prune its pseudo-mempool, it could track which inputs are spent in the new block and (recursively) delete entries that spend them.
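    That pruning idea might look roughly like this (the data shapes are made up for illustration): drop any pseudo-mempool entry that spends a now-spent outpoint, then recursively drop anything spending the dropped entry's outputs.

```python
# Illustrative sketch of recursive pseudo-mempool pruning after a block.
# pseudo_mempool: txid -> {"inputs": set of outpoints, "outputs": set}
def prune(pseudo_mempool: dict, spent: set):
    changed = True
    while changed:
        changed = False
        for txid, tx in list(pseudo_mempool.items()):
            if tx["inputs"] & spent:
                # this tx conflicts with the block (or a dropped ancestor);
                # its outputs can never be spent, so descendants go too
                spent |= tx["outputs"]
                del pseudo_mempool[txid]
                changed = True
```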

  9. plebhash commented at 7:30 am on December 10, 2025: none

    moving discussion from #34013 here, where @Fi3 said

    @plebhash about the stream style design I don’t think that is a good idea, I do not want to have every tx in the mempool around but just the ones that are actually added to templates.

  10. plebhash commented at 7:37 am on December 10, 2025: none

    reply to @Fi3:


    first, let me explain the rationale behind this feature request

    once DeclareMiningJob arrives, I wanted JDS to be able to validate it as fast as possible

    assuming the best case scenario (where all wtxids have already been seen and no extra ProvideMissingTransactions round-trip is needed), having all txs readily available on JDS memory would allow the validation of DeclareMiningJob to happen ASAP

    if we’re worried about unbounded memory growth, low-fee txs (which are unlikely to be in a template anyways) could always be evicted after they arrive from the stream


    now entertaining your perspective:

    alternatively, we could call getTransactionsByWitnessId (to be potentially introduced via #34020) every time DeclareMiningJob arrives

    while we could be comfortable with this extra round-trip over a local UNIX socket (because it’s fast), we should keep in mind that IPC could potentially be extended to TCP as well, which could introduce room for undesirable latency

    and that’s where the stream-based approach could come in handy, because JDS would be able to have all txs readily available in memory when DeclareMiningJob arrives

  11. Sjors commented at 9:11 am on December 10, 2025: member

    low-fee txs […] could always be evicted after they arrive from the stream

    That’s not so easy, because you can’t judge transactions by their individual fee rate. We introduced the cluster mempool approach to handle this.

    If you want to filter the stream, it’s better to do so on the node side. But we’d need a different interface method for that, one that emits (chunks of) clusters above a certain fee rate threshold.
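    The node-side filter idea could be sketched as below. A "chunk" is simplified here to a list of (fee, vsize) pairs; real cluster-mempool chunks come out of cluster linearization, which this does not attempt to model. The point is only that the filter must judge the chunk's aggregate fee rate, not each transaction individually.

```python
# Hedged sketch: emit only chunks whose *aggregate* fee rate clears the
# threshold. Chunk = list of (fee_sats, vsize) pairs; cluster linearization
# itself is out of scope here.
def chunks_above(chunks, min_feerate_sat_vb: float) -> list:
    out = []
    for chunk in chunks:
        fee = sum(f for f, _ in chunk)
        vsize = sum(v for _, v in chunk)
        if vsize and fee / vsize >= min_feerate_sat_vb:
            out.append(chunk)  # the whole chunk clears the threshold together
    return out
```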

  12. Fi3 commented at 8:57 am on December 11, 2025: none

    reply to @Fi3:

    first, let me explain the rationale behind this feature request

    once DeclareMiningJob arrives, I wanted JDS to be able to validate it as fast as possible

    assuming the best case scenario (where all wtxids have already been seen and no extra ProvideMissingTransactions round-trip is needed), having all txs readily available on JDS memory would allow the validation of DeclareMiningJob to happen ASAP

    if we’re worried about unbounded memory growth, low-fee txs (which are unlikely to be in a template anyways) could always be evicted after they arrive from the stream

    now entertaining your perspective:

    alternatively, we could call getTransactionsByWitnessId (to be potentially introduced via #34020) every time DeclareMiningJob arrives

    while we could be comfortable with this extra round-trip over a local UNIX socket (because it’s fast), we should keep in mind that IPC could potentially be extended to TCP as well, which could introduce room for undesirable latency

    and that’s where the stream-based approach could come in handy, because JDS would be able to have all txs readily available in memory when DeclareMiningJob arrives

    keep in mind that you will see unknown txs in a very small percentage of declared jobs, so the overall impact is very very low

  13. plebhash commented at 3:54 pm on December 11, 2025: none

    ok, I guess we can stick with getTransactionsByWitnessId (to be potentially introduced via #34020) for https://github.com/stratum-mining/sv2-apps/issues/24

    if these IPC round-trips ever prove to be an undesirable bottleneck, we can revisit the ideas proposed here.

  14. plebhash closed this on Dec 11, 2025


github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2025-12-17 06:13 UTC

This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me