crash during IPC cleanup after interruptWait() is called #34250

plebhash opened this issue on January 10, 2026
  1. plebhash commented at 10:51 pm on January 10, 2026: none

    while working on https://github.com/stratum-mining/sv2-apps/pull/177, I’m noticing a new crash

    steps to reproduce:

    • run Bitcoin Core v30.2 on mainnet
    • git clone https://github.com/plebhash/sv2-apps -b 2026-01-09-bitcoin-core-v30-2
    • edit sv2-apps/pool-apps/pool/config-examples/mainnet/pool-config-bitcoin-core-ipc-example.toml so that unix_socket_path field points to node.sock
    • launch Pool via: cd sv2-apps/pool-apps/pool; cargo run -- -c config-examples/mainnet/pool-config-bitcoin-core-ipc-example.toml
    • let Pool run for a while
    • kill Pool with ctrl+c

    this does not happen deterministically, so a few attempts at launching and then killing the Pool might be needed

    but eventually Bitcoin Core crashes with a few different log variations:

     2026-01-10T22:48:03Z [ipc] {bitcoin-node-35006/b-capnp-loop-203488} IPC server recv request  #96 BlockTemplate.getCoinbaseMerklePath$Params
     2026-01-10T22:48:03Z [ipc] {bitcoin-node-35006/b-capnp-loop-203488} IPC server post request  #96 {bitcoin-node-35006/206817 (from )}
     2026-01-10T22:48:03Z [ipc] {bitcoin-node-35006/b-capnp-loop-203488} IPC server send response #96 BlockTemplate.getCoinbaseMerklePath$Results
     2026-01-10T22:48:07Z [ipc] {bitcoin-node-35006/b-capnp-loop-203488} IPC server recv request  #97 BlockTemplate.waitNext$Params
     2026-01-10T22:48:07Z [ipc] {bitcoin-node-35006/b-capnp-loop-203488} IPC server post request  #97 {bitcoin-node-35006/206817 (from )}
     2026-01-10T22:48:07Z [ipc:info] {bitcoin-node-35006/b-capnp-loop-203488} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages13BlockTemplateEEE
     2026-01-10T22:48:07Z [ipc:info] {bitcoin-node-35006/b-capnp-loop-203488} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages13BlockTemplateEEE
     2026-01-10T22:48:07Z [ipc:info] {bitcoin-node-35006/b-capnp-loop-203488} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages13BlockTemplateEEE
     2026-01-10T22:48:07Z [ipc:info] {bitcoin-node-35006/b-capnp-loop-203488} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages13BlockTemplateEEE
     2026-01-10T22:48:07Z [ipc:info] {bitcoin-node-35006/b-capnp-loop-203488} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages13BlockTemplateEEE
     2026-01-10T22:48:07Z [ipc:info] {bitcoin-node-35006/b-capnp-loop-203488} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages13BlockTemplateEEE
     2026-01-10T22:48:07Z [ipc:info] {bitcoin-node-35006/b-capnp-loop-203488} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages13BlockTemplateEEE
     2026-01-10T22:48:07Z [ipc:info] {bitcoin-node-35006/b-capnp-loop-203488} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages6MiningEEE
     2026-01-10T22:48:07Z [ipc:info] {bitcoin-node-35006/b-capnp-loop-203488} IPC server: socket disconnected.
     2026-01-10T22:48:07Z [ipc:info] {bitcoin-node-35006/b-capnp-loop-203488} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages4InitEEE
     2026-01-10T22:48:08Z CreateNewBlock(): block weight: 3998742 txs: 599 fees: 515599 sigops 13307
     libc++abi: terminating due to uncaught exception of type std::__1::system_error: mutex lock failed: Invalid argument
     [1]    35006 abort      ./bitcoin-30.2/bin/bitcoin -m node -ipcbind=unix -prune=555 -debug=ipc
    
     2026-01-10T22:22:02Z [ipc] {bitcoin-node-19528/b-capnp-loop-144717} IPC server send response #134 BlockTemplate.getCoinbaseMerklePath$Results
     2026-01-10T22:22:03Z [ipc] {bitcoin-node-19528/b-capnp-loop-144717} IPC server recv request  #135 BlockTemplate.waitNext$Params
     2026-01-10T22:22:03Z [ipc] {bitcoin-node-19528/b-capnp-loop-144717} IPC server post request  #135 {bitcoin-node-19528/147843 (from )}
     2026-01-10T22:22:04Z CreateNewBlock(): block weight: 3998510 txs: 4931 fees: 315281 sigops 999
     2026-01-10T22:22:04Z [ipc:info] {bitcoin-node-19528/b-capnp-loop-144717} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages13BlockTemplateEEE
     2026-01-10T22:22:04Z [ipc:info] {bitcoin-node-19528/b-capnp-loop-144717} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages13BlockTemplateEEE
     2026-01-10T22:22:04Z [ipc:info] {bitcoin-node-19528/b-capnp-loop-144717} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages13BlockTemplateEEE
     2026-01-10T22:22:04Z [ipc:info] {bitcoin-node-19528/b-capnp-loop-144717} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages13BlockTemplateEEE
     2026-01-10T22:22:04Z [ipc:info] {bitcoin-node-19528/b-capnp-loop-144717} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages6MiningEEE
     2026-01-10T22:22:04Z [ipc:info] {bitcoin-node-19528/b-capnp-loop-144717} IPC server: socket disconnected.
     2026-01-10T22:22:04Z [ipc:info] {bitcoin-node-19528/b-capnp-loop-144717} IPC server destroy N2mp11ProxyServerIN3ipc5capnp8messages4InitEEE
     2026-01-10T22:22:05Z CreateNewBlock(): block weight: 3998649 txs: 4928 fees: 336652 sigops 1030
     Assertion failed: (m_loop), function operator*, file proxy.h, line 59.
     [1]    19528 abort      ./bitcoin-30.2/bin/bitcoin -m node -ipcbind=unix -prune=555 -debug=ipc
    

    cc @ryanofsky @ismaelsadeeq

  2. plebhash renamed this:
    crash during IPC cleanup after `interruptWait()` is called: assertion failure in `proxy.h:59`
    crash during IPC cleanup after `interruptWait()` is called
    on Jan 10, 2026
  3. ismaelsadeeq commented at 8:00 pm on January 11, 2026: member

    I was able to reliably reproduce this with the steps provided by @plebhash.

    I attached a debugger and set a breakpoint to identify where the crash occurs on the node side.

    In all my attempts, the crash occurs after the wait has returned and you press CTRL+C.

    From my debugging, it seems the internals of interruptWait are not even executing when this happens: the breakpoint I set inside it was never hit, and a log message I added there was never printed.

    Commenting out all instructions in interruptWait also does not prevent the issue.

     2026-01-11T19:53:55Z [net] received: tx (373 bytes) peer=7
     2026-01-11T19:53:55Z [validation] Enqueuing TransactionAddedToMempool: txid=3fac3d8951ebe1b164ba7ffc7240393ff9cb0da2509f04d577f6b8935b6bdee7 wtxid=9eda298e92ddc4fb16900831a8c40cfadb5bb59b7a369f9025336c57db33e1e9
     2026-01-11T19:53:55Z [mempool] AcceptToMemoryPool: peer=7: accepted 3fac3d8951ebe1b164ba7ffc7240393ff9cb0da2509f04d577f6b8935b6bdee7 (wtxid=9eda298e92ddc4fb16900831a8c40cfadb5bb59b7a369f9025336c57db33e1e9) (poolsz 2194 txn, 119807 kB)
     2026-01-11T19:53:55Z [validation] TransactionAddedToMempool: txid=3fac3d8951ebe1b164ba7ffc7240393ff9cb0da2509f04d577f6b8935b6bdee7 wtxid=9eda298e92ddc4fb16900831a8c40cfadb5bb59b7a369f9025336c57db33e1e9
     2026-01-11T19:53:55Z CreateNewBlock(): block weight: 3998780 txs: 935 fees: 510856 sigops 13846
     2026-01-11T19:53:55Z [bench]     - Sanity checks: 0.89ms [0.03s (0.56ms/blk)]
     2026-01-11T19:53:55Z [bench]     - Fork checks: 0.03ms [0.04s (0.60ms/blk)]
     2026-01-11T19:53:55Z [bench]       - Connect 936 transactions: 6.84ms (0.007ms/tx, 0.001ms/txin) [0.66s (10.84ms/blk)]
     2026-01-11T19:53:55Z [bench]     - Verify 13572 txins: 6.84ms (0.001ms/txin) [0.68s (11.19ms/blk)]
     2026-01-11T19:53:55Z [bench] CreateNewBlock() chunks: 0.53ms, validity: 9.41ms (total 9.94ms)
     2026-01-11T19:53:55Z >>>>>>>>>>>>>>> Wait returned
     bitcoin-node: ipc/libmultiprocess/include/mp/proxy.h:59: EventLoop &mp::EventLoopRef::operator*() const: Assertion `m_loop' failed.
    

    Will investigate more

    Tip for reproducing without multiple runs: press CTRL+C as soon as you see “Mempool fees increased! Sending NewTemplate message.” from the pool.

  4. ryanofsky commented at 5:15 pm on January 12, 2026: contributor

    Thanks for the report @plebhash and thanks for debugging @ismaelsadeeq. Based on the logs provided, I was able to reproduce the crash with a python IPC client. The logs just show a BlockTemplate.waitNext() call being sent, followed by a disconnect before there is a waitNext response.

    The problem happens when there are fee increases, causing the BlockTemplate.waitNext() call to return a pointer to a new BlockTemplate. Because the pointer is non-null, libmultiprocess tries to create a ProxyServer<BlockTemplate> to be able to return the BlockTemplate reference back to the IPC client, but the IPC connection is already destroyed by this point, so the ProxyServer construction fails and causes a crash.

    I think it should be straightforward to prevent this crash by having the ProxyServer constructor (or the code which calls it) check whether the connection is intact before proceeding, and throw an exception and log an error otherwise.
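    The suggested guard can be sketched in a few lines of standalone C++. The types below (EventLoop, Connection, ProxyServerSketch) are illustrative stand-ins, not the real libmultiprocess classes:

    ```cpp
    #include <memory>
    #include <stdexcept>

    // Hypothetical stand-ins for libmultiprocess types, for illustration only.
    struct EventLoop {};

    struct Connection {
        EventLoop* m_loop{nullptr}; // Becomes null once the connection is torn down.
    };

    // Sketch of the suggested guard: refuse to construct a proxy server on a
    // dead connection, instead of asserting deep inside EventLoopRef::operator*.
    template <typename Impl>
    class ProxyServerSketch {
    public:
        ProxyServerSketch(std::shared_ptr<Impl> impl, Connection& connection)
            : m_impl(std::move(impl))
        {
            if (!connection.m_loop) {
                // Fail loudly but recoverably: the caller can log this and drop
                // the result rather than aborting the whole node.
                throw std::runtime_error("IPC connection closed before result could be sent");
            }
            m_connection = &connection;
        }
        Connection* connection() const { return m_connection; }
    private:
        std::shared_ptr<Impl> m_impl;
        Connection* m_connection{nullptr};
    };
    ```

    The point of throwing instead of asserting is that an exception unwinds only the in-flight IPC request, while the failed assert tears down the process.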

    The problem can also be prevented on the client side by calling interruptWait and waiting for the waitNext response before disconnecting. This crash should be preventable on the server side, though, and it is worth doing that. Annotated stack trace is below:

      #4  0x00007ffff72396f7 in __assert_fail () from /nix/store/xx7cm72qy2c0643cm1ipngd87aqwkcdp-glibc-2.40-66/lib/libc.so.6
      #5  0x0000555555e153ec in mp::EventLoopRef::operator* (this=0x7fffe8001dc0) at ./ipc/libmultiprocess/include/mp/proxy.h:59

      >   59      EventLoop& operator*() const { assert(m_loop); return *m_loop; }

      #6  0x0000555556400cd1 in mp::ProxyContext::ProxyContext (this=0x7fff68000f58, connection=0x7fffe8001dc0) at ./ipc/libmultiprocess/src/mp/proxy.cpp:81

      >   81  ProxyContext::ProxyContext(Connection* connection) : connection(connection), loop{*connection->m_loop} {}

      #7  0x0000555555f9c625 in mp::ProxyServerBase<ipc::capnp::messages::BlockTemplate, interfaces::BlockTemplate>::ProxyServerBase (this=0x7fff68000f40,
          vtt=0x555556ea8a10 <VTT for mp::ProxyServer<ipc::capnp::messages::BlockTemplate>+16>, impl=std::shared_ptr<interfaces::BlockTemplate> (empty) = {...}, connection=...)
          at ./ipc/libmultiprocess/include/mp/proxy-io.h:521

         519  template <typename Interface, typename Impl>
         520  ProxyServerBase<Interface, Impl>::ProxyServerBase(std::shared_ptr<Impl> impl, Connection& connection)
      >  521      : m_impl(std::move(impl)), m_context(&connection)
         522  {
         523      assert(m_impl);
         524  }

      #8  0x0000555555f9c546 in mp::ProxyServerCustom<ipc::capnp::messages::BlockTemplate, interfaces::BlockTemplate>::ProxyServerBase (this=0x7fff68000f40,
          vtt=0x555556ea8a08 <VTT for mp::ProxyServer<ipc::capnp::messages::BlockTemplate>+8>) at ./ipc/libmultiprocess/include/mp/proxy.h:197

         195  struct ProxyServerCustom : public ProxyServerBase<Interface, Impl>
         196  {
      >  197      using ProxyServerBase<Interface, Impl>::ProxyServerBase;
         198  };

      #9  0x0000555555f9c3db in mp::ProxyServer<ipc::capnp::messages::BlockTemplate>::ProxyServerBase (this=0x7fff68000f40) at capnp/mining.capnp.proxy.h:401

         397  template<>
         398  struct ProxyServer<ipc::capnp::messages::BlockTemplate> : public ProxyServerCustom<ipc::capnp::messages::BlockTemplate, interfaces::BlockTemplate>
         399  {
         400  public:
      >  401      using ProxyServerCustom::ProxyServerCustom;
         402      ~ProxyServer();

      #10 0x0000555555f9c188 in kj::heap<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, std::shared_ptr<interfaces::BlockTemplate>, mp::Connection&> (params=..., params=...)
          at /nix/store/8xjrdb1y9ng6w6yknzg9h6fkxd3sb824-capnproto-1.2.0/include/kj/memory.h:609

      >  609    return Own<T>(new T(kj::fwd<Params>(params)...), _::HeapDisposer<T>::instance);

      #11 0x0000555555f9c0cf in mp::MakeProxyServer<ipc::capnp::messages::BlockTemplate, interfaces::BlockTemplate> (context=..., impl=std::shared_ptr<interfaces::BlockTemplate> (empty) = {...})
          at ./ipc/libmultiprocess/include/mp/type-interface.h:14

          11  template <typename Interface, typename Impl>
          12  kj::Own<typename Interface::Server> MakeProxyServer(InvokeContext& context, std::shared_ptr<Impl> impl)
          13  {
      >   14      return kj::heap<ProxyServer<Interface>>(std::move(impl), context.connection);
          15  }

      #12 0x0000555555f9ba8a in mp::CustomMakeProxyServer<ipc::capnp::messages::BlockTemplate, interfaces::BlockTemplate> (context=..., impl=...) at ./ipc/libmultiprocess/include/mp/type-interface.h:20

          17  template <typename Interface, typename Impl>
          18  kj::Own<typename Interface::Server> CustomMakeProxyServer(InvokeContext& context, std::shared_ptr<Impl>&& impl)
          19  {
      >   20      return MakeProxyServer<Interface, Impl>(context, std::move(impl));
          21  }

      #13 0x00005555560325d8 in CustomBuildField<interfaces::BlockTemplate, std::unique_ptr<interfaces::BlockTemplate, std::default_delete<interfaces::BlockTemplate> >, mp::StructField<mp::Accessor<mp::mining_fields::Result, 18>, ipc::capnp::messages::BlockTemplate::WaitNextResults::Builder>&> (invoke_context=..., value=..., output=..., enable=0x0) at ./ipc/libmultiprocess/include/mp/type-interface.h:33

          31      if (value) {
          32          using Interface = typename decltype(output.get())::Calls;
      >   33          output.set(CustomMakeProxyServer<Interface, Impl>(invoke_context, std::shared_ptr<Impl>(value.release())));
          34      }

      #14 0x000055555603252d in mp::BuildField<std::unique_ptr<interfaces::BlockTemplate, std::default_delete<interfaces::BlockTemplate> >, mp::InvokeContext, std::unique_ptr<interfaces::BlockTemplate, std::default_delete<interfaces::BlockTemplate> >, mp::StructField<mp::Accessor<mp::mining_fields::Result, 18>, ipc::capnp::messages::BlockTemplate::WaitNextResults::Builder>&> (context=..., output=..., values=...)
          at ./ipc/libmultiprocess/include/mp/proxy-types.h:203

         202      if (CustomHasValue(context, values...)) {
      >  203          CustomBuildField(TypeList<LocalTypes...>(), Priority<3>(), context, std::forward<Values>(values)...,
         204              std::forward<Output>(output));
         205      }

      #15 0x00005555560324b6 in mp::CustomBuildField<std::unique_ptr<interfaces::BlockTemplate, std::default_delete<interfaces::BlockTemplate> >, std::unique_ptr<interfaces::BlockTemplate, std::default_delete<interfaces::BlockTemplate> >, mp::StructField<mp::Accessor<mp::mining_fields::Result, 18>, ipc::capnp::messages::BlockTemplate::WaitNextResults::Builder> > (invoke_context=..., value=..., output=...)
          at ./ipc/libmultiprocess/include/mp/type-decay.h:34

      >   34      BuildField(TypeList<LocalType>(), invoke_context, output, std::forward<Value>(value));

      #16 0x0000555556032339 in mp::BuildField<std::unique_ptr<interfaces::BlockTemplate, std::default_delete<interfaces::BlockTemplate> >&&, mp::InvokeContext, std::unique_ptr<interfaces::BlockTemplate, std::default_delete<interfaces::BlockTemplate> >, mp::StructField<mp::Accessor<mp::mining_fields::Result, 18>, ipc::capnp::messages::BlockTemplate::WaitNextResults::Builder> > (context=..., output=..., values=...)
          at ./ipc/libmultiprocess/include/mp/proxy-types.h:203

      >  203          CustomBuildField(TypeList<LocalTypes...>(), Priority<3>(), context, std::forward<Values>(values)...,

      #17 0x0000555556030aa4 in mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall>::invoke<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults> >, node::BlockWaitOptions>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults> >&, mp::TypeList<>, node::BlockWaitOptions&&) const (this=0x7fffe801c1ac,
          server_context=..., args=...) at ./ipc/libmultiprocess/include/mp/proxy-types.h:474

      >  474          BuildField(TypeList<decltype(result)>(), invoke_context, Make<StructField, Accessor>(results),
         475              std::forward<decltype(result)>(result));
         476      }

      #18 0x00005555560306cf in mp::PassField<mp::Accessor<mp::mining_fields::Options, 17>, node::BlockWaitOptions, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults> >, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall> const&, mp::TypeList<> >(mp::Priority<0>, mp::TypeList<node::BlockWaitOptions>, mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults> >&, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall> const&, mp::TypeList<>&&) (server_context=..., fn=..., args=...)
          at ./ipc/libmultiprocess/include/mp/proxy-types.h:307

      >  307      fn.invoke(server_context, std::forward<Args>(args)..., static_cast<LocalType&&>(*param));

      #19 0x000055555602f162 in mp::ServerField<1, mp::Accessor<mp::mining_fields::Options, 17>, mp::ServerRet<mp::Accessor<mp::mining_fields::Result, 18>, mp::ServerCall> >::invoke<mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults> >, mp::TypeList<node::BlockWaitOptions>>(mp::ServerInvokeContext<mp::ProxyServer<ipc::capnp::messages::BlockTemplate>, capnp::CallContext<ipc::capnp::messages::BlockTemplate::WaitNextParams, ipc::capnp::messages::BlockTemplate::WaitNextResults> >&, mp::TypeList<node::BlockWaitOptions>) const (this=0x7fffe801c1ac, server_context=...) at ./ipc/libmultiprocess/include/mp/proxy-types.h:539
      #20 0x000055555602ee2a in operator() (this=0x7fffe801c188) at ./ipc/libmultiprocess/include/mp/type-context.h:126
      #21 0x000055555602ec4a in operator() (this=0x7fffe801c180) at /nix/store/8xjrdb1y9ng6w6yknzg9h6fkxd3sb824-capnproto-1.2.0/include/kj/function.h:142
      #22 0x0000555555e38fd1 in kj::Function<void()>::operator() (this=0x7fff797f9878) at /nix/store/8xjrdb1y9ng6w6yknzg9h6fkxd3sb824-capnproto-1.2.0/include/kj/function.h:119
      #23 0x0000555555e38cb7 in mp::Unlock<mp::Lock, kj::Function<void()>&> (lock=..., callback=...) at ./ipc/libmultiprocess/include/mp/util.h:209
      #24 0x0000555556406c56 in mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::{lambda()#1}>(mp::Lock&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::{lambda()#1})::{lambda()#1}::operator()() const (this=0x7fff797f98e0) at ./ipc/libmultiprocess/include/mp/proxy-io.h:352
      #25 0x0000555556406b96 in std::condition_variable::wait<mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::{lambda()#1}>(mp::Lock&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::{lambda()#1})::{lambda()#1}>(std::unique_lock<std::mutex>&, mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::{lambda()#1}>(mp::Lock&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::{lambda()#1})::{lambda()#1}) (this=0x7fff68000e88, __lock=..., __p=...) at /nix/store/kzq78n13l8w24jn8bx4djj79k5j717f1-gcc-14.3.0/include/c++/14.3.0/condition_variable:104
      #26 0x0000555556406ac7 in mp::Waiter::wait<mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::{lambda()#1}>(mp::Lock&, mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const::{lambda()#1}) (this=0x7fff68000e60,
          lock=..., pred=...) at ./ipc/libmultiprocess/include/mp/proxy-io.h:343
      #27 0x0000555556406979 in mp::ProxyServer<mp::ThreadMap>::makeThread(capnp::CallContext<mp::ThreadMap::MakeThreadParams, mp::ThreadMap::MakeThreadResults>)::$_0::operator()() const (this=0x7fffe801beb8)
          at ./ipc/libmultiprocess/src/mp/proxy.cpp:419
    
  5. plebhash commented at 7:56 pm on January 12, 2026: none

    The problem can also be prevented on the client side by calling interruptWait and waiting for the waitNext response before disconnecting

    but I thought the goal of interruptWait was so that we could avoid having to wait for waitNext to return?

    if we still need to do that, isn’t interruptWait kinda pointless?

  6. ismaelsadeeq commented at 8:08 pm on January 12, 2026: member

    It will return immediately, that’s the point. You should not disconnect before the wait is interrupted.

    But as @ryanofsky mentioned, this can also be prevented on the server side. I will attempt a fix soon.

  7. plebhash commented at 9:25 pm on January 12, 2026: none

    It will return immediately, that’s the point. You should not disconnect before the wait is interrupted.

    cool, thanks for the clarification.

    I added some extra checks to make sure we never disconnect (or, more precisely, drop the IPC client objects from memory) before the current waitNext is finished.

    But as @ryanofsky mentioned, this can also be prevented on the server side. I will attempt a fix soon.

    after this fix, will it be safe to assume that whenever interruptWait returns, waitNext has also finished its execution?

  8. ryanofsky commented at 9:35 pm on January 12, 2026: contributor

    if we still need to do that, isn’t interruptWait kinda pointless?

    Yes, from a Cap’n Proto client’s perspective interruptWait was always pointless, because Cap’n Proto already provides a way to cancel any call: the client can just delete the Promise<Response> object returned by the call, and the server will be notified and cancel the request.

    But the C++ mining interface which Cap’n Proto exposes is synchronous, not asynchronous, and doesn’t know about promises or cancellations. By default it will continue executing cancelled requests as if nothing happened.
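    The drop-to-cancel pattern described above can be illustrated in plain C++ with invented names (this is a sketch of the idea, not the actual Cap’n Proto API, where the client deletes its Promise<Response>):

    ```cpp
    #include <atomic>
    #include <memory>

    // Shared state for one in-flight call; the server-side work can poll it.
    struct CallState {
        std::atomic<bool> cancelled{false};
    };

    // Client-side handle standing in for Cap'n Proto's Promise<Response>:
    // destroying it is how the client cancels the call.
    class PendingResponse {
    public:
        explicit PendingResponse(std::shared_ptr<CallState> state) : m_state(std::move(state)) {}
        ~PendingResponse()
        {
            // Dropping the handle marks the request as cancelled, mirroring
            // deletion of the promise notifying the server.
            if (m_state) m_state->cancelled = true;
        }
    private:
        std::shared_ptr<CallState> m_state;
    };
    ```

    The gap the comment describes is exactly that a synchronous server implementation never polls anything like `cancelled`, so the work keeps running after the handle is gone.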

    In #33575 I suggested a few different ways the c++ server code could be notified about cancellations and stop doing unnecessary work. This was important for waitNext because the work it does currently isn’t very cheap: it generates a new block every second and doesn’t return until the new block has high enough fees.

    One of the solutions suggested in #33575 was to add an interruptWait method. This was approach (2). In the longer term I think approach (4) mentioned there, supporting Cap’n Proto cancellations, would be better, but it would take more thought and effort to implement. Adding interruptWait was just an easy way of making waitNext calls interruptible while not having to deal with the complexity of handling cancellations more generally.
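    The interruptWait idea amounts to a flag plus a condition variable that the waiting loop checks between template rebuilds. A minimal standalone sketch (names and timings are illustrative, not Bitcoin Core’s implementation):

    ```cpp
    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    // Sketch of an interruptible wait in the spirit of waitNext()/interruptWait().
    class TemplateWaiter {
    public:
        // Blocks until either the fee condition is met or interrupt() is called.
        // Returns true if interrupted, false if the fee target was reached.
        bool waitNext(bool (*feesImproved)())
        {
            std::unique_lock<std::mutex> lock(m_mutex);
            while (!m_interrupt) {
                if (feesImproved()) return false;
                // Re-check periodically, like the once-a-second template rebuild loop.
                m_cv.wait_for(lock, std::chrono::milliseconds(50));
            }
            return true;
        }

        // Makes any in-flight waitNext() return promptly.
        void interrupt()
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_interrupt = true;
            m_cv.notify_all();
        }

    private:
        std::mutex m_mutex;
        std::condition_variable m_cv;
        bool m_interrupt{false};
    };
    ```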

    It will return immediately, that’s the point. You should not disconnect before the wait is interrupted.

    This is probably good advice. For simplicity, bitcoin capnproto interfaces (including the mining interface but also other interfaces in #29409, #10102, #32297) were designed to be used by clients that make blocking, serial calls, and use separate threads if they need things to execute in parallel.

    This works well when client and server code are both blocking and synchronous. It’s what allowed #29432 implementing a stratum v2 template provider in C++ to be split up across separate processes and repositories straightforwardly without needing to deal with sockets or I/O.

    But this has worked less well with the rust client, which is more asynchronous and strays pretty far outside the calling conventions used by the c++ client. We should be able to fully support the rust client, but the more it does things differently, the more crashes like this one, which would not be triggered from the c++ client, will be exposed.

    But as @ryanofsky mentioned, this can also be prevented on the server side. I will attempt a fix soon.

    Thanks, I also started working on a fix after posting earlier. The fix involves changing the InvokeContext Connection& reference to a pointer, setting it to null when the connection is destroyed, and making the CustomReadField function in type-interface.h throw an exception if the connection is null. The fix is not trivial since it requires using Connection::addSyncCleanup to actually set the connection pointer to null on disconnects, and this is very awkward to do from the type-context.h execution thread because the connection could be deleted at any time during that thread, even before execution begins.
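    The shape of that fix can be sketched generically. The types below are invented stand-ins (only the addSyncCleanup name is borrowed from the real Connection class): a cleanup hook registered on the connection nulls the context’s pointer when the connection dies, and result-building code checks it before proceeding:

    ```cpp
    #include <functional>
    #include <stdexcept>
    #include <vector>

    // Invented stand-in for Connection: runs registered cleanup hooks on destruction.
    struct ConnectionSketch {
        std::vector<std::function<void()>> m_cleanups;
        void addSyncCleanup(std::function<void()> fn) { m_cleanups.push_back(std::move(fn)); }
        ~ConnectionSketch()
        {
            for (auto& fn : m_cleanups) fn(); // Run hooks as the connection dies.
        }
    };

    // Invented stand-in for InvokeContext with a nullable connection pointer.
    struct InvokeContextSketch {
        ConnectionSketch* connection{nullptr}; // Pointer, not a reference, so it can be nulled.

        void attach(ConnectionSketch& conn)
        {
            connection = &conn;
            conn.addSyncCleanup([this] { connection = nullptr; });
        }

        // Stand-in for the CustomReadField check: refuse to build a result
        // once the connection is gone, throwing instead of crashing.
        void requireConnection() const
        {
            if (!connection) throw std::runtime_error("connection destroyed during call");
        }
    };
    ```

    The awkward part the comment mentions is real concurrency: in the actual code the hook and the execution thread race, which this single-threaded sketch does not capture.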

    You should definitely feel free to attempt another fix, and maybe there is a simpler one. I’m probably less than halfway done on my fix and planning to finish it tomorrow.

  9. ryanofsky commented at 9:38 pm on January 12, 2026: contributor

    after this fix, will it be safe to assume that whenever interruptWait returns, waitNext has also finished its execution?

    No, you can’t make that assumption about different calls running on different threads. The only way to know when waitNext has finished executing is when the response promise is fulfilled.
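    In stdlib terms the distinction can be sketched like this (illustrative C++ using std::future, not the capnp promise API): the interrupt call returning tells you nothing about completion; only the future tied to the call does.

    ```cpp
    #include <atomic>
    #include <chrono>
    #include <future>
    #include <thread>

    // Sketch: interrupting a call and knowing it finished are separate events.
    // Only fulfilment of the response future proves the call completed.
    struct WaitCall {
        std::atomic<bool> interrupt{false};
        std::future<int> response;

        void start()
        {
            response = std::async(std::launch::async, [this] {
                while (!interrupt) std::this_thread::sleep_for(std::chrono::milliseconds(5));
                return 42; // Stand-in for the waitNext result.
            });
        }

        // Returns immediately; the call may still be running on its thread.
        void interruptWait() { interrupt = true; }
    };
    ```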

  10. ryanofsky commented at 9:42 pm on January 12, 2026: contributor

    I added some extra check to make sure we never disconnect (or more precisely, drop the IPC client objects from memory) before the current waitNext is finished.

    This should be a safe workaround. To be specific though, it is harmless to drop the IPC client object before the waitNext response is returned. The crash only happens when there is a disconnect.

  11. willcl-ark added the label Bug on Jan 13, 2026
  12. willcl-ark added the label interfaces on Jan 13, 2026
  13. ryanofsky referenced this in commit 94b07f5723 on Jan 13, 2026
  14. ryanofsky commented at 5:33 pm on January 13, 2026: contributor

    Update: I implemented a fix in 94b07f57232e72981df6fb399fe21bb93a09b667 (tag) which prevents the crash and is simpler than the fix I mentioned in my comment yesterday #34250 (comment), but I’m not really happy with it. The fix I was working on yesterday tried to detect disconnections specifically and avoid dereferencing deleted Connection pointers in the case of a method returning an interface pointer after a connection was broken. The new fix is simpler and handles cancellations in general, instead of only detecting disconnects, and just avoids doing any unnecessary work after a cancellation happens. But I found that unless $allowCancellation annotations are added to the capnp files, disconnecting during a method call does not actually cancel the call, so the fix doesn’t work unless extra $allowCancellation baggage is added to interfaces. Will look into this more and see if there may be a better fix, or maybe go back to the first approach.

    EDIT: The fix in 94b07f57232e72981df6fb399fe21bb93a09b667 is actually broken in another way, because the “Aborting cancelled IPC request” exception it tries to raise to fix the issue is never raised. The reason the change fixes the crash is just that the $Cxx.allowCancellation annotation, for reasons I’m not very clear on, causes the onDisconnect handler not to be triggered until after the waitNext method returns, so the crash is avoided. This seems like another downside of the change, because it seems better if the connection can be cleaned up without waiting for pending IPC calls that could take arbitrarily long to finish executing.

  15. ryanofsky referenced this in commit ba5270dce2 on Jan 13, 2026
  16. ryanofsky commented at 9:37 pm on January 13, 2026: contributor
    Update: A better fix is implemented in ba5270dce21050134be5e22df2ddf4529962c7ea (tag) which avoids the need to require $allowCancellation annotations. Aside from that, it works in basically the same way as the last fix. Will continue working on this and adding tests, etc.
  17. ryanofsky commented at 12:02 pm on January 14, 2026: contributor

    Current plan is to open a combined libmultiprocess PR that addresses this issue and #33923. It will:

    • Prevent crashes in response-building code after an unclean disconnect using the approach in ba5270dce21050134be5e22df2ddf4529962c7ea (tag) . This will prevent the crash reported here when a client calls waitNext, disconnects without waiting for the response, and then waitNext later returns a block template.
    • Resolve #33923 and allow an arbitrary number of requests to be sent to the same server thread, getting rid of the “thread busy” exception. I think this can be done pretty simply by using branching promises and chaining the requests as they come in.
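    The chaining idea in the second bullet can be sketched with std::future (illustrative only, not the planned kj-promise implementation): each new request is queued behind the previous request’s future, so any number of requests to the same server thread run serially instead of failing with “thread busy”.

    ```cpp
    #include <future>
    #include <utility>

    // Sketch of serializing requests to one logical server thread by chaining:
    // each request waits on the previous request's future before running.
    class RequestChain {
    public:
        template <typename Fn>
        std::shared_future<void> submit(Fn fn)
        {
            std::shared_future<void> prev = m_tail;
            m_tail = std::async(std::launch::async,
                [prev, fn = std::move(fn)]() mutable {
                    if (prev.valid()) prev.wait(); // Run strictly after the prior request.
                    fn();
                }).share();
            return m_tail;
        }

    private:
        std::shared_future<void> m_tail; // Future of the most recently queued request.
    };
    ```

    Branching would correspond to handing out extra copies of the shared_future so multiple callers can observe the same completion.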

    Reason for wanting to combine these is that both change type-context.h a lot, and I think having the second change would simplify the first. Am planning to have the libmultiprocess PR implemented and ready for review this week, and when updating the subtree, merging to master and 30.x branches as discussed in #33439 so the same libmultiprocess version is used on both branches and behavior is the same in both.

  18. ryanofsky referenced this in commit 7666f1cbaf on Jan 14, 2026
  19. ismaelsadeeq commented at 3:59 pm on January 14, 2026: member

    You should definitely feel free to attempt another fix, and maybe there is a simpler one. I’m probably less than halfway done on my fix and planning to finish it tomorrow.

    I have been working on other things over the past few days; I will review your PR when it’s up and absorb all the information above. Thank you for all the context and updates @ryanofsky .

  20. ryanofsky referenced this in commit 1835ee877a on Jan 14, 2026
  21. ryanofsky referenced this in commit 568a0d8311 on Jan 15, 2026
  22. ryanofsky referenced this in commit 96fd77bbcc on Jan 15, 2026
  23. ryanofsky referenced this in commit f532e9b2ff on Jan 15, 2026
  24. ryanofsky referenced this in commit ba865d9d7b on Jan 21, 2026
  25. ryanofsky commented at 4:31 am on January 21, 2026: contributor
    The fix from last week is now generalized and part of second commit https://github.com/bitcoin-core/libmultiprocess/commit/ba865d9d7bffc391f66566ccea18f0a2b6be84e0 in https://github.com/bitcoin-core/libmultiprocess/pull/240. There’s also some python test code to trigger this crash added in #34284

github-metadata-mirror

This is a metadata mirror of the GitHub repository bitcoin/bitcoin. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-01-27 06:13 UTC
