Avoid errors from asynchronous (non-c++) clients #240
ryanofsky wants to merge 3 commits into bitcoin-core:master from ryanofsky:pr/promise, changing 10 files (+337 −70)
ryanofsky commented at 4:06 am on January 21, 2026: collaborator
This PR avoids errors from non-C++ Rust and Python clients when they make asynchronous requests (https://github.com/bitcoin/bitcoin/issues/33923) and on unclean disconnects (https://github.com/bitcoin/bitcoin/issues/34250). Neither of these errors is easy to trigger from libmultiprocess C++ clients because of the library's blocking interface and RAII semantics, but they are fairly easy to trigger from Rust and Python, and there is a test triggering both of them in https://github.com/bitcoin/bitcoin/pull/34284
-
DrahtBot commented at 4:07 am on January 21, 2026: none
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.
Reviews
See the guideline for information on the review process.
Type: ACK — Sjors
Type: Concept ACK — ismaelsadeeq
If your review is incorrectly listed, please copy-paste <!--meta-tag:bot-skip--> into the comment that the bot should ignore.
Conflicts
Reviewers, this pull request conflicts with the following ones:
- #218 (Better error and log messages by ryanofsky)
If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
LLM Linter (✨ experimental)
Possible typos and grammar issues:
- call.) -> call. [Extra stray parentheses make the sentence fragment confusing; remove the trailing “.)” to read correctly.]
- the the -> the [Duplicate word “the” hinders comprehension; remove the extra “the”.]
-
ryanofsky force-pushed on Jan 22, 2026
-
ryanofsky commented at 2:40 am on January 22, 2026: collaborator
Updated ba865d9d7bffc391f66566ccea18f0a2b6be84e0 -> 92a2c5f66282c9f8e283366be221790056c80d22 (pr/promise.1 -> pr/promise.2, compare) fixing various CI errors (IWYU, compiler errors): https://github.com/bitcoin-core/libmultiprocess/actions/runs/21196838571
-
c4762c7b51
refactor: Add ProxyServer<Thread>::post() method
The change moves the code responsible for posting work to the execution thread, and for notifying the event loop thread after execution finishes, out of type-context.h and into a new ProxyServer<Thread>::post() method. This commit does not change behavior other than changing log messages, but improvements are made in upcoming commits which extend the new method.
-
ddb5f74196
Allow simultaneous calls on same Context.thread
If multiple IPC requests happen at the same time specifying the same Context.thread to run the requests on, queue the requests to execute in the order they are received instead of raising a "thread busy" exception. This change has no effect on C++ clients using libmultiprocess as a client library, since the libmultiprocess client only makes blocking calls and creates a server thread for every client thread, so it's not possible for there to be multiple calls on the same server thread. But this change may be useful for Rust and Python clients, as discussed in https://github.com/bitcoin/bitcoin/issues/33923
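The queueing behavior described above can be sketched in plain C++. This is a hypothetical stand-in, not the real kj-promise-based implementation: `ThreadChain` plays the role of `ProxyServer<Thread>`, and `m_ready` plays the role of the `m_thread_ready` promise that `post()` chains onto, so calls aimed at the same thread run in arrival order instead of failing with "thread busy".

```cpp
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch of per-thread FIFO queueing (illustrative names only).
class ThreadChain {
public:
    // Extend the chain: fn will run only after everything posted before it.
    void post(std::function<void()> fn)
    {
        m_ready = [prev = std::move(m_ready), fn = std::move(fn)] {
            prev(); // run earlier queued calls first
            fn();   // then run this call
        };
    }

    // Drain the chain (the analogue of the worker thread making progress).
    void run()
    {
        m_ready();
        m_ready = [] {};
    }

private:
    std::function<void()> m_ready = [] {};
};
```

Every `post()` succeeds immediately and ordering is preserved because each continuation runs its predecessor first, which is also why such a chain can grow without bound if the producer outpaces the worker.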
-
0174450ca2
Prevent crash on unclean disconnect if abandoned IPC call returns interface pointer
This is a fix for https://github.com/bitcoin/bitcoin/issues/34250, which reports that the bitcoin node crashes if a Rust stratum v2 mining client calls BlockTemplate.waitNext() and disconnects without waiting for a response from the call, and if mempool fees increased so the call returns a non-null BlockTemplate interface pointer. The node would crash in this case while trying to call MakeProxyServer on the returned BlockTemplate pointer, which would fail because MakeProxyServer would try to use a reference to the Connection object that had been deleted as a result of the disconnect. The fix works by:
- Adding a Connection::m_canceler member variable and using it to cancel any IPC response promises that are pending when the connection is destroyed.
- Updating ProxyServer<Thread>::post to use promise.attach() as described at https://capnproto.org/cxxrpc.html#cancellation to detect cancellation and set a ServerContext::request_canceled variable.
- Updating ServerCall to check the ServerContext::request_canceled status after any C++ server method returns, and throw an exception if it is set.
- Updating the type-context.h PassField() function to deal with the exception by catching and logging it.
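The cancellation-detection idea in the second bullet can be sketched without Cap'n Proto. `attach()` hands an object to a promise, and that object's destructor runs when the promise is torn down, so destruction before normal completion reveals cancellation. The names below (`ServerContextFlags`, `CancelDetector`, `run_request`) are hypothetical stand-ins for illustration, not the real API.

```cpp
// Hypothetical RAII model of promise.attach()-based cancellation detection.
struct ServerContextFlags {
    bool request_canceled{false}; // stand-in for ServerContext::request_canceled
};

// Destructor sets request_canceled if the request never completed, mirroring
// what an object attach()ed to a promise can do when the promise is dropped.
struct CancelDetector {
    explicit CancelDetector(ServerContextFlags& flags) : m_flags{&flags} {}
    ~CancelDetector()
    {
        if (!m_completed) m_flags->request_canceled = true;
    }
    void markCompleted() { m_completed = true; }

private:
    ServerContextFlags* m_flags;
    bool m_completed{false};
};

// Simulate one request: on a clean finish markCompleted() runs first; on a
// disconnect the detector is destroyed without it, flagging the cancellation.
bool run_request(bool client_disconnects)
{
    ServerContextFlags flags;
    {
        CancelDetector detector{flags};
        if (!client_disconnects) detector.markCompleted();
    } // detector destroyed here, as if the promise chain was torn down
    return flags.request_canceled;
}
```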
-
ismaelsadeeq commented at 10:21 am on January 27, 2026: member
Concept ACK
-
ryanofsky force-pushed on Jan 27, 2026
-
ryanofsky commented at 7:05 pm on January 27, 2026: collaborator
Updated 92a2c5f66282c9f8e283366be221790056c80d22 -> 1f307fb8fd1aff4a2e97b8f179b4162049f7af50 (pr/promise.2 -> pr/promise.3, compare) fixing a TSAN error caused by deleting the ready_fulfiller promise fulfiller outside the event loop thread: https://github.com/bitcoin-core/libmultiprocess/actions/runs/21233968144/job/61097851891?pr=240
-
ryanofsky commented at 4:31 am on January 28, 2026: collaborator
I have been debugging a TSAN race condition (https://github.com/bitcoin/bitcoin/actions/runs/21421194207/job/61680781122?pr=34422#step:11:4303) that the functional test in https://github.com/bitcoin/bitcoin/pull/34422 detects in the new code. It is not triggered here because this PR does not currently include a test that triggers IPC request cancellation.
The race seems real and fixable, though probably unlikely to cause problems in practice. In the control flow when an IPC call gets cancelled, TSAN sees the worker thread doing a CallContext::getParams call before the cancellation, triggering a read of the RpcCallContext::request pointer. Then after the cancellation occurs, the I/O thread wakes up and destroys the CallContext object, triggering a write of the RpcCallContext::request pointer. This is not actually dangerous because the read always happens before the write, but TSAN is unable to detect any "happens-before" relationship between the read and write events, so it considers this a race.
I think this should be fixable by having the worker thread acquire a lock in the brief period when it reads request parameters, before executing the IPC call, and also acquire the lock after executing the call when it is writing request results. Then the cancellation callback, if triggered, can acquire the same lock and TSAN can be aware of the ordering between the two threads.
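A minimal model of the proposed fix, with illustrative names only (`params` stands in for the capnproto request, `worker_read` for the invoke path, `cancel_and_free` for the event loop's cancellation handler): both the worker's read and the canceller's delete happen under the same mutex, giving TSAN a visible happens-before edge, and a canceled flag handles the case where the cancellation wins the lock first.

```cpp
#include <mutex>
#include <thread>

// Hypothetical shared state between the worker and event loop threads.
struct Shared {
    std::mutex m;
    bool canceled{false};
    int* params{nullptr};
    int read_value{-1};
};

// Worker: read the params under the lock, but only if not already canceled.
void worker_read(Shared& s)
{
    std::lock_guard<std::mutex> lock{s.m};
    if (!s.canceled) s.read_value = *s.params;
}

// Canceller: mark canceled and free the params under the same lock. Whichever
// thread takes the mutex first, the other observes a consistent state, and
// the mutex gives TSAN the happens-before edge it was missing.
void cancel_and_free(Shared& s)
{
    std::lock_guard<std::mutex> lock{s.m};
    s.canceled = true;
    delete s.params;
    s.params = nullptr;
}
```

Run concurrently, the outcome is either "worker read 42, then params freed" or "params freed first, worker skipped the read"; neither path touches freed memory.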
-
Sjors commented at 2:17 pm on February 10, 2026: member
should be fixable by having the worker thread acquire a lock in the brief period … @ryanofsky is that something you want to add here?
-
in include/mp/proxy-io.h:91 in 1bcf4ac1b5 outdated
87     ~ProxyServer();
88     kj::Promise<void> getName(GetNameContext context) override;
89+
90+    //! Run a callback function returning T on this thread.
91+    template<typename T, typename Fn>
92+    kj::Promise<T> post(Fn&& fn);
Sjors commented at 10:47 am on February 11, 2026:
In 1bcf4ac1b58b960e57d2c794d9e0df5e1722b3f6 Allow simultaneous calls on same Context.thread: if this function makes sense on its own, is it possible to introduce it in a preparation commit? Both commits in this PR make use of it.
ryanofsky commented at 8:02 pm on February 11, 2026:
re: #240 (review)
In 1bcf4ac Allow simultaneous calls on same Context.thread: if this function makes sense on it’s own, is it possible to introduce it in a preparation commit? Both commits in this PR make use of it.
Yes, that’s a really nice idea. Added a new refactoring commit to move code into this function before changing it.
ryanofsky commented at 8:05 pm on February 11, 2026: collaborator
re: #240 (comment)
should be fixable by having the worker thread acquire a lock in the brief period …
@ryanofsky is that something you want to add here?
Yes thanks, added a new commit which does this and fixes the TSAN error locally for me. Will update https://github.com/bitcoin/bitcoin/pull/34422 and confirm it fixes the failure there too.
Rebased 1f307fb8fd1aff4a2e97b8f179b4162049f7af50 -> 1d7debfa8ea9a5eddfcf3f8bb694e4971e04d61d (pr/promise.3 -> pr/promise.4, compare) adding a result_value.reset() call to try to fix CI error https://github.com/bitcoin/bitcoin/actions/runs/21412348994/job/61652239914?pr=34422
Updated 1d7debfa8ea9a5eddfcf3f8bb694e4971e04d61d -> 4f42fc4ddcb9202b2ac3de9a2971325c088e5a5c (pr/promise.4 -> pr/promise.5, compare) to fix CI error https://github.com/bitcoin/bitcoin/actions/runs/21421194207/job/61680781122?pr=34422
-
ryanofsky force-pushed on Feb 11, 2026
-
theuni commented at 11:35 pm on February 12, 2026: contributor
@ryanofsky I'm having a bit of trouble understanding what's happening on which thread, but it seems like a mutex is not the most straightforward solution here? It's not immediately obvious to me what it's guarding, anyway. Perhaps a simple std::atomic_flag would be better to communicate "I've reached this point"? You could set/notify when the params are safe to delete, then in m_on_cancel you could just wait for it?
-
ryanofsky commented at 3:24 am on February 13, 2026: collaborator

@ryanofsky I'm having a bit of trouble understanding what's happening on which thread, but it seems like a mutex is not the most straightforward solution here? It's not immediately obvious to me what it's guarding, anyway. Perhaps a simple std::atomic_flag would be better to communicate "I've reached this point"? You could set/notify when the params are safe to delete, then in m_on_cancel you could just wait for it?

Thanks for looking at this! I assume you are asking about the last commit 4f42fc4ddcb9202b2ac3de9a2971325c088e5a5c which is fixing the race condition detected by TSAN.
The issue being fixed in that commit is a race between two threads both potentially accessing capnproto request parameters in an incoming request.
- One thread is a worker thread running the big invoke lambda, which is responsible for converting the capnproto request parameters into C++ arguments, calling a C++ method with those arguments, and then converting the C++ method's return value and output arguments (or exception) into a capnproto response.
- The other thread is the capnproto event loop thread running the on_cancel lambda when there is a disconnect or cancellation. After the on_cancel lambda returns, capnproto deletes the request parameters.
The point of the mutex is to prevent the event loop thread from deleting the request parameters while the worker thread is still reading them.
The control flow is a little complicated because of the way parameters are unpacked recursively, but the idea of guarding access to the parameters with a mutex is pretty simple. The params_mutex is locked by the worker thread before it accesses the capnproto parameters, and unlocked by that thread after all the parameters have been unpacked and it is ready to call the C++ method that is supposed to be executed. Then on the event loop thread inside the on_cancel lambda, the same mutex is locked before it returns, so if there is a cancellation while the worker thread is unpacking parameters, the event loop thread will be blocked and will not delete the parameters until after the worker thread has finished reading them.
What is odd about this fix is that the event loop thread is releasing the mutex before it actually deletes the parameters. This is suspicious, because if the mutex is not locked while they are deleted, it does nothing to solve the reverse race condition in the case where the cancellation happens before the parameters start being read. But this case is handled by a separate cancel_monitor.m_cancelled check which raises an exception to prevent the worker thread from reading the parameters.
So the mutex should solve the delete-while-reading race condition in a fairly straightforward way, even if the control flow is complicated by a lot of recursion and callbacks, and even if it doesn't solve the related delete-before-reading race condition.
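The lock/unlock pattern described above maps onto standard lock-object semantics. Here is a small sketch using std::unique_lock as a stand-in for the PR's Lock type (names like `params_mutex` are illustrative): a lock object tracks ownership, so releasing it early and reacquiring it later is safe, and its destructor only unlocks if it currently owns the mutex.

```cpp
#include <mutex>

// Sketch of the release-while-executing, reacquire-afterwards pattern.
int run_guarded_method()
{
    std::mutex params_mutex;
    std::unique_lock<std::mutex> params_lock{params_mutex};
    // ... unpack request parameters under the lock ...
    params_lock.unlock();  // released while the C++ method executes
    int result = 2 + 2;    // stand-in for the long-running method body
    params_lock.lock();    // reacquired before the response is written
    // ... write the response under the lock ...
    return result;
} // params_lock unlocks here because it currently owns the mutex

// After an early unlock(), the lock object no longer owns the mutex,
// so its destructor does nothing: no double unlock.
bool early_unlock_owns()
{
    std::mutex m;
    std::unique_lock<std::mutex> lock{m};
    lock.unlock();
    return lock.owns_lock();
}
```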
Tangent: One observation looking at the code is that while the delete-before-reading reverse case is being handled, it isn't being handled very well, because the exception raised is thrown inside the sync block instead of outside of it, and it should probably be an InterruptException instead of an MP_LOG call so it can be caught higher in the stack and the thread can be cleaned up. I'm guessing right now that code will cause the worker thread to hang and be leaked, never getting cleaned up. Ideally I could write a unit test trying to trigger this, but so far I've been struggling to figure out how to test different possible races and make sure they are handled well. It's not obvious how to arrange the request being cancelled before parameters are read. Anyway, this is only indirectly related to the TSAN race addressed in the commit, which is also unlikely to happen in practice and probably hasn't happened, even though TSAN was right to warn that the request parameters were previously being accessed from different threads without synchronization between them. (EDIT: This behavior is improved in the latest push pr/promise.6, so if a cancellation does happen before parameters are read, it will now be handled cleanly and just result in an info message being logged.)
-
theuni commented at 4:50 pm on February 13, 2026: contributor
@ryanofsky Thanks for the explanation, that was very helpful.
I'm still struggling to understand how this mutex is working though. It seems that since it's already been unlocked in invoke(), the mutex will already be unlocked on the worker thread when params_lock goes out of scope, which would cause a double unlock (undefined), no?
-
Sjors commented at 4:55 pm on February 13, 2026: member
I created a branch and had an agent splinter 558ae11d8f2031cb5ae6f441c85621a6d662090e refactor: Add ProxyServer::post() method into multiple tiny commits: sjors/2026/02/558ae11-split
No need to use it here, but it might aid others in reviewing (with dimmed-zebra). Noticed a small behavior change in that it removes the MP_LOG for "thread busy". It's harmless since 92fed35b0b57e229fb5c009561f3f077e832bca0 drops this exception anyway.
Also drew a little diagram to orient myself about where this function lives (IIUC):
Review placeholder: the first two commits, up to 92fed35b0b57e229fb5c009561f3f077e832bca0, mostly make sense to me now.
theuni commented at 5:08 pm on February 13, 2026: contributor

I'm still struggling to understand how this mutex is working though. It seems that since it's already been unlocked in invoke(), the mutex will already be unlocked on the worker thread when params_lock goes out of scope, which would cause a double unlock (undefined), no?

Nevermind, I see my mistake now. I was thinking it was the bare mutex that was being unlocked in invoke(), but it's the lock itself. Seems sane I suppose, just not exactly straightforward.
-
in include/mp/proxy-io.h:699 in 92fed35b0b outdated
696+    auto ready = kj::newPromiseAndFulfiller<void>(); // Signaled when waiter is idle again.
697+    auto ret = m_thread_ready.then([this, fn = std::forward<Fn>(fn), ready_fulfiller = kj::mv(ready.fulfiller)]() mutable {
698         auto result = kj::newPromiseAndFulfiller<T>(); // Signaled when fn() is called, with its return value.
699-        bool posted = m_thread_context.waiter->post([this, fn = std::forward<Fn>(fn), result_fulfiller = kj::mv(result.fulfiller)]() mutable {
700+        bool posted = m_thread_context.waiter->post([this, fn = std::forward<Fn>(fn), ready_fulfiller = kj::mv(ready_fulfiller), result_fulfiller = kj::mv(result.fulfiller)]() mutable {
701+            m_loop->sync([ready_fulfiller = kj::mv(ready_fulfiller)]() mutable {
Sjors commented at 7:10 pm on February 13, 2026:
In 92fed35b0b57e229fb5c009561f3f077e832bca0 Allow simultaneous calls on same Context.thread: maybe add a comment that the goal is to add requested fn() calls to the promise chain as quickly as possible, so we call ready_fulfiller->fulfill(); before executing it.
ryanofsky commented at 2:57 am on February 18, 2026:
re: #240 (review)
maybe add a comment that the goal is to add requested fn() calls to the promise chain as quickly as possible, so we call ready_fulfiller->fulfill(); before executing it.

Thanks, added a comment.
in include/mp/proxy-types.h:478 in 0a224251ec
473+            // It's also important to stop executing because the connection may have
474+            // been destroyed as described in
475+            // https://github.com/bitcoin/bitcoin/issues/34250 and there would be a
476+            // crash if execution continued.
477+            // TODO: Note this detection is racy because cancellation could happen
478+            // after this check. However, fixing this would require changing the
Sjors commented at 3:52 pm on February 17, 2026:
In 0a224251ecd8cfa53a736d95ead2f623001297da Prevent crash on unclean disconnect if abandoned IPC call returns interface pointer: for waitNext() the time spent in this race-vulnerable state is minuscule compared to the multi-second wait? In that case, let's not deal with it in this PR.
ryanofsky commented at 3:02 am on February 18, 2026:
re: #240 (review)
for waitNext() the time spent in this race-vulnerable state is minuscule compared to the multi-second wait? In that case, let's not deal with it in this PR.

Yes, this race, like the TSAN race fixed in 4f42fc4ddcb9202b2ac3de9a2971325c088e5a5c, would be pretty difficult to trigger unintentionally (or even intentionally). But it turns out the lock added in 4f42fc4ddcb9202b2ac3de9a2971325c088e5a5c can be used to fix this race as well, so I extended it to cover both cases (the event loop deleting params on cancellation while they might be being read, and the event loop deleting the response on cancellation while it might be being written).
in include/mp/type-context.h:123 in 0a224251ec outdated
119                 // broken there is not a race between this thread and
120                 // the disconnect handler trying to destroy the thread
121                 // client object.
122                 server.m_context.loop->sync([&] {
123+                    auto self_dispose{kj::mv(self)};
124+                    if (erase_thread) {
Sjors commented at 5:12 pm on February 17, 2026:
In 0a224251ecd8cfa53a736d95ead2f623001297da Prevent crash on unclean disconnect if abandoned IPC call returns interface pointer: can you add a comment here to explain why it's important to call sync() without doing anything in particular (where !erase_thread)?
IIUC it's to ensure InterruptException gets thrown as needed, so we can catch it below.
No test breaks (for this commit) if I do:

diff --git a/include/mp/type-context.h b/include/mp/type-context.h
index 134542e..440a3ef 100644
--- a/include/mp/type-context.h
+++ b/include/mp/type-context.h
@@ -113,5 +113,5 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
     // makes another IPC call), so avoid modifying the map.
     const bool erase_thread{inserted};
-    KJ_DEFER(
+    KJ_DEFER(if (erase_thread) {
         // Erase the request_threads entry on the event loop
         // thread with loop->sync(), so if the connection is
@@ -121,5 +121,4 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
         server.m_context.loop->sync([&] {
             auto self_dispose{kj::mv(self)};
-            if (erase_thread) {
                 // Look up the thread again without using existing
                 // iterator since entry may no longer be there after
@@ -133,7 +132,6 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
                 removed = request_threads.extract(server.m_context.connection);
             }
-        }
         });
-    );
+    });
 KJ_IF_MAYBE(exception, kj::runCatchingExceptions([&]{
     try {

(but I guess that makes sense because the commit doesn't touch tests in general)
ryanofsky commented at 3:25 am on February 18, 2026:
re: #240 (review)
No test breaks (for this commit) if I do:
Interesting find, and I was surprised that diff works, but I think it only works because erase_thread is normally true. The only time when erase_thread should be false is when a client calls a server method, and the server method calls back into the client (for example through a std::function parameter) and then the client makes another call back into the server on the same thread. This doesn't happen in any mining code, but IIRC there are some GUI interactions in https://github.com/bitcoin/bitcoin/pull/10102 where it does.
The reason it's important to call loop->sync unconditionally here is to run the auto self_dispose{kj::mv(self)}; line so the proxy server object is freed on the event loop thread, not the worker thread, which would not be safe because the proxy server object is managed by capnproto. I added a comment about the self variable to clarify.
-
Sjors commented at 5:38 pm on February 17, 2026: member
The third commit looks good to me as well.
-
in include/mp/proxy-io.h:695 in 92fed35b0b outdated
691@@ -689,9 +692,14 @@ struct ThreadContext
692 template<typename T, typename Fn>
693 kj::Promise<T> ProxyServer<Thread>::post(Fn&& fn)
694 {
695-    {
696+    auto ready = kj::newPromiseAndFulfiller<void>(); // Signaled when waiter is idle again.
ismaelsadeeq commented at 5:59 pm on February 17, 2026:
In "Allow simultaneous calls on same Context.thread" 92fed35b0b57e229fb5c009561f3f077e832bca0
Before this commit, ProxyServer<Thread>::post() threw immediately when the thread was busy. After 92fed35b, post() queues the call by chaining onto m_thread_ready instead of throwing, which enables pipelining but removes the only backpressure mechanism. There is now no bound on chain depth. Every call from a fast producer appends a new node to the chain, each holding a captured lambda with serialized arguments plus two kj::Promise objects; the chain grows without limit.
I'm curious whether it's something we should worry about in practice as a potential OOM of the server, either by bug or deliberately?
I modified the existing test to simulate a slow server (10s sleep per request) with a large number of queued calls:
 setup.server->m_impl->m_int_fn = [&](int n) {
+    std::this_thread::sleep_for(std::chrono::seconds{10});
     assert(n == expected);
     expected += 100;
     return n;
 };

-std::atomic<size_t> running{3};
+std::atomic<size_t> running{1000001};
With while(true), the mptest was killed by OOM:

Out of memory: Killed process 2461509 (mptest)
total-vm:53,439,684kB anon-rss:53,617,500kB

Worth noting that in the mptest process the client and server share the same address space, so the 17 GB figure includes client-side memory (unsent/pending capnp requests) as well as server-side promise chain growth?
A potential naive solution, if this is deemed worth fixing, is a depth counter with a limit: increment before queuing, throw if the limit is exceeded, and decrement after the result is dispatched.
+    uint64_t m_maximum_promise_depth{1000};
+    uint64_t m_pending_tasks{0};

 template<typename T, typename Fn>
 kj::Promise<T> ProxyServer<Thread>::post(Fn&& fn)
 {
+    if (m_pending_tasks >= m_maximum_promise_depth) {
+        throw std::runtime_error("maximum promise depth reached");
+    }
+    m_pending_tasks += 1;
+
     auto ready = kj::newPromiseAndFulfiller<void>();
     auto ret = m_thread_ready.then([this, fn = std::forward<Fn>(fn),
                                     ready_fulfiller = kj::mv(ready.fulfiller)]() mutable {
     ...
-        m_loop->sync([&result_value, &exception, result_fulfiller = kj::mv(result_fulfiller)]() mutable {
+        m_loop->sync([this, &result_value, &exception, result_fulfiller = kj::mv(result_fulfiller)]() mutable {
             KJ_IF_MAYBE(e, exception) {
                 result_fulfiller->reject(kj::mv(*e));
             } else {
                 result_fulfiller->fulfill(kj::mv(*result_value));
                 result_value.reset();
             }
+            m_pending_tasks--;
             result_fulfiller = nullptr;
         });
     });

With the fix applied, running mptest with a modified running=1001, a 10s server sleep holds RSS at a stable 17 MB and throws cleanly at depth 1000:

remote exception: std::exception: maximum promise depth reached
VmRSS: 17,484 kB

At running=1000001 with the fix, RSS reached ~9.5 GB from client memory alone.
This implies that in a real deployment where client and server are separate processes, the server would be protected by this fix, but the client could still exhaust its own memory independently. This fix addresses the server side only.
Maybe there is even a more interesting approach with kj?
ryanofsky commented at 3:36 am on February 18, 2026:
re: #240 (review)
removes the only backpressure mechanism
Thankfully, this shouldn’t be the case. Cap’n Proto’s API is nonblocking, so there is nothing the server can do to directly slow down the rate of client requests. The way flow control is implemented is by having the server send a response to each request after it is processed, and as long as the client waits for these responses, it can avoid sending requests faster than the server processes them, and can limit the number of requests that are queued up.
I’m curious whether it’s something we should worry about in practice as a potential for OOM of the server, either by bug or deliberate?
I do think it’s something we may want to worry about, and we may want to add counters and hard caps on things like number of objects, connections, threads, and pending requests in the future.
I think your m_pending_tasks idea is basically the right approach, although I think we would want the counter to be more global, like one counter per UNIX socket path instead of one counter per thread object. This way badly behaved clients would not be able to circumvent the limit by creating many connections or many threads.
Initially in https://github.com/bitcoin/bitcoin/issues/33923, I resisted the idea that clients should be able to send arbitrary numbers of requests and have them queued up on the server. But I changed my mind after realizing this is already how Cap'n Proto handles requests that run on the event loop thread, so this would just be making requests that run on worker threads behave consistently. Also this behavior is potentially useful if clients need multiple pieces of data from the server, so they can just send all the requests at once instead of staggering them.
in include/mp/proxy-io.h:89 in 558ae11d8f
81@@ -82,9 +82,15 @@ template <>
82 struct ProxyServer<Thread> final : public Thread::Server
83 {
84 public:
85-    ProxyServer(ThreadContext& thread_context, std::thread&& thread);
86+    ProxyServer(Connection& connection, ThreadContext& thread_context, std::thread&& thread);
87     ~ProxyServer();
88     kj::Promise<void> getName(GetNameContext context) override;
89+
90+    //! Run a callback function returning T on this thread.
ismaelsadeeq commented at 6:01 pm on February 17, 2026:
In "refactor: Add ProxyServer::post() method" 558ae11d8f2031cb5ae6f441c85621a6d662090e
nit: clarify that we return a T promise?
ryanofsky commented at 3:57 am on February 18, 2026:
re: #240 (review)
nit: clarify that we return a T promise?
Thanks, expanded the comment to describe the behavior and what’s returned.
in src/mp/proxy.cpp:365 in 558ae11d8f outdated
361@@ -362,8 +362,8 @@ ProxyClient<Thread>::~ProxyClient()
362 }
363 }
364
365-ProxyServer<Thread>::ProxyServer(ThreadContext& thread_context, std::thread&& thread)
366-    : m_thread_context(thread_context), m_thread(std::move(thread))
367+ProxyServer<Thread>::ProxyServer(Connection& connection, ThreadContext& thread_context, std::thread&& thread)
ismaelsadeeq commented at 6:02 pm on February 17, 2026:
In "refactor: Add ProxyServer::post() method" 558ae11d8f2031cb5ae6f441c85621a6d662090e
Are we passing the whole connection here for future proofing, i.e. we need more things apart from m_loop later?
ryanofsky commented at 4:04 am on February 18, 2026:
re: #240 (review)
Are we passing the whole connection here for future proofing, i.e. we need more things apart from m_loop later?
That’s one reason, and the other reason is that
ProxyServer<Thread>is a specialization of the genericProxyServerstruct which uses a connection pointer to be able to run cleanup code when the connection is destroyed, and I thought it would be good if differentProxyServerspecializations were more similar. Choice is admittedly pretty arbitrary, though.ismaelsadeeq commented at 6:10 pm on February 17, 2026: memberreviewed the first two commits to 92fed35b0b57e229fb5c009561f3f077e832bca0
Thanks @Sjors for your split of the first commit, and the diagram, it was sweet to use that in the review.
ryanofsky force-pushed on Feb 18, 2026
-
ryanofsky commented at 4:50 am on February 18, 2026: collaborator
Thanks for the detailed reviews!
Updated 4f42fc4ddcb9202b2ac3de9a2971325c088e5a5c -> c436ae425066679a945b7b2cd02c60c96537b973 (pr/promise.5 -> pr/promise.6, compare) making suggested changes, and merging the 3rd and 4th commits and improving them. Instead of just handling the cancellation-while-reading-params race case detected by TSAN, the code now handles the cancellation-while-writing-response race case which was previously a TODO, and better handles the cancellation-before-reading-params race case which was previously handled, but not very well, resulting in an uncaught exception. All of these cases would be very unlikely to happen since params and response structs are only accessed very briefly before and after method execution, so accesses by the event thread would be unlikely. But they are good to handle. Other than these changes, the remaining changes were comment, logging, and naming improvements.
-
ryanofsky force-pushed on Feb 18, 2026
-
ryanofsky commented at 6:38 am on February 18, 2026: collaborator
Updated c436ae425066679a945b7b2cd02c60c96537b973 -> 3fc752c050cec3982fc7abd16f07d0d6122866b9 (pr/promise.6 -> pr/promise.7, compare) adding more explanatory comments and tweaking the race fix to work more generally in the case where a request is canceled after ServerCall::invoke calls the method but before cancel_lock is reacquired. Previously this case was handled in ServerRet::invoke, but that code could be bypassed if the response was being populated from exceptions or output parameter values instead of return values.
-
in include/mp/type-context.h:184 in 3fc752c050 outdated
181+            MP_LOG(*server.m_context.loop, Log::Info) << "IPC server request #" << req << " interrupted (" << e.what() << ")";
182+        }
183+    })) {
184+        MP_LOG(*server.m_context.loop, Log::Error) << "IPC server request #" << req << " uncaught exception.";
185+        throw exception;
186+    }
Sjors commented at 8:24 am on February 18, 2026:
In 3fc752c050cec3982fc7abd16f07d0d6122866b9 Prevent crash on unclean disconnect if abandoned IPC call returns interface pointer: maybe add:

// End of scope: if KJ_DEFER was reached, it runs here

Given that this function is quite big with various nesting levels, it's not as obvious.
ryanofsky commented at 1:53 pm on February 18, 2026:
re: #240 (review)
// End of scope: if KJ_DEFER was reached, it runs here
Nice idea, it's helpful to see where the KJ_DEFER code will run.
in include/mp/type-context.h:100 in 3fc752c050 outdated
96@@ -90,25 +97,67 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn&
97     auto& request_threads = thread_context.request_threads;
98     ConnThread request_thread;
99     bool inserted;
100+    Mutex cancel_mutex;
Sjors commented at 11:30 am on February 18, 2026:
In 3fc752c050cec3982fc7abd16f07d0d6122866b9 Prevent crash on unclean disconnect if abandoned IPC call returns interface pointer: many things are called cancel... now. Consider renaming these to (request_)structs_mutex and (request_)structs_lock.
This makes it more clear that m_called is not guarded by this mutex.
ryanofsky commented at 1:51 pm on February 18, 2026:
re: #240 (review)

Consider renaming these to (request_)structs_mutex and (request_)structs_lock.

In the latest push I renamed canceled to request_canceled as suggested, but kept the name cancel_mutex for now, since I want to emphasize that in normal cases only one thread is accessing the structs, and most code does not need to be referencing or worrying about the mutex when accessing them.
Also the name structs_mutex might imply that acquiring the mutex is sufficient to protect access to these structs, when it's not really, because code needs to check request_canceled after acquiring the mutex to see if the request might have been canceled before the mutex was acquired.

This makes it more clear that m_called is not guarded by this mutex.

The latest push changes request_canceled from an atomic bool to a normal bool, so it actually is now guarded by cancel_lock, but it's true that previously this would have been a good reason to distinguish the names.
Sjors commented at 4:21 pm on February 18, 2026:I see. I think it would be easier to follow if
cancel_mutexlived onServerInvokeContextandrequest_canceledwas annotated with aMP_GUARDED_BY(cancel_mutex). This seems to work:0diff --git a/include/mp/proxy-io.h b/include/mp/proxy-io.h 1index 89b4dcb..e8dd1c2 100644 2--- a/include/mp/proxy-io.h 3+++ b/include/mp/proxy-io.h 4@@ -54,15 +54,16 @@ struct ServerInvokeContext : InvokeContext 5 //! reading params (`call_context.getParams()`) or writing results 6 //! (`call_context.getResults()`). 7- Lock* cancel_lock{nullptr}; 8+ Mutex cancel_mutex; 9+ Lock cancel_lock; 10 //! For IPC methods that execute asynchronously, not on the event-loop 11 //! thread, this is set to true if the IPC call was canceled by the client 12 //! or canceled by a disconnection. If the call runs on the event-loop 13 //! thread, it can't be canceled. 14- bool request_canceled{false}; 15+ bool request_canceled MP_GUARDED_BY(cancel_mutex){false}; 16 17 ServerInvokeContext(ProxyServer& proxy_server, CallContext& call_context, int req) 18- : InvokeContext{*proxy_server.m_context.connection}, proxy_server{proxy_server}, call_context{call_context}, req{req} 19- { 20- } 21+ : InvokeContext{*proxy_server.m_context.connection}, proxy_server{proxy_server}, call_context{call_context}, req{req}, 22+ cancel_lock{cancel_mutex} 23+ {} 24 }; 25 26diff --git a/include/mp/proxy-types.h b/include/mp/proxy-types.h 27index 7b4ede2..f5b6968 100644 28--- a/include/mp/proxy-types.h 29+++ b/include/mp/proxy-types.h 30@@ -446,6 +446,6 @@ struct ServerCall 31 decltype(auto) invoke(ServerContext& server_context, TypeList<>, Args&&... args) const 32 { 33- // If cancel_lock is set, release it while executing the method, and 34- // reacquire it afterwards. The lock is needed to prevent params and 35+ // Release cancel_lock while executing the method, and reacquire it 36+ // afterwards. 
The lock is needed to prevent params and 37 // response structs from being deleted by the event loop thread if the 38 // request is canceled, so it is only needed before and after method 39@@ -453,7 +453,7 @@ struct ServerCall 40 // because the method can take arbitrarily long to return and the event 41 // loop will need the lock itself in on_cancel if the call is canceled. 42- if (server_context.cancel_lock) server_context.cancel_lock->m_lock.unlock(); 43+ server_context.cancel_lock.unlock(); 44 KJ_DEFER( 45- if (server_context.cancel_lock) server_context.cancel_lock->m_lock.lock(); 46+ server_context.cancel_lock.lock(); 47 // If the IPC request was canceled, there is no point continuing to 48 // execute. It's also important to stop executing because the 49diff --git a/include/mp/type-context.h b/include/mp/type-context.h 50index baf0501..53e6ce6 100644 51--- a/include/mp/type-context.h 52+++ b/include/mp/type-context.h 53@@ -98,10 +98,9 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn& 54 ConnThread request_thread; 55 bool inserted; 56- Mutex cancel_mutex; 57- Lock cancel_lock{cancel_mutex}; 58- server_context.cancel_lock = &cancel_lock; 59+ server_context.cancel_lock.unlock(); 60 server.m_context.loop->sync([&] { 61 // Detect request being canceled before it executes. 62 if (cancel_monitor.m_canceled) { 63+ Lock lock{server_context.cancel_mutex}; 64 server_context.request_canceled = true; 65 return; 66@@ -109,5 +108,5 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn& 67 // Detect request being canceled while it executes. 
68 assert(!cancel_monitor.m_on_cancel); 69- cancel_monitor.m_on_cancel = [&server, &server_context, &cancel_mutex, req]() { 70+ cancel_monitor.m_on_cancel = [&server, &server_context, req]() { 71 MP_LOG(*server.m_context.loop, Log::Info) << "IPC server request #" << req << " canceled while executing."; 72 // Lock cancel_mutex here to block the event loop 73@@ -123,5 +122,5 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn& 74 // request_canceled after acquiring it to check if 75 // it is still safe to use the structs. 76- Lock{cancel_mutex}; 77+ Lock{server_context.cancel_mutex}; 78 server_context.request_canceled = true; 79 }; 80@@ -131,4 +130,5 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn& 81 [&] { return context_arg.getCallbackThread(); }); 82 }); 83+ server_context.cancel_lock.lock(); 84 // If an entry was inserted into the request_threads map, 85 // remove it after calling fn.invoke. If an entry was not 86@@ -145,5 +145,5 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn& 87 // finished and no longer accessing the parameters or 88 // response structs. 89- cancel_lock.m_lock.unlock(); 90+ server_context.cancel_lock.unlock(); 91 // Erase the request_threads entry on the event loop 92 // thread with loop->sync(), so if the connection is
ryanofsky commented at 4:51 pm on February 18, 2026:I see. I think it would be easier to follow if
cancel_mutex lived on ServerInvokeContext and request_canceled was annotated with a MP_GUARDED_BY(cancel_mutex).
Interesting suggestion. I am a little wary of having an unnecessary mutex declared and locked/unlocked in the synchronous case when cancellation is impossible and this is unnecessary, but I can see the appeal of being able to remove the
if (cancel_lock) checks and just unconditionally locking and unlocking, even when unnecessary. It also is nice to add the guarded_by annotation, though I wonder if that might be possible anyway. The thing I don’t like about this solution is the need to manually unlock before calling loop->sync() and relock after, when it used to be ok to lock continuously. I feel like that’s just being done to appease thread safety annotations and should not be needed at all. I also feel like, since there are two ServerInvokeContext objects in the asynchronous case, it’s a little confusing to have two mutexes declared when only one is being used. In the current code where ServerInvokeContext just contains a pointer there are no unused mutexes.
I think given all this I’d lean away from applying this change now but will think about it more and see if there might be another way to benefit from thread safety annotations here.
Sjors commented at 5:15 pm on February 18, 2026:The thing I don’t like about this solution is the need to manually unlock before calling loop->sync() and relock after when it used to be ok to lock continuously.
I didn’t like that either. I thought about having the constructor unlock it, but that also feels like a hack.
It might be enough to just have the
request_canceled doc make it clear what’s expected.
in include/mp/proxy-io.h:61 in 3fc752c050
56+ Lock* cancel_lock{nullptr}; 57+ //! For IPC methods that execute asynchronously, not on the event-loop 58+ //! thread, this is set to true if the IPC call was canceled by the client 59+ //! or canceled by a disconnection. If the call runs on the event-loop 60+ //! thread, it can't be canceled. 61+ std::atomic<bool> canceled{false};
Sjors commented at 11:33 am on February 18, 2026:In 3fc752c050cec3982fc7abd16f07d0d6122866b9 Prevent crash on unclean disconnect if abandoned IPC call returns interface pointer: less useful than the above, but maybe call this request_cancelled, to more clearly distinguish from cancel_monitor.m_canceled.
ryanofsky commented at 1:52 pm on February 18, 2026:re: #240 (review)
In 3fc752c Prevent crash on unclean disconnect if abandoned IPC call returns interface pointer: less useful than the above, but maybe call this
request_cancelled, to more clearly distinguish from cancel_monitor.m_canceled.
Good idea, applied the suggestion.
Sjors approved
Sjors commented at 11:51 am on February 18, 2026: member
ACK 3fc752c050cec3982fc7abd16f07d0d6122866b9
I tested https://github.com/bitcoin/bitcoin/pull/34422 using the latest version of this PR and it still works as advertised: the
interface_ipc_mining.pytest passes, and if I connect an SRI pool role andkillall -9 pool_sv2it, I get a nice “IPC server request #11 interrupted (canceled)” instead of a node crash.0diff --git a/include/mp/proxy-io.h b/include/mp/proxy-io.h 1index bba9cfb..617387e 100644 2--- a/include/mp/proxy-io.h 3+++ b/include/mp/proxy-io.h 4@@ -54,10 +54,10 @@ struct ServerInvokeContext : InvokeContext 5 //! reading params (`call_context.getParams()`) or writing results 6 //! (`call_context.getResults()`). 7- Lock* cancel_lock{nullptr}; 8+ Lock* structs_lock{nullptr}; 9 //! For IPC methods that execute asynchronously, not on the event-loop 10 //! thread, this is set to true if the IPC call was canceled by the client 11 //! or canceled by a disconnection. If the call runs on the event-loop 12 //! thread, it can't be canceled. 13- std::atomic<bool> canceled{false}; 14+ std::atomic<bool> request_canceled{false}; 15 16 ServerInvokeContext(ProxyServer& proxy_server, CallContext& call_context, int req) 17diff --git a/include/mp/proxy-types.h b/include/mp/proxy-types.h 18index f39d517..1d4becc 100644 19--- a/include/mp/proxy-types.h 20+++ b/include/mp/proxy-types.h 21@@ -446,14 +446,15 @@ struct ServerCall 22 decltype(auto) invoke(ServerContext& server_context, TypeList<>, Args&&... args) const 23 { 24- // If cancel_lock is set, release it while executing the method, and 25- // reacquire it afterwards. The lock is needed to prevent params and 26- // response structs from being deleted by the event loop thread if the 27- // request is canceled, so it is only needed before and after method 28+ // If structs_lock is set, release it while executing the 29+ // method, and reacquire it afterwards. The lock is needed to prevent 30+ // params and response structs from being deleted by the event-loop 31+ // thread if the request is canceled, so it is only needed before and 32+ // after method 33 // execution. 
It is important to release the lock during execution 34 // because the method can take arbitrarily long to return and the event 35 // loop will need the lock itself in on_cancel if the call is canceled. 36- if (server_context.cancel_lock) server_context.cancel_lock->m_lock.unlock(); 37- KJ_DEFER(if (server_context.cancel_lock) { 38- server_context.cancel_lock->m_lock.lock(); 39+ if (server_context.structs_lock) server_context.structs_lock->m_lock.unlock(); 40+ KJ_DEFER(if (server_context.structs_lock) { 41+ server_context.structs_lock->m_lock.lock(); 42 // If the IPC request was canceled, there is no point continuing to 43 // execute. It's also important to stop executing because the 44@@ -461,5 +462,5 @@ struct ServerCall 45 // https://github.com/bitcoin/bitcoin/issues/34250 and there would 46 // be a crash if execution continued. 47- if (server_context.canceled) throw InterruptException{"canceled"}; 48+ if (server_context.request_canceled) throw InterruptException{"canceled"}; 49 }); 50 return ProxyServerMethodTraits<typename decltype(server_context.call_context.getParams())::Reads>::invoke( 51diff --git a/include/mp/type-context.h b/include/mp/type-context.h 52index fb47e38..b896773 100644 53--- a/include/mp/type-context.h 54+++ b/include/mp/type-context.h 55@@ -98,19 +98,20 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn& 56 ConnThread request_thread; 57 bool inserted; 58- Mutex cancel_mutex; 59- Lock cancel_lock{cancel_mutex}; 60- server_context.cancel_lock = &cancel_lock; 61+ // Protects this request's params and response structs. 62+ Mutex structs_mutex; 63+ Lock structs_lock{structs_mutex}; 64+ server_context.structs_lock = &structs_lock; 65 server.m_context.loop->sync([&] { 66 // Detect request being canceled before it executes. 67 if (cancel_monitor.m_canceled) { 68- server_context.canceled = true; 69+ server_context.request_canceled = true; 70 return; 71 } 72 // Detect request being canceled while it executes. 
73 assert(!cancel_monitor.m_on_cancel); 74- cancel_monitor.m_on_cancel = [&server, &server_context, &cancel_mutex, req]() { 75+ cancel_monitor.m_on_cancel = [&server, &server_context, &structs_mutex, req]() { 76 MP_LOG(*server.m_context.loop, Log::Info) << "IPC server request #" << req << " canceled while executing."; 77- server_context.canceled = true; 78- // Lock cancel_mutex here to block the the event 79+ server_context.request_canceled = true; 80+ // Lock structs_mutex here to block the event 81 // loop thread and prevent it from deleting the 82 // request's params and response structs if the 83@@ -122,7 +123,7 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn& 84 // execution thread acquires it. So in addition to 85 // acquiring the lock, the execution thread always 86- // checks server_context.canceled after acquiring it 87+ // checks server_context.request_canceled after acquiring it 88 // to check if it is still safe to use the structs. 89- Lock{cancel_mutex}; 90+ Lock{structs_mutex}; 91 }; 92 // Update requests_threads map if not canceled. 93@@ -131,5 +132,5 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn& 94 [&] { return context_arg.getCallbackThread(); }); 95 }); 96- if (server_context.canceled) { 97+ if (server_context.request_canceled) { 98 MP_LOG(*server.m_context.loop, Log::Info) << "IPC server request #" << req << " canceled before it could be executed"; 99 return call_context; 100@@ -143,12 +144,12 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn& 101 const bool erase_thread{inserted}; 102 KJ_DEFER( 103- // Release the cancel lock before calling loop->sync and 104+ // Release structs_lock before calling loop->sync and 105 // waiting for the event loop thread, because if a 106 // cancelation happened it will run the on_cancel 107- // callback above. It's safe to release cancel_lock 108+ // callback above. 
It's safe to release structs_lock 109 // lock at this point because the fn.invoke() call below 110 // will be finished and no longer accessing the 111 // parameters or response structs. 112- cancel_lock.m_lock.unlock(); 113+ structs_lock.m_lock.unlock(); 114 // Erase the request_threads entry on the event loop 115 // thread with loop->sync(), so if the connection isDrahtBot requested review from ismaelsadeeq on Feb 18, 2026ryanofsky force-pushed on Feb 18, 2026ryanofsky commented at 2:19 pm on February 18, 2026: collaboratorUpdated 3fc752c050cec3982fc7abd16f07d0d6122866b9 -> 11c023e1006dae68f237a77d1b8005f47c8b9196 (
pr/promise.7->pr/promise.8, compare) applying a suggested request_canceled renaming and making this variable a normal bool instead of an atomic bool. (No reason it needs to be atomic anymore if a mutex is needed to prevent race conditions anyway.) Also improved handling of the “canceled before it could be executed” case in type-context.h to avoid skipping KJ_DEFER cleanup in that case. And added a suggested comment about the KJ_DEFER cleanup and improved other comments.
Updated 11c023e1006dae68f237a77d1b8005f47c8b9196 -> 55e9eac794fdb1a6d3a8a1225d3072ca89d9c72c (
pr/promise.8->pr/promise.9, compare) just fixing a comment in an intermediate commit (the change was applied to the wrong commit in the last push)
Updated 55e9eac794fdb1a6d3a8a1225d3072ca89d9c72c -> 186a6d358d42f22cada8f6bdf6bd72de2adfcddb (
pr/promise.9->pr/promise.10, compare) fixing up more comments
ryanofsky force-pushed on Feb 18, 2026
ryanofsky force-pushed on Feb 18, 2026
in include/mp/type-context.h:99 in 186a6d358d
96@@ -90,25 +97,62 @@ auto PassField(Priority<1>, TypeList<>, ServerContext& server_context, const Fn& 97 auto& request_threads = thread_context.request_threads; 98 ConnThread request_thread; 99 bool inserted;
Sjors commented at 3:37 pm on February 18, 2026:Should inserted be initialized?
ryanofsky commented at 4:36 pm on February 18, 2026:Should
inserted be initialized?
Good catch. Looks like that needs to be initialized now due to the early return added in the lambda below.
Also, probably unrelatedly, with the current version of this PR, ASAN reports a memory leak in the interface_ipc.py test from a leaked waitNext return value in https://github.com/bitcoin/bitcoin/pull/34422 https://github.com/bitcoin/bitcoin/actions/runs/22145238258/job/64020426758?pr=34422, and I don’t see anything that could lead to this happening, so I need to try to reproduce it locally to debug, and will mark the PR as a draft.
Would rather https://github.com/bitcoin/bitcoin/pull/34568 and https://github.com/bitcoin/bitcoin/pull/34284 be reviewed ahead of this PR anyway to get rid of these extra dependencies.
ryanofsky commented at 12:25 pm on February 19, 2026:
ryanofsky marked this as a draft on Feb 18, 2026
ryanofsky force-pushed on Feb 19, 2026
ryanofsky marked this as ready for review on Feb 19, 2026
ryanofsky commented at 12:17 pm on February 19, 2026: collaborator
Updated 186a6d358d42f22cada8f6bdf6bd72de2adfcddb -> 43242189b926a1adfb945f55cfd903ff9e951d96 (
pr/promise.10->pr/promise.11, compare) to avoid a memory leak detected by ASAN https://github.com/bitcoin/bitcoin/actions/runs/22145238258/job/64020426758?pr=34422 and caused by a clang bug described in https://github.com/bitcoin/bitcoin/pull/34422#issuecomment-3923603344. This should be ready for review again with the fix.
Updated 43242189b926a1adfb945f55cfd903ff9e951d96 -> 337fa9cac34ebc4f7cbf80e7b9a886d280098445 (
pr/promise.11->pr/promise.12, compare) fixing the uninitialized variable pointed out in #240 (review)
ryanofsky force-pushed on Feb 19, 2026
Sjors commented at 1:06 pm on February 19, 2026: member
The new CallThen helper looks good to me, but I’ll wait for the sanitizer issue to be resolved: https://github.com/bitcoin/bitcoin/pull/34422#issuecomment-3927099828
ryanofsky referenced this in commit c51b2fd654 on Feb 19, 2026
ryanofsky commented at 7:48 pm on February 19, 2026: collaborator
The new
CallThen helper looks good to me, but I’ll wait for the sanitizer issue to be resolved: bitcoin/bitcoin#34422 (comment)
Should be resolved now! (With help from Marco identifying the issue and probably saving me a lot of time debugging.)
Sjors commented at 7:41 am on February 20, 2026: member
ACK 337fa9cac34ebc4f7cbf80e7b9a886d280098445
ryanofsky force-pushed on Feb 20, 2026
ryanofsky commented at 4:24 pm on February 20, 2026: collaborator
Updated 337fa9cac34ebc4f7cbf80e7b9a886d280098445 -> ec788f1fbc936571f3989344d7559e8858a75b91 (
pr/promise.12->pr/promise.13, compare) just making small tweaks: renames, improved comments, updated commit messages, and improved exception logging
Updated ec788f1fbc936571f3989344d7559e8858a75b91 -> 0174450ca2e95a4bd1f22e4fd38d83b1d432ac1f (
pr/promise.13->pr/promise.14, compare) to fix iwyu errors https://github.com/bitcoin-core/libmultiprocess/actions/runs/22231928222/job/64313628070?pr=240
ryanofsky force-pushed on Feb 20, 2026
Sjors commented at 5:03 pm on February 20, 2026: member
utACK 0174450ca2e95a4bd1f22e4fd38d83b1d432ac1f if CI is happy. The additional “uncaught exception ([actual message])” seems useful.
ryanofsky referenced this in commit f7e3dccc55 on Feb 20, 2026
ryanofsky referenced this in commit 4e7142f9f0 on Feb 20, 2026
ryanofsky commented at 8:25 pm on February 20, 2026: collaborator
Planning to merge this soon to unblock https://github.com/bitcoin/bitcoin/pull/34422 and #218, but would still welcome any more review and am happy to make followup changes!
ryanofsky merged this on Feb 20, 2026
ryanofsky closed this on Feb 20, 2026
github-metadata-mirror
This is a metadata mirror of the GitHub repository bitcoin-core/libmultiprocess. This site is not affiliated with GitHub. Content is generated from a GitHub metadata backup.
generated: 2026-03-09 11:30 UTC
This site is hosted by @0xB10C
More mirrored repositories can be found on mirror.b10c.me