Fix “disconnected: write(m_post_fd, &buffer, 1): Broken pipe” EventLoop shutdown races. #129

pull ryanofsky wants to merge 1 commit into bitcoin-core:master from ryanofsky:pr/disconnected changing 2 files +32 −19
  1. ryanofsky commented at 1:40 pm on January 23, 2025: collaborator

    The EventLoop shutdown sequence has race conditions that, if threads run in an unexpected order, could cause it to shut down right before a removeClient write(m_post_fd, ...) call is about to happen and cause the write to fail.

    Cases where this can happen are described in https://github.com/bitcoin/bitcoin/issues/31151#issuecomment-2609686156. The possible causes are that (1) EventLoop::m_mutex is not used to protect some EventLoop member variables that are accessed from multiple threads (particularly m_num_clients and m_async_fns), and (2) the removeClient method can make unnecessary write(m_post_fd, ...) calls before the loop is supposed to exit because it does not check the m_async_fns.empty() condition, and these extra write calls can make the event loop exit early and cause the final write() call to fail. In practice, only the second cause seems to actually trigger this bug, but the PR fixes both possible causes (a simplified sketch follows below).

    Fixes https://github.com/bitcoin/bitcoin/issues/31151
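
    For illustration, a minimal standalone sketch of both fixes (the names follow the PR, but the EventLoop internals and the wakeLoop helper standing in for the write(m_post_fd, ...) wakeup are simplified or hypothetical; see the review diffs below for the actual change):

    #include <cassert>
    #include <condition_variable>
    #include <functional>
    #include <list>
    #include <mutex>

    struct LoopSketch {
        std::mutex m_mutex;
        std::condition_variable m_cv;
        int m_num_clients{0};
        std::list<std::function<void()>> m_async_fns;

        // Fix (1): shared members are only read while m_mutex is held; the lock
        // parameter documents that requirement to callers.
        bool done(std::unique_lock<std::mutex>& lock)
        {
            assert(lock.mutex() == &m_mutex && lock.owns_lock());
            assert(m_num_clients >= 0);
            return m_num_clients == 0 && m_async_fns.empty();
        }

        // Fix (2): only wake the event loop when the full exit condition holds,
        // not merely when m_num_clients drops to zero.
        void removeClient(std::unique_lock<std::mutex>& lock)
        {
            --m_num_clients;
            // previously: if (m_num_clients == 0) wakeLoop(lock);  // could wake too early
            if (done(lock)) wakeLoop(lock);
        }

        // Hypothetical stand-in for the real write(m_post_fd, ...) wakeup.
        void wakeLoop(std::unique_lock<std::mutex>&) { m_cv.notify_all(); }
    };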

  2. Fix "disconnected: write(m_post_fd, &buffer, 1): Broken pipe" EventLoop shutdown races.
    The EventLoop shutdown sequence has race conditions that could cause it to shut
    down right before a `removeClient` `write(m_post_fd, ...)` call is about to
    happen, if threads run in an unexpected order, and cause the write to fail.
    
    Cases where this can happen are described in
    https://github.com/bitcoin/bitcoin/issues/31151#issuecomment-2609686156 and the
    possible causes are that (1) `EventLoop::m_mutex` is not used to protect some
    EventLoop member variables that are accessed from multiple threads,
    particularly (`m_num_clients` and `m_async_fns`), and (2) the `removeClient`
    method can do unnecessary `write(m_post_fd, ...)` calls before the loop is
    supposed to exit because it is not checking the `m_async_fns.empty()`
    condition, and these multiple write calls can make the event loop exit early
    and cause the final `write()` call to fail. In practice, only the second cause
    seems to actually trigger this bug, but the PR fixes both possible causes.
    
    Fixes https://github.com/bitcoin/bitcoin/issues/31151
    
    Co-authored-by: Vasil Dimov <vd@FreeBSD.org>
    0e4f88d3f9
  3. ryanofsky commented at 3:24 pm on January 23, 2025: collaborator
    I’ve been testing this by running echoipc and stop_node in a loop with this fix and seeing if https://github.com/bitcoin/bitcoin/issues/31151 errors or any others are triggered. I ran with a TSAN build and a normal non-TSAN build and so far haven’t seen any errors, so this appears to be working.
  4. Sjors commented at 7:20 am on January 24, 2025: member
  5. Sjors referenced this in commit 4593733f0a on Jan 24, 2025
  6. in src/mp/proxy.cpp:200 in 5a6c9e0f71 outdated
    209+        if (read_bytes != 1) throw std::logic_error("EventLoop wait_stream closed unexpectedly");
    210+        std::unique_lock<std::mutex> lock(m_mutex);
    211+        if (m_post_fn) {
    212+            Unlock(lock, *m_post_fn);
    213+            m_post_fn = nullptr;
    214+            m_cv.notify_all();
    


    vasild commented at 10:10 am on January 24, 2025:
    Previously m_cv.notify_all() would have been called regardless of whether m_post_fn was nullptr or not. Now it will only be called if m_post_fn is not nullptr. Is this intended?

    ryanofsky commented at 4:49 pm on January 24, 2025:

    re: https://github.com/chaincodelabs/libmultiprocess/pull/129#discussion_r1928446166

    Previously m_cv.notify_all() would have been called regardless of whether m_post_fn was nullptr or not. Now it will only be called if m_post_fn is not nullptr. Is this intended?

    Yep, that’s intended. The only reason to notify a condition variable is after updating some shared state, like changing a reference count or adding or removing something from a queue. The notify_all() call here is made right after resetting m_post_fn, to notify waiting threads that it changed. In the alternate case, no state is changing and there’s nothing to notify about.
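
    For what it’s worth, a tiny standalone example (not library code, just an illustration of the principle) where notifying only makes sense after the shared state behind the wait predicate has changed:

    #include <condition_variable>
    #include <mutex>
    #include <optional>

    std::mutex m;
    std::condition_variable cv;
    std::optional<int> slot;  // stands in for shared state like m_post_fn

    void consume()
    {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return slot.has_value(); });
        slot.reset();      // shared state changed...
        cv.notify_all();   // ...so wake anyone waiting for the slot to empty
        // If slot had not changed, there would be nothing to notify about.
    }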

  7. in src/mp/proxy.cpp:280 in 5a6c9e0f71 outdated
    266@@ -268,6 +267,12 @@ void EventLoop::startAsyncThread(std::unique_lock<std::mutex>& lock)
    267     }
    268 }
    269 
    270+bool EventLoop::done(std::unique_lock<std::mutex>& lock)
    271+{
    272+    assert(m_num_clients >= 0);
    273+    return m_num_clients == 0 && m_async_fns.empty();
    274+}
    


    vasild commented at 10:17 am on January 24, 2025:

    What is the point of passing lock when it is not going to be used? Looks weird. Is this to show that m_mutex must be locked when calling this? If yes, then engaging the clang annotations would be more clear, because now lock could be anything: it may not own a mutex, or may own a mutex other than m_mutex. Maybe add these:

    0  bool EventLoop::done(std::unique_lock<std::mutex>& lock)
    1  {
    2      assert(m_num_clients >= 0);
    3+     assert(lock.mutex() == &m_mutex);
    4+     assert(lock.owns_lock());
    5      return m_num_clients == 0 && m_async_fns.empty();
    6  }
    

    ryanofsky commented at 4:53 pm on January 24, 2025:

    re: https://github.com/chaincodelabs/libmultiprocess/pull/129#discussion_r1928455500

    Maybe add these:

    Thanks, added.

    You are right that the code is written in this style to provide similar benefits to clang annotations, so it is a compile error for outside callers to access shared state without the needed lock. A few existing EventLoop methods already take lock parameters, and this new method follows that pattern.

    For background, the EventLoop code is basically some of the first code I wrote in https://github.com/bitcoin/bitcoin/pull/10102 and I’m pretty sure it was written before annotations were used much in bitcoin core. It probably would be a very good idea to add clang annotations here too, so that could be a followup, but in the meantime this convention prevents the common bug of forgetting to lock something before accessing shared state. (I think a bug where you would somehow find a different mutex and lock the wrong mutex would be more rare.)
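
    For reference, a hypothetical sketch of what that followup could look like with clang -Wthread-safety annotations (the class, member names, and attribute spellings here are illustrative, not the library’s actual declarations):

    #include <mutex>

    class AnnotatedLoop
    {
    public:
        // The analysis reports an error if this is called without m_mutex held,
        // similar to the lock-parameter convention but checked by the compiler.
        bool done() __attribute__((requires_capability(m_mutex)))
        {
            return m_num_clients == 0;
        }

    private:
        std::mutex m_mutex;
        int m_num_clients __attribute__((guarded_by(m_mutex))) = 0;
    };

    // Checked with something like: clang++ -fsyntax-only -Wthread-safety annotated_loop.cpp
    // (full checking of std::mutex members depends on the standard library exposing
    // capability annotations, or on wrapping the mutex in an annotated type).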

  8. in src/mp/proxy.cpp:188 in 5a6c9e0f71 outdated
    187@@ -188,29 +188,27 @@ void EventLoop::loop()
    188 
    


    vasild commented at 10:37 am on January 24, 2025:

    (posting in a random place coz github wouldn’t allow me to comment on the destructor)

    Not related to this PR, just an observation. The destructor is:

    0EventLoop::~EventLoop()
    1{
    2    if (m_async_thread.joinable()) m_async_thread.join();
    3    std::lock_guard<std::mutex> lock(m_mutex);
    4    KJ_ASSERT(m_post_fn == nullptr);
    5    KJ_ASSERT(m_async_fns.empty());
    6    KJ_ASSERT(m_wait_fd == -1);
    7    KJ_ASSERT(m_post_fd == -1);
    8    KJ_ASSERT(m_num_clients == 0);
    

    locking m_mutex shouldn’t be necessary because there cannot be two threads destroying the same object concurrently. Or if there are then this is a serious bug elsewhere that will cause a double free bug after the destructor completes. Also, there cannot be one thread destroying the object while another one is calling a method of that object.


    ryanofsky commented at 4:57 pm on January 24, 2025:

    re: https://github.com/chaincodelabs/libmultiprocess/pull/129#discussion_r1928480720

    locking m_mutex shouldn’t be necessary because there cannot be two threads destroying the same object concurrently. Or if there are then this is a serious bug elsewhere that will cause a double free bug after the destructor completes. Also, there cannot be one thread destroying the object while another one is calling a method of that object.

    I think that is basically true and the lock is probably unnecessary here.

    But there might be scenarios where thread sanitizer will throw errors without a lock, if it sees a variable being written here after being read from another thread with no synchronization in between. I am also not sure if clang thread safety annotations would complain without a lock here.

    Also to be pedantic though I think “there cannot be one thread destroying the object while another one is calling a method of that object” is not true. The state in the object is still valid while the destructor method is running, and it should be fine for other threads to access it. The destructor method is not different from other methods in this respect.


    vasild commented at 1:39 pm on January 27, 2025:

    Clang thread safety is aware of this:

     0class A
     1{
     2public:
     3    A()
     4    {
     5        x = 1;
     6    }
     7
     8    ~A()
     9    {
    10        x = 2;
    11    }
    12
    13    void f()
    14    {
    15        x = 3;
    16    }
    17
    18    std::mutex m;
    19    int x __attribute__((guarded_by(m)));
    20};
    

    Generates only a warning for f():

    0t.cc:194:9: warning: writing variable 'x' requires holding mutex 'm' exclusively [-Wthread-safety-analysis]
    1  194 |         x = 3;
    2      |         ^
    31 warning generated.
    

    I assume the thread sanitizer is the same even though I couldn’t demonstrate this easily.

    The state in the object is still valid while the destructor method is running, and it should be fine for other threads to access it.

    True, but then the problem arises that after the destructor finishes, without further synchronization, the memory that contains the object is freed. So it would be a serious read-after-free bug if another method is being executed while the destructor is being called because it could happen that the destructor finishes and the memory freed while the other method is still executing. Any mutexes within the object locked inside the destructor will be unlocked when the destructor finishes and before the memory is freed, so they cannot be used for synchronization with the other method.


    ryanofsky commented at 2:12 pm on January 27, 2025:

    re: https://github.com/chaincodelabs/libmultiprocess/pull/129#discussion_r1930545346

    Clang thread safety is aware of this:

    Thanks, this is interesting. I’m a little unsure how clang knows that no mutex is required to access the variable in the destructor. It doesn’t seem like a totally safe assumption to make because I could easily think of cases where locking the mutex in a destructor would be required, and then clang wouldn’t diagnose any problem. But maybe it is just assuming that typical destructors aren’t blocking or doing anything complicated, and the mutex is about to be destroyed anyway, and that is enough proof of ownership.

    I assume the thread sanitizer is the same even though I couldn’t demonstrate this easily.

    This could also be the case, but I wouldn’t assume it. Thread sanitizer (as far as I know) does not treat destructors differently than other methods, and is operating at a lower level, just looking at what mutexes are held during reads and writes and what synchronization events are happening between them.

    It’s definitely possible you are right and dropping locks from this code is perfectly ok. So just for the sake of this PR I don’t want to expand it by dropping an already existing lock, but that could be a good change to make in a followup.

    The state in the object is still valid while the destructor method is running, and it should be fine for other threads to access it.

    True, but then the problem arises that after the destructor finishes, without further synchronization, the memory that contains the object is freed. So it would be a serious read-after-free bug if another method is being executed while the destructor is being called because it could happen that the destructor finishes and the memory freed while the other method is still executing. Any mutexes within the object locked inside the destructor will be unlocked when the destructor finishes and before the memory is freed, so they cannot be used for synchronization with the other method.

    This is all true but it is ok for a destructor to block and wait for other events and other threads. And as long as it is waiting, it is ok for other threads to access the object and call its methods.
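
    As a toy illustration of that last point (not library code): while the destructor below blocks in join(), the worker thread can still safely call methods on the object, because member destruction only begins after the destructor body finishes.

    #include <atomic>
    #include <thread>

    struct Worker {
        std::atomic<bool> stop{false};
        std::thread t{[this] { while (!stop) tick(); }};

        void tick() { /* may read and write members; the object is still alive */ }

        ~Worker()
        {
            stop = true;
            t.join();  // other-thread access ends here, before members are destroyed
        }
    };

    int main() { Worker w; }  // the destructor blocks until the worker loop exits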

  9. in src/mp/proxy.cpp:191 in 5a6c9e0f71 outdated
    187@@ -188,29 +188,27 @@ void EventLoop::loop()
    188 
    189     kj::Own<kj::AsyncIoStream> wait_stream{
    190         m_io_context.lowLevelProvider->wrapSocketFd(m_wait_fd, kj::LowLevelAsyncIoProvider::TAKE_OWNERSHIP)};
    191+    int post_fd{m_post_fd};
    


    vasild commented at 10:44 am on January 24, 2025:
    Why make a copy of m_post_fd? This code here is the only one that can change it. That is, for sure, at the end where close(post_fd) is called, post_fd will be equal to m_post_fd, right?

    ryanofsky commented at 5:00 pm on January 24, 2025:

    re: https://github.com/chaincodelabs/libmultiprocess/pull/129#discussion_r1928489254

    Why make a copy of m_post_fd? This code here is the only one that can change it. That is, for sure, at the end where close(post_fd) is called, post_fd will be equal to m_post_fd, right?

    This is just saving a copy of the variable while the lock is held. There are three methods (loop, post, removeClient) where m_post_fd is saved to a temporary variable before the lock is released and the descriptor is used.

    As you say, this should never be necessary because the code that changes this variable can only run after the code that reads it, but at least when I started debugging this issue, thread sanitizer complained that this variable was being read and written to without synchronization, so I stopped accessing it without a lock to prevent this.

    Going forward it might also be clearer to access m_post_fd with locks, and this might make it easier to add clang annotations too (not sure).

    I wouldn’t object to a followup accessing m_post_fd directly without locks, but I would want to make sure this is compatible with the tools we use (tsan, thread safety annotations) given their limitations.


    vasild commented at 2:00 pm on January 27, 2025:

    This is just saving a copy of the variable while the lock is held.

    Hmm, but the lock is not held when making the copy at line 191?

    https://github.com/chaincodelabs/libmultiprocess/blob/0e4f88d3f9c66c678cd56154e11827d5c9ec9f9e/src/mp/proxy.cpp#L191-L214

    0191 int post_fd{m_post_fd};
    1...
    2213 KJ_SYSCALL(::close(post_fd));
    3214 std::unique_lock<std::mutex> lock(m_mutex);
    

    ryanofsky commented at 2:20 pm on January 27, 2025:

    re: https://github.com/chaincodelabs/libmultiprocess/pull/129#discussion_r1930576629

    Hmm, but the lock is not held when making the copy at line 191?

    Oh, that is a good point, I didn’t notice that. For consistency it would be good to just acquire the lock wherever the variable is used, even when it is not needed, or have some other more consistent rule. I think a good followup would be to add clang annotations, add the lock everywhere they say it is required, and remove it in places where it isn’t necessary and the annotations don’t say it is required. This would add an unnecessary lock here as you are suggesting, and remove an unnecessary lock in the destructor as you have also suggested.


    vasild commented at 3:51 pm on January 27, 2025:
    Would it be possible to use std::atomic_int for m_post_fd? If it does not have to be in sync with other variables…

    ryanofsky commented at 4:11 pm on January 27, 2025:

    re: https://github.com/chaincodelabs/libmultiprocess/pull/129#discussion_r1930755652

    Would it be possible to use std::atomic_int for m_post_fd? If it does not have to be in sync with other variables…

    Definitely possible. I think it wouldn’t make a performance difference, because in practice whenever m_post_fd is used some state update is being posted, and a lock is needed anyway to update the shared state. But there could be other reasons for preferring an atomic, like style. It’s very possible that writing this code in a different style could make it clearer. I think at the very least it should be using clang thread safety annotations, which would make the intent more clear.


    vasild commented at 12:35 pm on January 28, 2025:
    Yeah, I think std::atomic wouldn’t make a difference wrt performance but would make the code more clear.
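
    For illustration, a minimal standalone sketch of that std::atomic idea (the names mirror the discussion; this is not the library’s code). Making the descriptor atomic makes the unsynchronized read well-defined, so no local post_fd copy or lock is needed just to read it:

    #include <atomic>
    #include <unistd.h>

    struct LoopLike {
        std::atomic<int> m_post_fd{-1};

        void wake()
        {
            char buffer = 0;
            // The read of m_post_fd itself needs no lock; any ordering with other
            // shared state (m_post_fn, m_async_fns, ...) would still need the mutex.
            (void)::write(m_post_fd.load(), &buffer, 1);
        }
    };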
  10. in src/mp/proxy.cpp:230 in 5a6c9e0f71 outdated
    219@@ -222,9 +220,10 @@ void EventLoop::post(const std::function<void()>& fn)
    220     std::unique_lock<std::mutex> lock(m_mutex);
    221     m_cv.wait(lock, [this] { return m_post_fn == nullptr; });
    222     m_post_fn = &fn;
    223+    int post_fd{m_post_fd};
    224     Unlock(lock, [&] {
    225         char buffer = 0;
    226-        KJ_SYSCALL(write(m_post_fd, &buffer, 1));
    227+        KJ_SYSCALL(write(post_fd, &buffer, 1));
    


    vasild commented at 10:49 am on January 24, 2025:
    Am I right that the problem with this code was that the loop() method may finish just before the write() call and set m_post_fd to -1, and this may end up calling write(-1, ...)? If yes, then this change is not sufficient - if the same happens, then it will call write(n, ...) where n is not -1 but is some number, a stale file descriptor that has already been closed. Even worse, n may represent a newly opened file if the file descriptor numbers are reused by the OS.

    ryanofsky commented at 5:07 pm on January 24, 2025:

    re: https://github.com/chaincodelabs/libmultiprocess/pull/129#discussion_r1928495005

    Am I right that the problem with this code was […]

    This was actually not the problem. The problem wasn’t that the code was trying to write(-1, ...), although that could have been a possible outcome of the bug. The problem was not even that it was trying to write to a closed file descriptor, although that could have been another possible outcome of the bug. Instead, the actual problem, whenever it happened, was that the write() call failed because it was writing to a pipe that was closed on the reading side.

    The cause of all these problems (theoretical and actual) was that removeClient was only checking the m_num_clients == 0 condition to see if it should write to the pipe before this PR, instead of checking the m_num_clients == 0 && m_async_fns.empty() condition like it is doing after this PR. So previously, it could write to the pipe multiple times before the loop was actually supposed to exit, and depending on thread scheduling this could occasionally cause the loop to exit slightly too early, immediately before the final write() call was made, causing the write() call to fail.


    vasild commented at 2:14 pm on January 27, 2025:

    Ok, so there is another problem in master which is not addressed in this PR:

    1. EventLoop::post() is executing in thread T1
    2. EventLoop::loop() is executing in thread T2
    3. Let’s say m_post_fd is 5
    4. post() / T1 makes a copy of m_post_fd, post_fd = 5 and unlocks m_mutex
    5. loop() / T2 does close(5) and assigns -1 to m_post_fd
    6. post() / T1 calls write(post_fd, ...) with post_fd being 5. At this point 5 does not refer to an opened file so the call fails, or worse, another file has been opened by the process and the number 5 has been reused for it and now 5 refers to an opened file which is different than the file associated with 5 when the copy post_fd was made (in 4. above)

    Right? Or do I miss some high level context here (I am new to the internals of this library)?


    ryanofsky commented at 2:20 pm on January 27, 2025:

    re: https://github.com/chaincodelabs/libmultiprocess/pull/129#discussion_r1930597952

    Right? Or do I miss some high level context here (I am new to the internals of this library)?

    Yes, the high level context that prevents what you described from happening is the addClient/removeClient reference counting. If you call post() without calling addClient(), the race condition you described will be present, but that is a higher level bug. The eventloop code is assuming that addClient and removeClient are being used whenever post() is used.

    The bug being fixed in this PR is a lower-level bug that only happens during shutdown when the reference count is 0 and, at the very last second, right when the very last removeClient call in the program is about to call write(m_post_fd), the event loop wakes up because of previous spurious writes and closes the read end of the pipe (before closing m_post_fd as well).

    The cause of the spurious writes is that previously removeClient was only checking the m_num_clients == 0 condition so it was doing spurious write(m_post_fd) calls which could cause the event loop to wake up too early and exit slightly before it was supposed to exit. Now removeClient is checking m_num_clients == 0 && m_async_fns.empty() so it will not do spurious writes.


    vasild commented at 10:05 am on January 28, 2025:

    Ok, let’s conclude this. I have another question. Posting here because it is related:

    This pattern, used in post() and removeClient():

    1. lock mutex that protects m_post_fd
    2. make a local copy of m_post_fd in post_fd
    3. unlock
    4. use the local copy: write(post_fd)

    looks odd to me. Assuming the usual semantics that the mutex is protecting the variable - it is not allowed to change while holding the mutex and can change when not holding the mutex. This means that at the time of 4. m_post_fd could have changed after the unlock in 3. Above you said that higher level logic implies that loop(), which is the only one that changes m_post_fd, will not finish while post() is executing. Ok, but then that means m_post_fd does not need any protection. Is this just trying to silence some false alarm by the thread sanitizer?


    ryanofsky commented at 12:41 pm on January 28, 2025:

    re: https://github.com/chaincodelabs/libmultiprocess/pull/129#discussion_r1931878892

    Is this just trying to silence some false alarm by the thread sanitizer?

    Not exactly. I think you keep basically asking the question again and again “Why is m_post_fd accessed with a lock? It doesn’t need to be locked.” And sorry I haven’t given a more direct answer. It just has to do with the behavior of thread sanitizer with earlier versions of this code. When I first enabled thread sanitizer to debug this issue, I fixed immediate tsan errors that happened running the code by restructuring the loop in the way it is currently written, but without the post_fd variable. But then when I ran the test in a loop, after around 20-30 minutes eventually there were tsan errors complaining both about the m_post_fd variable and the underlying file descriptor having conflicting read/write accesses from multiple threads without synchronization. So I added the int post_fd = m_post_fd variables, and that fixed the errors about the m_post_fd variable, but not the errors about the underlying file descriptor. Eventually I figured out the underlying problem, which was spurious writes in removeClient, and fixed that issue too, but I did not go back and remove the post_fd = m_post_fd variables.

    I did not feel any need to remove these variables because I was thinking of m_post_fd, m_num_clients, and m_async_fns as a unit and wanted to follow a simple rule of using a single lock to protect them. You’re completely right though that a lock is probably not needed to access m_post_fd, although I still think clang thread annotations might complain if it is accessed without a lock outside of the destructor. I am not 100% sure that tsan would not complain if I removed the post_fd variables, but I am 99% sure that it would be ok, so that change could definitely be a followup.

    Thanks for asking this and feel free to ask about anything else that isn’t clear or could be improved.

  11. vasild commented at 10:54 am on January 24, 2025: contributor

    From the OP:

    the possible causes are that (1) EventLoop::m_mutex is not used to protect some EventLoop member variables that are accessed from multiple threads

    Which variables?

  12. ryanofsky force-pushed on Jan 24, 2025
  13. ryanofsky commented at 5:13 pm on January 24, 2025: collaborator

    Thanks for the detailed review!

    Updated 5a6c9e0f7111e0eb033aee79c20a6a7429c91133 -> 0e4f88d3f9c66c678cd56154e11827d5c9ec9f9e (pr/disconnected.1 -> pr/disconnected.2, compare) adding suggested asserts and updating some comments.


    re: https://github.com/chaincodelabs/libmultiprocess/pull/129#pullrequestreview-2572235134

    From the OP:

    the possible causes are that (1) EventLoop::m_mutex is not used to protect some EventLoop member variables that are accessed from multiple threads

    Which variables?

    Updated the description with this information, but checking the previous version of the code, it looks like the variables accessed from multiple threads and used without a lock were m_num_clients, m_async_fns, and m_post_fd.

    Also, to be clear, the intermittent failure that’s being fixed here was not caused by lack of locking as far as I can tell. The failure was caused by a race between the startAsyncThread and removeClient calls at the end of the ~Connection destructor as described in https://github.com/bitcoin/bitcoin/issues/31151#issuecomment-2609686156

  14. Sjors referenced this in commit 67417894dd on Jan 27, 2025
  15. Sjors referenced this in commit 7388d1c09c on Jan 27, 2025
  16. ryanofsky commented at 2:50 pm on January 27, 2025: collaborator

    Thanks for reviewing and keeping discussion going on this. The bug is pretty complicated and hard to explain, but I want to try to make everything as clear as possible so the change is understood.

    I am probably about to merge this PR shortly, because the bug is not straightforward to reproduce and I am pretty confident this change fixes it, and 100% confident this change improves locking in general and fixes a bunch of immediate -fsanitize=thread bugs. I also want to bump the libmultiprocess version in bitcoin core to be able to support building libmultiprocess as a git subtree, and would like this fix to be included to avoid any potential issues in CI in that PR and other multiprocess bitcoin core PRs.

    I would like to continue to improve and discuss this code and talk about it here, or in a followup PR that adds clang thread safety annotations, which could be another place to make related improvements.

  17. Sjors commented at 2:59 pm on January 27, 2025: member
    If you do need to make further changes here, can you rebase it? That makes it easier to update https://github.com/bitcoin/bitcoin/pull/30975.
  18. ryanofsky merged this on Jan 27, 2025
  19. ryanofsky closed this on Jan 27, 2025

  20. ryanofsky referenced this in commit eb27f5918f on Jan 27, 2025
  21. Sjors referenced this in commit 57e60735cf on Jan 27, 2025
  22. Sjors referenced this in commit 71be9377ee on Jan 27, 2025
  23. vasild commented at 3:53 pm on January 27, 2025: contributor
    Post merge ACK, the change looks good and the discussions above drifted to things that are out of the scope of this PR.
  24. ryanofsky referenced this in commit 90b116bd70 on Jan 27, 2025
  25. ryanofsky referenced this in commit 2221c8814d on Jan 27, 2025
  26. Sjors referenced this in commit b66fe2fc03 on Jan 28, 2025
  27. fanquake referenced this in commit ad2f9324c6 on Jan 29, 2025
  28. ryanofsky referenced this in commit ce4814f46d on Feb 10, 2025
  29. ryanofsky commented at 5:30 am on February 10, 2025: collaborator
    To follow up on all the discussion about clang annotations above, some basic ones are now added in #160
  30. ryanofsky referenced this in commit 5f6313251c on Apr 24, 2025
  31. ryanofsky referenced this in commit a848ec60d4 on Apr 24, 2025
  32. ryanofsky referenced this in commit 997d1948bd on Apr 24, 2025
  33. ryanofsky referenced this in commit 8bd7edb91f on May 15, 2025
  34. ryanofsky referenced this in commit bbb3793341 on May 15, 2025
  35. ryanofsky referenced this in commit e1d5403f7c on May 30, 2025
  36. ryanofsky referenced this in commit ac14f7e3c0 on May 30, 2025
  37. ryanofsky referenced this in commit 6ecd3c63d5 on Jun 5, 2025
  38. ryanofsky referenced this in commit 749e4f295c on Jun 13, 2025
  39. ryanofsky referenced this in commit 1254c5e6bd on Jun 13, 2025
  40. ryanofsky referenced this in commit 7888088940 on Jun 13, 2025
  41. ryanofsky referenced this in commit 607006189b on Jun 13, 2025
  42. ryanofsky referenced this in commit 9b8ed3dc5f on Jun 16, 2025
  43. ryanofsky referenced this in commit 258a617c1e on Jun 19, 2025
  44. janus referenced this in commit 311822f35f on Sep 1, 2025
