Even if we abandon the approach here in favor of a sidecar (see experimental below), it is useful to review the above changes. They are fully reusable as demonstrated in https://github.com/Sjors/bitcoin/pull/48. And until we actually ship something else, the above is being used by people, including on (very limited) mainnet.
The pull requests below enable a sidecar approach:
#27433 (getblocktemplate improvements for segwit and sigops by Sjors)
#26812 (test: add end-to-end tests for CConnman and PeerManager by vasild)
#26697 (logging: use bitset for categories by LarryRuane)
#26619 (log: expand BCLog::LogFlags (categories) to 64 bits by LarryRuane)
If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
DrahtBot added the label
CI failed
on Feb 14, 2024
DrahtBot
commented at 12:27 pm on February 14, 2024:
contributor
🚧 At least one of the CI tasks failed. Make sure to run all tests locally, according to the documentation.
Possibly this is due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.
Leave a comment here, if you need help tracking down a confusing failure.
For the time being I just added handling for MSG_MORE (on e.g. macOS sequential messages are sent separately while on Linux they’re combined). I also made the timeouts a bit longer.
Hopefully that does the trick. This can be revisited closer to the time when the Template Provider is ready for its own PR.
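For context, a minimal sketch of the platform difference involved, using a raw BSD socket rather than the actual test code (the helper below is purely illustrative):

```cpp
// Illustrative only: MSG_MORE is a Linux-specific send() flag that hints to
// the kernel that more data follows, so consecutive writes may be coalesced
// into a single TCP segment. macOS has no MSG_MORE, so back-to-back messages
// tend to arrive as separate reads, and a test has to accept either behaviour.
#include <sys/socket.h>
#include <cstddef>

#ifndef MSG_MORE
#define MSG_MORE 0 // no-op on platforms without it (e.g. macOS)
#endif

// Hypothetical helper: send two messages that may or may not be combined.
static void SendTwoMessages(int fd, const char* a, size_t a_len,
                            const char* b, size_t b_len)
{
    send(fd, a, a_len, MSG_MORE); // may be held back until the next send
    send(fd, b, b_len, 0);        // on Linux both often arrive as one read
}
```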
bee6bdf0a5 test: put the generic parts from StaticContentsSock into a separate class
f42e4f3b3b test: add a mocked Sock that allows inspecting what has been Send() to it
and then how to use that in the last commit of the same PR:
8b10990aa0 test: add unit tests exercising full call chain of CConnman and PeerManager
With those it is possible to send/receive raw bytes to/from the (mocked) socket, or NetMsgs, e.g.:
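A rough sketch, with hypothetical names (the actual class and method names come from #26812 and may differ):

```cpp
// Hypothetical illustration of driving CConnman/PeerManager through a mocked
// socket in a unit test: bytes "received" from the peer are queued up front,
// and whatever the node Send()s lands in an inspectable buffer instead of a
// real network connection. MockSock, PushRecvBytes and GetSentBytes are
// placeholder names, not the real helpers.
auto sock = std::make_unique<MockSock>();
sock->PushRecvBytes({0xde, 0xad, 0xbe, 0xef}); // raw bytes from the "peer"
// ...hand the socket to CConnman and let PeerManager process the messages...
const std::vector<uint8_t> sent = sock->GetSentBytes(); // inspect what was Send()'d
BOOST_CHECK(!sent.empty());
```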
TSAN is tripping up somewhere around here. The last thing it logs is - Connect 2 transactions:. It doesn’t get to - Verify ... txins:. I wonder if this is related to mock time, which I’m testing in https://github.com/Sjors/bitcoin/pull/34
When running this locally on Ubuntu with clang 16.0.6 I get a WARNING: ThreadSanitizer: data race and significantly more details (still a bit cryptic, but hopefully enough to figure out what’s happening).
maflcko
commented at 11:39 am on February 15, 2024:
Ah, I guess the unit tests don’t capture the tsan output?
maflcko
commented at 11:41 am on February 15, 2024:
maflcko
commented at 11:42 am on February 15, 2024:
Maybe good to re-check this when/after the cmake migration is done?
Sjors force-pushed
on Feb 15, 2024
Sjors force-pushed
on Feb 15, 2024
Sjors
commented at 2:07 pm on February 15, 2024:
member
Added m_tp_mutex to Sv2TemplateProvider.
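Roughly, the pattern looks like this (using the annotations from sync.h; the class body and member below are illustrative, not the actual layout):

```cpp
// Illustrative only: a dedicated mutex for Template Provider state, with
// clang thread-safety annotations so unlocked access is flagged at compile time.
#include <sync.h>

class Sv2TemplateProviderExample
{
    Mutex m_tp_mutex;
    // Hypothetical piece of state guarded by the new mutex:
    unsigned int m_template_id GUARDED_BY(m_tp_mutex){0};

public:
    void OnNewTemplate() EXCLUSIVE_LOCKS_REQUIRED(!m_tp_mutex)
    {
        LOCK(m_tp_mutex);
        ++m_template_id;
    }
};
```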
Sjors
commented at 3:12 pm on February 15, 2024:
member
Bumping macOS to 14 on the CI does not help (tried in https://github.com/Sjors/bitcoin/pull/35). I also can’t reproduce this failure on my own Intel macOS machines, not on 13.6.4 and not on 14.2.1. A Sock mock is probably the most robust solution, but it’d be nice to find another workaround.
Another option to consider is using the functional test framework instead, since these are not really unit tests. However that involves implementing the sv2 noise protocol in Python and a bunch of other work to export transport functions to the functional test framework. If anyone feels up to that challenge, let me know…
Sjors force-pushed
on Feb 15, 2024
DrahtBot removed the label
CI failed
on Feb 15, 2024
Sjors force-pushed
on Feb 16, 2024
Sjors
commented at 1:05 pm on February 16, 2024:
member
Added m_clients_mutex, similar to the one in CConnman. Fixed a deadlock on g_best_block_mutex that occurred when we receive a fresh block simultaneously from a peer and from a client (SubmitSolution message).
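The deadlock is the classic lock-ordering kind; a sketch of the rule being applied inside the Template Provider (function names are placeholders, not the actual fix):

```cpp
// Illustrative only: if one thread takes m_clients_mutex and then
// g_best_block_mutex, while another thread (processing a block from a peer)
// takes them in the opposite order, both can block forever. Pick one order
// and never hold one mutex while acquiring the other in reverse.
void OnSubmitSolution()
{
    // Submit/validate the block first; this may take g_best_block_mutex
    // internally, so do it while m_clients_mutex is NOT held.
    ProcessSolution();
    // Only afterwards touch per-client bookkeeping.
    LOCK(m_clients_mutex);
    NotifyClients();
}
```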
Sjors
commented at 2:28 pm on February 16, 2024:
member
Windows test fails with:
test/sv2_noise_tests.cpp(62): fatal error: in "sv2_noise_tests/certificate_test": critical check !alice_certificate.Validate(XOnlyPubKey(alice_authority_key.GetPubKey())) has failed
DrahtBot added the label
CI failed
on Feb 16, 2024
DrahtBot
commented at 2:46 pm on February 16, 2024:
contributor
🚧 At least one of the CI tasks failed. Make sure to run all tests locally, according to the documentation.
Possibly this is due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.
Leave a comment here, if you need help tracking down a confusing failure.
DrahtBot removed the label
CI failed
on Feb 16, 2024
DrahtBot added the label
CI failed
on Feb 29, 2024
DrahtBot
commented at 8:21 pm on February 29, 2024:
contributor
🚧 At least one of the CI tasks failed. Make sure to run all tests locally, according to the documentation.
Possibly this is due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.
Leave a comment here, if you need help tracking down a confusing failure.
Sjors
commented at 11:52 am on March 1, 2024:
member
Rebased and hopefully satisfied the new bitcoin-config.h linter rule.
This kind of incidental failure in sv2_template_provider_tests can probably be avoided with the mock socket introduced / improved in #26812.
DrahtBot removed the label
CI failed
on Mar 1, 2024
DrahtBot added the label
CI failed
on Mar 17, 2024
DrahtBot
commented at 6:23 am on March 17, 2024:
contributor
🚧 At least one of the CI tasks failed. Make sure to run all tests locally, according to the documentation.
Possibly this is due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.
Leave a comment here, if you need help tracking down a confusing failure.
Sjors
commented at 8:01 pm on March 18, 2024:
member
Rebased after #27375. Changed the default binding from 0.0.0.0 to 127.0.0.1 to make it abundantly clear that it’s currently unsafe to expose this to the big bad internet. The previous behaviour can be achieved with the new -sv2bind=0.0.0.0 option.
Finally, we now immediately listen for new client connections; previously we’d wait for IBD to finish. However, we still hold off on sending template updates until IBD is done - except on signet, because you may be the only miner and after a day of not mining the chain would get stuck.
DrahtBot removed the label
CI failed
on Mar 19, 2024
DrahtBot added the label
Needs rebase
on Mar 20, 2024
Sjors force-pushed
on Mar 22, 2024
DrahtBot removed the label
Needs rebase
on Mar 22, 2024
DrahtBot added the label
Needs rebase
on Apr 2, 2024
Sjors force-pushed
on Apr 5, 2024
DrahtBot removed the label
Needs rebase
on Apr 5, 2024
DrahtBot added the label
CI failed
on Apr 5, 2024
DrahtBot
commented at 11:17 pm on April 5, 2024:
contributor
🚧 At least one of the CI tasks failed. Make sure to run all tests locally, according to the documentation.
Possibly this is due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.
Leave a comment here, if you need help tracking down a confusing failure.
(ignoring CI failure, as I mentioned above I plan on making the tests more robust after #26812)
Sjors referenced this in commit
2125fc5616
on May 3, 2024
DrahtBot added the label
Needs rebase
on May 15, 2024
Sjors force-pushed
on May 30, 2024
Sjors
commented at 9:20 am on May 30, 2024:
member
I dropped 3559085dcc9ee687c347383a0c2a63f3ea16f001 and 3f4c1d1e515a840c4b119e3ba65a8391d4fc7f6b which introduced common/transport.h. It’s difficult to maintain such a large diff and it’s currently causing a link error. Here’s a branch with both of them: https://github.com/Sjors/bitcoin/tree/2024/05/transport
I’d still like to move Transport out of libbitcoin_node into libbitcoin_common, but this is probably better done in a parallel PR.
At the request of people testing Stratum v2 I’m adding testnet4 to this branch, but bear in mind it’s not final.
I still need to do some manual testing on this last push, so be careful.
Sjors force-pushed
on May 30, 2024
DrahtBot removed the label
Needs rebase
on May 30, 2024
Sjors force-pushed
on Jun 7, 2024
Sjors
commented at 2:25 pm on June 7, 2024:
member
Updated to use the interface proposed in #30200. This also fixes a small bug: -blockmintxfee and -blockmaxweight are no longer ignored. When the latter argument is unset, by default blocks have a 4000 byte safety margin, just like with getblocktemplate.
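For reference, the relevant default already exists in policy.h; a sketch of the fallback when -blockmaxweight is unset (the wiring is simplified, not the PR’s code):

```cpp
// DEFAULT_BLOCK_MAX_WEIGHT is defined as MAX_BLOCK_WEIGHT - 4000, i.e. the
// same safety margin that getblocktemplate leaves for the coinbase.
#include <consensus/consensus.h> // MAX_BLOCK_WEIGHT
#include <policy/policy.h>       // DEFAULT_BLOCK_MAX_WEIGHT
#include <optional>

static_assert(DEFAULT_BLOCK_MAX_WEIGHT == MAX_BLOCK_WEIGHT - 4000);

// Simplified fallback: use -blockmaxweight if given, else the default margin.
unsigned int BlockMaxWeight(std::optional<unsigned int> user_value)
{
    return user_value.value_or(DEFAULT_BLOCK_MAX_WEIGHT);
}
```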
Sjors force-pushed
on Jun 7, 2024
Sjors force-pushed
on Jun 7, 2024
Sjors force-pushed
on Jun 7, 2024
DrahtBot added the label
Needs rebase
on Jun 12, 2024
Sjors force-pushed
on Jun 18, 2024
Sjors
commented at 3:29 pm on June 18, 2024:
member
Rebased, squashed Mining interface usage into the existing commits and fixed the usage of coinbase_output_max_additional_size.
DrahtBot removed the label
Needs rebase
on Jun 18, 2024
Sjors force-pushed
on Jun 18, 2024
Sjors force-pushed
on Jun 18, 2024
Sjors force-pushed
on Jun 18, 2024
Sjors force-pushed
on Jun 20, 2024
Sjors force-pushed
on Jun 20, 2024
Sjors
commented at 1:56 pm on June 20, 2024:
member
I split the commit that introduces sv2_messages.h into two parts:
9fcc95d837dbd20c1003b4c2d3c90b202e0629bd: introduces a single simple message Sv2CoinbaseOutputDataSizeMsg and the plumbing it needs
00d1408cc7cb263fb6e956649a41e0b02e2378aa: adds the other messages
This makes it easier to introduce Sv2Transport in a separate PR, which only needs the first commit.
DrahtBot removed the label
CI failed
on Jun 20, 2024
stickies-v
commented at 1:45 pm on June 21, 2024:
contributor
I’m strongly in favour of having Template Provider (TP) support for SV2, so thank you for your hard work on this @Sjors. I am very concerned about the scope of this approach though, and am therefore leaning towards an Approach NACK (but open to being persuaded otherwise).
From my reading of the discussion in #27854, it seems the main technical hurdle to just having a separate sv2tpd daemon using bitcoind’s existing interfaces is getblocktemplate having serious limitations, as outlined here. That’s a sensible concern, and one we need to address.
Can the getblocktemplate concerns not (at least largely) be addressed by upgrading the existing RPC method, or adding a new one that’s minimal in scope and not SV2 opinionated? I understand this will still be a pretty significant overhaul, but probably still significantly smaller than what is proposed at the moment?
I’m happy to be convinced otherwise, but as it stands, implementing a separate sv2tpd process that uses bitcoind’s (minimally) improved interfaces seems like a much more sensible approach to me, and afaict would still address all points listed in https://github.com/Sjors/bitcoin/blob/sv2/doc/stratum-v2.md#advantage-over-getblocktemplate-rpc. I’m curious to hear your thoughts on this.
DrahtBot added the label
Needs rebase
on Jun 21, 2024
Sjors
commented at 4:11 pm on June 21, 2024:
member
upgrading the existing RPC method, or adding a new one that’s minimal in scope and not SV2 opinionated
I don’t think our RPC is suitable for this, because it’s inherently poll based. We want to push out new block templates as soon as something better is found.
Another major change compared to the getblocktemplate RPC is that the TP holds on to templates for a while (currently until after the next block is found, though this could be shorter). It needs to do this because solutions for custom block templates have to be submitted by the node that created them. The pool will try, but it might fail if it’s missing transactions.
implementing a separate sv2tpd process
This PR also aims to run in a separate process, eventually anyway. I guess you’re saying it should be separate software altogether, like SRI?
It would move some of the complexity from this PR elsewhere, mainly the encryption and transport layers. It could poll once a second; we’d need changes to hold on to previously generated templates, a new RPC to submit a solution that refers to a previous template, and a few other things.
As a practical matter I’m not sure who would be implementing that; SRI is already a pretty complicated codebase with a shortage of developer resources. Some people have been testing this branch in combination with SRI and seem quite happy with it, despite the code being nowhere near merge-ready. But in principle it’s possible.
Sjors force-pushed
on Jun 21, 2024
Sjors
commented at 4:58 pm on June 21, 2024:
member
Rebased after #30314 and #30202. Incorporated the (light) transport refactor from #30315 to make sure that didn’t break anything.
DrahtBot removed the label
Needs rebase
on Jun 21, 2024
stickies-v
commented at 5:48 pm on June 21, 2024:
contributor
I don’t think our RPC is suitable for this, because it’s inherently poll based. We want to push out new block templates as soon as something better is found.
What about a combination of extending the zmq and rest interface and/or using longpollid, as suggested e.g. here? It seems like that would already improve things significantly, without having to add the encryption and transport layer?
It needs to do this because solutions for custom block templates have to be submitted by the node that created it. The pool will try, but it might fail if it’s missing transactions.
I’m not sure I understand this. This PR is not an sv2 discussion, so if my understanding is wrong we don’t need to talk about it here, but are you saying the pool will approve a proposed block template without having/keeping all the transactions? That seems undesired?
Regardless, I’m not sure the TP necessarily needs to hold on to block templates when sv2tpd caches all block template transactions and adds any missing ones back to Core through RPC before submitting a block? Perhaps the existing interfaces aren’t sufficient currently, but then again - perhaps we could improve them instead of wholesale implementing sv2 logic?
I guess you’re saying it should be separate software altogether
Yes, a separate project and a separate process would be my strong preference. Again, I do think it’s important Bitcoin Core supports SV2 well. But if the bitcoind interfaces are lacking for sv2tpd to be a separate project, I think we should first exhaust the possibilities of improving the interfaces before just incorporating (part of) the downstream project into Bitcoin Core.
As a practical matter I’m not sure who would be implementing that
Wouldn’t it be more straightforward to implement this as a separate sv2tpd daemon versus incorporating it into Core, giving more freedom in terms of language, dependencies, review, release process, …?
sipa
commented at 6:02 pm on June 21, 2024:
member
But if the bitcoind interfaces are lacking for sv2tpd to be a separate project, I think we should first exhaust the possibilities of improving the interfaces before just incorporating (part of) the downstream project into Bitcoin Core.
FWIW, my view is that indeed the current Bitcoin Core interfaces are inadequate for implementing template providers, and while it’s probably possible to make a number of improvements to existing ones to improve the situation, the result will either be far from ideal still (RPC is not binary, ZMQ is only one-directional, …), or quite different from anything we have now. And if we’re going to make a substantially different interface for the purpose of template providing, it probably makes sense to just pick Sv2 to be that interface as it’s designed exactly for that purpose. The encryption aspect of Sv2 isn’t necessary for that, but if all sv2tpd would do is be an Sv2 encrypting proxy, it might be logistically just simpler to have Bitcoin Core also do the encryption aspect.
So this doesn’t exclude the possibility that future block template providing protocols could be implemented in external proxy servers, which would then still talk Sv2 to Bitcoin Core, if that is still appropriate.
Sjors
commented at 6:40 pm on June 21, 2024:
member
I’m not sure I understand this. This PR is not an sv2 discussion, so if my understanding is wrong we don’t need to talk about it here, but are you saying the pool will approve a proposed block template without having/keeping all the transactions? That seems undesired?
The Job Declaration Protocol has a param async_mining_allowed which, if set, allows miners to immediately submit work even before the template has been approved. Afaik SRI currently just hardcodes this to true for simplicity (cc @fi3).
Approval takes time, because we have to send the pool any transactions it’s missing, and it needs to check those (SRI currently only checks that all transactions are present, no additional checks - I suspect that’s a can of worms on its own, since there may be conflicts with the pool node mempool). During this time you would either have to mine on a pool template, or solo mine.
Regardless of the merits of async mining, it’s more robust for the miner to broadcast a found block themselves (this is done via the SubmitSolution message).
Example of this actually going wrong (in testing): #912
I’m not sure the TP necessarily needs to hold on to block templates when sv2tpd caches all block template transactions and add any missing ones back to core through RPC before submitting a block?
Maybe. But we also need to hold on to everything that was in the mempool at the time (an aspect I haven’t thoroughly tested yet). From what I’ve seen in SRI, the Job Declarator Server (pool side) implements some sort of pseudo-mempool for that purpose.
Wouldn’t it be more straightforward to implement this as a separate sv2tpd daemon versus incorporating it into Core
I’m not sure if it is. The two most practical approaches are improving this PR vs. expanding SRI. Building something from scratch is probably more work. Using a language other than Rust or C++ means re-writing the Noise implementation and other reusable code. The Bitcoin Core codebase has some very useful building blocks, e.g. the Transport class, the BIP324 implementation (which we use parts of), well-tested TCP socket handling code (unfortunately very tightly coupled with p2p, but refactoring that can be orthogonally useful), process separation in the pipeline (so it can crash without taking the node down - though so can a separate tool), good CI, and deterministic builds (so I can easily and relatively safely ship this PR as its own binary).
DrahtBot added the label
CI failed
on Jun 21, 2024
DrahtBot
commented at 8:34 pm on June 21, 2024:
contributor
🚧 At least one of the CI tasks failed. Make sure to run all tests locally, according to the documentation.
Possibly this is due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.
Leave a comment here, if you need help tracking down a confusing failure.
DrahtBot added the label
Needs rebase
on Jun 24, 2024
ryanofsky referenced this in commit
323b0acfcb
on Jun 24, 2024
Sjors force-pushed
on Jun 25, 2024
Sjors
commented at 12:18 pm on June 25, 2024:
member
Rebased after #30200 landed! 🎉 Reorganized commits so it’s now based on #30332.
DrahtBot removed the label
Needs rebase
on Jun 25, 2024
stickies-v
commented at 1:43 pm on June 25, 2024:
contributor
…it probably makes sense to just pick Sv2 to be that interface as it’s designed exactly for that purpose
I agree we shouldn’t reinvent the wheel and should just implement the parts of sv2 that we need to improve our getblocktemplate functionality. I would just prefer to be much more strict in cutting out all the parts that Core doesn’t require to support sv2 downstream, i.e. I don’t think the “out-of-the-box” aspect should at all be a priority when that can reasonably be addressed with external tooling.
…it might be logistically just simpler to have Bitcoin Core also do the encryption aspect.
It probably would be, but I’d much rather we keep the scope of Core limited. Where we draw the line is up for debate, but tooling for a highly specialized process (I think we’re only expecting a small minority of nodes to be running SV2?) that can be kept outside of Core absolutely should be, in my view.
…well tested TCP socket handling code (unfortunately very tightly coupled with p2p, but refactoring that can be orthogonally useful), process separation in the pipeline (so it can crash without taking the node down - though so can a separate tool), good CI, and deterministic builds (so I can easily and relatively safely ship this PR as its own binary).
As much as I sympathize that setting up a new project includes a bunch of painful overhead that requires even more scarce developer resources, I don’t think any of these are good reasons to bundle more code than necessary into Core, where it has a massive maintenance cost - even if it’s less visible now.
I think I’m going to withdraw from the conceptual discussion until I have more in-depth knowledge of the relevant code and protocol. To summarize: I think Core should have good support for being a SV2 Template Provider, and implementing parts of SV2 to do that makes sense. It still seems to me like the current approach is too invasive and too much of an unnecessary maintenance burden on this project.
DrahtBot removed the label
CI failed
on Jun 25, 2024
Sjors
commented at 3:05 pm on June 25, 2024:
member
a highly specialized process (I think we’re only expecting a small minority of nodes to be running SV2?)
I’d like to see a lot more users mine, and do so with their own block template. In order to achieve that we have to lower the barrier to entry.
There’s a generic case to be made for reducing the scope of Bitcoin Core, and not further increasing it. I think we still lack the tooling for this. Something like a plugin system, or a convenient way to ship a release tag plus custom functionality.
Having the user download and install two things is fine in theory, as long as it just works(tm). HWI is a working example of that, but that specific approach won’t work here.
I’m not very keen on starting from scratch with a new approach myself, but I can test and review alternative approaches that others work on.
For now I plan to keep rebasing this PR, occasionally releasing a binary. As long as I can get some prerequisite refactoring merged, that should be sustainable for a while.
DrahtBot added the label
Needs rebase
on Jun 27, 2024
Sjors force-pushed
on Jun 28, 2024
DrahtBot removed the label
Needs rebase
on Jun 28, 2024
Sjors
commented at 9:00 am on June 28, 2024:
member
I opened #30356 with some BlockAssembler::Options and mining interface refactoring that should make this PR smaller.
Sjors force-pushed
on Jun 28, 2024
Sjors force-pushed
on Jul 1, 2024
DrahtBot added the label
CI failed
on Jul 1, 2024
DrahtBot
commented at 6:56 pm on July 1, 2024:
contributor
🚧 At least one of the CI tasks failed. Make sure to run all tests locally, according to the documentation.
Possibly this is due to a silent merge conflict (the changes in this pull request being incompatible with the current code in the target branch). If so, make sure to rebase on the latest commit of the target branch.
Leave a comment here, if you need help tracking down a confusing failure.
Sjors
commented at 8:15 pm on July 2, 2024:
member
I refactored TemplateProvider to move the lower level connection handling code into the newly introduced Sv2Connman from #30332.
I noticed a deadlock that was already present in the previous push (f7bf25d12d73471287c8758482014e2a429b4ff7), where if you use Bitcoin QT and start mining, an RPC call (at least to getblockchaininfo) will freeze and new blocks won’t be submitted.
DrahtBot added the label
Needs rebase
on Jul 2, 2024
lozanopo approved
in src/node/sv2_template_provider.cpp:182 in e65a074312 (outdated)
I was reading it as “check every 50ms” but I just noticed that it is also using a condvar, so it’ll get notified when g_best_block changes.
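For reference, a minimal version of that wait pattern (the g_best_block globals are the same ones the getblocktemplate longpoll uses; the surrounding loop is simplified):

```cpp
// Illustrative: block until either the tip changes (g_best_block_cv is
// notified) or 50ms pass, then re-check whether a new template is needed.
WAIT_LOCK(g_best_block_mutex, lock);
g_best_block_cv.wait_for(lock, std::chrono::milliseconds(50));
// ...if g_best_block changed, build and push a new template...
```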
dergoegge
commented at 1:28 pm on July 3, 2024:
member
Thank you for all the work you’ve put into this.
Approach NACK
I refuse to believe that re-implementing the entire networking stack, adding a noise protocol implementation and a new networking protocol is the best solution to having a non-RPC/non-poll based template provider.
After looking at this PR, I would propose the following interface additions to Bitcoin Core:
We add a new zmq publisher, e.g. -zmqpubtemplate, which publishes block templates as soon as they become available.
We add a new rpc updatetemplatepub to configure the template publisher, e.g. for setting the coinbase output data size.
This should address the two main shortcomings of the bitcoin core interfaces (non-binary, poll-based).
A separate piece of software not maintained by us (e.g. stratumv2d) uses these interfaces to implement stratum v2. Template caching, encryption, and anything else stratum v2 related is implemented here.
I don’t see anything in the previous discussion here that wouldn’t make this work but feel free to point out what I am missing.
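A rough sketch of what the consuming side of such a (hypothetical) -zmqpubtemplate feed could look like, using the raw libzmq API; the endpoint, topic name and payload format are assumptions, since no such publisher exists today:

```cpp
// Hypothetical consumer for a -zmqpubtemplate feed: subscribe to a
// "blocktemplate" topic and hand each payload to the stratum v2 daemon.
#include <zmq.h>

int main()
{
    void* ctx = zmq_ctx_new();
    void* sub = zmq_socket(ctx, ZMQ_SUB);
    zmq_connect(sub, "tcp://127.0.0.1:28332");               // assumed endpoint
    zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "blocktemplate", 13); // assumed topic

    while (true) {
        zmq_msg_t topic, payload;
        zmq_msg_init(&topic);
        zmq_msg_init(&payload);
        zmq_msg_recv(&topic, sub, 0);   // topic frame
        zmq_msg_recv(&payload, sub, 0); // serialized template (format TBD)
        // ...parse the template and forward it to connected sv2 clients...
        zmq_msg_close(&topic);
        zmq_msg_close(&payload);
    }
}
```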
As a practical matter I’m not sure who would be implementing that; SRI is already a pretty complicated codebase with a shortage of developer resources.
If stratum v2 is so great, then surely someone is willing to implement this. There are at least two new stratum v2 pools that should probably feel responsible for this. I don’t see why this burden should fall on this project, which is already extremely bottlenecked.
Encrypted, safer (for now: less unsafe) to expose on the public internet
I would strongly discourage us from expanding on public facing interfaces that process untrusted inputs. We are busy enough testing the existing interfaces.
Sjors
commented at 9:00 am on July 4, 2024:
member
We add a new zmq publisher, e.g. -zmqpubtemplate, which publishes block templates as soon as they become available.
We add a new rpc updatetemplatepub to configure the template publisher, e.g. for setting the coinbase output data size.
This approach would mostly work. I opened a tracking issue on the SRI side: https://github.com/stratum-mining/stratum/issues/1032. Two downsides that come to mind (in addition to the need to install a separate tool):
it limits the number of connected stratum clients to 1, since there’s only one ZMQ template feed (that seems ok for the standard suggested sv2 configurations)
it precludes the ability to make the template provider public facing, even with an external tool
I refuse to believe that re-implementing the entire networking stack, adding a noise protocol implementation and a new networking protocol is the best solution to having a non-RPC/non-poll based template provider.
I think much of the complexity comes from the fact that we don’t have a stateful alternative to RPC, nor an authenticated alternative to p2p. This PR builds that from scratch. If BIP324, along with an authentication mechanism, had existed before Stratum v2 then it could have been built on top of that. In that case the Template Provider would just add a few (authenticated) p2p messages.
It’s possible that authenticated p2p / stateful RPC has more use cases than just Stratum v2. Perhaps external wallets, lightning nodes, etc. could use it - by adding a new stratum subprotocol. They could then e.g. subscribe to mempool updates for a set of scriptPubKeys, something that currently requires painful mempool polling or using our wallet and then polling it.
But even if such use cases existed, maybe we’d want a different protocol for those, based on BIP324 - or even just unix sockets since these examples typically run on the same machine. So even that might not justify the complexity. A mobile wallet or lightning node that connects to your home node could use this.
If stratum v2 is so great, then surely someone is willing to implement this. There are at least two new stratum v2 pools that should probably feel responsible for this.
I think it’s just one new pool: DMND.work (they have someone working on SRI). And one existing pool Braiins (their production firmware is about a year behind on spec changes). It remains to be seen if adopting stratum v2 gives them any business advantage. My impression is that if we leave it to just market forces, most pools will simply fix the encryption problem of stratum v1, which has clear market value, and ignore the decentralisation features that stratum v2 packages with that fix.
That’s not to say that Bitcoin Core must carry this burden, but if some open source devs don’t make it a reality, the status quo of very few pools won’t change - at least until something bad happens censorship wise.
Additionally we’d need a submitsolution RPC, and the node needs to hold on to previously generated templates. This replaces the “push blocks to :8333” in your diagram. It’s a much shorter message than a full block, and at least in the current design the Template Provider doesn’t (need to) know the full block. Since your design already requires RPC access, adding submitsolution and having the TP software call it wouldn’t be a difficult addition.
dergoegge
commented at 1:37 pm on July 4, 2024:
member
it limits the number of connected stratum clients to 1, since there’s only one ZMQ template feed (that seems ok for the standard suggested sv2 configurations)
I’m sure we could design the interface in a way that allows for more than one template config (if needed), e.g. the new rpc could accept a list of template configs and the zmq publisher publishes a template for each config.
it precludes the ability to make the template provider public facing, even with an external tool
Why? In my mind it would be perfectly safe to expose “stratumv2d” publicly (while the Bitcoin Core interface remains non-public). I just don’t think that expanding the public interfaces of Bitcoin Core is a good idea in general (unless absolutely necessary).
Additionally we’d need a submitsolution RPC, and the node needs to hold on to previously generated templates.
I was thinking that Bitcoin Core does not remember the templates it pushed but “stratumv2d” would. That seems plausible to me, looking at the current implementation here (the templates are a few MB in size and the cache only holds on to a few of them). When a sv2 client submits a solution, stratumv2d gets the corresponding template from its cache and publishes it to Bitcoin Core on port :8333.
It’s a much shorter message than a full block, and at least in the current design the Template Provider doesn’t (need to) know the full block
So in my design the short messages would still happen between “stratumv2d” and its clients, while the full block is pushed to the bitcoin core node (or any number of other nodes), essentially broadcasting the block (which happens in any case).
Sjors
commented at 2:11 pm on July 4, 2024:
member
the new rpc could accept a list of template configs and the zmq publisher publishes a template for each config.
I suppose the external application could forward each to the right client.
it precludes the ability to make the template provider public facing, even with an external tool
Why?
Because each consumer of this interface will have its own coinbase_tx_additional_output_size, which it can’t communicate without RPC access. This can be avoided by picking a single size for everyone, e.g. the current default of 4000, which might be acceptable.
When a sv2 client submits a solution, stratumv2d gets the corresponding template from its cache
This implies that stratumv2d has access to all the transactions in the block, which it doesn’t in the current design. The NewTemplate message uses a merkle path. Imo the new ZMQ interface should send the same information and no more, especially if we want to send multiple messages for each requested coinbase_tx_additional_output_size (and sigops) permutation.
The Template Provider application can then use the existing -zmqpubhashblock feed to send SetNewPrevHash efficiently.
In an alternative design the Template Provider could hold on to the entire proposed block. This would take more time, because it has to parse the whole block and reconstruct the merkle path it needs for NewTemplate.
Sending it back over p2p doesn’t seem like a good idea, because it makes the configuration more complicated, making the Template Provider software connect over p2p, rpc and zmq, and requiring the user to whitelist that inbound p2p connection. And because finding a block is an extremely rare event, it would have to keep the p2p connection alive and make sure it actually works. It seems safer to use the submitblock RPC despite its overhead.
The number of full blocks to hold on to could also be quite large. Once cluster mempool is in place, we can push out new templates for every incremental sat in fees - the bottlenecks are the overhead an ASIC might have to switch work and for the Job Declarator Server to inspect it.
To take a perhaps unrealistic scenario: if we push one template per second for ten connected clients and there’s one hour between blocks, that’s 140 GB. Whereas Bitcoin Core can hold on to the same number of templates with much less overhead: CTransactionRef pointers plus holding on to included transactions that aren’t in the mempool anymore.
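For reference, the arithmetic behind that figure, assuming roughly 4 MB per template (the size mentioned earlier in this thread):

$$1\,\tfrac{\text{template}}{\text{s}} \times 3600\,\text{s} \times 10\,\text{clients} \times 4\,\tfrac{\text{MB}}{\text{template}} = 144{,}000\,\text{MB} \approx 140\,\text{GB}$$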
In both approaches either we or the Template Provider have to be a bit smarter though, to let go of templates N seconds after we’ve sent a new one (with some allowance for the possibility that not all downstream machines pick up on every single new template). Otherwise large mempool churn could cause an OOM.
Sjors
commented at 4:06 pm on July 4, 2024:
member
There’s also the RequestTransactionData(.Success) message flow to consider. This message is separate from NewTemplate so that a miner can immediately start hashing while the back and forth with the Job Declarator Server is happening.
In the approach where the Template Provider software only gets the details it needs to send a NewTemplate message, we’ll need an additional RPC method or ZMQ feed to provide the information needed in a RequestTransactionData.Success message. In the approach where we give it the whole block template, that’s already included.
lozanopo approved
Sjors
commented at 8:37 am on July 5, 2024:
member
A different approach that stays closer to the original design but does not add noise encryption and a dedicated network stack:
(Maybe) introduce SocketTransport to enable p2p messages over unix sockets with less overhead (but how to handle multiple connections?)
Add a whitelist permission mining
Peers with that permission have an extra CNode field (akin to Sv2Client)
m_coinbase_output_data_size
We send these peers a HEADERS message for every tip update (equivalent to SetNewPrevHash)
Add the following p2p messages (NetMsgType):
cb-constr (equivalent of CoinbaseOutputDataSize)
template (equivalent of NewTemplate)
gettempltxs (equivalent of RequestTransactionData)
templtxs (equivalent of RequestTransactionData.Success)
solution (equivalent of SubmitSolution)
A separate piece of software translates these p2p messages to their corresponding stratum v2 messages and adds noise encryption.
Once we feel confident enough to make these p2p messages public facing, we can simply drop the mining whitelist requirement - or encourage users to set it to 0.0.0.0. Additionally that would be the time to make a BIP for the new p2p messages.
On the SRI side this may be simpler to implement; it just needs to convert between our p2p messages and their stratum v2 equivalent.
For a user this is easier to configure (though we still have the downside of having to install a separate piece of software, configure it and keep it running).
A few things may make it (a lot) more complicated:
It needs to implement our p2p handshake and keepalive stuff, though SocketTransport could dispense with some of that. I don’t know how well the rust-bitcoin v1 p2p message handling works, e.g. if it takes care of this stuff. RPC is much easier to integrate with, not sure about ZMQ.
It needs to maintain a mapping between its incoming stratum v2 clients and p2p connections to the node. Especially for a public facing Template Provider. For that SocketTransport would need a way to represent multiple connections, either by connecting via multiple sockets or maybe by prefixing each message with a peer id.
@dergoegge’s proposal has similar issues, especially the variant that uses p2p to submit the block (hence my preference there to use RPC for that, despite the performance hit)
TheBlueMatt
commented at 2:55 pm on July 5, 2024:
contributor
I refuse to believe that re-implementing the entire networking stack, adding a noise protocol implementation and a new networking protocol is the best solution to having a non-RPC/non-poll based template provider.
I left a comment at https://github.com/stratum-mining/stratum/issues/1032#issuecomment-2211001835, but I don’t buy that ZMQ is a reasonable way forward here. To be clear, the goal is not to just have “a non-RPC/non-poll based template provider”, but rather to decentralize mining, where one part of that is to support providing work from Bitcoin Core to Sv2 mining proxies in a more efficient way. One important goal there is to support providing that work to untrusted open connections easily - miners may decide they don’t want to select work and instead wish to outsource this to a third party, which needs to be as absolutely trivial to build as possible. We’ve seen time and time again “just run this sidecar software next to Bitcoin Core” is not a thing that anyone does.
I also mention in that post that Bitcoin Core needs to eventually support providing work to miners - it’s an absolute travesty that today, in order to solo mine, you need to try 5 different unmaintained software stacks and hope one works (they probably don’t), write your own software, or pay a few % fee to some “solo pool” (which find blocks even at today’s hashrate!). The fact that there is no easy way to support the decentralization of the network for small miners who wish to do so is a total failure of the Bitcoin development community. That could be an external daemon, but again we have a long history of encouraging people to run contrib scripts or sidecar daemons and basically no one ever does. They might well just stick with “solo pools” instead :facepalm:.
Sjors
commented at 6:12 pm on July 5, 2024:
member
We’ve seen time and time again “just run this sidecar software next to Bitcoin Core” is not a thing that anyone does.
Yet another approach for said sidecar is to use IPC. It offers first class access to the node, e.g. via the newly added mining interface. It can probably automagically find a running Bitcoin Core process or spin one up. This would require more (review) progress in #10102 and additional tooling so external application can easily use IPC.
TheBlueMatt
commented at 8:19 pm on July 5, 2024:
contributor
I don’t think the particular software design really impacts it nearly as much as people simply not wanting to deal with two or three pieces of software.
lozanopo approved
dergoegge
commented at 10:38 am on July 8, 2024:
member
It looks like we agree that in terms of efficiency the sidecar approach is not a problem, so I’ll argue the other points.
One important goal there is to support providing that work to untrusted open connections easily
From my perspective, the sidecar approach allows to solve this in a much nicer way (no additional burden on bitcoin core, sidecar can be written in a memory safe language, …).
We’ve seen time and time again “just run this sidecar software next to Bitcoin Core” is not a thing that anyone does.
This seems to be the main argument and I don’t find this to be true at all. All lightning implementations, ecash mints, electrum servers, alternative GUIs and block explorers are sidecars. Sure, putting all of these into Bitcoin Core would be convenient for users but it would be an absolute nightmare for Bitcoin Core as a software project. I think this just boils down to how useful the sidecar actually is: if there is demand for the sidecar then users will run it.
its an absolute travesty that today, in order to solo mine, you need to try 5 different unmaintained software stacks and hope one works
Maintaining software is hard and by implementing all of this here, you’re asking Bitcoin Core to take on this burden. There are always 300-400 open PRs and only 10-20 people working full time on this project. Take a look at the traction that the stratumv2 PRs have gotten so far, does that look like the project has the capacity to review and maintain all of this?
but rather to decentralize mining
This is obviously an important goal, and also a goal that one would think bitcoin users care enough about to be willing to install a second piece of software.
TheBlueMatt
commented at 12:19 pm on July 8, 2024:
contributor
This seems to be the main argument and I don’t find this to be true at all. All lightning implementations, ecash mints, electrum servers, alternative GUIs and block explorers are sidecars. Sure, putting all of these into Bitcoin Core would be convenient for users but it would be an absolute nightmare for Bitcoin Core as a software project. I think this just boils down to how useful the sidecar actually is: if there is demand for the sidecar then users will run it.
And compared to Bitcoin Core, no one runs these things, and certainly not in conjunction with Bitcoin Core. Obviously I’m not claiming all of these things should be in Bitcoin Core, but there’s some cost to them being outside - users rely on public block explorers and electrum servers, giving up privacy; they use wallets that do not track and validate the full UTXO set (preventing proof-of-reserves for mints); etc. Now, some of that is because they want mobile wallets and apps rather than desktop software, but we’ve never managed to get sidecars of any form to be deployed broadly - only for those who are highly motivated.
Maintaining software is hard and by implementing all of this here, you’re asking Bitcoin Core to take on this burden. There are always 300-400 open PRs and only 10-20 people working full time on this project. Take a look at the traction that the stratumv2 PRs have gotten so far, does that look like the project has the capacity to review and maintain all of this?
Indeed, it is absolutely a maintenance burden, but we’ve seen time and time and time again - nearly every project in Bitcoin outside of Bitcoin Core has not been sustainable and eventually turns into abandonware, especially in the mining world. Maybe Sv2/SRI survives, but I’d be astounded if it survived with the level of resources it has now; Bitcoin Core will always have way more resources than other things in the ecosystem (and that’s great, it needs them). Again, of course, this isn’t to say that Bitcoin Core should “do all the things” - that’s not realistic either - but Bitcoin Core can’t dismiss things which are critical to the Bitcoin system as “someone else will do it”.
This is obviously an important goal, and also a goal that one would think bitcoin users care enough about to be willing to install a second piece of software.
Then you misunderstood my argument - miners do not care about decentralizing mining, we have ten years of proof of this. If decentralizing mining isn’t trivial and the default option, they’ll never do it.
I also mention in that post that Bitcoin Core needs to eventually support providing work to miners - me
I believe you missed this point - Bitcoin Core needs to support providing work, it’s not excusable that it does not today (outside of “no one has contributed the code yet”). An obvious way to do that would be to add code here for the Sv2 framing and then also add the Sv2 work protocol (which is similarly pretty trivial once we have the framing in place). Alternatively Bitcoin Core could support Sv1 but that’s an undocumented mess and missing lots of critical features.
dergoegge
commented at 4:27 pm on July 9, 2024:
member
And compared to Bitcoin Core no one runs these things
Because not everyone needs to run these things but if they want to they can. Cramming everything into one software project is not sustainable.
Indeed, it is absolutely a maintenance burden, but we’ve seen time and time and time again - nearly every project in Bitcoin outside of Bitcoin Core has not been sustainable and eventually turns into abandonware.
If it would end up unmaintained outside of Bitcoin Core, then it’d also end up unmaintained inside of it.
especially in the mining world
I’ll repeat what I said above. There are several startups that are using stratumv2, they should be interested in maintaining the software that they plan to build their business on. There are also several billion dollar mining companies that should be interested in paying for the maintenance of the software they depend on.
People having to install two pieces of software is not a blocker for mining decentralization. That is an outrageous claim. With this line of reasoning we’ll just end up shipping a whole operating system at some point. I’m not really interested in debating this further.
In any case, looking at how things usually work in this project, it’ll likely be years until the current approach would be merged anyway (due to a lack of resources). Would it not be better to focus on an approach that will be operational much sooner?
ryanofsky
commented at 5:51 pm on July 9, 2024:
contributor
Yet another approach for said sidecar is to use IPC. It offers first class access to the node, e.g. via the newly added mining interface. It can probably automagically find a running Bitcoin Core process or spin one up. This would require more (review) progress in #10102 and additional tooling so external application can easily use IPC.
I don’t actually think much in #10102 is needed, because the bulk of #10102 is wrapping the Wallet interface so the gui can use it, and the Chain interface so the wallet can use it. The mining interface is much smaller than the chain/node/wallet interfaces so it should be much easier to expose. If it would help, I could make a draft PR for a test program that will do what you describe: connect to bitcoin-node over a unix socket if it is running, or start bitcoin-node if it is not running, and then call a mining interface function like getTipHash() and print the result.
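A rough sketch of what such a test program could look like with Cap’n Proto’s C++ RPC layer; the Mining schema, its getTipHash method, and the socket path are all assumptions here, not an existing interface:

```cpp
// Hypothetical client: connect to a running bitcoin-node over a unix socket
// and call getTipHash() on an assumed Mining capability. Only the Cap'n Proto
// API calls are real; "mining.capnp" and its contents are made up.
#include <capnp/ez-rpc.h>
#include <iostream>
#include "mining.capnp.h" // hypothetical generated code for `interface Mining`

int main()
{
    capnp::EzRpcClient client("unix:/tmp/bitcoin-node.sock");
    Mining::Client mining = client.getMain<Mining>();

    auto request = mining.getTipHashRequest();
    auto response = request.send().wait(client.getWaitScope());
    // Assuming the result field is `hash :Data`:
    auto hash = response.getHash();
    std::cout << "tip hash is " << hash.size() << " bytes\n";
}
```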
Niklas, everything you’re saying may be true, but it doesn’t seem self-evident. I could imagine there being difficulties with the current approach, but I can also imagine it working out well. It also seems like there is a spectrum of intermediate possibilities between the current monolithic design, your zmq + rpc suggestion, Sjors’ p2p-over-local-socket design, and other options.
So I am curious: what design do you think would be simplest, best for users, and most maintainable in the long run if there were not a lack of resources in the short run? I agree with you that the lack of resources is real, and maybe it will completely dictate any decision. But I wonder if there is actual disagreement over what the best software design would be if not for that? I do think miner centralization is a problem that is worth spending significant effort to solve, and one that miners are probably not well equipped to solve themselves even if they are motivated. From my own perspective, I think there’s not really a point in bitcoin existing if mining becomes and remains very centralized.
dergoegge
commented at 3:04 pm on July 10, 2024:
member
Niklas, everything you’re saying may be true but it doesn’t seem self evident.
Certainly! I think most of what I (and Matt) said are opinions and not ground truths. It does look like most of the actual discussion is philosophical in nature (software maintenance, willingness of users to run more than one piece of software, …) and the eventual route we go will depend on a judgement call, which is why I find it important that more contributors with opinions chime in.
I could imagine there being difficulties with the current approach, but I can also imagine it working out well.
Yes, I wouldn’t definitively say that the current approach can’t work, but it is my opinion that it would be better for Bitcoin Core and stratumv2 users to go a different route.
So I am curious what design you think would be simplest, best for users, and most maintainable in the long run if there was not a lack of resources in the short run?
I think our lack of resources will be the key factor in making a decision but there are other things I’m concerned about as well, which could inform the design.
1. Widening the public facing attack surface
2. DRY violations: e.g. the stratumv2 encryption looks quite similar to BIP324, yet another networking server (that is architecturally a copy of the already existing stack), …
3. Lack of separation of concerns. Producing good block templates is one concern, implementing the most functional & secure/safe way to serve them is another one.
These are mostly maintenance concerns, that exist even with plenty of available resources.
A multi-process architecture would be the best way to go in my opinion, i.e. a separate process talking to Bitcoin Core through some interface(s) acting as a proxy for the stratumv2 side. This approach should address concern 3 fully: it cleanly separates the concern of producing templates from serving them to untrusted third parties. It lessens concern 1, since bugs in the stratumv2 process won’t affect general node operations (e.g. a crash bug doesn’t take down the whole node). Depending on the specific implementation detail choices it can also address concern 2. Generally, there are a few implementation details that would have to be answered for this approach:
a) How does the separate process communicate with the node process? (ipc, p2p, rpc, zmq, …)
I think any interface would technically work and all besides p2p would be appropriate (ipc, rpc or zmq are primarily meant for exchanging data with a trusted (local) endpoint, and are therefore suitable). A combination of these interfaces allows for a request/response pattern (like I described in my initial comment). Changes to these interfaces are possible, to accommodate the use-case of the stratumv2 process.
b) Does it share code with Bitcoin Core? If not, is there a need to write it in C++?
I think if we’re already green fielding a stratumv2 server we might as well do it in a memory-safe language, which probably means no code sharing.
c) Is the process a child of the node process or is it externally spawned (e.g. manually by a user)? How is this packaged in a user-friendly way?
These questions mostly concern themselves with the user experience, i.e. how will this be shipped and how is it supposed to be operated. There are so many ways of doing this and I’m sure we could come up with a way to make it easy for users. It depends on what the answer to b) is, e.g. we might end up being able to spawn the stratumv2 process through fork() or if it is a separate piece of software entirely, we might be able to auto-detect the installation and go from there.
TLDR: I would suggest a design very similar to my original suggestion (implementation details may vary), although, given the assumption of no lack of resources, we could end up maintaining and shipping the separate tool alongside bitcoind as well.
I think there’s not really a point in bitcoin existing if mining becomes and remains very centralized.
Agreed but I don’t see how the specific software architecture choices discussed here so far have a catastrophic impact on the outcome. They do however, have a sizable impact on Bitcoin Core.
TheBlueMatt
commented at 9:10 pm on July 10, 2024:
contributor
Because not everyone needs to run these things but if they want to they can. Cramming everything into one software project is not sustainable.
I don’t think you read what I wrote. I said “Obviously I’m not claiming all of these things should be in Bitcoin core, but there’s some cost to them being outside”. Which implies, obviously I agree with you that “cramming everything into one software project is not sustainable”, but claiming that the reason people don’t run these projects is because they “dont need to” is absurd. There is very real cost for each of the examples I gave that implies people should run these things. Again, for practical purposes it likely doesn’t make sense for them to exist in Bitcoin Core, but there’s a line, and it isn’t “nothing that could be in an external project”.
If it would end up unmaintained outside of Bitcoin Core, then it’d also end up unmaintained inside of it.
Oh come on. There are drastically different levels of “unmaintained”. The mining software I refer to here is software that is not runnable, that does not work with current versions of Bitcoin Core, sometimes doesn’t build on modern toolchains, etc. That level of maintenance any part of Bitcoin Core would absolutely receive.
If it would end up unmaintained outside of Bitcoin Core, that may mean that it would get one patch a year inside of it, which would be great! Means very little burden on any Bitcoin Core maintainers or contributors, but it would keep building and basic functionality covered by tests would continue to work!
I’ll repeat what I said above. There are several startups that are using stratumv2, they should be interested in maintaining the software that they plan to build their business on. There are also several billion dollar mining companies that should be interested in paying for the maintenance of the software they depend on.
I very, very, very, very much wish I had your level of optimism about the world and about open source software in general. But I really don’t see where your optimism comes from…
In any case, looking at how things usually work in this project, it’ll likely be years until the current approach would be merged anyway (due to a lack of resources). Would it not be better to focus on an approach that will be operational much sooner?
To be clear, people are using this patchset already, and will likely continue to do so. Upstreaming it just means that people can benefit from the review and secure builds processes of Bitcoin Core, rather than the Bitcoin mining ecosystem all trusting a random binary posted by Sjors…
Sjors
commented at 9:09 am on July 11, 2024:
member
If it would help, I could make a draft PR for a test program that will do what you describe: connect to bitcoin-node over a unix socket if it is running, or start bitcoin-node if it is not running, and then call a mining interface function like getTipHash() and print the result.
Yes that would be very helpful.
At least one large objection I have to the sidecar approach is that it tends to be very clunky, having to deal with our p2p, rpc and zmq interfaces which are not a good fit for the job. That hurts usability and maintainability.
I’d like to try the IPC approach. If you can make the demo in rust that would be ideal, because it would allow me to reuse parts of SRI and other useful bitcoin libraries. I think @josibake is working on something similar.
That said, I suspect our CConnman and Transport classes are in better shape than any Bitcoin Rust library out there including SRI. Not being able to use (derivatives of) them is a big downside.
@dergoegge wrote:
A multi-process architecture would be the best way to go in my opinion, i.e. a separate process talking to Bitcoin Core through some interface(s) acting as a proxy for the stratumv2 side.
Note that process separation is something I want to achieve with the current approach as well. That is, even if Bitcoin Core produces and serves the templates, that should be a separate process. Initially this PR had some preliminary refactoring towards that goal, but that got out of control, see https://github.com/Sjors/bitcoin/pull/47.
@TheBlueMatt wrote:
Oh come on. There are drastically different levels of “unmaintained”. The mining software I refer to here is software that is not runnable, that does not work with current versions of Bitcoin Core, sometimes doesn’t build on modern toolchains, etc. That level of maintenance any part of Bitcoin Core would absolutely receive.
I think the current getblocktemplate code is a useful reference point. Afaik it’s used by all miners, with various levels of custom patching that they don’t always contribute back. We’ve made very few changes to it, but it still works after a decade because it rides on the maintenance of the rest of our codebase. (That said, I wouldn’t rule out that there’s a bug or nuisance in it that every miner knows about, patches, and doesn’t tell us.)
<philosophy>A biologist might call that RPC parasitism or symbiosis, depending on the perspective. Being part of the Bitcoin Core codebase can be a way for a smaller piece of code to get better maintenance than being independent. Though there are plenty of examples where the opposite might be true, e.g. libsecp lives on its own and is used by many other projects, which might give it more maintenance than it would otherwise get.</philosophy>
(I plan to rebase this PR soon, but would like to incorporate #30205 into #30332 first to get CI reliably green)
fjahr
commented at 10:40 am on July 11, 2024:
contributor
I am just catching up on this thread, so I haven’t really made up my mind on the conceptual question overall. But if we could wave our magic wand to have multiprocess plus sv2 based on it, that sounds like a great achievement for the project overall. Adding multiprocess support would take its time though; these would then be two large projects that have been around for a long time but struggled to get enough review power. If sv2 could help move multiprocess forward that would be great, but given that it’s struggling to get review on its own, I don’t see that happening, unfortunately.
An approach that came to my mind and I don’t think I have seen mentioned yet: could everything sv2 related at least be self-contained so that we can build without it, e.g. with --disable-sv2? Then, to start with, there could be a release policy where we release core without sv2 as a priority, and then there is an additional release with sv2 on a best-effort basis. I.e. if the code is broken, nobody steps up to fix it, and miners don’t want to pay anyone to do it, then there will be no sv2 release. Maybe this “miner release” could even be without the GUI to further ensure that this is not the version normal users should use. They could have their own set of builders too, and maybe the binary is not even released on bitcoin-core.org but somewhere else. Having another set of binaries flying around is not ideal either, but at least this could decouple sv2 support from the general engineering bottleneck. Having this clear separation should also make it much easier to move sv2 into its own process when we get multiprocess support later.
EDIT: This would also mean that sv2 support should be easy to remove again if this “experiment” fails.
josibake
commented at 11:02 am on July 11, 2024:
member
It seems to me two topics are being debated here, one about software design and the other about how we get users to run software.
Technical design
I agree with @dergoegge that there have been no technical objections to a multi-process approach, i.e., a stratumv2 sidecar talking to bitcoind via some sort of communication protocol. Instead, I find the arguments for a multi-process architecture to be compelling and the correct technical design. Specifically, exposing a somewhat generic mining interface not specific to SV1/SV2 is the correct long-term choice. Of the proposed multi-process approaches, I’ve been experimenting with talking to a bitcoin-node process over a socket using rust + capnproto and have been pleasantly surprised at how performant and powerful this approach is. As @Sjors mentioned, there are existing rust libraries for stratum and bitcoin-related functionality, so it seems we could vet this approach easily and compare it to alternatives.
As @ryanofsky mentioned, I think the only thing needed to explore this route further is defining a capnproto schema for the mining interface and exposing it over a socket. I’ve already got a working example of launching a bitcoin-node process and talking to it from a rust process over a socket. From what I can tell, if we have the mining interface defined, it should be trivial to adapt this example to an SV2 proof of concept. Out of the proposed architectures, this is the approach I find to be the most technically sound and interesting, so happy to help if people are interested in exploring this route further.
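For illustration, a rough C++ sketch of this kind of client, assuming a Cap’n Proto schema with an Init interface exposing makeMining() and a Mining interface exposing getTipHash(); the generated header name, socket path and result field names are made up:

```cpp
// Sketch only: assumes a hypothetical schema compiled to "init.capnp.h" with
// an Init interface (method makeMining) and a Mining interface (method
// getTipHash). Paths and field names are illustrative.
#include <capnp/ez-rpc.h>
#include <iostream>

#include "init.capnp.h"  // hypothetical generated header

int main()
{
    // bitcoin-node started with -ipcbind=unix listens on a socket in its
    // datadir; the path below is just an example.
    capnp::EzRpcClient client("unix:/home/user/.bitcoin/sockets/bitcoin-node.sock");
    auto& wait_scope = client.getWaitScope();

    // Ask the bootstrap Init capability for a Mining capability...
    Init::Client init = client.getMain<Init>();
    auto mining = init.makeMiningRequest().send().wait(wait_scope).getResult();

    // ...and call a method on it.
    auto tip = mining.getTipHashRequest().send().wait(wait_scope);
    std::cout << "tip: " << tip.getHash().cStr() << std::endl;
}
```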
All the other stuff
I sympathize with the argument that the harder we make things for the user, the less likely they are to run the software. I strongly disagree, however, with the conclusion that the best solution then is to put everything into a single binary for the user. All that matters for the user is that it Just Works :tm: and there are plenty of ways we can achieve this regardless of the technical design.
As a Bitcoin Core user, if I’m not mining then I don’t want mining code in my binary. If I’m a Bitcoin Core user who is mining, there isn’t any reason I can’t install one piece of mining software that automatically launches a bitcoin-node process or discovers an already running one.
fjahr
commented at 11:23 am on July 11, 2024:
contributor
It seems to me two topics are being debated here, one about software design and the other about how we get users to run software
I think the discussion here is mostly about engineering resources in Bitcoin Core.
(Edited for clarity: this was not about my opinion but about what we are disagreeing about here. As I said in my first message, I am undecided which way we should go and which arguments weigh heavier.)
Sjors force-pushed
on Jul 11, 2024
Sjors
commented at 12:20 pm on July 11, 2024:
member
Rebased after #29274 was merged. Now using the new waitTipChanged() interface method from #30409: faf0c8ec1701d0b497e962033477d2aed8c556c3 -> c3a0a51aece3f9174be80eea81795140bb2415b3 is a nice simplification in ThreadSv2Handler().
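Roughly, the handler thread can now block on the interface instead of polling for tip changes; a sketch under assumed signatures (the real waitTipChanged() from #30409 may differ):

```cpp
// Sketch only: the interface below is a stand-in for the methods used, not
// the actual interfaces::Mining definition.
#include <atomic>
#include <chrono>
#include <optional>

#include <uint256.h>

class Mining
{
public:
    virtual ~Mining() = default;
    virtual uint256 getTipHash() = 0;
    //! Assumed behaviour: block until the tip differs from current_tip or the
    //! timeout expires; return the new tip hash if it changed.
    virtual std::optional<uint256> waitTipChanged(const uint256& current_tip,
                                                  std::chrono::milliseconds timeout) = 0;
};

void ThreadSv2Handler(Mining& mining, const std::atomic<bool>& interrupt)
{
    uint256 tip{mining.getTipHash()};
    while (!interrupt) {
        // Blocks on the interface instead of sleep-and-compare polling.
        if (auto new_tip{mining.waitTipChanged(tip, std::chrono::seconds{1})}) {
            tip = *new_tip;
            // Build a fresh template and push NewTemplate to connected clients here.
        }
    }
}
```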
@fjahr wrote:
Could everything sv2 related at least be self-contained so that we can build without it like --disable-sv2?
Yes, but I don’t think that’s useful. If you don’t start with -sv2 none of this code is run. That should be even more clear if it has its own libbitcoin_sv2 library (which needs something like https://github.com/Sjors/bitcoin/pull/47).
@josibake wrote:
it should be trivial to adapt this example to an SV2 proof of concept
I wouldn’t say trivial, but certainly worth exploring.
@fjahr wrote:
They could have their own set of builders too and maybe the binary is not even released on bitcoin-core.org but somewhere else.
Currently I use the opposite approach, where I build binaries based on this PR while it’s not merge ready. With the right code separation it shouldn’t be too hard to keep this rebased.
In any case it will take time for any of the suggested sidecar approaches to reach feature parity with this PR. Since people are actually using this, I’ll have to keep maintaining it for a while and release those binaries anyway.
Sjors force-pushed
on Jul 11, 2024
Sjors
commented at 1:06 pm on July 11, 2024:
member
Addressed linter issues and fixed deadlock regression in 02f1f86b6b6d46d8ab5e17d5d824b0c1bba5d789 -> 00201802a19124cc8eb671b7f9311326f99f999d.
Something cured the deadlock I ran into previously #29432 (comment), maybe using the interface.
Sjors force-pushed
on Jul 11, 2024
DrahtBot removed the label
Needs rebase
on Jul 11, 2024
TheBlueMatt
commented at 2:48 pm on July 11, 2024:
contributor
I sympathize with the argument that the harder we make things for the user, the less likely they are to run the software. I strongly disagree, however, with the conclusion that the best solution then is to put everything into a single binary for the user. All that matters for the user is that it Just Works ™️ and there are plenty of ways we can achieve this regardless of the technical design.
Right, I don’t think anyone was making the argument that it has to be in the same process, but rather in the same project so that Just Works is even an option. In principle I have no strong opinion about multi-process (as long as the interface is sufficiently performant - one very old complaint about getblocktemplate is that we end up wasting multiple milliseconds encoding all the crap we shove in there, then wasting multiple more milliseconds dealing with JSON parsing on the other end, all right when we’re trying to reserve precious CPU time to validate blocks and update the mempool and all kinds of other crap), but only that it must Just Work, and getting multi-process to Just Work is a very nontrivial task unto itself. Blocking any Sv2 support on that seems like it will take something really critical for the Bitcoin system that is already gonna take a while and make it take forever instead.
josibake
commented at 4:08 pm on July 11, 2024:
member
but rather in the same project so that Just Works is even an option
I don’t agree that everything being in the same project is necessary for things to “Just Work.” However, I think it’s entirely premature to be talking about where the code lives, what project, etc until we’ve at least decided on a technical design.
(as long as the interface is sufficiently performant - one very old complaint about getblocktemplate is we end up wasting multiple milliseconds encoding all the crap we shove in there, then wasting another multi milliseconds dealing with JSON parsing on the other end, all right when we’re trying to reserve precious CPU time to validate blocks and update the mempool and all kinds of other crap)
This is why I mentioned the multiprocess IPC + capnproto route I’ve been exploring since I believe it addresses most if not all of these performance concerns. There is no encoding/decoding and things that would take multiple round trips using something like ZMQ/JSON RPC can be collapsed into a single roundtrip via promise pipelining.
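To make the round-trip point concrete, here is a sketch of what promise pipelining looks like with the Cap’n Proto C++ API (same hypothetical Init/Mining schema and field names as in the earlier sketch):

```cpp
// Sketch only, using the same hypothetical Init/Mining schema as before.
// The point: a call on a promised capability is pipelined behind the call
// that produces it, so both travel in a single round trip.
#include <capnp/ez-rpc.h>
#include <iostream>

#include "init.capnp.h"  // hypothetical generated header

int main()
{
    capnp::EzRpcClient client("unix:/home/user/.bitcoin/sockets/bitcoin-node.sock");
    auto& wait_scope = client.getWaitScope();

    Init::Client init = client.getMain<Init>();

    // Start makeMining() but do not wait for its reply...
    auto mining_promise = init.makeMiningRequest().send();

    // ...instead immediately call getTipHash() on the *promised* Mining
    // capability. Only the final wait() blocks; one network round trip total.
    auto tip = mining_promise.getResult().getTipHashRequest().send().wait(wait_scope);
    std::cout << "tip: " << tip.getHash().cStr() << std::endl;
}
```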
and getting multi-process to Just Work is a very nontrivial task unto itself
I think you’re overestimating the amount of work required here. Secondly, even if it is more work, process separation has been a project that has had conceptual buy-in for quite some time. Given the limited review capacity, we should be looking for two-birds-one-stone opportunities wherever they arise. If building out SV2 on top of the multiprocess work helps get more review and attention on multiprocess, this feels like a win to me. Furthermore, from what I understood, there is existing SV2 code in rust. Pursuing a multiprocess approach allows us to reuse that code, rather than rewriting everything in C++ so it can be in bitcoind. I don’t want to reopen a debate about which project stuff should live under, but given there are currently engineers working on SV2 stuff in rust, it seems more productive to let them apply their SV2 knowledge in their language of choice to building a stratumv2d process that can talk to a mining interface in bitcoind.
DrahtBot removed the label
CI failed
on Jul 11, 2024
Sjors
commented at 4:48 pm on July 11, 2024:
member
I took another look at the Template Provider to see what else the interface needs to do for an IPC sidecar to work. In addition to #30356 and #30409 (first commit):
Expand createNewBlock to (optionally, only for sv2) return a vector of serialized transactions (a rough sketch of what this could look like follows after this list). Since the node could let go of mempool transactions at any time, and because it doesn’t hold on to generated templates, we need to copy all template transactions over to the other process.
This means we lose some of the advantage of having separate NewTemplate (very small, gives the miner something to do) and RequestTransactionData.Success (big, used for declaring the template to the pool - synchronous or asynchronous) messages. But without benchmarking I have no idea if we’re talking milliseconds or shorter.
Unless there’s a way to halt processing of an IPC message mid-way, push out NewTemplate and then process the remainder that we need (much later) to reply to RequestTransactionData.Success.
The other side, e.g. a rust app, should store the serialized transactions in a hash lookup table with reference counting, hopefully such a thing already exists. Once a template is no longer needed, unreferenced transactions can be purged.
Probably also expand createNewBlock to (optionally, only for sv2) return the result of GetMerklePathForCoinbase. Though this could be implemented on the other side, I’d sleep better if I made sure it’s correct and hand it over.
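A rough sketch of what such an expanded createNewBlock call could look like (not the actual interface; the option and result structs below are made up for illustration):

```cpp
// Sketch only: a hypothetical extension of createNewBlock(), not the actual
// Bitcoin Core mining interface.
#include <vector>

#include <primitives/block.h>
#include <script/script.h>
#include <uint256.h>

struct BlockCreateOptions
{
    //! Only serialize the template transactions when the caller (e.g. sv2)
    //! asks for them, since this is the expensive part.
    bool want_serialized_txs{false};
    //! Also return the coinbase merkle path so the other process does not
    //! have to recompute GetMerklePathForCoinbase itself.
    bool want_coinbase_merkle_path{false};
};

struct BlockTemplateResult
{
    CBlockHeader header;
    std::vector<std::vector<unsigned char>> serialized_txs;  // empty unless requested
    std::vector<uint256> coinbase_merkle_path;               // empty unless requested
};

class Mining
{
public:
    virtual ~Mining() = default;
    virtual BlockTemplateResult createNewBlock(const CScript& coinbase_script,
                                               const BlockCreateOptions& options) = 0;
};
```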
ryanofsky
commented at 10:57 pm on July 11, 2024:
contributor
Unless there’s way to halt processing of an IPC message mid-way, push out NewTemplate and then process the remainder that we need (much later) to reply to RequestTransactionData.Success.
I’m not sure if this is a question about the multiprocess IPC framework, or you’re just deciding what the mining IPC interface should look like, but the framework is nonblocking, so there is no problem with the client making multiple calls at the same time, and the client can also decide what threads in the server the calls run on. So any interface you would like to implement should be possible to implement.
If it would help, I could make a draft PR for a test program that will do what you describe: connect to bitcoin-node over a unix socket if it is running, or start bitcoin-node if it is not running, and then call a mining interface function like getTipHash() and print the result.
Yes that would be very helpful.
I got a c++ test program working that connects to the bitcoin node over a unix socket and calls the getTipHash() function.
It will also spawn the bitcoin-node process if it is not currently running. This is written on top of my combined IPC branch, so the branch has a lot of other changes that are unneeded and unrelated. But I think I can trim it down to a reasonably sized PR. I’ll try trimming it down tomorrow. (EDIT: The trimmed down branch is posted in #30437)
It would be possible to write a similar program in rust, too, but I basically don’t know any rust, so this was easier for me to get working in c++. The rust program would have to start bitcoin-node with the -ipcbind=unix option, connect to the <datadir>/sockets/bitcoin-node.sock socket, use the rust capnproto runtime to call Init.makeMining over the socket and then call Mining.getTipHash on the returned Mining interface. That would be equivalent to what the c++ program is doing.
Sjors
commented at 8:02 am on July 12, 2024:
member
the framework is nonblocking, so there is no problem with the client making multiple calls at the same time, and the client can also decide what threads in the server the calls run on. So any interface you would like to implement should be possible to implement.
I’m not sure if that helps. I was also a bit confused, so let me rephrase it:
The current interface createNewBlock returns a template, which contains 4 MB of serialized transaction data. Ideally I would not include those transactions, but only the result of GetMerklePathForCoinbase, which is all we need for the NewTemplate message.
I would then make another call to the interface to get those transactions, but by that time the node has already forgotten the template it gave me and the mempool may have dropped some of the transactions it included.
I don’t think this can be solved with making both calls at the same time; the second call needs to reference a specific template that the first call generated.
Later on the GUI could have a check box “Enable IPC” under “Enable RPC server”, which under the hood ensures the node process listens.
(Even fancier: it always listens and as soon anything tries to connect it prompts for permission and remembers the “pairing”)
Ease of implementation aside, automagically starting the node from a miner application might also lead to confusion. Desktop users are not used to Bitcoin Core having a separate daemon (we can introduce this later, but starting from the GUI).
If we ship this functionality before full GUI IPC support is ready, then presumably the user has to run bitcoin-node and shut it down before starting the non-IPC bitcoin-qt. Or we might add IPC support to bitcoind and bitcoin-qt, but only for the miner interface. In either case it’s better if the user themselves decides if they want to launch bitcoind or the gui. (or bitcoin-miner could prompt the user which one they want to launch)
I would then make another call to the interface to get those transactions, but by that time the node has already forgotten the template it gave me and the mempool may have dropped some of the transactions it included.
Can you cache the tx data and keep it for the time that the template is valid? If you have more templates you store shared txs only once.
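A small C++ sketch of such a cache (the Rust side would need an equivalent); all names are made up, shared transactions are stored once and purged when no template references them any more:

```cpp
// Sketch of a reference-counted transaction store shared between templates.
// Names are illustrative; this is not existing Bitcoin Core or SRI code.
#include <map>
#include <memory>
#include <vector>

#include <uint256.h>

using SerializedTx = std::shared_ptr<const std::vector<unsigned char>>;

class TxStore
{
    // txid -> weak reference; the templates hold the owning shared_ptrs.
    std::map<uint256, std::weak_ptr<const std::vector<unsigned char>>> m_txs;

public:
    // Return the existing entry if another template already stores this tx,
    // otherwise keep a weak reference to the newly inserted data.
    SerializedTx Add(const uint256& txid, std::vector<unsigned char> data)
    {
        if (auto it = m_txs.find(txid); it != m_txs.end()) {
            if (auto existing = it->second.lock()) return existing;
        }
        auto tx = std::make_shared<const std::vector<unsigned char>>(std::move(data));
        m_txs[txid] = tx;
        return tx;
    }

    // Drop entries whose last referencing template has been destroyed.
    void Purge()
    {
        for (auto it = m_txs.begin(); it != m_txs.end();) {
            if (it->second.expired()) {
                it = m_txs.erase(it);
            } else {
                ++it;
            }
        }
    }
};
```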
ryanofsky
commented at 11:03 am on July 12, 2024:
contributor
I don’t think this can be solved with making both calls at the same time; the second call needs to reference a specific template that the first call generated.
A natural way to do this would be to have createNewBlock return an interface instead of a struct:
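A minimal sketch with assumed method names (not the actual interface):

```cpp
// Sketch only, with assumed method names.
#include <memory>
#include <vector>

#include <primitives/block.h>
#include <script/script.h>
#include <uint256.h>

class MiningBlock
{
public:
    virtual ~MiningBlock() = default;

    //! Small payload, enough for the sv2 NewTemplate message.
    virtual std::vector<uint256> getCoinbaseMerklePath() = 0;

    //! Full block, only fetched if/when RequestTransactionData arrives.
    virtual CBlock getBlock() = 0;
};

class Mining
{
public:
    virtual ~Mining() = default;

    //! The returned object owns its template data (including the
    //! CTransactionRef shared pointers), so the node keeps everything
    //! alive until the caller destroys the MiningBlock.
    virtual std::unique_ptr<MiningBlock> createNewBlock(const CScript& coinbase_script) = 0;
};
```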
Implementation of MiningBlock could own all the data it needs and release it when it is destroyed.
I got a c++ test program working
Will take it for a spin!
Great! Please see #30437 for a cleaned up branch and implementation.
Update: it works! If it helps getting the IPC stuff merged quicker, it’s fine by me if the miner can’t spin up a node process (see e.g. Sjors@b6dfa5f). Instead it could simply check every second and in the meantime show a message:
That would simplify the implementation a bit. In addition to bitcoin-mine program being a little simpler, I could drop the commits in #30437 related to allowing a spawned process to detach and not exit when the last IPC connection is closed.
Later on the GUI could have a check box “Enable IPC” under “Enable RPC server”, which under the hood ensures the node process listens.
Yeah that would be pretty natural. I’m not sure if maybe it could just listen on the unix socket by default, but for now it doesn’t do that.
(Even fancier: it always listens and as soon anything tries to connect it prompts for permission and remembers the “pairing”)
This could be a good idea. For now I was just relying on unix file permissions to control access to the socket.
Ease of implementation aside, automagically starting the node from a miner application might also lead to confusion. Desktop users are not used to Bitcoin Core having a separate daemon (we can introduce this later, but starting from the GUI).
If we ship this functionality before full GUI IPC support is ready, then presumably the user has to run bitcoin-node and shut it down before starting the non-IPC bitcoin-qt. Or we might add IPC support to bitcoind and bitcoin-qt, but only for the miner interface. In either case it’s better if the user themselves decides if they want to launch bitcoind or the gui. (or bitcoin-miner could prompt the user which one they want to launch)
The changes to the GUI in #19461 would make this smoother. If the node process was listening on a socket and the GUI process was started, it would just connect to it automatically. Even without those changes, as long as the GUI displays a clear error message if it can’t start because a node process is running and explains how to resolve the problem, I don’t think that would be too confusing. It doesn’t seem that much worse than the status quo of bitcoind and bitcoin-qt not being able to run at the same time.
Sjors
commented at 11:36 am on July 12, 2024:
member
Implementation of MiningBlock could own all the data it needs and release it when it is destroyed.
Thanks, that makes sense.
I could drop the commits in #30437 related to allowing a spawned process to detach and not exit when the last IPC connection is closed.
I think that’s a good idea. Will study the PR.
TheBlueMatt
commented at 3:48 pm on July 14, 2024:
contributor
Expand createNewBlock to (optionally, only for sv2) return a vector of serialized transactions.
Any IPC thing that requires serializing all the transactions for all work is going to be too slow. Bitcoin Core needs to be able to track the templates it gave out and keep them around (to keep the Transaction shared_ptrs in a block ready to go) until they time out.
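A rough sketch of the bookkeeping that implies (all names and the timeout are assumptions):

```cpp
// Sketch only: track handed-out templates server-side until they expire.
#include <chrono>
#include <cstdint>
#include <map>
#include <memory>

#include <primitives/block.h>

struct TrackedTemplate
{
    // Holding the CBlock keeps the contained CTransactionRefs (shared_ptrs)
    // alive even if the mempool has since dropped the transactions.
    std::shared_ptr<const CBlock> block;
    std::chrono::steady_clock::time_point created;
};

class TemplateTracker
{
    std::map<uint64_t, TrackedTemplate> m_templates;
    uint64_t m_next_id{0};
    const std::chrono::seconds m_timeout{600};  // assumed expiry

public:
    uint64_t Add(std::shared_ptr<const CBlock> block)
    {
        m_templates[m_next_id] = {std::move(block), std::chrono::steady_clock::now()};
        return m_next_id++;
    }

    std::shared_ptr<const CBlock> Get(uint64_t id) const
    {
        auto it = m_templates.find(id);
        return it != m_templates.end() ? it->second.block : nullptr;
    }

    void PruneExpired()
    {
        const auto now = std::chrono::steady_clock::now();
        for (auto it = m_templates.begin(); it != m_templates.end();) {
            if (now - it->second.created > m_timeout) {
                it = m_templates.erase(it);
            } else {
                ++it;
            }
        }
    }
};
```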
Sjors
commented at 8:05 am on July 15, 2024:
member
Sjors
commented at 1:34 pm on July 15, 2024:
member
I incorporated the interface changes from #30440. In particular I made sure we only call getBlock() in the context of a RequestTransactionData message. This avoids sending the full serialized block over the interface as much as possible.
Sjors force-pushed
on Jul 15, 2024
Sjors
commented at 3:41 pm on July 15, 2024:
member
I also incorporated the new waitFeesChanged() mining interface method from #3044, which required a bit of an overhaul of 76693a03d61ddd4ff19cf0745d0a13bb0b84b5bc: I ended up adding a second event loop ThreadSv2MempoolHandler that deals exclusively with mempool updates.
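Roughly, the new loop looks like this (a sketch only; waitFeesChanged() is stubbed with a callback since its exact signature isn’t shown here):

```cpp
// Sketch of the dedicated mempool loop. wait_fees_changed stands in for the
// waitFeesChanged() interface call and is assumed to block until template
// fees would change meaningfully (or a timeout expires).
#include <atomic>
#include <functional>

void ThreadSv2MempoolHandler(const std::atomic<bool>& interrupt,
                             const std::function<bool()>& wait_fees_changed,
                             const std::function<void()>& push_updated_template)
{
    // Runs alongside ThreadSv2Handler (which reacts to new tips), so a busy
    // mempool never delays handling of a new chain tip.
    while (!interrupt) {
        if (wait_fees_changed()) push_updated_template();
    }
}
```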
Sjors force-pushed
on Jul 15, 2024
DrahtBot
commented at 3:49 pm on July 15, 2024:
contributor
Make sure to run all tests locally, according to the documentation.
The failure may happen due to a number of reasons, for example:
Possibly due to a silent merge conflict (the changes in this pull request being
incompatible with the current code in the target branch). If so, make sure to rebase on the latest
commit of the target branch.
A sanitizer issue, which can only be found by compiling with the sanitizer and running the
affected test.
An intermittent issue.
Leave a comment here, if you need help tracking down a confusing failure.
DrahtBot added the label
CI failed
on Jul 15, 2024
DrahtBot added the label
Needs rebase
on Jul 16, 2024
Sjors force-pushed
on Jul 17, 2024
DrahtBot removed the label
Needs rebase
on Jul 17, 2024
DrahtBot removed the label
CI failed
on Jul 17, 2024
Sjors force-pushed
on Jul 17, 2024
Sjors
commented at 10:31 am on July 17, 2024:
member
Rebased to include the latest interface changes and to take advantage of mocked sockets in #30205.
Sjors force-pushed
on Jul 17, 2024
Sjors
commented at 3:10 pm on July 17, 2024:
member
I split off the Template Provider class into #30475. The non-base commits in this PR start the TP from init.cpp and add some other things that are useful for testing (e.g. testnet4).
maflcko removed the label
Needs CMake port
on Sep 2, 2024
DrahtBot added the label
Needs rebase
on Sep 2, 2024
Sjors
commented at 5:33 pm on September 3, 2024:
member
I pushed the rebase, so that stratum v2 miners will produce valid testnet4 blocks. See #30786. Tagged sv2-tp-0.1.8 and will push binaries shortly.
Sjors force-pushed
on Sep 3, 2024
Sjors force-pushed
on Sep 3, 2024
Sjors
commented at 6:35 pm on September 3, 2024:
member
1f720ffe25443b37624d4d52e040d29ecf93278a adds a temporary fix for a crash after finding a block. This was introduced by the interface changes. Will investigate later, as part of https://github.com/Sjors/bitcoin/pull/49.
DrahtBot removed the label
Needs rebase
on Sep 3, 2024
DrahtBot added the label
Needs rebase
on Sep 3, 2024
DrahtBot removed the label
CI failed
on Sep 4, 2024
Sjors
commented at 8:56 am on September 5, 2024:
member
The merge conflicts are addressed in #29032 and #29346. I plan to do another rebase round when more of the interface pull requests are merged, e.g. #30409.
achow101 referenced this in commit
3f66642820
on Sep 16, 2024
Sjors force-pushed
on Sep 20, 2024
Sjors
commented at 5:21 pm on September 20, 2024:
member
Rebased. You now need cmake --build WITH_SV=ON to enable the Stratum v2 functionality. Also slightly improved the mining instructions, e.g. to use -testnet4.
achow101 referenced this in commit
dabc74e86c
on Sep 23, 2024
bitcoin deleted a comment
on Sep 24, 2024
net: reduce CAddress usage to CService or CNetAddr
* `CConnman::CalculateKeyedNetGroup()` needs `CNetAddr`, not `CAddress`,
thus change its argument.
* Both callers of `CConnman::CreateNodeFromAcceptedSocket()` create a
dummy `CAddress` from `CService`, so use `CService` instead.
1bfc1ca9b6
net: split CConnman::BindListenPort() off CConnman
Introduce a new low-level socket managing class `SockMan`
and move the `CConnman::BindListenPort()` method to it.
Also, separate the listening socket from the permissions -
they were coupled in `struct ListenSocket`, but the socket
is protocol agnostic, whereas the permissions are specific
to the application of the Bitcoin P2P protocol.
41c87ddb3d
style: modernize the style of SockMan::BindListenPort()
It was copied verbatim from `CConnman::BindListenPort()` in the previous
commit. Modernize its variables and style and log the error messages
from the caller.
b5362b73ec
net: split CConnman::AcceptConnection() off CConnman
Move the `CConnman::AcceptConnection()` method to `SockMan` and split
parts of it:
* the flip-to-CJDNS part: to just after the `AcceptConnection()` call
* the permissions part: at the start of `CreateNodeFromAcceptedSocket()`
708398cbe8
style: modernize the style of SockMan::AcceptConnection()
0398a8a017
net: move the generation of ids for new nodes from CConnman to SockMan
5d4920f630
net: move CConnman-specific parts away from ThreadI2PAcceptIncoming()
CConnman-specific or in other words, Bitcoin P2P specific. Now
the `ThreadI2PAcceptIncoming()` method is protocol agnostic and
can be moved to `SockMan`.
189827785b
net: move I2P-accept-incoming code from CConnman to SockMan
b94f9d338f
net: index nodes in CConnman by id
Change `CConnman::m_nodes` from `std::vector<CNode*>` to
`std::unordered_map<NodeId, CNode*>` because interaction
between `CConnman` and `SockMan` is going to be based on
`NodeId` and finding a node by its id would better be fast.
As a nice side effect the existent search-by-id operations in
`CConnman::AttemptToEvictConnection()`,
`CConnman::DisconnectNode()` and
`CConnman::ForNode()` now become `O(1)` (were `O(number of nodes)`),
as well as the erase in `CConnman::DisconnectNodes()`.
b96beb27d2
net: isolate P2P specifics from GenerateWaitSockets()
Move the parts of `CConnman::GenerateWaitSockets()` that are specific to
the Bitcoin-P2P protocol to dedicated methods:
`ShouldTryToSend()` and `ShouldTryToRecv()`.
This brings us one step closer to moving `GenerateWaitSockets()` to the
protocol agnostic `SockMan` (which would call `ShouldTry...()` from
`CConnman`).
bb5b91d430
net: isolate P2P specifics from SocketHandlerConnected() and ThreadSocketHandler()
Move some parts of `CConnman::SocketHandlerConnected()` and
`CConnman::ThreadSocketHandler()` that are specific to the Bitcoin-P2P
protocol to dedicated methods:
`EventIOLoopCompletedForNode()` and `EventIOLoopCompletedForAllPeers()`.
This brings us one step closer to moving `SocketHandlerConnected()` and
`ThreadSocketHandler()` to the protocol agnostic `SockMan` (which would
call `EventIOLoopCompleted...()` from `CConnman`).
50cb52470e
net: isolate all remaining P2P specifics from SocketHandlerConnected()
Introduce 4 new methods for the interaction between `CConnman` and
`SockMan`:
* `EventReadyToSend()`:
called when there is readiness to send and do the actual sending of data.
* `EventGotData()`, `EventGotEOF()`, `EventGotPermanentReadError()`:
called when the corresponding recv events occur.
These methods contain logic that is specific to the Bitcoin-P2P protocol
and move it away from `CConnman::SocketHandlerConnected()` which will
become a protocol agnostic method of `SockMan`.
Also, move the counting of sent bytes to `CConnman::SocketSendData()` -
both callers of that method called `RecordBytesSent()` just after the
call, so move it from the callers to inside
`CConnman::SocketSendData()`.
96974ffb3b
net: split CConnman::ConnectNode()
Move the protocol agnostic parts of `CConnman::ConnectNode()` into
`SockMan::ConnectAndMakeNodeId()` and leave the Bitcoin-P2P specific
stuff in `CConnman::ConnectNode()`.
Move the protocol agnostic `CConnman::m_unused_i2p_sessions`, its mutex
and `MAX_UNUSED_I2P_SESSIONS_SIZE` to `SockMan`.
Move `GetBindAddress()` from `net.cpp` to `sockman.cpp`.
3bb0145514
net: tweak EventNewConnectionAccepted()
Move `MaybeFlipIPv6toCJDNS()`, which is Bitcoin P2P specific from the
callers of `CConnman::EventNewConnectionAccepted()` to inside that
method.
Move the IsSelectable check, the `TCP_NODELAY` option set and the
generation of new node id out of `CConnman::EventNewConnectionAccepted()`
because those are protocol agnostic. Move those to a new method
`SockMan::NewSockAccepted()` which is called instead of
`CConnman::EventNewConnectionAccepted()`.
9d1b352a4d
net: move sockets from CNode to SockMan
Move `CNode::m_sock` and `CNode::m_i2p_sam_session` to `SockMan::m_connected`.
Also move all the code that handles sockets to `SockMan`.
`CNode::CloseSocketDisconnect()` becomes
`CConnman::MarkAsDisconnectAndCloseConnection()`.
`CConnman::SocketSendData()` is renamed to
`CConnman::SendMessagesAsBytes()` and its sockets-touching bits are moved to
`SockMan::SendBytes()`.
`CConnman::GenerateWaitSockets()` goes to
`SockMan::GenerateWaitSockets()`.
`CConnman::ThreadSocketHandler()` and
`CConnman::SocketHandler()` are combined into
`SockMan::ThreadSocketHandler()`.
`CConnman::SocketHandlerConnected()` goes to
`SockMan::SocketHandlerConnected()`.
`CConnman::SocketHandlerListening()` goes to
`SockMan::SocketHandlerListening()`.
c133a634d1
net: move-only: improve encapsulation of SockMan
`SockMan` members
`AcceptConnection()`
`NewSockAccepted()`
`GetNewNodeId()`
`m_i2p_sam_session`
`m_listen`
are now used only by `SockMan`, thus make them private.
70c2f13f83
achow101 referenced this in commit
0c2c3bb3f5
on Oct 10, 2024
ci: skip Github CI on branch pushes for forks
Similar to e9bfbb5414ab14ca14d8edcfdf77f28c9ed67c33 for Cirrus, but
not opt-in, because Github CI lacks custom variables.
42ffbfe4df
Add sv2 log category for Stratum v2
1b84e85e2e
build: libbitcoin_sv2 scaffold
e6b22e0be7
Add sv2 noise protocol
Co-Authored-By: Christopher Coverdale <chris.coverdale24@gmail.com>
04396cb767
Add sv2 message CoinbaseOutputDataSize
This commit adds the simplest stratum v2 message. The remaining messages are introduced in later commits.
Co-Authored-By: Christopher Coverdale <chris.coverdale24@gmail.com>
9ba1a18817
Move CNetMessage and Transport headers to common
This avoids a circular dependency between bitcoin-sv2 and bitcoin-node.
362d3d1f41
Convert between Sv2NetMsg and CSerializedNetMsg
This allows us to subclass Transport.
666b3bc54c
Introduce Sv2Transport
Implemented starting from a copy of V2Transport and the V2TransportTester,
modifying it to fit Stratum v2 and Noise Protocol requirements.
Co-Authored-By: Christopher Coverdale <chris.coverdale24@gmail.com>
Co-Authored-By: Fi3
5880322882
Add sv2 SETUP_CONNECTION messages
Co-Authored-By: Christopher Coverdale <chris.coverdale24@gmail.com>
c211a16dac
Add strings for Sv2MsgType
818e124563
test: move the implementation of StaticContentsSock to .cpp
Move the implementation (method definitions) from `test/util/net.h` to
`test/util/net.cpp` to make the header easier to follow.
c085339f0e
test: put the generic parts from StaticContentsSock into a separate class
This allows reusing them in other mocked implementations.
ae25a757df
test: add a mocked Sock that allows inspecting what has been Send() to it
And also allows gradually providing the data to be returned by `Recv()`
and sending and receiving net messages (`CNetMessage`).
e7733c0077
Merge remote-tracking branch 'vasild/sockman' into 2024/06/sv2_connection
9ee662c8a3
Move i2p from node to common
558c9e08e8
init storage to make MSAN happy
28d0915004
Add Sv2Connman
Co-Authored-By: Vasil Dimov <vd@FreeBSD.org>
a1f543b31b
tmp: fix silent merge conflict with bitcoin/bitcoin#30937
e8d8aead6a
Stratum v2 template provider scaffold
The template provider will listen for a Job Declarator client.
It can establish a connection and detect various protocol errors.
Co-Authored-By: Christopher Coverdale <chris.coverdale24@gmail.com>
Co-Authored-By: Fi3
f6d8a2a1cf
Chainparams: add default sv2 port
Co-authored-by: Christopher Coverdale <chris.coverdale24@gmail.com>
9b54e1ff23
Add remaining sv2 messages for TemplateProvider
16c3a72441
Sv2: construct and submit block templates
10e74c6cff
Handle REQUEST_TRANSACTION_DATA
e319553420
Reject RequestTransactionData for stale templates
7ae2642ee2