I’m creating this issue to collect some of the assorted ongoing plans for review and pipeline work needed to reach an initial non-experimental release. We’re still a considerable way off from potentially using this software in production for verification in Bitcoin (deployment first /requires/ a canonical-signatures soft-fork), but when that time comes, we need to make sure we’re not stuck waiting on an assurance process and delayed by it.
We have funding available that we can use both for external review and for bounties. I’m somewhat concerned there is a non-trivial fairness issue here: the hardworking ongoing contributors, who’ve delivered demonstrated results, don’t get paid, while we’d consider paying for a drive-by review. So we need to think carefully about what value we’re getting.
One of the things I’ve said I’d like to do with bounties is to turn them around. Part of the reason that review of mature software is often unrewarding is that the probability of finding something is fairly low. By release time our testing infrastructure alone should be (and perhaps already is) close to the point where any plausible bug is excluded. To test that theory, we could offer a bounty not (just) for bugs in the software, but for the ability to introduce a plausible mistake into the software that the tests are unable to detect. Since the tests are, for the most part, inherently not sound, it should at least be theoretically possible to do this, and the odds may be better, so it may be more fun to participate.
Announcements about things like performance may also bring more participants into the project, so we ought to think about how best to time them.
ae9c5916ffe9cb5bca3d6bb5b2afbacd9b690dc880e1918ae4d5261f35d371db