When populating the b58 vector, we don't need to go through the whole vector, only the bytes that are currently valid.
Signed-off-by: Huang Le <4tarhl@gmail.com>
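For reference, a minimal sketch of the idea (illustrative only, not the exact patch; names and buffer sizing are assumptions): the big-number conversion keeps a count of how many base-58 digits are valid so far, so the inner carry-propagation loop stops there instead of scanning the whole buffer for every input byte.

```cpp
#include <cstdint>
#include <string>
#include <vector>

static const char* kBase58Alphabet =
    "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

std::string EncodeBase58(const std::vector<uint8_t>& input)
{
    // Leading zero bytes map directly to '1' characters in base58.
    size_t zeroes = 0;
    while (zeroes < input.size() && input[zeroes] == 0) ++zeroes;

    // log(256)/log(58) ~= 1.365, so this rounds the digit count up.
    std::vector<uint8_t> b58((input.size() - zeroes) * 138 / 100 + 1, 0);

    size_t length = 0; // number of valid base-58 digits so far
    for (size_t i = zeroes; i < input.size(); ++i) {
        unsigned carry = input[i];
        size_t j = 0;
        // The optimization: stop once the carry is spent and all
        // currently-valid digits have been visited, instead of
        // walking the entire b58 buffer each time.
        for (auto it = b58.rbegin();
             (carry != 0 || j < length) && it != b58.rend(); ++it, ++j) {
            carry += 256 * *it;
            *it = carry % 58;
            carry /= 58;
        }
        length = j;
    }

    // One '1' per leading zero byte, then the digits, most significant first.
    std::string result(zeroes, '1');
    for (auto it = b58.end() - length; it != b58.end(); ++it)
        result += kBase58Alphabet[*it];
    return result;
}
```

The extra `j < length` comparison in the loop condition is the "extra comparing ops" mentioned below, which is why the measured gain falls short of the theoretical one.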
In my local testing this patch improves b58 encoding performance by around 85%. It doesn't achieve the 100% theoretical gain due to the extra comparison ops introduced in the iteration.
How does this compare with luke-jr/bfgminer#540's implementation? About the same? I was thinking we should throw the C base58 encode/decode into a simple, small library, and just use it in both.
I wanted to avoid complicating the code for something so little performance-critical, but I'm fine with improving it.
How was this benchmarked? A large number of address-sized inputs, or a very large amount of data to convert? I would expect the actual conversion loop to be fast compared to the other overhead.
Also, decoding can get the same optimization.
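The decode side can track the valid length the same way; a hedged sketch of what that might look like (again illustrative naming and sizing, not the actual implementation):

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

static const char* kBase58Alphabet =
    "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

// Returns false if str contains a character outside the base58 alphabet.
bool DecodeBase58(const std::string& str, std::vector<uint8_t>& out)
{
    // Leading '1' characters map directly to zero bytes.
    size_t zeroes = 0;
    while (zeroes < str.size() && str[zeroes] == '1') ++zeroes;

    // log(58)/log(256) ~= 0.733, so this rounds the byte count up.
    std::vector<uint8_t> b256((str.size() - zeroes) * 733 / 1000 + 1, 0);

    size_t length = 0; // number of valid output bytes so far
    for (size_t i = zeroes; i < str.size(); ++i) {
        const char* p = std::strchr(kBase58Alphabet, str[i]);
        if (str[i] == '\0' || p == nullptr)
            return false; // invalid base-58 character
        unsigned carry = static_cast<unsigned>(p - kBase58Alphabet);
        size_t j = 0;
        // Same trick as encoding: stop once the carry is spent and all
        // currently-valid bytes have been visited.
        for (auto it = b256.rbegin();
             (carry != 0 || j < length) && it != b256.rend(); ++it, ++j) {
            carry += 58 * *it;
            *it = carry % 256;
            carry /= 256;
        }
        length = j;
    }

    out.assign(zeroes, 0x00);
    out.insert(out.end(), b256.end() - length, b256.end());
    return true;
}
```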
Not sure about this. We don't have a requirement for high-performance base58 encoding and decoding in Bitcoin Core. I'd like to avoid over-optimizing things not on the critical path. Security and ease of understanding are more important here.
FWIW, I've imported the C version to https://github.com/luke-jr/libbase58 , which could be used either as a shared library or a subtree.
Automatic sanity-testing: PASSED, see http://jenkins.bluematt.me/pull-tester/p4713_f2c3cb626e37542d0cedd958e2bcc3b7d6c71938/ for binaries and test log. This test script verifies pulls every time they are updated. However, it sometimes dies and fails to test properly. If you are waiting on a test, please check the timestamps to verify that the test.log is moving at http://jenkins.bluematt.me/pull-tester/current/. Contact BlueMatt on freenode if something looks broken.
Closing this; sorry about that, but as said, optimizing base58 is not that critical for us, and I don't like the risk of the extra complexity introducing a subtle bug.