Enhance Large Input Decoding Performance #101
Base58 isn't really intended to be used for large values. The primary point of using it is human readability and transcribability, which I would estimate starts to break down due to sheer size at around 32 bytes. For larger values that don't need to be as human readable, base64 gives much better efficiency.
Looking at the benchmarks included in this project, using integers appears faster. For inputs between 0 and 10 characters, u64 is faster; between 10 and 20 characters, u128 is faster; and for encoded outputs of 32 bytes and larger, BigUint is faster. In the benchmark chart, teal uses the u64/u128/BigUint intermediates and orange uses the original nested loops.
For decoding, using the corresponding intermediate for these input sizes is comparable to or faster than nested loops over the output. For 32 bytes, similar in size to Bitcoin addresses, see #102.
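For concreteness, here is a minimal sketch of the integer-intermediate idea for short inputs, assuming the Bitcoin alphabet. This is illustrative Rust, not the crate's actual code, and `decode_u128` is a hypothetical helper name:

```rust
// Sketch only: decode a short base58 string (<= ~22 characters, so the
// value fits in a u128) by accumulating into an integer and serializing
// once at the end, instead of running a multiply-and-carry loop over an
// output byte buffer for every character.

const ALPHABET: &[u8; 58] =
    b"123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

fn decode_u128(input: &str) -> Option<Vec<u8>> {
    // Leading '1' characters map to leading zero bytes in the output.
    let leading_ones = input.bytes().take_while(|&c| c == b'1').count();

    let mut value: u128 = 0;
    for c in input.bytes().skip(leading_ones) {
        // A real implementation would use a 256-entry reverse lookup table.
        let digit = ALPHABET.iter().position(|&a| a == c)? as u128;
        value = value.checked_mul(58)?.checked_add(digit)?;
    }

    // Serialize big-endian and drop the padding bytes from to_be_bytes().
    let bytes = value.to_be_bytes();
    let first = bytes.iter().position(|&b| b != 0).unwrap_or(bytes.len());
    let mut out = vec![0u8; leading_ones];
    out.extend_from_slice(&bytes[first..]);
    Some(out)
}
```

Once the value no longer fits in a u128, the checked arithmetic bails out, which is where a cutover to a big-integer intermediate would take over.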
@Nemo157 is correct. See also https://digitalbazaar.github.io/base58-spec/#rfc.section.7.1
I understand the poor performance of encoding to a base that does not align with the source. With a target of ~32 bytes, the performance can be increased by using an integer intermediate. There is increased complexity in handling much larger inputs, but with decreased resource utilization. This was my concern, as there are other uses of base58 where I cannot control the size of the input.
Using rug or num_bigint for larger inputs significantly increases performance. BigUint is slower for 10-byte inputs, but for 255 bytes it is ~10x faster, and for 10k bytes it is ~50x faster.
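As a hedged sketch of that direction (this is not the bs58 crate's API; the chunk size and function name are assumptions), the decoder can fold digits into a u64 ten at a time, so the big integer is only touched once per chunk:

```rust
// Sketch only: decode arbitrarily long base58 input via num_bigint
// (Cargo.toml: num-bigint = "0.4"), folding 10 digits at a time into a u64
// before doing a single wide multiply-add, since 58^10 < 2^64.
use num_bigint::BigUint;

const ALPHABET: &[u8; 58] =
    b"123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz";

fn decode_biguint(input: &str) -> Option<Vec<u8>> {
    // Leading '1' characters map to leading zero bytes in the output.
    let leading_ones = input.bytes().take_while(|&c| c == b'1').count();
    let rest = &input.as_bytes()[leading_ones..];

    let mut value = BigUint::from(0u8);
    for chunk in rest.chunks(10) {
        let mut chunk_value: u64 = 0;
        for &c in chunk {
            let digit = ALPHABET.iter().position(|&a| a == c)? as u64;
            chunk_value = chunk_value * 58 + digit;
        }
        // One BigUint multiply-add per 10 input characters.
        value = value * BigUint::from(58u64.pow(chunk.len() as u32))
            + BigUint::from(chunk_value);
    }

    let mut out = vec![0u8; leading_ones];
    if !rest.is_empty() {
        out.extend_from_slice(&value.to_bytes_be());
    }
    Some(out)
}
```

A real implementation would also want a reverse lookup table for the alphabet and proper error types rather than Option.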
Related issue: kevinheavey/based58#5