Part of my reason for #7 was to make it easier to extend. Some other functions we should provide:

dot_kbn: compute the dot products using double-double, then sum up using Kahan summation
mean_kbn: divide the extended-precision sum using a 2-div style, so that mean_kbn(fill(0.1,10)) == 0.1

(A rough sketch of both is given right after this list.)
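To make the two proposals concrete, here is a minimal sketch written against plain Float64 rather than the package's internal types; the names sumpair_kbn, two_prod, mean_kbn_sketch, and dot_kbn_sketch are illustrative only, and reading "2-div" as "divide the high part and the compensation separately before recombining" is an assumption, not necessarily what #7 intends.

```julia
# Kahan–Babuška–Neumaier summation, returning the running sum together with
# the accumulated compensation instead of collapsing them at the end.
function sumpair_kbn(x)
    T = float(eltype(x))
    hi = zero(T)   # leading part of the sum
    lo = zero(T)   # accumulated rounding errors (compensation)
    for xi in x
        t = hi + xi
        if abs(hi) >= abs(xi)
            lo += (hi - t) + xi   # low-order bits lost from hi
        else
            lo += (xi - t) + hi   # low-order bits lost from xi
        end
        hi = t
    end
    return hi, lo
end

# "2-div" reading of mean_kbn: divide both components before recombining,
# so the compensation also participates in the mean.
function mean_kbn_sketch(x)
    hi, lo = sumpair_kbn(x)
    n = length(x)
    return hi / n + lo / n
end

# Error-free product via fma: a*b == p + e exactly (the double-double building block).
function two_prod(a, b)
    p = a * b
    e = fma(a, b, -p)
    return p, e
end

# dot_kbn: form each product as an exact (p, e) pair, then KBN-accumulate both parts.
function dot_kbn_sketch(x, y)
    T = float(promote_type(eltype(x), eltype(y)))
    hi = zero(T)
    lo = zero(T)
    for (a, b) in zip(x, y)
        for v in two_prod(a, b)
            t = hi + v
            if abs(hi) >= abs(v)
                lo += (hi - t) + v
            else
                lo += (v - t) + hi
            end
            hi = t
        end
    end
    return hi + lo
end
```

With this sketch, mean_kbn_sketch(fill(0.1, 10)) == 0.1 and dot_kbn_sketch(fill(0.1, 10), ones(10)) == 1.0, whereas the plain sum(fill(0.1, 10)) / 10 is not equal to 0.1.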
I'm interested in #7, because it allows simple parallelization with something like

psum_kbn(f, X) = singleprec(Folds.mapreduce(f, InitialValues.asmonoid(plus_kbn), X))
psum_kbn(X) = psum_kbn(identity, X)
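For readers following along: plus_kbn and singleprec are defined in #7 and not reproduced here, so the sketch below uses stand-in names (two_sum, plus_pair, psum_sketch) to show the kind of associative combiner such a Folds-based reduction needs, assuming Folds.jl and InitialValues.jl are available.

```julia
using Folds, InitialValues

# Knuth's error-free two-sum: a + b == s + e exactly.
function two_sum(a, b)
    s = a + b
    bv = s - a
    e = (a - (s - bv)) + (b - bv)
    return s, e
end

# Combine two (hi, lo) partial sums. This is associative up to rounding, so it
# can be wrapped with asmonoid and handed to a parallel mapreduce, mirroring
# the psum_kbn snippet above.
function plus_pair((h1, l1), (h2, l2))
    h, e = two_sum(h1, h2)
    return (h, (l1 + l2) + e)
end

function psum_sketch(f, X)
    hi, lo = Folds.mapreduce(x -> (v = float(f(x)); (v, zero(v))),
                             InitialValues.asmonoid(plus_pair), X)
    return hi + lo
end
psum_sketch(X) = psum_sketch(identity, X)
```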
Is there anything wrong with the PR besides the fact that singleprec(x::TwicePrecisionN{T}) where {T} = x.hi - x.nlo is better suited for AD?
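For context, here is a guess at the container behind that one-liner (the actual definition lives in the PR and may differ): a two-word value whose low word is stored negated, so collapsing it to working precision is a single subtraction, which is presumably the AD-friendly part.

```julia
# Hypothetical layout implied by the field names above; not the PR's code.
struct TwicePrecisionN{T}
    hi::T    # leading part
    nlo::T   # negated trailing part: the value represented is hi - nlo
end

# Collapse to working precision with one subtraction (a single smooth primitive).
singleprec(x::TwicePrecisionN) = x.hi - x.nlo
```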