This is related to #98: `Float64` precision seems to be baked into the package, whereas it would be more flexible (and more Julian) to use the precision of the arguments. For example, using `BigFloat` in the following example only gives 16 digits of accuracy:
```julia
julia> using FiniteDifferences

julia> extrapolate_fdm(central_fdm(2, 1), sin, big"1.0")[1] - cos(big"1.0")
-6.71174699531887290713204926350530055924229547805016487900793424335727248883486e-17
```
In contrast, "manually" calling Richardson extrapolation with a 2nd-order finite-difference approximation gives 70 digits (in 10 iterations):
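As a rough sketch of the manual approach, assuming the `extrapolate` function from the Richardson.jl package (the starting step `big"0.1"` is an illustrative choice):

```julia
using Richardson  # assumed: the Richardson.jl extrapolation package

x = big"1.0"
# Richardson-extrapolate a 2nd-order central difference of sin as h → 0.
# rtol = 0 asks for convergence down to the precision of BigFloat.
val, err = extrapolate(big"0.1", rtol=0) do h
    (sin(x + h) - sin(x - h)) / 2h
end
val - cos(x)  # error far below the ~1e-16 Float64 floor
```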
So, somewhere FiniteDifferences is either hard-coding a tolerance (rather than using `eps(float(x))`) or "contaminating" the calculation with an inexact `Float64` literal.

cc @hammy4815
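For context on why a hard-coded tolerance caps the accuracy at 16 digits, compare the machine epsilons of the two types:

```julia
eps(Float64)          # ≈ 2.22e-16: the ~16-digit floor seen above
eps(BigFloat)         # ≈ 1.73e-77 at the default 256-bit precision
eps(float(big"1.0"))  # follows the argument's type, as suggested above
```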
This is true.
It is hard-coded to use `Float64` in a bunch of places, e.g. when storing the precomputed grids.
It would take a fair bit of work to fix, but it would be good if someone were interested in doing so.
I don't have the need or the time right now.
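For illustration, a hypothetical sketch of the kind of change involved (the `CentralGrid` name and fields are made up here, not FiniteDifferences' actual internals): parameterize the stored grids on an element type instead of fixing `Float64`.

```julia
# Hypothetical sketch, not FiniteDifferences' actual internals: carry the
# element type as a parameter so precomputed grids match the caller's precision.
struct CentralGrid{T<:AbstractFloat}
    points::Vector{T}   # evaluation offsets
    weights::Vector{T}  # finite-difference weights
end

# 2nd-order central-difference weights, stored in BigFloat precision:
grid = CentralGrid(BigFloat[-1, 0, 1], BigFloat[-1//2, 0, 1//2])
eltype(grid.points)  # BigFloat: the precision now tracks the arguments
```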