
precision function #48

Closed
edljk opened this issue Nov 11, 2024 · 2 comments

@edljk

edljk commented Nov 11, 2024

.. does not seem to be available anymore for MP:

julia> precision(Float64x4)
ERROR: MethodError: no method matching _precision_with_base_2(::Type{MultiFloat{Float64, 4}})
The function `_precision_with_base_2` exists, but no method is defined for this combination of argument types.

Closest candidates are:
  _precision_with_base_2(::BigFloat)
   @ Base mpfr.jl:957
  _precision_with_base_2(::Type{BigFloat})
   @ Base mpfr.jl:962
  _precision_with_base_2(::Type{Float64})
   @ Base float.jl:872
  ...
@dzhang314
Owner

Hey @edljk, thanks for pointing this out! This is because the Julia authors have once again changed the way that Base.precision(::Type{T}) is internally implemented. It looks like Base._precision has been renamed to Base._precision_with_base_2.

It's annoying that these changes occur with no notification to package authors. I will publish a fix in the forthcoming MultiFloats.jl v3.0 release, which will also include new arithmetic algorithms fixing #42. In the meantime, you can use this patch to restore previous behavior:

using MultiFloats

@inline Base.precision(::Type{MultiFloat{T,N}}) where {T,N} =
    N * precision(T) + (N - 1) # implicit bits of precision between limbs
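For reference, a quick sanity check of what the patched definition would report (a sketch that does not require loading MultiFloats.jl; it only assumes `precision(Float64) == 53`, which is the IEEE 754 double-precision significand width):

```julia
# For Float64x4 (T = Float64, N = 4 limbs), the patched definition gives
# N * precision(Float64) + (N - 1) = 4 * 53 + 3 = 215 bits.
N = 4
patched_precision = N * precision(Float64) + (N - 1)
println(patched_precision)  # 215
```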

Please be aware that MultiFloat types use a fundamentally different representation from IEEEFloat and BigFloat types, so their precision requires special interpretation. The gap between x and the next representable MultiFloat value depends on the exponent of the trailing (least significant) limb of x, as opposed to the leading (most significant) limb. Base.precision(Float64xN) is intended to provide an average-case estimate; it is neither an upper nor a lower bound.
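The trailing-limb effect can be illustrated in plain Julia without MultiFloats.jl, using a two-limb "double-double" sketch (an unevaluated sum `hi + lo`); the particular values here are illustrative, not taken from the package:

```julia
# A two-limb value hi + lo can be perturbed in its lo limb alone, so the
# gap to the next representable value near this point is eps(lo), which
# is set by the trailing limb's exponent, not the leading limb's.
hi = 1.0
lo = 1.0e-30            # trailing limb far below eps(hi)
gap = eps(lo)           # spacing determined by the trailing limb
println(gap < eps(hi))  # true: much finer than Float64 spacing near 1.0
```

Because `lo` can sit anywhere below `hi` in magnitude, this spacing varies from value to value, which is why a single precision number for Float64xN can only be an average-case figure.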

@edljk
Author

edljk commented Nov 11, 2024

Perfect, thanks!

@edljk edljk closed this as completed Nov 11, 2024