Jacobi elliptic functions and complete elliptic integral of the first kind #79

Open

wants to merge 18 commits into master
3 changes: 3 additions & 0 deletions docs/src/index.md
@@ -40,6 +40,9 @@ libraries.
| [`besselix(nu,z)`](@ref SpecialFunctions.besselix) | scaled modified Bessel function of the first kind of order `nu` at `z` |
| [`besselk(nu,z)`](@ref SpecialFunctions.besselk) | modified [Bessel function](https://en.wikipedia.org/wiki/Bessel_function) of the second kind of order `nu` at `z` |
| [`besselkx(nu,z)`](@ref SpecialFunctions.besselkx) | scaled modified Bessel function of the second kind of order `nu` at `z` |
| [`ellipj(u,m)`](@ref SpecialFunctions.ellipj) | Jacobi elliptic functions `sn,cn,dn` |
| `jpq(u,m)` | Jacobi elliptic function `pq` |
| [`ellipK(m)`](@ref SpecialFunctions.ellipK) | complete elliptic integral of the first kind |

## Installation

2 changes: 2 additions & 0 deletions docs/src/special.md
@@ -46,4 +46,6 @@ SpecialFunctions.besselk
SpecialFunctions.besselkx
SpecialFunctions.eta
SpecialFunctions.zeta
SpecialFunctions.ellipj
SpecialFunctions.ellipK
```
5 changes: 5 additions & 0 deletions src/SpecialFunctions.jl
@@ -71,7 +71,12 @@ end
export sinint,
       cosint

export ellipj,
       jss,jsc,jsd,jsn,jcs,jcc,jcd,jcn,jds,jdc,jdd,jdn,jns,jnc,jnd,jnn,
Member
What are all these exports?

Member
Ah, I see, they're generated by macros. The names are a bit cryptic: are these standard? Maybe prefix them (elliptic_jss) or stick them in a module (Elliptic.jss)?

Author
There are 16 Jacobi elliptic functions, namely all ordered pairs of the letters s, c, d, n. Since `cd` is already taken, I decided to put a `j` (for Jacobi) in front of each function name to avoid name collisions. Elliptic.jl solved this issue by putting these functions into a module `Jacobi`, but I'd say `jsn(u,m)` is more convenient than `Jacobi.sn(u,m)`. Might be worth discussing, though.
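A quick sketch of the naming scheme being described (illustrative only; the actual definitions in this PR are generated by the macros in src/elliptic.jl further down):

```julia
# The 16 Jacobi elliptic functions are all ordered pairs of the letters
# s, c, d, n; prefixing each name with `j` avoids clashes such as Base.cd.
letters = ("s", "c", "d", "n")
jacobi_names = [Symbol("j", p, q) for p in letters for q in letters]
# 16-element Vector{Symbol}: :jss, :jsc, :jsd, :jsn, :jcs, ..., :jnn
```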

Member
ah ok, how about jacobisn? This matches nicely with besselk/besselj, etc.

Author
jacobisn possibly makes it hard to distinguish the functions, and it wouldn't really save much typing compared to Jacobi.sn. It might be worth getting input from someone who uses these functions regularly, though.

Member
Looking at other packages:

  • Mathematica: JacobiSN
  • Matlab & SciPy provide them all at once via an ellipj function.
  • GSL provides them all via gsl_sf_elljac_e

Author
We might also want to rename the K and iK functions. It's just too much of a hassle not to be able to use K for anything else. For reference:

  • Mathematica: EllipticK
  • Matlab: ellipticK
  • Scipy: ellipk
  • GSL: gsl_sf_ellint_Kcomp

I'd vote for ellipK, as the K is usually uppercase in mathematical notation. Definitely not ellipticK unless we rename ellipj.

Also, I am a bit unsure about providing iK and how it should be named. If you look at the code, it definitely makes sense to provide iK because K(m) is really just iK(1-m) and so K(1-m) = iK(1-(1-m)) which is just stupid. But AFAIK no other software package provides this, which means I might be doing something wrong. And the commonly used mathematical notation for K(1-m) seems to be K', but obviously we can't use that so we need to think of something else.
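A small sketch of the precision argument above, using the ellipK/ellipiK names proposed in this thread (the statements are generic Float64 facts, not output from this branch):

```julia
# For m1 close to 0, the complementary integral K(1-m1) cannot be computed
# accurately by going through ellipK: forming 1 - m1 rounds to 1.0, so the
# subsequent 1 - (1 - m1) inside ellipK returns 0.0 and all information in
# m1 is lost.
m1 = 1e-20
1 - m1 == 1.0          # true in Float64
1 - (1 - m1) == 0.0    # true: ellipK(1 - m1) would effectively see m == 1
# ellipiK(m1), by contrast, works with m1 directly and keeps full precision.
```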

Member
I'd still prefer to rename the functions to longer names.

ellipK, ellipiK

include("bessel.jl")
include("elliptic.jl")
include("erf.jl")
include("sincosint.jl")
include("gamma.jl")
245 changes: 245 additions & 0 deletions src/elliptic.jl
@@ -0,0 +1,245 @@
# References:
# [1] Abramowitz, Stegun: Handbook of Mathematical Functions (1965)
# [2] ellipjc.m in Toby Driscoll's Schwarz-Christoffel Toolbox


#------------------------------------------------
# Descending and Ascending Landen Transformation

descstep(m) = m/(1+sqrt(1-m))^2

@generated function shrinkm(m,::Val{N}) where {N}
    # [1, Sec 16.12] / https://dlmf.nist.gov/22.7.i
    quote
        f = one(m)
        Base.Cartesian.@nexprs $N i->begin
            k_i = descstep(m)
            m = k_i^2
            f *= 1+k_i
        end
        return (Base.Cartesian.@ntuple $N k), f, m
    end
end

function ellipj_smallm(u,m)
    # [1, Sec 16.13] / https://dlmf.nist.gov/22.10.ii
    if VERSION < v"0.7-"
        sinu = sin(u)
        cosu = cos(u)
    else
        sinu, cosu = sincos(u)
    end
    sn = sinu - m*(u-sinu*cosu)*cosu/4
    cn = cosu + m*(u-sinu*cosu)*sinu/4
    dn = 1 - m*sinu^2/2
    return sn,cn,dn
end
function ellipj_largem(u,m1)
    # [1, Sec 16.15] / https://dlmf.nist.gov/22.10.ii
    sinhu = sinh(u)
    coshu = cosh(u)
    tanhu = sinhu/coshu
    sechu = 1/coshu
    sn = tanhu + m1*(sinhu*coshu-u)*sechu^2/4
    cn = sechu - m1*(sinhu*coshu-u)*tanhu*sechu/4
    dn = sechu + m1*(sinhu*coshu+u)*tanhu*sechu/4
    return sn,cn,dn
end

function ellipj_growm(sn,cn,dn, k)
    # [1, Sec 16.12] / https://dlmf.nist.gov/22.7.i
    for kk in reverse(k)
        sn,cn,dn = (1+kk)*sn/(1+kk*sn^2),
                   cn*dn/(1+kk*sn^2),
                   (1-kk*sn^2)/(1+kk*sn^2)
        # ^ Use [1, 16.9.1]. Idea taken from [2]
    end
    return sn,cn,dn
end
function ellipj_shrinkm(sn,cn,dn, k::NTuple{N,<:Any}) where {N}
    # [1, Sec 16.14] / https://dlmf.nist.gov/22.7.ii
    for kk in reverse(k)
        sn,cn,dn = (1+kk)*sn*cn/dn,
                   (cn^2-kk*sn^2)/dn, # Use [1, 16.9.1]
                   (cn^2+kk*sn^2)/dn  # Use [1, 16.9.1]
    end
    return sn,cn,dn
end

function ellipj_viasmallm(u,m,::Val{N}) where {N}
    k,f,m = shrinkm(m,Val{N}())
    sn,cn,dn = ellipj_smallm(u/f,m)
    return ellipj_growm(sn,cn,dn,k)
end
function ellipj_vialargem(u,m,::Val{N}) where {N}
    k,f,m1 = shrinkm(1-m,Val{N}())
    sn,cn,dn = ellipj_largem(u/f,m1)
    return ellipj_shrinkm(sn,cn,dn,k)
end


#----------------
# Pick algorithm

function ellipj_dispatch(u,m, ::Val{N}) where {N}
    if abs(m) <= 1 && real(m) <= 0.5
        return ellipj_viasmallm(u,m, Val{N}())
    elseif abs(1-m) <= 1
        return ellipj_vialargem(u,m, Val{N}())
    elseif imag(m) == 0 && real(m) < 0
        # [1, Sec 16.10]
        sn,cn,dn = ellipj_dispatch(u*sqrt(1-m),-m/(1-m), Val{N}())
        return sn/(dn*sqrt(1-m)), cn/dn, 1/dn
    else
        # [1, Sec 16.11]
        sn,cn,dn = ellipj_dispatch(u*sqrt(m),1/m, Val{N}())
        return sn/sqrt(m), dn, cn
    end
end

Base.@pure puresqrt(x::Float64) = sqrt(x)
Base.@pure function nsteps(m,ε)
    i = 0
    while abs(m) > ε
        m = descstep(m)^2
        i += 1
    end
    return i
end
Base.@pure nsteps(ε,::Type{<:Real}) = nsteps(0.5,ε) # Guarantees convergence in [-1,0.5]
Base.@pure nsteps(ε,::Type{<:Complex}) = nsteps(0.5+sqrt(3)/2im,ε) # This is heuristic.
Comment on lines +101 to +110
Member
I still don't think the @pures here are necessary (unless you have benchmarks suggesting otherwise).
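One way to check this, assuming this branch is loaded (a sketch, not results that have been run):

```julia
# Compare timings with and without the @pure annotations by editing
# src/elliptic.jl and re-running; allocations are the telltale sign of
# dynamic dispatch.
using BenchmarkTools, SpecialFunctions
@btime ellipj(0.7, 0.25)           # real arguments
@btime ellipj(0.7 + 0.1im, 0.25)   # complex u
```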

Contributor
@MasonProtter, Jul 14, 2020

These functions aren’t pure anyways...

Author
Right, this needs to be looked into.

function ellipj_nsteps(u,m)
    # Compute the number of Landen steps required to reach machine precision.
    # For all FloatXX types, this can be done at compile time, while for
    # BigFloat this has to be done at runtime.
    T = promote_type(typeof(u),typeof(m))
    ε = puresqrt(Float64(eps(real(typeof(m)))))
Member
Suggested change
    ε = puresqrt(Float64(eps(real(typeof(m)))))
    ε = sqrt(Float64(eps(real(typeof(m)))))

A plain sqrt should be sufficient in most cases.

    N = nsteps(ε,typeof(m))
    return ellipj_dispatch(u,m,Val{N}())::NTuple{3,T}
end
Comment on lines +101 to +119

Hi. Sorry if this isn't in the correct place. I was hoping this would be finished. From my reading of this pull request, the major stumbling block is these lines that use a lot of @pure. When I tried to test this, I found that the @pure were needed to prevent dynamic dispatch. I tried to modify the code a bit to prevent this and wound up with the following

Suggested change
Base.@pure function nsteps(m,ε)
    i = 0
    while abs(m) > ε
        m = descstep(m)^2
        i += 1
    end
    return i
end
Base.@pure nsteps(ε,::Type{<:Real}) = nsteps(0.5,ε) # Guarantees convergence in [-1,0.5]
Base.@pure nsteps(ε,::Type{<:Complex}) = nsteps(0.5+sqrt(3)/2im,ε) # This is heuristic.
function ellipj_nsteps(u,m)
    # Compute the number of Landen steps required to reach machine precision.
    # For all FloatXX types, this can be done at compile time, while for
    # BigFloat this has to be done at runtime.
    T = promote_type(typeof(u),typeof(m))
    ε = puresqrt(Float64(eps(real(typeof(m)))))
    N = nsteps(ε,typeof(m))
    return ellipj_dispatch(u,m,Val{N}())::NTuple{3,T}
end
Base.@pure function nsteps(M)
    m = _convfac(M)
    ε = _vareps(M)
    i = 0
    while abs(m) > ε
        m = descstep(m)^2
        i += 1
    end
    return i
end
@inline _convfac(::Type{<:Real}) = 0.5
@inline _convfac(::Type{<:Complex}) = 0.5 + sqrt(3)/2im
@inline _vareps(::Type{<:Complex{T}}) where {T} = Float64(sqrt(eps(T)))
@inline _vareps(T::Type{<:Real}) = Float64(sqrt(eps(T)))
function ellipj_nsteps(u,m)
    # Compute the number of Landen steps required to reach machine precision.
    # For all FloatXX types, this can be done at compile time, while for
    # BigFloat this has to be done at runtime.
    T = promote_type(typeof(u),typeof(m))
    N = nsteps(T)
    return ellipj_dispatch(u,m,Val{N}())::NTuple{3,T}
end

This seems to work in my testing. For versions < 1.7 I still needed the @pure to prevent dynamic dispatch. I am not sure if this usage of @pure is alright, but it seems to be more in line with what I read in the docs. The good news is that on 1.7 everything infers correctly, so the @pure is not needed.

What are the chances that this gets implemented soonish? I can try to take over this pull request @ettersi if any additional work is needed. I just need this implementation of the Jacobi functions for some research I am doing.
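For what it's worth, a minimal way to check the inference claim above (assuming the refactored nsteps(T) from the suggestion is applied; this has not been run against the current branch):

```julia
using Test, InteractiveUtils, SpecialFunctions
# If inference succeeds, @inferred returns the value; otherwise it throws.
@inferred ellipj(0.3, 0.5)            # expect NTuple{3,Float64}
@inferred ellipj(0.3 + 0.2im, 0.5)    # expect NTuple{3,ComplexF64}
# For a visual check, look for `Any` annotations in:
@code_warntype ellipj(0.3, 0.5)
```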



#-----------------------------------
# Type promotion and special values

function ellipj_check(u,m)
    if isfinite(u) && isfinite(m)
        return ellipj_nsteps(u,m)
    else
        T = promote_type(typeof(u),typeof(m))
        return (T(NaN),T(NaN),T(NaN))
    end
end

ellipj(u::Real,m::Real) = ellipj_check(promote(float(u),float(m))...)
function ellipj(u::Complex,m::Real)
    T = promote_type(real.(typeof.(float.((u,m))))...)
    return ellipj_check(convert(Complex{T},u), convert(T,m))
end
ellipj(u,m::Complex) = ellipj_check(promote(float(u),float(m))...)

"""
ellipj(u,m) -> sn,cn,dn

Jacobi elliptic functions `sn`, `cn` and `dn`.

Convenience function `jpq(u,m)` with `p,q ∈ {s,c,d,n}` are also
provided, but this function is more efficient if more than one elliptic
function with the same arguments is required.
"""
function ellipj end


#-----------------------
# Convenience functions

chars = ("s","c","d")
for (i,p) in enumerate(chars)
    pn = Symbol("j"*p*"n")
    np = Symbol("jn"*p)
    @eval begin
        $pn(u,m) = ellipj(u,m)[$i]
        $np(u,m) = 1/$pn(u,m)
    end
end
for p in (chars...,"n")
    pp = Symbol("j"*p*p)
    @eval $pp(u,m) = one(promote_type(typeof.(float.((u,m)))...))
end

for p in chars, q in chars
    p == q && continue
    pq = Symbol("j"*p*q)
    pn = Symbol(p*"n")
    qn = Symbol(q*"n")
    @eval begin
        function $pq(u::Number,m::Number)
            sn,cn,dn = ellipj(u,m)
            return $pn/$qn
        end
    end
end

for p in (chars...,"n"), q in (chars...,"n")
    pq = Symbol("j"*p*q)
    @eval begin
        """
            $(string($pq))(u,m)

        Jacobi elliptic function `$($p)$($q)`.

        See also `ellipj(u,m)` if more than one Jacobi elliptic function
        with the same arguments is required.
        """
        function $pq end
    end
end


#----------------------------------------------
# Complete elliptic integral of the first kind

function ellipiK_agm(m)
    # [1, Sec 17.6]
    T = typeof(m)
    m == 0 && return T(Inf)
    isnan(m) && return T(NaN)
    a,b = one(m),sqrt(m)
    while abs(a-b) > 2*eps(abs(a))
        a,b = (a+b)/2,sqrt(a*b)
    end
    return T(π)/(2*a) # https://github.com/JuliaLang/julia/issues/26324
end
ellipiK(m) = ellipiK_agm(float(m))
ellipK(m::Real) = ellipiK(1-m)
function ellipK(m::Complex)
    # Make sure we hit the "right" branch of sqrt if imag(m) == 0.
    # Here, "right" is defined as being consistent with mpmath.
    if imag(m) == 0
        return ellipiK(complex(1-real(m),imag(m)))
    else
        return ellipiK(1-m)
    end
end

"""
    ellipiK(m1)

Evaluate `ellipK(1-m1)` with better precision for small values of `m1`.
"""
function ellipiK end

doc"""
    ellipK(m)

Complete elliptic integral of the first kind ``K``.

```math
\begin{aligned}
K(m)
&= \int_0^1 \big((1-t^2)\,(1-mt^2)\big)^{-1/2} \, dt \\
&= \int_0^{\pi/2} (1-m \sin^2\theta)^{-1/2} \, d\theta
\end{aligned}
```
"""
function ellipK end
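Illustrative usage of the API added in this file (a sketch based on the docstrings above, not output captured from this branch):

```julia
u, m = 0.7, 0.25
sn, cn, dn = ellipj(u, m)     # all three Jacobi functions at once
jsn(u, m) ≈ sn                # single-function convenience wrapper
sn^2 + cn^2 ≈ 1               # standard identities, handy as sanity checks
dn^2 + m*sn^2 ≈ 1
ellipK(m) ≈ ellipiK(1 - m)    # complete integral via the complementary parameter
```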