
LogExpFunctions

Various special functions based on log and exp moved from StatsFuns.jl into a separate package, to minimize dependencies. These functions only use native Julia code, so there is no need to depend on librmath or similar libraries. See the discussion at StatsFuns.jl#46.

The original authors of these functions are the StatsFuns.jl contributors.

LogExpFunctions supports InverseFunctions.inverse and ChangesOfVariables.with_logabsdet_jacobian for log1mexp, log1pexp, log2mexp, logexpm1, logistic, logit, and logcosh (no inverse).

LogExpFunctions.xlogx — Function

xlogx(x)

Return x * log(x) for x ≥ 0, handling $x = 0$ by taking the downward limit.

julia> xlogx(0)
0.0
source
LogExpFunctions.xlog1py — Function

xlog1py(x, y)

Return x * log(1 + y) for y ≥ -1 with correct limit at $x = 0$.

julia> xlog1py(0, -1)
0.0
source
LogExpFunctions.xexpy — Function

xexpy(x, y)

Return x * exp(y) for y > -Inf, or zero if y == -Inf or if x == 0 and y is finite.

julia> xexpy(1.0, -Inf)
0.0
source
LogExpFunctions.logistic — Function

logistic(x)

The logistic sigmoid function mapping a real number to a value in the interval $[0,1]$,

\[\sigma(x) = \frac{1}{e^{-x} + 1} = \frac{e^x}{1+e^x}.\]

Its inverse is the logit function.

source
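A quick illustrative check (added here, not part of the original docstring):

julia> using LogExpFunctions

julia> logistic(0)
0.5

julia> logit(logistic(2.5)) ≈ 2.5
true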
LogExpFunctions.logit — Function

logit(x)

The logit or log-odds transformation, defined as

\[\operatorname{logit}(x) = \log\left(\frac{x}{1-x}\right)\]

for $0 < x < 1$.

Its inverse is the logistic function.

source
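An illustrative example (not from the original docstring):

julia> logit(0.5)
0.0

julia> logistic(logit(0.25)) ≈ 0.25
true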
LogExpFunctions.logcosh — Function

logcosh(x)

Return log(cosh(x)), carefully evaluated without intermediate calculation of cosh(x).

The implementation ensures logcosh(-x) = logcosh(x).

source
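Illustrative checks of the symmetry and overflow behaviour (not part of the docstring):

julia> logcosh(-3) == logcosh(3)
true

julia> logcosh(1000) ≈ 1000 - log(2)  # cosh(1000) itself overflows to Inf
true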
LogExpFunctions.logabssinh — Function

logabssinh(x)

Return log(abs(sinh(x))), carefully evaluated without intermediate calculation of sinh(x).

The implementation ensures logabssinh(-x) = logabssinh(x).

source
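Analogous illustrative checks (not part of the docstring):

julia> logabssinh(-3) == logabssinh(3)
true

julia> logabssinh(1000) ≈ 1000 - log(2)  # sinh(1000) itself overflows to Inf
true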
LogExpFunctions.log1psq — Function

log1psq(x)

Return log(1+x^2) evaluated carefully for abs(x) very small or very large.

source
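An illustrative check for a large argument (not from the original docstring):

julia> log1psq(1e200) ≈ 2 * log(1e200)  # naive log(1 + (1e200)^2) would overflow to Inf
true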
LogExpFunctions.log1pexp — Function

log1pexp(x)

Return log(1+exp(x)) evaluated carefully for largish x.

This is also called the "softplus" transformation, being a smooth approximation to max(0,x). Its inverse is logexpm1.


source
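Illustrative examples for large positive and negative arguments (not part of the docstring):

julia> log1pexp(1000.0)  # naive log(1 + exp(1000.0)) would overflow to Inf
1000.0

julia> log1pexp(-1000.0)  # ≈ exp(-1000.0), which underflows to 0
0.0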
LogExpFunctions.log1mexp — Function

log1mexp(x)

Return log(1 - exp(x)).

See: Martin Maechler (2012), "Accurately Computing log(1 − exp(− |a|))", https://cran.r-project.org/web/packages/Rmpfr/vignettes/log1mexp-note.pdf

Note: different from Maechler (2012), there is no negation inside the parentheses; this computes log(1 - exp(x)) rather than log(1 - exp(-x)).

source
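An illustrative check for a small negative argument, where the naive formula loses all precision (not part of the docstring):

julia> log1mexp(-1e-20) ≈ log(1e-20)  # naive log(1 - exp(-1e-20)) evaluates to -Inf
true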
LogExpFunctions.log2mexp — Function

log2mexp(x)

Return log(2 - exp(x)), evaluated as log1p(-expm1(x)).

source
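A minimal illustrative check (not from the original docstring):

julia> log2mexp(0.0) == 0  # log(2 - exp(0)) = log(1) = 0
true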
LogExpFunctions.logexpm1 — Function

logexpm1(x)

Return log(exp(x) - 1) or the “invsoftplus” function. It is the inverse of log1pexp (aka “softplus”).

source
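Illustrative checks of the inverse relationship and the overflow behaviour (not part of the docstring):

julia> log1pexp(logexpm1(2.0)) ≈ 2.0
true

julia> logexpm1(1000.0)  # naive log(exp(1000.0) - 1) would overflow
1000.0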
LogExpFunctions.log1pmx — Function

log1pmx(x)

Return log(1 + x) - x.

Uses either a naive calculation or range reduction outside the kernel range. Accurate to within ~2 ulps for all x. Falls back to the naive calculation for argument types other than Float64.

source
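An illustrative check near zero, where direct evaluation suffers catastrophic cancellation (not part of the docstring):

julia> log1pmx(1e-10) ≈ -5.0e-21  # leading term of the series -x^2/2 + x^3/3 - ...
true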
LogExpFunctions.logmxp1 — Function

logmxp1(x)

Return log(x) - x + 1, carefully evaluated. Falls back to the naive calculation for argument types other than Float64.

source
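An illustrative check (not from the original docstring); note that mathematically logmxp1(x) = log1pmx(x - 1):

julia> logmxp1(2.0) ≈ log(2) - 1
true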
LogExpFunctions.logaddexp — Function

logaddexp(x, y)

Return log(exp(x) + exp(y)), avoiding intermediate overflow/underflow, and handling non-finite values.

source
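Illustrative examples (not part of the docstring):

julia> logaddexp(1000.0, 1000.0) ≈ 1000.0 + log(2)  # naive log(exp(1000) + exp(1000)) overflows
true

julia> logaddexp(2.0, -Inf)
2.0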
LogExpFunctions.logsubexp — Function

logsubexp(x, y)

Return log(abs(exp(x) - exp(y))), preserving numerical accuracy.

source
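An illustrative example (not from the original docstring):

julia> logsubexp(1000.0, 1001.0) ≈ 1001.0 + log1p(-exp(-1.0))  # log(abs(exp(1000) - exp(1001)))
true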
LogExpFunctions.logsumexp — Function

logsumexp(X)

Compute log(sum(exp, X)).

X should be an iterator of real or complex numbers. The result is computed in a numerically stable way that avoids intermediate over- and underflow, using a single pass over the data.

See also logsumexp!.

References

Sebastian Nowozin: Streaming Log-sum-exp Computation

source
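Illustrative examples (not part of the docstring):

julia> logsumexp([1.0, 2.0, 3.0]) ≈ log(exp(1) + exp(2) + exp(3))
true

julia> logsumexp([-1000.0, -1000.0]) ≈ -1000.0 + log(2)  # naive computation underflows to log(0)
true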
logsumexp(X; dims)

Compute log.(sum(exp.(X); dims=dims)).

The result is computed in a numerically stable way that avoids intermediate over- and underflow, using a single pass over the data.

See also logsumexp!.

References

Sebastian Nowozin: Streaming Log-sum-exp Computation

source
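An illustrative example with the dims keyword (not from the original docstring):

julia> X = [1.0 2.0; 3.0 4.0];

julia> logsumexp(X; dims=1) ≈ log.(sum(exp.(X); dims=1))
true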
LogExpFunctions.logsumexp!Function
logsumexp!(out, X)
+

Compute logsumexp of X over the singleton dimensions of out, and write results to out.

The result is computed in a numerically stable way that avoids intermediate over- and underflow, using a single pass over the data.

See also logsumexp.

References

Sebastian Nowozin: Streaming Log-sum-exp Computation

source
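An illustrative example (not part of the docstring); here out has a singleton first dimension, so the reduction is over dimension 1:

julia> X = [1.0 2.0; 3.0 4.0];

julia> out = zeros(1, 2);

julia> logsumexp!(out, X);

julia> out ≈ logsumexp(X; dims=1)
true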
LogExpFunctions.softmax! — Function

softmax!(r::AbstractArray{<:Real}, x::AbstractArray{<:Real}=r; dims=:)

Overwrite r with the softmax transformation of x over dimension dims.

That is, r is overwritten with exp.(x), normalized to sum to 1 over the given dimensions.

See also: softmax

source
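An illustrative in-place example (not from the original docstring); with the one-argument form, r is transformed in place:

julia> r = [1.0, 2.0, 3.0];

julia> softmax!(r);  # overwrite r with its softmax

julia> sum(r) ≈ 1
true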
LogExpFunctions.softmax — Function

softmax(x::AbstractArray{<:Real}; dims=:)

Return the softmax transformation of x over dimension dims.

That is, return exp.(x), normalized to sum to 1 over the given dimensions.

See also: softmax!

source
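An illustrative example (not part of the docstring):

julia> x = [1.0, 2.0, 3.0];

julia> softmax(x) ≈ exp.(x) ./ sum(exp.(x))
true

julia> sum(softmax(x)) ≈ 1
true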
LogExpFunctions.cloglog — Function

cloglog(x)

Compute the complementary log-log, log(-log(1 - x)).

source
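An illustrative example (not from the original docstring):

julia> cloglog(0.5) ≈ log(log(2))
true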
LogExpFunctions.cexpexp — Function

cexpexp(x)

Compute the complementary double exponential, 1 - exp(-exp(x)).

source
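Mathematically, cexpexp inverts cloglog; an illustrative check (not part of the docstring):

julia> cexpexp(cloglog(0.3)) ≈ 0.3
true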