diff --git a/previews/PR74/.documenter-siteinfo.json b/previews/PR74/.documenter-siteinfo.json index 669e8001..8cf74f81 100644 --- a/previews/PR74/.documenter-siteinfo.json +++ b/previews/PR74/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.10.0","generation_timestamp":"2024-02-06T17:46:54","documenter_version":"1.2.1"}} \ No newline at end of file +{"documenter":{"julia_version":"1.10.1","generation_timestamp":"2024-02-24T10:21:37","documenter_version":"1.2.1"}} \ No newline at end of file diff --git a/previews/PR74/API/regularization/index.html b/previews/PR74/API/regularization/index.html index 1c11f893..dddf5e6e 100644 --- a/previews/PR74/API/regularization/index.html +++ b/previews/PR74/API/regularization/index.html @@ -1,5 +1,5 @@ -Regularization Terms · RegularizedLeastSquares.jl

API for Regularizers

This page contains documentation of the public API of RegularizedLeastSquares.jl. In the Julia REPL, this documentation can be accessed by entering the help mode with ?

RegularizedLeastSquares.L21RegularizationType
L21Regularization

Regularization term implementing the proximal map for group-soft-thresholding.

Arguments

  • λ - regularization parameter

Keywords

  • slices=1 - number of elements per group
source
RegularizedLeastSquares.LLRRegularizationType
LLRRegularization

Regularization term implementing the proximal map for locally low rank (LLR) regularization using singular-value-thresholding.

Arguments

  • λ - regularization parameter

Keywords

  • shape::Tuple{Int}=[] - dimensions of the image
  • blockSize::Tuple{Int}=[2;2] - size of patches to perform singular value thresholding on
  • randshift::Bool=true - randomly shifts the patches to ensure translation invariance
source
RegularizedLeastSquares.NuclearRegularizationType
NuclearRegularization

Regularization term implementing the proximal map for singular value soft-thresholding.

Arguments:

  • λ - regularization parameter

Keywords

  • svtShape::NTuple - size of the underlying matrix
source
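A minimal usage sketch; the constructor form NuclearRegularization(λ; svtShape = ...) and the vectorized input are assumptions based on the argument list above:

X = randn(8, 8)
reg = NuclearRegularization(0.05; svtShape = (8, 8))
x = vec(X)                     # the term is applied to the vectorized matrix
prox!(reg, x)                  # singular value soft-thresholding, in place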
RegularizedLeastSquares.TVRegularizationType
TVRegularization

Regularization term implementing the proximal map for TV regularization. Calculated with the Condat algorithm if the TV is calculated only along one real-valued dimension and with the Fast Gradient Projection algorithm otherwise.

Reference for the Condat algorithm: https://lcondat.github.io/publis/Condat-fast_TV-SPL-2013.pdf

Reference for the FGP algorithm: A. Beck and T. Teboulle, "Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems", IEEE Trans. Image Process. 18(11), 2009

Arguments

  • λ::T - regularization parameter

Keywords

  • shape::NTuple - size of the underlying image
  • dims - dimension(s) along which the TV is computed. If an Integer is given, the Condat algorithm is used; otherwise the FGP algorithm is used.
  • iterationsTV=20 - number of FGP iterations
source
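A minimal usage sketch, mirroring the construction used in the getting-started guide:

using RegularizedLeastSquares

N = 32
reg = TVRegularization(0.01; shape = (N, N))
x = randn(N * N)               # vectorized image
prox!(reg, x)                  # in-place TV proximal map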

Projection Regularization

Nested Regularization

RegularizedLeastSquares.innerregMethod
innerreg(reg::AbstractNestedRegularization)

return the inner regularization term of reg. Nested regularization terms also implement the iteration interface.

source

Scaled Regularization

Misc. Nested Regularization

RegularizedLeastSquares.MaskedRegularizationType
MaskedRegularization

Nested regularization term that only applies prox! and norm to elements of x for which the mask is true.

Examples

julia> positive = PositiveRegularization();
+Regularization Terms · RegularizedLeastSquares.jl

API for Regularizers

This page contains documentation of the public API of RegularizedLeastSquares.jl. In the Julia REPL, this documentation can be accessed by entering the help mode with ?

RegularizedLeastSquares.L21RegularizationType
L21Regularization

Regularization term implementing the proximal map for group-soft-thresholding.

Arguments

  • λ - regularization parameter

Keywords

  • slices=1 - number of elements per group
source
RegularizedLeastSquares.LLRRegularizationType
LLRRegularization

Regularization term implementing the proximal map for locally low rank (LLR) regularization using singular-value-thresholding.

Arguments

  • λ - regularization parameter

Keywords

  • shape::Tuple{Int}=[] - dimensions of the image
  • blockSize::Tuple{Int}=[2;2] - size of patches to perform singular value thresholding on
  • randshift::Bool=true - randomly shifts the patches to ensure translation invariance
source
RegularizedLeastSquares.NuclearRegularizationType
NuclearRegularization

Regularization term implementing the proximal map for singular value soft-thresholding.

Arguments:

  • λ - regularization parameter

Keywords

  • svtShape::NTuple - size of the underlying matrix
source
RegularizedLeastSquares.TVRegularizationType
TVRegularization

Regularization term implementing the proximal map for TV regularization. Calculated with the Condat algorithm if the TV is calculated only along one real-valued dimension and with the Fast Gradient Projection algorithm otherwise.

Reference for the Condat algorithm: https://lcondat.github.io/publis/Condat-fast_TV-SPL-2013.pdf

Reference for the FGP algorithm: A. Beck and T. Teboulle, "Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems", IEEE Trans. Image Process. 18(11), 2009

Arguments

  • λ::T - regularization parameter

Keywords

  • shape::NTuple - size of the underlying image
  • dims - dimension(s) along which the TV is computed. If an Integer is given, the Condat algorithm is used; otherwise the FGP algorithm is used.
  • iterationsTV=20 - number of FGP iterations
source

Projection Regularization

Nested Regularization

RegularizedLeastSquares.innerregMethod
innerreg(reg::AbstractNestedRegularization)

return the inner regularization term of reg. Nested regularization terms also implement the iteration interface.

source

Scaled Regularization

Misc. Nested Regularization

RegularizedLeastSquares.MaskedRegularizationType
MaskedRegularization

Nested regularization term that only applies prox! and norm to elements of x for which the mask is true.

Examples

julia> positive = PositiveRegularization();
 
 julia> masked = MaskedRegularization(reg, [true, false, true, false]);
 
@@ -8,11 +8,11 @@
   0.0
  -1.0
   0.0
- -1.0
source
RegularizedLeastSquares.TransformedRegularizationType
TransformedRegularization(reg, trafo)

Nested regularization term that applies prox! or norm on z = trafo * x and returns (inplace) x = adjoint(trafo) * z.

Example

julia> core = L1Regularization(0.8)
 L1Regularization{Float64}(0.8)
 
 julia> wop = WaveletOp(Float32, shape = (32,32));
 
 julia> reg = TransformedRegularization(core, wop);
 
-julia> prox!(reg, randn(32*32)); # Apply soft-thresholding in Wavelet domain
source
RegularizedLeastSquares.PlugAndPlayRegularizationType
    PlugAndPlayRegularization

Regularization term implementing a given plug-and-play proximal mapping. The actual regularization term is indirectly defined by the learned proximal mapping and as such there is no norm implemented.

Arguments

  • λ - regularization parameter

Keywords

  • model - model applied to the image
  • shape - dimensions of the image
  • input_transform - transform of image before model
source

Miscellaneous Functions

RegularizedLeastSquares.prox!Method
prox!(reg::AbstractParameterizedRegularization, x)

perform the proximal mapping defined by reg on x. Uses the regularization parameter defined for reg.

source
RegularizedLeastSquares.prox!Method
prox!(regType::Type{<:AbstractParameterizedRegularization}, x, λ; kwargs...)

construct a regularization term of type regType with given λ and kwargs and apply its prox! on x

source
LinearAlgebra.normMethod
norm(reg::AbstractParameterizedRegularization, x)

returns the value of the reg regularization term on x. Uses the regularization parameter defined for reg.

source
LinearAlgebra.normMethod
norm(regType::Type{<:AbstractParameterizedRegularization}, x, λ; kwargs...)

construct a regularization term of type regType with given λ and kwargs and apply its norm on x

source
+julia> prox!(reg, randn(32*32)); # Apply soft-thresholding in Wavelet domain
source
RegularizedLeastSquares.PlugAndPlayRegularizationType
    PlugAndPlayRegularization

Regularization term implementing a given plug-and-play proximal mapping. The actual regularization term is indirectly defined by the learned proximal mapping and as such there is no norm implemented.

Arguments

  • λ - regularization parameter

Keywords

  • model - model applied to the image
  • shape - dimensions of the image
  • input_transform - transform of image before model
source

Miscellaneous Functions

RegularizedLeastSquares.prox!Method
prox!(reg::AbstractParameterizedRegularization, x)

perform the proximal mapping defined by reg on x. Uses the regularization parameter defined for reg.

source
RegularizedLeastSquares.prox!Method
prox!(regType::Type{<:AbstractParameterizedRegularization}, x, λ; kwargs...)

construct a regularization term of type regType with given λ and kwargs and apply its prox! on x

source
LinearAlgebra.normMethod
norm(reg::AbstractParameterizedRegularization, x)

returns the value of the reg regularization term on x. Uses the regularization parameter defined for reg.

source
LinearAlgebra.normMethod
norm(regType::Type{<:AbstractParameterizedRegularization}, x, λ; kwargs...)

construct a regularization term of type regType with given λ and kwargs and apply its norm on x

source
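A short illustration of the two call styles documented above (instance-based versus type-based), using L1Regularization as the concrete term:

using RegularizedLeastSquares, LinearAlgebra

x = [1.0, -2.0, 0.5]
reg = L1Regularization(0.1)

norm(reg, x)                           # value of the term, using the λ stored in reg
prox!(reg, copy(x))                    # proximal map with the stored λ

norm(L1Regularization, x, 0.1)         # construct-and-evaluate convenience forms
prox!(L1Regularization, copy(x), 0.1)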
diff --git a/previews/PR74/API/solvers/index.html b/previews/PR74/API/solvers/index.html index c98c77ab..577ed3c4 100644 --- a/previews/PR74/API/solvers/index.html +++ b/previews/PR74/API/solvers/index.html @@ -40,10 +40,10 @@ end plot_trace (generic function with 1 method) -julia> x_approx = solve!(S, b; callbacks = [conv, plot_trace]);

The keyword callbacks allows you to pass a (vector of) callable objects that take the arguments solver and iteration and print, store, or plot intermediate results.

See also StoreSolutionCallback, StoreConvergenceCallback, CompareSolutionCallback for a number of provided callback options.

source
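A compact sketch of the callback mechanism described above; the callback name and solver choice are illustrative:

using RegularizedLeastSquares

A = randn(32, 16)
b = A * randn(16)
S = CGNR(A; iterations = 20)

history = Int[]                                     # record the iteration numbers
record_iteration(solver, iteration) = push!(history, iteration)

x_approx = solve!(S, b; callbacks = [record_iteration])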

ADMM

RegularizedLeastSquares.ADMMType
ADMM(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)
-ADMM( ; AHA = ,     precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)

Creates an ADMM object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • precon - preconditioner for the internal CG algorithm
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • regTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - penalty of the augmented Lagrangian
  • vary_rho::Symbol - vary rho to balance primal and dual feasibility; options :none, :balance, :PnP
  • iterations::Int - maximum number of (outer) ADMM iterations
  • iterationsCG::Int - maximum number of (inner) CG iterations
  • absTol::Real - absolute tolerance for stopping criterion
  • relTol::Real - relative tolerance for stopping criterion
  • tolInner::Real - relative tolerance for CG stopping criterion
  • verbose::Bool - print residual in each iteration

ADMM differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).

See also createLinearSolver, solve!.

source
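A sketch of the reg/regTrafo split described above for a TV-like penalty on a 16×16 image (a 2D shape is assumed here instead of the 3D shape in the docstring):

using RegularizedLeastSquares

Nx, Ny = 16, 16
A = randn(200, Nx * Ny)
b = A * randn(Nx * Ny)

reg = L1Regularization(0.01)
regTrafo = RegularizedLeastSquares.GradientOp(Float64; shape = (Nx, Ny))

solver = createLinearSolver(ADMM, A; reg = reg, regTrafo = regTrafo, rho = 0.1, iterations = 20)
x_approx = solve!(solver, b)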

CGNR

RegularizedLeastSquares.CGNRType
CGNR(A; AHA = A' * A, reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), weights = similar(AHA, 0), iterations = 10, relTol = eps(real(eltype(AHA))))
-CGNR( ; AHA = ,       reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), weights = similar(AHA, 0), iterations = 10, relTol = eps(real(eltype(AHA))))

creates a CGNR object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • weights::AbstractVector - weights for the data term; must be of same length and type as the data term
  • iterations::Int - maximum number of iterations
  • relTol::Real - tolerance for stopping criterion

See also createLinearSolver, solve!.

source
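A minimal sketch of a Tikhonov-regularized least-squares solve with CGNR:

using RegularizedLeastSquares

A = randn(64, 32)
b = A * randn(32)

solver = createLinearSolver(CGNR, A; reg = L2Regularization(0.01), iterations = 30)
x_approx = solve!(solver, b)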

Kaczmarz

RegularizedLeastSquares.KaczmarzType
Kaczmarz(A; reg = L2Regularization(0), normalizeReg = NoNormalization(), weights=nothing, randomized=false, subMatrixFraction=0.15, shuffleRows=false, seed=1234, iterations=10, regMatrix=nothing)

Creates a Kaczmarz object for the forward operator A.

Required Arguments

  • A - forward operator

Optional Keyword Arguments

  • reg::AbstractParameterizedRegularization - regularization term
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • weights::AbstractVector - weights for the data term
  • randomized::Bool - use the randomized Kaczmarz algorithm
  • subMatrixFraction::Real - fraction of rows used in randomized Kaczmarz algorithm
  • shuffleRows::Bool - shuffle the order in which the rows are processed
  • seed::Int - seed for randomized algorithm
  • iterations::Int - number of iterations

See also createLinearSolver, solve!.

source
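A minimal sketch of a randomized Kaczmarz solve; the keyword values are illustrative:

using RegularizedLeastSquares

A = randn(100, 50)
b = A * randn(50)

solver = createLinearSolver(Kaczmarz, A; randomized = true, subMatrixFraction = 0.2, shuffleRows = true, iterations = 50)
x_approx = solve!(solver, b)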

FISTA

RegularizedLeastSquares.FISTAType
FISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, restart = :none, verbose = false)
-FISTA( ; AHA=,     reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, restart = :none, verbose = false)

creates a FISTA object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • precon - preconditioner for the internal CG algorithm
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - step size for gradient step
  • normalize_rho::Bool - normalize step size by the largest eigenvalue of AHA
  • theta::Real - parameter for predictor-corrector step
  • relTol::Real - tolerance for stopping criterion
  • iterations::Int - maximum number of iterations
  • restart::Symbol - :none, :gradient options for restarting
  • verbose::Bool - print residual in each iteration

See also createLinearSolver, solve!.

source
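A minimal sketch of sparse recovery from an underdetermined system with FISTA and gradient restart:

using RegularizedLeastSquares

A = randn(64, 128)
b = A * randn(128)

solver = createLinearSolver(FISTA, A; reg = L1Regularization(0.01), iterations = 100, restart = :gradient)
x_approx = solve!(solver, b)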

OptISTA

RegularizedLeastSquares.OptISTAType
OptISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, verbose = false)
-OptISTA( ; AHA=,     reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, verbose = false)

creates an OptISTA object for the forward operator A or normal operator AHA. OptISTA has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores two extra intermediate variables of the size of the image compared to FISTA.

Reference:

  • Uijeong Jang, Shuvomoy Das Gupta, Ernest K. Ryu, "Computer-Assisted Design of Accelerated Composite Optimization Methods: OptISTA," arXiv:2305.15704, 2023, [https://arxiv.org/abs/2305.15704]

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • reg::AbstractParameterizedRegularization - regularization term
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - step size for gradient step
  • normalize_rho::Bool - normalize step size by the largest eigenvalue of AHA
  • theta::Real - parameter for predictor-corrector step
  • relTol::Real - tolerance for stopping criterion
  • iterations::Int - maximum number of iterations
  • verbose::Bool - print residual in each iteration

See also createLinearSolver, solve!.

source
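A sketch of constructing the solver from the normal operator only; following the solve! docstring, the back-projection A'b is then passed in place of b:

using RegularizedLeastSquares

A = randn(64, 32)
b = A * randn(32)
AHA = A' * A

solver = OptISTA(; AHA = AHA, reg = L1Regularization(0.01), iterations = 50)
x_approx = solve!(solver, A' * b)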

POGM

RegularizedLeastSquares.POGMType
POGM(A; AHA = A'*A, reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), rho = 0.95, normalize_rho = true, theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), iterations = 50, restart = :none, verbose = false)
-POGM( ; AHA = ,     reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), rho = 0.95, normalize_rho = true, theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), iterations = 50, restart = :none, verbose = false)

Creates a POGM object for the forward operator A or normal operator AHA. POGM has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores three extra intermediate variables of the size of the image compared to FISTA. Only the gradient restart scheme is implemented for now.

References:

  • A.B. Taylor, J.M. Hendrickx, F. Glineur, "Exact worst-case performance of first-order algorithms for composite convex optimization," Arxiv:1512.07516, 2015, SIAM J. Opt. 2017 [http://doi.org/10.1137/16m108104x]

  • Kim, D., & Fessler, J. A. (2018). Adaptive Restart of the Optimized Gradient Method for Convex Optimization. Journal of Optimization Theory and Applications, 178(1), 240–263. [https://doi.org/10.1007/s10957-018-1287-4]

    Required Arguments

    • A - forward operator

    OR

    • AHA - normal operator (as a keyword argument)

    Optional Keyword Arguments

    • AHA - normal operator is optional if A is supplied
    • reg::AbstractParameterizedRegularization - regularization term
    • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
    • rho::Real - step size for gradient step
    • normalize_rho::Bool - normalize step size by the largest eigenvalue of AHA
    • theta::Real - parameter for predictor-corrector step
    • sigma_fac::Real - parameter for decreasing γ-momentum ∈ [0,1]
    • relTol::Real - tolerance for stopping criterion
    • iterations::Int - maximum number of iterations
    • restart::Symbol - :none, :gradient options for restarting
    • verbose::Bool - print residual in each iteration

See also createLinearSolver, solve!.

source

SplitBregman

RegularizedLeastSquares.SplitBregmanType
SplitBregman(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterationsOuter = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)
-SplitBregman( ; AHA = ,     precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterationsOuter = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)

Creates a SplitBregman object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • precon - preconditioner for the internal CG algorithm
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • regTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - weights for condition on regularized variables; can also be a vector for multiple regularization terms
  • iterationsOuter::Int - maximum number of outer iterations. Set to 1 for unconstrained split Bregman (equivalent to ADMM)
  • iterationsInner::Int - maximum number of inner iterations
  • iterationsCG::Int - maximum number of (inner) CG iterations
  • absTol::Real - absolute tolerance for stopping criterion
  • relTol::Real - relative tolerance for stopping criterion
  • tolInner::Real - relative tolerance for CG stopping criterion
  • verbose::Bool - print residual in each iteration

This algorithm solves the constrained problem (Eq. (4.7) in Tom Goldstein and Stanley Osher), i.e. minimize ||R(x)||₁ such that ||Ax - b||₂² < σ². In order to solve the unconstrained problem (Eq. (4.8) in Tom Goldstein and Stanley Osher), i.e. ||Ax - b||₂² + λ||R(x)||₁, you can either set iterationsOuter=1 or use ADMM instead, which is equivalent (iterationsOuter=1 in SplitBregman is implied in ADMM, and the SplitBregman variable iterationsInner is simply called iterations in ADMM).

Like ADMM, SplitBregman differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).

See also createLinearSolver, solve!.

source
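A sketch analogous to the ADMM example, using SplitBregman for the constrained formulation (2D image shape assumed):

using RegularizedLeastSquares

Nx, Ny = 16, 16
A = randn(200, Nx * Ny)
b = A * randn(Nx * Ny)

reg = L1Regularization(0.01)
regTrafo = RegularizedLeastSquares.GradientOp(Float64; shape = (Nx, Ny))

solver = createLinearSolver(SplitBregman, A; reg = reg, regTrafo = regTrafo, iterationsOuter = 5, iterationsInner = 10)
x_approx = solve!(solver, b)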

Miscellaneous Functions

RegularizedLeastSquares.StoreSolutionCallbackType
StoreSolutionCallback(T)

Callback that accumulates the solver's solution per iteration. Results are stored in the solutions field.

source
RegularizedLeastSquares.StoreConvergenceCallbackType
StoreConvergenceCallback()

Callback that accumulates the solver's convergence metrics per iteration. Results are stored in the convMeas field.

source
RegularizedLeastSquares.CompareSolutionCallbackType
CompareSolutionCallback(ref, cmp)

Callback that compares the solver's current solution with the given reference via cmp(ref, solution) per iteration. Results are stored in the results field.

source
RegularizedLeastSquares.linearSolverListFunction

Return a list of all available linear solvers

source
RegularizedLeastSquares.createLinearSolverFunction
createLinearSolver(solver::AbstractLinearSolver, A; kargs...)

This method creates a solver. The supported solvers are methods typically used for solving regularized linear systems. All solvers return an approximate solution to Ax = b.

TODO: give a hint what solvers are available

source
RegularizedLeastSquares.applicableSolverListFunction
applicable(args...)

list all solvers that are applicable to the given arguments. Arguments are the same as for isapplicable without the solver type.

See also isapplicable, linearSolverList.

source
RegularizedLeastSquares.isapplicableFunction
isapplicable(solverType::Type{<:AbstractLinearSolver}, A, x, reg)

return true if a solver of type solverType is applicable to system matrix A, data x and regularization terms reg.

source
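A short sketch of the solver-introspection helpers above; the argument forms follow the docstrings, with the regularization terms passed as a vector:

using RegularizedLeastSquares

A = randn(10, 10)
x = randn(10)

linearSolverList()                                         # all available solvers
isapplicable(Kaczmarz, A, x, [L21Regularization(0.4f0)])   # false
applicableSolverList(A, x, [L1Regularization(0.1)])        # solvers applicable to these arguments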
+julia> x_approx = solve!(S, b; callbacks = [conv, plot_trace]);

The keyword callbacks allows you to pass a (vector of) callable objects that take the arguments solver and iteration and print, store, or plot intermediate results.

See also StoreSolutionCallback, StoreConvergenceCallback, CompareSolutionCallback for a number of provided callback options.

source

ADMM

RegularizedLeastSquares.ADMMType
ADMM(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)
+ADMM( ; AHA = ,     precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)

Creates an ADMM object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • precon - preconditioner for the internal CG algorithm
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • regTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - penalty of the augmented Lagrangian
  • vary_rho::Symbol - vary rho to balance primal and dual feasibility; options :none, :balance, :PnP
  • iterations::Int - maximum number of (outer) ADMM iterations
  • iterationsCG::Int - maximum number of (inner) CG iterations
  • absTol::Real - absolute tolerance for stopping criterion
  • relTol::Real - relative tolerance for stopping criterion
  • tolInner::Real - relative tolerance for CG stopping criterion
  • verbose::Bool - print residual in each iteration

ADMM differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).

See also createLinearSolver, solve!.

source

CGNR

RegularizedLeastSquares.CGNRType
CGNR(A; AHA = A' * A, reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), weights = similar(AHA, 0), iterations = 10, relTol = eps(real(eltype(AHA))))
+CGNR( ; AHA = ,       reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), weights = similar(AHA, 0), iterations = 10, relTol = eps(real(eltype(AHA))))

creates a CGNR object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • weights::AbstractVector - weights for the data term; must be of same length and type as the data term
  • iterations::Int - maximum number of iterations
  • relTol::Real - tolerance for stopping criterion

See also createLinearSolver, solve!.

source

Kaczmarz

RegularizedLeastSquares.KaczmarzType
Kaczmarz(A; reg = L2Regularization(0), normalizeReg = NoNormalization(), weights=nothing, randomized=false, subMatrixFraction=0.15, shuffleRows=false, seed=1234, iterations=10, regMatrix=nothing)

Creates a Kaczmarz object for the forward operator A.

Required Arguments

  • A - forward operator

Optional Keyword Arguments

  • reg::AbstractParameterizedRegularization - regularization term
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • randomized::Bool - use the randomized Kaczmarz algorithm
  • subMatrixFraction::Real - fraction of rows used in randomized Kaczmarz algorithm
  • shuffleRows::Bool - shuffle the order in which the rows are processed
  • seed::Int - seed for randomized algorithm
  • iterations::Int - number of iterations

See also createLinearSolver, solve!.

source

FISTA

RegularizedLeastSquares.FISTAType
FISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, restart = :none, verbose = false)
+FISTA( ; AHA=,     reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, restart = :none, verbose = false)

creates a FISTA object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • precon - preconditioner for the internal CG algorithm
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - step size for gradient step
  • normalize_rho::Bool - normalize step size by the largest eigenvalue of AHA
  • theta::Real - parameter for predictor-corrector step
  • relTol::Real - tolerance for stopping criterion
  • iterations::Int - maximum number of iterations
  • restart::Symbol - :none, :gradient options for restarting
  • verbose::Bool - print residual in each iteration

See also createLinearSolver, solve!.

source

OptISTA

RegularizedLeastSquares.OptISTAType
OptISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, verbose = false)
+OptISTA( ; AHA=,     reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, verbose = false)

creates an OptISTA object for the forward operator A or normal operator AHA. OptISTA has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores two extra intermediate variables of the size of the image compared to FISTA.

Reference:

  • Uijeong Jang, Shuvomoy Das Gupta, Ernest K. Ryu, "Computer-Assisted Design of Accelerated Composite Optimization Methods: OptISTA," arXiv:2305.15704, 2023, [https://arxiv.org/abs/2305.15704]

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • reg::AbstractParameterizedRegularization - regularization term
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - step size for gradient step
  • normalize_rho::Bool - normalize step size by the largest eigenvalue of AHA
  • theta::Real - parameter for predictor-corrector step
  • relTol::Real - tolerance for stopping criterion
  • iterations::Int - maximum number of iterations
  • verbose::Bool - print residual in each iteration

See also createLinearSolver, solve!.

source

POGM

RegularizedLeastSquares.POGMType
POGM(A; AHA = A'*A, reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), rho = 0.95, normalize_rho = true, theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), iterations = 50, restart = :none, verbose = false)
+POGM( ; AHA = ,     reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), rho = 0.95, normalize_rho = true, theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), iterations = 50, restart = :none, verbose = false)

Creates a POGM object for the forward operator A or normal operator AHA. POGM has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores three extra intermediate variables of the size of the image compared to FISTA. Only the gradient restart scheme is implemented for now.

References:

  • A.B. Taylor, J.M. Hendrickx, F. Glineur, "Exact worst-case performance of first-order algorithms for composite convex optimization," Arxiv:1512.07516, 2015, SIAM J. Opt. 2017 [http://doi.org/10.1137/16m108104x]

  • Kim, D., & Fessler, J. A. (2018). Adaptive Restart of the Optimized Gradient Method for Convex Optimization. Journal of Optimization Theory and Applications, 178(1), 240–263. [https://doi.org/10.1007/s10957-018-1287-4]

    Required Arguments

    • A - forward operator

    OR

    • AHA - normal operator (as a keyword argument)

    Optional Keyword Arguments

    • AHA - normal operator is optional if A is supplied
    • reg::AbstractParameterizedRegularization - regularization term
    • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
    • rho::Real - step size for gradient step
    • normalize_rho::Bool - normalize step size by the largest eigenvalue of AHA
    • theta::Real - parameter for predictor-corrector step
    • sigma_fac::Real - parameter for decreasing γ-momentum ∈ [0,1]
    • relTol::Real - tolerance for stopping criterion
    • iterations::Int - maximum number of iterations
    • restart::Symbol - :none, :gradient options for restarting
    • verbose::Bool - print residual in each iteration

See also createLinearSolver, solve!.

source

SplitBregman

RegularizedLeastSquares.SplitBregmanType
SplitBregman(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterationsOuter = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)
+SplitBregman( ; AHA = ,     precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterationsOuter = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)

Creates a SplitBregman object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • precon - preconditioner for the internal CG algorithm
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • regTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - weights for condition on regularized variables; can also be a vector for multiple regularization terms
  • iterationsOuter::Int - maximum number of outer iterations. Set to 1 for unconstrained split Bregman (equivalent to ADMM)
  • iterationsInner::Int - maximum number of inner iterations
  • iterationsCG::Int - maximum number of (inner) CG iterations
  • absTol::Real - absolute tolerance for stopping criterion
  • relTol::Real - relative tolerance for stopping criterion
  • tolInner::Real - relative tolerance for CG stopping criterion
  • verbose::Bool - print residual in each iteration

This algorithm solves the constrained problem (Eq. (4.7) in Tom Goldstein and Stanley Osher), i.e. minimize ||R(x)||₁ such that ||Ax - b||₂² < σ². In order to solve the unconstrained problem (Eq. (4.8) in Tom Goldstein and Stanley Osher), i.e. ||Ax - b||₂² + λ||R(x)||₁, you can either set iterationsOuter=1 or use ADMM instead, which is equivalent (iterationsOuter=1 in SplitBregman is implied in ADMM, and the SplitBregman variable iterationsInner is simply called iterations in ADMM).

Like ADMM, SplitBregman differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).

See also createLinearSolver, solve!.

source

Miscellaneous Functions

RegularizedLeastSquares.StoreSolutionCallbackType
StoreSolutionCallback(T)

Callback that accumulates the solver's solution per iteration. Results are stored in the solutions field.

source
RegularizedLeastSquares.StoreConvergenceCallbackType
StoreConvergenceCallback()

Callback that accumulates the solver's convergence metrics per iteration. Results are stored in the convMeas field.

source
RegularizedLeastSquares.CompareSolutionCallbackType
CompareSolutionCallback(ref, cmp)

Callback that compares the solver's current solution with the given reference via cmp(ref, solution) per iteration. Results are stored in the results field.

source
RegularizedLeastSquares.linearSolverListFunction

Return a list of all available linear solvers

source
RegularizedLeastSquares.createLinearSolverFunction
createLinearSolver(solver::AbstractLinearSolver, A; kargs...)

This method creates a solver. The supported solvers are methods typically used for solving regularized linear systems. All solvers return an approximate solution to Ax = b.

TODO: give a hint what solvers are available

source
RegularizedLeastSquares.applicableSolverListFunction
applicable(args...)

list all solvers that are applicable to the given arguments. Arguments are the same as for isapplicable without the solver type.

See also isapplicable, linearSolverList.

source
RegularizedLeastSquares.isapplicableFunction
isapplicable(solverType::Type{<:AbstractLinearSolver}, A, x, reg)

return true if a solver of type solverType is applicable to system matrix A, data x and regularization terms reg.

source
diff --git a/previews/PR74/gettingStarted/index.html b/previews/PR74/gettingStarted/index.html index 9aff92bc..94cbf03c 100644 --- a/previews/PR74/gettingStarted/index.html +++ b/previews/PR74/gettingStarted/index.html @@ -8,4 +8,4 @@ y = A*vec(I)

To recover the image, we solve the TV-regularized least squares problem

\[\begin{equation} \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{y} \vert\vert_2^2 + λTV(\mathbf{x}) . \end{equation}\]

For this purpose we build a TV regularizer with regularization parameter $λ=0.01$

reg = TVRegularization(0.01; shape=(N,N))

To solve the CS problem, the Alternating Direction Method of Multipliers can be used. Thus, we build the corresponding solver

solver = createLinearSolver(ADMM, A; reg=reg, ρ=0.1, iterations=20)

and apply it to our measurement

Ireco = solve!(solver,y)
-Ireco = reshape(Ireco,N,N)

The original phantom and the reconstructed image are shown below

Phantom Reconstruction

+Ireco = reshape(Ireco,N,N)

The original phantom and the reconstructed image are shown below

Phantom Reconstruction

diff --git a/previews/PR74/index.html b/previews/PR74/index.html index bb021ab4..df22ecc3 100644 --- a/previews/PR74/index.html +++ b/previews/PR74/index.html @@ -1,3 +1,3 @@ Home · RegularizedLeastSquares.jl

RegularizedLeastSquares.jl

Solvers for Linear Inverse Problems using Regularization Techniques

Introduction

RegularizedLeastSquares.jl is a Julia package for solving large-scale linear systems using different types of algorithms. Ill-conditioned problems arise in many areas of practical interest. To solve these problems, one often resorts to regularization techniques and non-linear problem formulations. This package provides implementations for a variety of solvers, which are used in fields such as MPI and MRI.

The implemented methods range from the $l_2$-regularized CGNR method to more general optimizers such as the Alternating Direction Method of Multipliers (ADMM) or the Split-Bregman method.

For convenience, implementations of popular regularizers, such as $l_1$-regularization and TV regularization, are provided. On the other hand, hand-crafted regularizers can be used quite easily. For this purpose, a Regularization object needs to be built. The latter mainly contains the regularization parameter and a function to calculate the proximal map of a given input.

Depending on the problem, it can become infeasible to store the full system matrix at hand. For this purpose, RegularizedLeastSquares.jl allows for the use of matrix-free operators. Such operators can be realized using the interface provided by the package LinearOperators.jl. Other interfaces can be used as well, as long as the product *(A,x) and the adjoint adjoint(A) are provided. A number of common matrix-free operators are provided by the package LinearOperatorCollection.jl.
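The interface requirement just mentioned (* and adjoint) can also be met by a plain Julia type. A minimal, illustrative sketch; individual solvers may additionally require methods such as size or eltype:

struct ScalingOp               # illustrative operator: S*x = α .* x
    α::Float64
    n::Int
end

Base.:*(S::ScalingOp, x::AbstractVector) = S.α .* x
Base.adjoint(S::ScalingOp) = S          # self-adjoint for real α
Base.size(S::ScalingOp) = (S.n, S.n)
Base.eltype(::ScalingOp) = Float64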

Installation

Within Julia, use the package manager:

using Pkg
-Pkg.add("RegularizedLeastSquares")

This adds the latest release of the package. To install a different version, please consult the Pkg documentation.

Usage

+Pkg.add("RegularizedLeastSquares")

This adds the latest release of the package. To install a different version, please consult the Pkg documentation.

Usage

diff --git a/previews/PR74/regularization/index.html b/previews/PR74/regularization/index.html index b4989574..26b1bb14 100644 --- a/previews/PR74/regularization/index.html +++ b/previews/PR74/regularization/index.html @@ -36,4 +36,4 @@ julia> foreach(r -> println(nameof(typeof(r))), reg) TransformedRegularization -L1Regularization +L1Regularization diff --git a/previews/PR74/search_index.js b/previews/PR74/search_index.js index 8147c8a0..400a0ec1 100644 --- a/previews/PR74/search_index.js +++ b/previews/PR74/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"API/regularization/#API-for-Regularizers","page":"Regularization Terms","title":"API for Regularizers","text":"","category":"section"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"This page contains documentation of the public API of the RegularizedLeastSquares. In the Julia REPL one can access this documentation by entering the help mode with ?","category":"page"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"RegularizedLeastSquares.L1Regularization\nRegularizedLeastSquares.L2Regularization\nRegularizedLeastSquares.L21Regularization\nRegularizedLeastSquares.LLRRegularization\nRegularizedLeastSquares.NuclearRegularization\nRegularizedLeastSquares.TVRegularization","category":"page"},{"location":"API/regularization/#RegularizedLeastSquares.L1Regularization","page":"Regularization Terms","title":"RegularizedLeastSquares.L1Regularization","text":"L1Regularization\n\nRegularization term implementing the proximal map for the Lasso problem.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.L2Regularization","page":"Regularization Terms","title":"RegularizedLeastSquares.L2Regularization","text":"L2Regularization\n\nRegularization term implementing the proximal map for Tikhonov regularization.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.L21Regularization","page":"Regularization Terms","title":"RegularizedLeastSquares.L21Regularization","text":"L21Regularization\n\nRegularization term implementing the proximal map for group-soft-thresholding.\n\nArguments\n\nλ - regularization paramter\n\nKeywords\n\nslices=1 - number of elements per group\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.LLRRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.LLRRegularization","text":"LLRRegularization\n\nRegularization term implementing the proximal map for locally low rank (LLR) regularization using singular-value-thresholding.\n\nArguments\n\nλ - regularization paramter\n\nKeywords\n\nshape::Tuple{Int}=[] - dimensions of the image\nblockSize::Tuple{Int}=[2;2] - size of patches to perform singular value thresholding on\nrandshift::Bool=true - randomly shifts the patches to ensure translation invariance\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.NuclearRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.NuclearRegularization","text":"NuclearRegularization\n\nRegularization term implementing the proximal map for singular value soft-thresholding.\n\nArguments:\n\nλ - regularization paramter\n\nKeywords\n\nsvtShape::NTuple - size of the underlying matrix\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.TVRegularization","page":"Regularization 
Terms","title":"RegularizedLeastSquares.TVRegularization","text":"TVRegularization\n\nRegularization term implementing the proximal map for TV regularization. Calculated with the Condat algorithm if the TV is calculated only along one real-valued dimension and with the Fast Gradient Projection algorithm otherwise.\n\nReference for the Condat algorithm: https://lcondat.github.io/publis/Condat-fast_TV-SPL-2013.pdf\n\nReference for the FGP algorithm: A. Beck and T. Teboulle, \"Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems\", IEEE Trans. Image Process. 18(11), 2009\n\nArguments\n\nλ::T - regularization parameter\n\nKeywords\n\nshape::NTuple - size of the underlying image\ndims - Dimension to perform the TV along. If Integer, the Condat algorithm is called, and the FDG algorithm otherwise.\niterationsTV=20 - number of FGP iterations\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#Projection-Regularization","page":"Regularization Terms","title":"Projection Regularization","text":"","category":"section"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"RegularizedLeastSquares.PositiveRegularization\nRegularizedLeastSquares.RealRegularization","category":"page"},{"location":"API/regularization/#RegularizedLeastSquares.PositiveRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.PositiveRegularization","text":"PositiveRegularization\n\nRegularization term implementing a projection onto positive and real numbers.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.RealRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.RealRegularization","text":"RealRegularization\n\nRegularization term implementing a projection onto real numbers.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#Nested-Regularization","page":"Regularization Terms","title":"Nested Regularization","text":"","category":"section"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"RegularizedLeastSquares.innerreg(::AbstractNestedRegularization)\nRegularizedLeastSquares.sink(::AbstractNestedRegularization)\nRegularizedLeastSquares.sinktype(::AbstractNestedRegularization)","category":"page"},{"location":"API/regularization/#RegularizedLeastSquares.innerreg-Tuple{AbstractNestedRegularization}","page":"Regularization Terms","title":"RegularizedLeastSquares.innerreg","text":"innerreg(reg::AbstractNestedRegularization)\n\nreturn the inner regularization term of reg. 
Nested regularization terms also implement the iteration interface.\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#RegularizedLeastSquares.sink-Tuple{AbstractNestedRegularization}","page":"Regularization Terms","title":"RegularizedLeastSquares.sink","text":"sink(reg::AbstractNestedRegularization)\n\nreturn the innermost regularization term.\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#RegularizedLeastSquares.sinktype-Tuple{AbstractNestedRegularization}","page":"Regularization Terms","title":"RegularizedLeastSquares.sinktype","text":"sinktype(reg::AbstractNestedRegularization)\n\nreturn the type of the innermost regularization term.\n\nSee also sink.\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#Scaled-Regularization","page":"Regularization Terms","title":"Scaled Regularization","text":"","category":"section"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"RegularizedLeastSquares.AbstractScaledRegularization\nRegularizedLeastSquares.scalefactor\nRegularizedLeastSquares.NormalizedRegularization\nRegularizedLeastSquares.NoNormalization\nRegularizedLeastSquares.MeasurementBasedNormalization\nRegularizedLeastSquares.SystemMatrixBasedNormalization\nRegularizedLeastSquares.FixedParameterRegularization","category":"page"},{"location":"API/regularization/#RegularizedLeastSquares.AbstractScaledRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.AbstractScaledRegularization","text":"AbstractScaledRegularization\n\nNested regularization term that applies a scalefactor to the regularization parameter λ of its inner term.\n\nSee also scalefactor, λ, innerreg.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.scalefactor","page":"Regularization Terms","title":"RegularizedLeastSquares.scalefactor","text":"scalescalefactor(reg::AbstractScaledRegularization)\n\nreturn the scaling scalefactor for λ\n\n\n\n\n\n","category":"function"},{"location":"API/regularization/#RegularizedLeastSquares.NormalizedRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.NormalizedRegularization","text":"NormalizedRegularization\n\nNested regularization term that scales λ according to normalization scheme. 
This term is commonly applied by a solver based on a given normalization keyword\n\n#See also NoNormalization, MeasurementBasedNormalization, SystemMatrixBasedNormalization.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.NoNormalization","page":"Regularization Terms","title":"RegularizedLeastSquares.NoNormalization","text":"NoNormalization\n\nNo normalization to λ is applied.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.MeasurementBasedNormalization","page":"Regularization Terms","title":"RegularizedLeastSquares.MeasurementBasedNormalization","text":"MeasurementBasedNormalization\n\nλ is normalized by the 1-norm of b divided by its length.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.SystemMatrixBasedNormalization","page":"Regularization Terms","title":"RegularizedLeastSquares.SystemMatrixBasedNormalization","text":"SystemMatrixBasedNormalization\n\nλ is normalized by the energy of the system matrix rows.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.FixedParameterRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.FixedParameterRegularization","text":"FixedParameterRegularization\n\nNested regularization term that discards any λ passed to it and instead uses λ from its inner regularization term. This can be used to selectively disallow normalization.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#Misc.-Nested-Regularization","page":"Regularization Terms","title":"Misc. Nested Regularization","text":"","category":"section"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"RegularizedLeastSquares.MaskedRegularization\nRegularizedLeastSquares.TransformedRegularization\nRegularizedLeastSquares.PlugAndPlayRegularization","category":"page"},{"location":"API/regularization/#RegularizedLeastSquares.MaskedRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.MaskedRegularization","text":"MaskedRegularization\n\nNested regularization term that only applies prox! and norm to elements of x for which the mask is true.\n\nExamples\n\njulia> positive = PositiveRegularization();\n\njulia> masked = MaskedRegularization(reg, [true, false, true, false]);\n\njulia> prox!(masked, fill(-1, 4))\n4-element Vector{Float64}:\n 0.0\n -1.0\n 0.0\n -1.0\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.TransformedRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.TransformedRegularization","text":"TransformedRegularization(reg, trafo)\n\nNested regularization term that applies prox! or norm on z = trafo * x and returns (inplace) x = adjoint(trafo) * z.\n\nExample\n\njulia> core = L1Regularization(0.8)\nL1Regularization{Float64}(0.8)\n\njulia> wop = WaveletOp(Float32, shape = (32,32));\n\njulia> reg = TransformedRegularization(core, wop);\n\njulia> prox!(reg, randn(32*32)); # Apply soft-thresholding in Wavelet domain\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.PlugAndPlayRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.PlugAndPlayRegularization","text":" PlugAndPlayRegularization\n\nRegularization term implementing a given plug-and-play proximal mapping. 
The actual regularization term is indirectly defined by the learned proximal mapping and as such there is no norm implemented.\n\nArguments\n\nλ - regularization paramter\n\nKeywords\n\nmodel - model applied to the image\nshape - dimensions of the image\ninput_transform - transform of image before model\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#Miscellaneous-Functions","page":"Regularization Terms","title":"Miscellaneous Functions","text":"","category":"section"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"RegularizedLeastSquares.prox!(::AbstractParameterizedRegularization, ::AbstractArray)\nRegularizedLeastSquares.prox!(::Type{<:AbstractParameterizedRegularization}, ::Any, ::Any)\nRegularizedLeastSquares.norm(::AbstractParameterizedRegularization, ::AbstractArray)\nRegularizedLeastSquares.λ(::AbstractParameterizedRegularization)\nRegularizedLeastSquares.norm(::Type{<:AbstractParameterizedRegularization}, ::Any, ::Any)","category":"page"},{"location":"API/regularization/#RegularizedLeastSquares.prox!-Tuple{AbstractParameterizedRegularization, AbstractArray}","page":"Regularization Terms","title":"RegularizedLeastSquares.prox!","text":"prox!(reg::AbstractParameterizedRegularization, x)\n\nperform the proximal mapping defined by reg on x. Uses the regularization parameter defined for reg.\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#RegularizedLeastSquares.prox!-Tuple{Type{<:AbstractParameterizedRegularization}, Any, Any}","page":"Regularization Terms","title":"RegularizedLeastSquares.prox!","text":"prox!(regType::Type{<:AbstractParameterizedRegularization}, x, λ; kwargs...)\n\nconstruct a regularization term of type regType with given λ and kwargs and apply its prox! on x\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#LinearAlgebra.norm-Tuple{AbstractParameterizedRegularization, AbstractArray}","page":"Regularization Terms","title":"LinearAlgebra.norm","text":"norm(reg::AbstractParameterizedRegularization, x)\n\nreturns the value of the reg regularization term on x. Uses the regularization parameter defined for reg.\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#RegularizedLeastSquares.λ-Tuple{AbstractParameterizedRegularization}","page":"Regularization Terms","title":"RegularizedLeastSquares.λ","text":"λ(reg::AbstractParameterizedRegularization)\n\nreturn the regularization parameter λ of reg\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#LinearAlgebra.norm-Tuple{Type{<:AbstractParameterizedRegularization}, Any, Any}","page":"Regularization Terms","title":"LinearAlgebra.norm","text":"norm(regType::Type{<:AbstractParameterizedRegularization}, x, λ; kwargs...)\n\nconstruct a regularization term of type regType with given λ and kwargs and apply its norm on x\n\n\n\n\n\n","category":"method"},{"location":"solvers/#Solvers","page":"Solvers","title":"Solvers","text":"","category":"section"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.jl provides a variety of solvers, which are used in fields such as MPI and MRI. 
The following is a non-exhaustive list of the implemented solvers:","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"Kaczmarz algorithm (Kaczmarz)\nConjugate Gradients Normal Residual method (CGNR)\nFast Iterative Shrinkage Thresholding Algorithm (FISTA)\nAlternating Direction of Multipliers Method (ADMM)","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"The solvers are organized in a type-hierarchy and inherit from:","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"abstract type AbstractLinearSolver","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"The type hierarchy is further differentiated into solver categories such as AbstractRowAtionSolver, AbstractPrimalDualSolver or AbstractProximalGradientSolver. A list of all available solvers can be returned by the linearSolverList function.","category":"page"},{"location":"solvers/#Creating-a-Solver","page":"Solvers","title":"Creating a Solver","text":"","category":"section"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"To create a solver, one can invoke the method createLinearSolver as in","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"solver = createLinearSolver(ADMM, A; reg=reg, kwargs...)","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"Here A denotes the system matrix and reg are the Regularization terms to be used by the solver. All further solver parameters can be passed as keyword arguments and are solver specific. To make things more compact, it can be usefull to collect all parameters in a Dict{Symbol,Any}. In this way, the code snippet above can be written as","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"params=Dict{Symbol,Any}()\nparams[:reg] = ...\n...\n\nsolver = createLinearSolver(ADMM, A; params...)","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"This notation can be convenient when a large number of parameters are set manually.","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"It is possible to check if a given solver is applicable to the wanted arguments, as not all solvers are applicable to all system matrix and data (element) types or regularization terms combinations. This is achieved with the isapplicable function:","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"isapplicable(Kaczmarz, A, x, [L21Regularization(0.4f0)])\nfalse","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"For a given set of arguments the list of applicable solvers can be retrieved with applicableSolverList.","category":"page"},{"location":"API/solvers/#API-for-Solvers","page":"Solvers","title":"API for Solvers","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"This page contains documentation of the public API of the RegularizedLeastSquares. 
In the Julia REPL one can access this documentation by entering the help mode with ?","category":"page"},{"location":"API/solvers/#solve!","page":"Solvers","title":"solve!","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.solve!(::AbstractLinearSolver, ::Any)","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.solve!-Tuple{AbstractLinearSolver, Any}","page":"Solvers","title":"RegularizedLeastSquares.solve!","text":"solve!(solver::AbstractLinearSolver, b; x0 = 0, callbacks = (_, _) -> nothing)\n\nSolves an inverse problem for the data vector b using solver.\n\nRequired Arguments\n\nsolver::AbstractLinearSolver - linear solver (e.g., ADMM or FISTA), containing forward/normal operator and regularizer\nb::AbstractVector - data vector if A was supplied to the solver, back-projection of the data otherwise\n\nOptional Keyword Arguments\n\nx0::AbstractVector - initial guess for the solution; default is zero\ncallbacks - (optionally a vector of) function or callable struct that takes the two arguments callback(solver, iteration) and, e.g., stores, prints, or plots the intermediate solutions or convergence parameters. Be sure not to modify solver or iteration in the callback function as this would japaridze convergence. The default does nothing.\n\nExamples\n\nThe optimization problem\n\n\targmin_x Ax - b_2^2 + λ x_1\n\ncan be solved with the following lines of code:\n\njulia> using RegularizedLeastSquares\n\njulia> A = [0.831658 0.96717\n 0.383056 0.39043\n 0.820692 0.08118];\n\njulia> x = [0.5932234523399985; 0.2697534345340015];\n\njulia> b = A * x;\n\njulia> S = ADMM(A);\n\njulia> x_approx = solve!(S, b)\n2-element Vector{Float64}:\n 0.5932234523399984\n 0.26975343453400163\n\nHere, we use L1Regularization, which is default for ADMM. All regularization options can be found in API for Regularizers.\n\nThe following example solves the same problem, but stores the solution x of each interation in tr:\n\njulia> tr = Dict[]\nDict[]\n\njulia> store_trace!(tr, solver, iteration) = push!(tr, Dict(\"iteration\" => iteration, \"x\" => solver.x, \"beta\" => solver.β))\nstore_trace! 
(generic function with 1 method)\n\njulia> x_approx = solve!(S, b; callbacks=(solver, iteration) -> store_trace!(tr, solver, iteration))\n2-element Vector{Float64}:\n 0.5932234523399984\n 0.26975343453400163\n\njulia> tr[3]\nDict{String, Any} with 3 entries:\n \"iteration\" => 2\n \"x\" => [0.593223, 0.269753]\n \"beta\" => [1.23152, 0.927611]\n\nThe last example show demonstrates how to plot the solution at every 10th iteration and store the solvers convergence metrics:\n\njulia> using Plots\n\njulia> conv = StoreConvergenceCallback()\n\njulia> function plot_trace(solver, iteration)\n if iteration % 10 == 0\n display(scatter(solver.x))\n end\n end\nplot_trace (generic function with 1 method)\n\njulia> x_approx = solve!(S, b; callbacks = [conv, plot_trace]);\n\nThe keyword callbacks allows you to pass a (vector of) callable objects that takes the arguments solver and iteration and prints, stores, or plots intermediate result.\n\nSee also StoreSolutionCallback, StoreConvergenceCallback, CompareSolutionCallback for a number of provided callback options.\n\n\n\n\n\n","category":"method"},{"location":"API/solvers/#ADMM","page":"Solvers","title":"ADMM","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.ADMM","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.ADMM","page":"Solvers","title":"RegularizedLeastSquares.ADMM","text":"ADMM(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)\nADMM( ; AHA = , precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)\n\nCreates an ADMM object for the forward operator A or normal operator AHA.\n\nRequired Arguments\n\nA - forward operator\n\nOR\n\nAHA - normal operator (as a keyword argument)\n\nOptional Keyword Arguments\n\nAHA - normal operator is optional if A is supplied\nprecon - preconditionner for the internal CG algorithm\nreg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms\nregTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. 
Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nrho::Real - penalty of the augmented Lagrangian\nvary_rho::Symbol - vary rho to balance primal and dual feasibility; options :none, :balance, :PnP\niterations::Int - maximum number of (outer) ADMM iterations\niterationsCG::Int - maximum number of (inner) CG iterations\nabsTol::Real - absolute tolerance for stopping criterion\nrelTol::Real - relative tolerance for stopping criterion\ntolInner::Real - relative tolerance for CG stopping criterion\nverbose::Bool - print residual in each iteration\n\nADMM differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#CGNR","page":"Solvers","title":"CGNR","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.CGNR","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.CGNR","page":"Solvers","title":"RegularizedLeastSquares.CGNR","text":"CGNR(A; AHA = A' * A, reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), weights = similar(AHA, 0), iterations = 10, relTol = eps(real(eltype(AHA))))\nCGNR( ; AHA = , reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), weights = similar(AHA, 0), iterations = 10, relTol = eps(real(eltype(AHA))))\n\ncreates an CGNR object for the forward operator A or normal operator AHA.\n\nRequired Arguments\n\nA - forward operator\n\nOR\n\nAHA - normal operator (as a keyword argument)\n\nOptional Keyword Arguments\n\nAHA - normal operator is optional if A is supplied\nreg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nweights::AbstactVector - weights for the data term; must be of same length and type as the data term\niterations::Int - maximum number of iterations\nrelTol::Real - tolerance for stopping criterion\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#Kaczmarz","page":"Solvers","title":"Kaczmarz","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.Kaczmarz","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.Kaczmarz","page":"Solvers","title":"RegularizedLeastSquares.Kaczmarz","text":"Kaczmarz(A; reg = L2Regularization(0), normalizeReg = NoNormalization(), weights=nothing, randomized=false, subMatrixFraction=0.15, shuffleRows=false, seed=1234, iterations=10, regMatrix=nothing)\n\nCreates a Kaczmarz object for the forward operator A.\n\nRequired Arguments\n\nA - forward operator\n\nOptional Keyword Arguments\n\nreg::AbstractParameterizedRegularization - regularization 
term\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nweights::AbstractVector - weights for the data term\nrandomized::Bool - randomize Kaczmarz algorithm\nsubMatrixFraction::Real - fraction of rows used in randomized Kaczmarz algorithm\nshuffleRows::Bool - randomize Kaczmarz algorithm\nseed::Int - seed for randomized algorithm\niterations::Int - number of iterations\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#FISTA","page":"Solvers","title":"FISTA","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.FISTA","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.FISTA","page":"Solvers","title":"RegularizedLeastSquares.FISTA","text":"FISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, restart = :none, verbose = false)\nFISTA( ; AHA=, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, restart = :none, verbose = false)\n\ncreates a FISTA object for the forward operator A or normal operator AHA.\n\nRequired Arguments\n\nA - forward operator\n\nOR\n\nAHA - normal operator (as a keyword argument)\n\nOptional Keyword Arguments\n\nAHA - normal operator is optional if A is supplied\nprecon - preconditioner for the internal CG algorithm\nreg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nrho::Real - step size for gradient step\nnormalize_rho::Bool - normalize step size by the largest eigenvalue of AHA\ntheta::Real - parameter for predictor-corrector step\nrelTol::Real - tolerance for stopping criterion\niterations::Int - maximum number of iterations\nrestart::Symbol - :none, :gradient options for restarting\nverbose::Bool - print residual in each iteration\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#OptISTA","page":"Solvers","title":"OptISTA","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.OptISTA","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.OptISTA","page":"Solvers","title":"RegularizedLeastSquares.OptISTA","text":"OptISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, verbose = false)\nOptISTA( ; AHA=, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, verbose = false)\n\ncreates an OptISTA object for the forward operator A or normal operator AHA. OptISTA has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores 2 extra intermediate variables the size of the image compared to FISTA.\n\nReference:\n\nUijeong Jang, Shuvomoy Das Gupta, Ernest K. 
Ryu, \"Computer-Assisted Design of Accelerated Composite Optimization Methods: OptISTA,\" arXiv:2305.15704, 2023, [https://arxiv.org/abs/2305.15704]\n\nRequired Arguments\n\nA - forward operator\n\nOR\n\nAHA - normal operator (as a keyword argument)\n\nOptional Keyword Arguments\n\nAHA - normal operator is optional if A is supplied\nreg::AbstractParameterizedRegularization - regularization term\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nrho::Real - step size for gradient step\nnormalize_rho::Bool - normalize step size by the largest eigenvalue of AHA\ntheta::Real - parameter for predictor-corrector step\nrelTol::Real - tolerance for stopping criterion\niterations::Int - maximum number of iterations\nverbose::Bool - print residual in each iteration\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#POGM","page":"Solvers","title":"POGM","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.POGM","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.POGM","page":"Solvers","title":"RegularizedLeastSquares.POGM","text":"POGM(A; AHA = A'*A, reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), rho = 0.95, normalize_rho = true, theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), iterations = 50, restart = :none, verbose = false)\nPOGM( ; AHA = , reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), rho = 0.95, normalize_rho = true, theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), iterations = 50, restart = :none, verbose = false)\n\nCreates a POGM object for the forward operator A or normal operator AHA. POGM has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores 3 extra intermediate variables the size of the image compared to FISTA. Only gradient restart scheme is implemented for now.\n\nReferences:\n\nA.B. Taylor, J.M. Hendrickx, F. Glineur, \"Exact worst-case performance of first-order algorithms for composite convex optimization,\" Arxiv:1512.07516, 2015, SIAM J. Opt. 2017 [http://doi.org/10.1137/16m108104x]\nKim, D., & Fessler, J. A. (2018). Adaptive Restart of the Optimized Gradient Method for Convex Optimization. Journal of Optimization Theory and Applications, 178(1), 240–263. 
[https://doi.org/10.1007/s10957-018-1287-4]\nRequired Arguments\nA - forward operator\nOR\nAHA - normal operator (as a keyword argument)\nOptional Keyword Arguments\nAHA - normal operator is optional if A is supplied\nreg::AbstractParameterizedRegularization - regularization term\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nrho::Real - step size for gradient step\nnormalize_rho::Bool - normalize step size by the largest eigenvalue of AHA\ntheta::Real - parameter for predictor-corrector step\nsigma_fac::Real - parameter for decreasing γ-momentum ∈ [0,1]\nrelTol::Real - tolerance for stopping criterion\niterations::Int - maximum number of iterations\nrestart::Symbol - :none, :gradient options for restarting\nverbose::Bool - print residual in each iteration\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#SplitBregman","page":"Solvers","title":"SplitBregman","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.SplitBregman","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.SplitBregman","page":"Solvers","title":"RegularizedLeastSquares.SplitBregman","text":"SplitBregman(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterationsOuter = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)\nSplitBregman( ; AHA = , precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterationsOuter = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)\n\nCreates a SplitBregman object for the forward operator A or normal operator AHA.\n\nRequired Arguments\n\nA - forward operator\n\nOR\n\nAHA - normal operator (as a keyword argument)\n\nOptional Keyword Arguments\n\nAHA - normal operator is optional if A is supplied\nprecon - preconditioner for the internal CG algorithm\nreg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms\nregTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nrho::Real - weights for condition on regularized variables; can also be a vector for multiple regularization terms\niterationsOuter::Int - maximum number of outer iterations. Set to 1 for unconstrained split Bregman (equivalent to ADMM)\niterationsInner::Int - maximum number of inner iterations\niterationsCG::Int - maximum number of (inner) CG iterations\nabsTol::Real - absolute tolerance for stopping criterion\nrelTol::Real - relative tolerance for stopping criterion\ntolInner::Real - relative tolerance for CG stopping criterion\nverbose::Bool - print residual in each iteration\n\nThis algorithm solves the constrained problem (Eq. 
(4.7) in Tom Goldstein and Stanley Osher), i.e. ||R(x)||₁ such that ||Ax -b||₂² < σ². In order to solve the unconstrained problem (Eq. (4.8) in Tom Goldstein and Stanley Osher), i.e. ||Ax -b||₂² + λ ||R(x)||₁, you can either set iterationsOuter=1 or use ADMM instead, which is equivalent (iterationsOuter=1 in SplitBregman is implied in ADMM and the SplitBregman variable iterationsInner is simply called iterations in ADMM)\n\nLike ADMM, SplitBregman differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#Miscellaneous-Functions","page":"Solvers","title":"Miscellaneous Functions","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.StoreSolutionCallback\nRegularizedLeastSquares.StoreConvergenceCallback\nRegularizedLeastSquares.CompareSolutionCallback\nRegularizedLeastSquares.linearSolverList\nRegularizedLeastSquares.createLinearSolver\nRegularizedLeastSquares.applicableSolverList\nRegularizedLeastSquares.isapplicable","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.StoreSolutionCallback","page":"Solvers","title":"RegularizedLeastSquares.StoreSolutionCallback","text":"StoreSolutionCallback(T)\n\nCallback that accumulates the solver's solution per iteration. Results are stored in the solutions field.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#RegularizedLeastSquares.StoreConvergenceCallback","page":"Solvers","title":"RegularizedLeastSquares.StoreConvergenceCallback","text":"StoreConvergenceCallback()\n\nCallback that accumulates the solver's convergence metrics per iteration. Results are stored in the convMeas field.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#RegularizedLeastSquares.CompareSolutionCallback","page":"Solvers","title":"RegularizedLeastSquares.CompareSolutionCallback","text":"CompareSolutionCallback(ref, cmp)\n\nCallback that compares the solver's current solution with the given reference via cmp(ref, solution) per iteration. Results are stored in the results field.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#RegularizedLeastSquares.linearSolverList","page":"Solvers","title":"RegularizedLeastSquares.linearSolverList","text":"Return a list of all available linear solvers\n\n\n\n\n\n","category":"function"},{"location":"API/solvers/#RegularizedLeastSquares.createLinearSolver","page":"Solvers","title":"RegularizedLeastSquares.createLinearSolver","text":"createLinearSolver(solver::AbstractLinearSolver, A; kargs...)\n\nThis method creates a solver. The supported solvers are methods typically used for solving regularized linear systems. All solvers return an approximate solution to Ax = b.\n\nTODO: give a hint what solvers are available\n\n\n\n\n\n","category":"function"},{"location":"API/solvers/#RegularizedLeastSquares.applicableSolverList","page":"Solvers","title":"RegularizedLeastSquares.applicableSolverList","text":"applicable(args...)\n\nlist all solvers that are applicable to the given arguments. 
Arguments are the same as for isapplicable without the solver type.\n\nSee also isapplicable, linearSolverList.\n\n\n\n\n\n","category":"function"},{"location":"API/solvers/#RegularizedLeastSquares.isapplicable","page":"Solvers","title":"RegularizedLeastSquares.isapplicable","text":"isapplicable(solverType::Type{<:AbstractLinearSolver}, A, x, reg)\n\nreturn true if a solver of type solverType is applicable to system matrix A, data x and regularization terms reg.\n\n\n\n\n\n","category":"function"},{"location":"#RegularizedLeastSquares.jl","page":"Home","title":"RegularizedLeastSquares.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Solvers for Linear Inverse Problems using Regularization Techniques","category":"page"},{"location":"#Introduction","page":"Home","title":"Introduction","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"RegularizedLeastSquares.jl is a Julia package for solving large scale linear systems using different types of algorithms. Ill-conditioned problems arise in many areas of practical interest. To solve these problems, one often resorts to regularization techniques and non-linear problem formulations. This package provides implementations for a variety of solvers, which are used in fields such as MPI and MRI.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The implemented methods range from the l_2-regularized CGNR method to more general optimizers such as the Alternating Direction Method of Multipliers (ADMM) or the Split-Bregman method.","category":"page"},{"location":"","page":"Home","title":"Home","text":"For convenience, implementations of popular regularizers, such as l_1-regularization and TV regularization, are provided. On the other hand, hand-crafted regularizers can be used quite easily. For this purpose, a Regularization object needs to be built. The latter mainly contains the regularization parameter and a function to calculate the proximal map of a given input.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Depending on the problem, it becomes unfeasible to store the full system matrix at hand. For this purpose, RegularizedLeastSquares.jl allows for the use of matrix-free operators. Such operators can be realized using the interface provided by the package LinearOperators.jl. Other interfaces can be used as well, as long as the product *(A,x) and the adjoint adjoint(A) are provided. A number of common matrix-free operators are provided by the package LinearOperatorCollection.jl.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Within Julia, use the package manager:","category":"page"},{"location":"","page":"Home","title":"Home","text":"using Pkg\nPkg.add(\"RegularizedLeastSquares\")","category":"page"},{"location":"","page":"Home","title":"Home","text":"This adds the latest release of the package. 
To install a different version, please consult the Pkg documentation.","category":"page"},{"location":"#Usage","page":"Home","title":"Usage","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"See Getting Started for an introduction to using the package","category":"page"},{"location":"gettingStarted/#Getting-Started","page":"Getting Started","title":"Getting Started","text":"","category":"section"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"To get familiar with the different aspects of RegularizedLeastSquares.jl, we will go through a simple example from the field of Compressed Sensing.","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"In Addtion to RegularizedLeastSquares.jl, we will need the packages LinearOperatorCollection.jl, Images.jl and Random.jl, as well as PyPlot for visualization.","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"using RegularizedLeastSquares, LinearOperatorCollection, Images, PyPlot, Random","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"To get started, let us generate a simple phantom","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"N = 256\nI = shepp_logan(N)","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"In this example, we consider an operator which randomly samples half of the pixels in the image. Such an operator and the corresponding measurement can be generated by calling","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"# sampling operator\nidx = sort( shuffle( collect(1:N^2) )[1:div(N^2,2)] )\nA = SamplingOp(eltype(I), pattern = idx , shape = (N,N))\n\n# generate undersampled data\ny = A*vec(I)","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"To recover the image, we solve the TV-regularized least squares problem","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"beginequation\n undersetmathbfxargmin frac12vertvert mathbfAmathbfx-mathbfy vertvert_2^2 + λTV(mathbfx) \nendequation","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"For this purpose we build a TV regularizer with regularization parameter λ=001","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"reg = TVRegularization(0.01; shape=(N,N))","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"To solve the CS problem, the Alternating Direction Method of Multipliers can be used. 
Thus, we build the corresponding solver","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"solver = createLinearSolver(ADMM, A; reg=reg, ρ=0.1, iterations=20)","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"and apply it to our measurement","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"Ireco = solve!(solver,y)\nIreco = reshape(Ireco,N,N)","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"The original phantom and the reconstructed image are shown below","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"(Image: Phantom) (Image: Reconstruction)","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"DocTestSetup = quote\n using RegularizedLeastSquares, Wavelets, LinearOperatorCollection\nend","category":"page"},{"location":"regularization/#Regularization","page":"Regularization","title":"Regularization","text":"","category":"section"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"When formulating inverse problems, a Regularizer is formulated as an additional term in a cost function, which has to be minimized. Popular optimizers often deal with a regularizers g, by computing the proximal map","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"beginequation\n prox_g (mathbfx) = undersetmathbfuargmin frac12vertvert mathbfu-mathbfx vert vert^2 + g(mathbfx)\nendequation","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"In order to implement those kinds of algorithms,RegularizedLeastSquares defines the following type hierarchy:","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"abstract type AbstractRegularization\nprox!(reg::AbstractRegularization, x)\nnorm(reg::AbstractRegularization, x)","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"Here prox!(reg, x) is an in-place function which computes the proximal map on the input-vector x. The function norm computes the value of the corresponding term in the inverse problem. RegularizedLeastSquares.jl provides AbstractParameterizedRegularization and AbstractProjectionRegularization as core regularization types.","category":"page"},{"location":"regularization/#Parameterized-Regularization-Terms","page":"Regularization","title":"Parameterized Regularization Terms","text":"","category":"section"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"This group of regularization terms features a regularization parameter λ that is used during the prox! and normcomputations. 
Examples of this regulariztion group are L1, L2 or LLR (locally low rank) regularization terms.","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"These terms are constructed by supplying a λ and optionally term specific keyword arguments:","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"julia> l2 = L2Regularization(0.3)\nL2Regularization{Float64}(0.3)","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"Parameterized regularization terms implement:","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"prox!(reg::AbstractParameterizedRegularization, x, λ)\nnorm(reg::AbstractParameterizedRegularization, x, λ)","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"where λ by default is filled with the value used during construction.","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"Invoking λ on a parameterized term retrieves its regularization parameter. This can be used in a solver to scale and overwrite the parameter as follows:","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"julia> prox!(l2, [1.0])\n1-element Vector{Float64}:\n 0.625\n\njulia> param = λ(l2)\n0.3\n\njulia> prox!(l2, [1.0], param*0.2)\n1-element Vector{Float64}:\n 0.8928571428571428\n","category":"page"},{"location":"regularization/#Projection-Regularization-Terms","page":"Regularization","title":"Projection Regularization Terms","text":"","category":"section"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"This group of regularization terms implement projections, such as a positivity constraint or a projection with a given convex projection function.","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"julia> positive = PositiveRegularization()\nPositiveRegularization()\n\njulia> prox!(positive, [2.0, -0.2])\n2-element Vector{Float64}:\n 2.0\n 0.0","category":"page"},{"location":"regularization/#Nested-Regularization-Terms","page":"Regularization","title":"Nested Regularization Terms","text":"","category":"section"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"Nested regularization terms are terms that act as decorators to the core regularization terms. These terms can be nested around other terms and add functionality to a regularization term, such as scaling λ based on the provided system matrix or applying a transform, such as the Wavelet, to x:","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"julia> core = L1Regularization(0.8)\nL1Regularization{Float64}(0.8)\n\njulia> wop = WaveletOp(Float32, shape = (32,32));\n\njulia> reg = TransformedRegularization(core, wop);\n\njulia> prox!(reg, randn(32*32)); # Apply soft-thresholding in Wavelet domain","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"The type of regularization term a nested term can be wrapped around depends on the concrete type of the nested term. However generally, they can be nested arbitrarly deep, adding new functionality with each layer. Each nested regularization term can return its inner regularization. 
Furthermore, all regularization terms implement the iteration interface to iterate over the nesting. The innermost regularization term of a nested term must be a core regularization term and it can be returned by the sink function:","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"julia> RegularizedLeastSquares.innerreg(reg) == core\ntrue\n\njulia> sink(reg) == core\ntrue\n\njulia> foreach(r -> println(nameof(typeof(r))), reg)\nTransformedRegularization\nL1Regularization","category":"page"}] +[{"location":"API/regularization/#API-for-Regularizers","page":"Regularization Terms","title":"API for Regularizers","text":"","category":"section"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"This page contains documentation of the public API of the RegularizedLeastSquares. In the Julia REPL one can access this documentation by entering the help mode with ?","category":"page"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"RegularizedLeastSquares.L1Regularization\nRegularizedLeastSquares.L2Regularization\nRegularizedLeastSquares.L21Regularization\nRegularizedLeastSquares.LLRRegularization\nRegularizedLeastSquares.NuclearRegularization\nRegularizedLeastSquares.TVRegularization","category":"page"},{"location":"API/regularization/#RegularizedLeastSquares.L1Regularization","page":"Regularization Terms","title":"RegularizedLeastSquares.L1Regularization","text":"L1Regularization\n\nRegularization term implementing the proximal map for the Lasso problem.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.L2Regularization","page":"Regularization Terms","title":"RegularizedLeastSquares.L2Regularization","text":"L2Regularization\n\nRegularization term implementing the proximal map for Tikhonov regularization.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.L21Regularization","page":"Regularization Terms","title":"RegularizedLeastSquares.L21Regularization","text":"L21Regularization\n\nRegularization term implementing the proximal map for group-soft-thresholding.\n\nArguments\n\nλ - regularization paramter\n\nKeywords\n\nslices=1 - number of elements per group\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.LLRRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.LLRRegularization","text":"LLRRegularization\n\nRegularization term implementing the proximal map for locally low rank (LLR) regularization using singular-value-thresholding.\n\nArguments\n\nλ - regularization paramter\n\nKeywords\n\nshape::Tuple{Int}=[] - dimensions of the image\nblockSize::Tuple{Int}=[2;2] - size of patches to perform singular value thresholding on\nrandshift::Bool=true - randomly shifts the patches to ensure translation invariance\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.NuclearRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.NuclearRegularization","text":"NuclearRegularization\n\nRegularization term implementing the proximal map for singular value soft-thresholding.\n\nArguments:\n\nλ - regularization paramter\n\nKeywords\n\nsvtShape::NTuple - size of the underlying matrix\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.TVRegularization","page":"Regularization 
Terms","title":"RegularizedLeastSquares.TVRegularization","text":"TVRegularization\n\nRegularization term implementing the proximal map for TV regularization. Calculated with the Condat algorithm if the TV is calculated only along one real-valued dimension and with the Fast Gradient Projection algorithm otherwise.\n\nReference for the Condat algorithm: https://lcondat.github.io/publis/Condat-fast_TV-SPL-2013.pdf\n\nReference for the FGP algorithm: A. Beck and T. Teboulle, \"Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems\", IEEE Trans. Image Process. 18(11), 2009\n\nArguments\n\nλ::T - regularization parameter\n\nKeywords\n\nshape::NTuple - size of the underlying image\ndims - Dimension to perform the TV along. If Integer, the Condat algorithm is called, and the FDG algorithm otherwise.\niterationsTV=20 - number of FGP iterations\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#Projection-Regularization","page":"Regularization Terms","title":"Projection Regularization","text":"","category":"section"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"RegularizedLeastSquares.PositiveRegularization\nRegularizedLeastSquares.RealRegularization","category":"page"},{"location":"API/regularization/#RegularizedLeastSquares.PositiveRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.PositiveRegularization","text":"PositiveRegularization\n\nRegularization term implementing a projection onto positive and real numbers.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.RealRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.RealRegularization","text":"RealRegularization\n\nRegularization term implementing a projection onto real numbers.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#Nested-Regularization","page":"Regularization Terms","title":"Nested Regularization","text":"","category":"section"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"RegularizedLeastSquares.innerreg(::AbstractNestedRegularization)\nRegularizedLeastSquares.sink(::AbstractNestedRegularization)\nRegularizedLeastSquares.sinktype(::AbstractNestedRegularization)","category":"page"},{"location":"API/regularization/#RegularizedLeastSquares.innerreg-Tuple{AbstractNestedRegularization}","page":"Regularization Terms","title":"RegularizedLeastSquares.innerreg","text":"innerreg(reg::AbstractNestedRegularization)\n\nreturn the inner regularization term of reg. 
Nested regularization terms also implement the iteration interface.\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#RegularizedLeastSquares.sink-Tuple{AbstractNestedRegularization}","page":"Regularization Terms","title":"RegularizedLeastSquares.sink","text":"sink(reg::AbstractNestedRegularization)\n\nreturn the innermost regularization term.\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#RegularizedLeastSquares.sinktype-Tuple{AbstractNestedRegularization}","page":"Regularization Terms","title":"RegularizedLeastSquares.sinktype","text":"sinktype(reg::AbstractNestedRegularization)\n\nreturn the type of the innermost regularization term.\n\nSee also sink.\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#Scaled-Regularization","page":"Regularization Terms","title":"Scaled Regularization","text":"","category":"section"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"RegularizedLeastSquares.AbstractScaledRegularization\nRegularizedLeastSquares.scalefactor\nRegularizedLeastSquares.NormalizedRegularization\nRegularizedLeastSquares.NoNormalization\nRegularizedLeastSquares.MeasurementBasedNormalization\nRegularizedLeastSquares.SystemMatrixBasedNormalization\nRegularizedLeastSquares.FixedParameterRegularization","category":"page"},{"location":"API/regularization/#RegularizedLeastSquares.AbstractScaledRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.AbstractScaledRegularization","text":"AbstractScaledRegularization\n\nNested regularization term that applies a scalefactor to the regularization parameter λ of its inner term.\n\nSee also scalefactor, λ, innerreg.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.scalefactor","page":"Regularization Terms","title":"RegularizedLeastSquares.scalefactor","text":"scalefactor(reg::AbstractScaledRegularization)\n\nreturn the scaling factor for λ\n\n\n\n\n\n","category":"function"},{"location":"API/regularization/#RegularizedLeastSquares.NormalizedRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.NormalizedRegularization","text":"NormalizedRegularization\n\nNested regularization term that scales λ according to a normalization scheme. 
This term is commonly applied by a solver based on a given normalization keyword.\n\nSee also NoNormalization, MeasurementBasedNormalization, SystemMatrixBasedNormalization.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.NoNormalization","page":"Regularization Terms","title":"RegularizedLeastSquares.NoNormalization","text":"NoNormalization\n\nNo normalization to λ is applied.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.MeasurementBasedNormalization","page":"Regularization Terms","title":"RegularizedLeastSquares.MeasurementBasedNormalization","text":"MeasurementBasedNormalization\n\nλ is normalized by the 1-norm of b divided by its length.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.SystemMatrixBasedNormalization","page":"Regularization Terms","title":"RegularizedLeastSquares.SystemMatrixBasedNormalization","text":"SystemMatrixBasedNormalization\n\nλ is normalized by the energy of the system matrix rows.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.FixedParameterRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.FixedParameterRegularization","text":"FixedParameterRegularization\n\nNested regularization term that discards any λ passed to it and instead uses λ from its inner regularization term. This can be used to selectively disallow normalization.\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#Misc.-Nested-Regularization","page":"Regularization Terms","title":"Misc. Nested Regularization","text":"","category":"section"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"RegularizedLeastSquares.MaskedRegularization\nRegularizedLeastSquares.TransformedRegularization\nRegularizedLeastSquares.PlugAndPlayRegularization","category":"page"},{"location":"API/regularization/#RegularizedLeastSquares.MaskedRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.MaskedRegularization","text":"MaskedRegularization\n\nNested regularization term that only applies prox! and norm to elements of x for which the mask is true.\n\nExamples\n\njulia> positive = PositiveRegularization();\n\njulia> masked = MaskedRegularization(positive, [true, false, true, false]);\n\njulia> prox!(masked, fill(-1, 4))\n4-element Vector{Float64}:\n 0.0\n -1.0\n 0.0\n -1.0\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.TransformedRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.TransformedRegularization","text":"TransformedRegularization(reg, trafo)\n\nNested regularization term that applies prox! or norm on z = trafo * x and returns (inplace) x = adjoint(trafo) * z.\n\nExample\n\njulia> core = L1Regularization(0.8)\nL1Regularization{Float64}(0.8)\n\njulia> wop = WaveletOp(Float32, shape = (32,32));\n\njulia> reg = TransformedRegularization(core, wop);\n\njulia> prox!(reg, randn(32*32)); # Apply soft-thresholding in Wavelet domain\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#RegularizedLeastSquares.PlugAndPlayRegularization","page":"Regularization Terms","title":"RegularizedLeastSquares.PlugAndPlayRegularization","text":" PlugAndPlayRegularization\n\nRegularization term implementing a given plug-and-play proximal mapping. 
The actual regularization term is indirectly defined by the learned proximal mapping and as such there is no norm implemented.\n\nArguments\n\nλ - regularization paramter\n\nKeywords\n\nmodel - model applied to the image\nshape - dimensions of the image\ninput_transform - transform of image before model\n\n\n\n\n\n","category":"type"},{"location":"API/regularization/#Miscellaneous-Functions","page":"Regularization Terms","title":"Miscellaneous Functions","text":"","category":"section"},{"location":"API/regularization/","page":"Regularization Terms","title":"Regularization Terms","text":"RegularizedLeastSquares.prox!(::AbstractParameterizedRegularization, ::AbstractArray)\nRegularizedLeastSquares.prox!(::Type{<:AbstractParameterizedRegularization}, ::Any, ::Any)\nRegularizedLeastSquares.norm(::AbstractParameterizedRegularization, ::AbstractArray)\nRegularizedLeastSquares.λ(::AbstractParameterizedRegularization)\nRegularizedLeastSquares.norm(::Type{<:AbstractParameterizedRegularization}, ::Any, ::Any)","category":"page"},{"location":"API/regularization/#RegularizedLeastSquares.prox!-Tuple{AbstractParameterizedRegularization, AbstractArray}","page":"Regularization Terms","title":"RegularizedLeastSquares.prox!","text":"prox!(reg::AbstractParameterizedRegularization, x)\n\nperform the proximal mapping defined by reg on x. Uses the regularization parameter defined for reg.\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#RegularizedLeastSquares.prox!-Tuple{Type{<:AbstractParameterizedRegularization}, Any, Any}","page":"Regularization Terms","title":"RegularizedLeastSquares.prox!","text":"prox!(regType::Type{<:AbstractParameterizedRegularization}, x, λ; kwargs...)\n\nconstruct a regularization term of type regType with given λ and kwargs and apply its prox! on x\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#LinearAlgebra.norm-Tuple{AbstractParameterizedRegularization, AbstractArray}","page":"Regularization Terms","title":"LinearAlgebra.norm","text":"norm(reg::AbstractParameterizedRegularization, x)\n\nreturns the value of the reg regularization term on x. Uses the regularization parameter defined for reg.\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#RegularizedLeastSquares.λ-Tuple{AbstractParameterizedRegularization}","page":"Regularization Terms","title":"RegularizedLeastSquares.λ","text":"λ(reg::AbstractParameterizedRegularization)\n\nreturn the regularization parameter λ of reg\n\n\n\n\n\n","category":"method"},{"location":"API/regularization/#LinearAlgebra.norm-Tuple{Type{<:AbstractParameterizedRegularization}, Any, Any}","page":"Regularization Terms","title":"LinearAlgebra.norm","text":"norm(regType::Type{<:AbstractParameterizedRegularization}, x, λ; kwargs...)\n\nconstruct a regularization term of type regType with given λ and kwargs and apply its norm on x\n\n\n\n\n\n","category":"method"},{"location":"solvers/#Solvers","page":"Solvers","title":"Solvers","text":"","category":"section"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.jl provides a variety of solvers, which are used in fields such as MPI and MRI. 
The following is a non-exhaustive list of the implemented solvers:","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"Kaczmarz algorithm (Kaczmarz)\nConjugate Gradients Normal Residual method (CGNR)\nFast Iterative Shrinkage Thresholding Algorithm (FISTA)\nAlternating Direction Method of Multipliers (ADMM)","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"The solvers are organized in a type-hierarchy and inherit from:","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"abstract type AbstractLinearSolver","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"The type hierarchy is further differentiated into solver categories such as AbstractRowActionSolver, AbstractPrimalDualSolver or AbstractProximalGradientSolver. A list of all available solvers can be returned by the linearSolverList function.","category":"page"},{"location":"solvers/#Creating-a-Solver","page":"Solvers","title":"Creating a Solver","text":"","category":"section"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"To create a solver, one can invoke the method createLinearSolver as in","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"solver = createLinearSolver(ADMM, A; reg=reg, kwargs...)","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"Here A denotes the system matrix and reg are the Regularization terms to be used by the solver. All further solver parameters can be passed as keyword arguments and are solver specific. To make things more compact, it can be useful to collect all parameters in a Dict{Symbol,Any}. In this way, the code snippet above can be written as","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"params=Dict{Symbol,Any}()\nparams[:reg] = ...\n...\n\nsolver = createLinearSolver(ADMM, A; params...)","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"This notation can be convenient when a large number of parameters are set manually.","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"It is possible to check if a given solver is applicable to the wanted arguments, as not all solvers are applicable to all system matrix and data (element) types or regularization terms combinations. This is achieved with the isapplicable function:","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"isapplicable(Kaczmarz, A, x, [L21Regularization(0.4f0)])\nfalse","category":"page"},{"location":"solvers/","page":"Solvers","title":"Solvers","text":"For a given set of arguments the list of applicable solvers can be retrieved with applicableSolverList.","category":"page"},{"location":"API/solvers/#API-for-Solvers","page":"Solvers","title":"API for Solvers","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"This page contains documentation of the public API of the RegularizedLeastSquares. 
In the Julia REPL one can access this documentation by entering the help mode with ?","category":"page"},{"location":"API/solvers/#solve!","page":"Solvers","title":"solve!","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.solve!(::AbstractLinearSolver, ::Any)","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.solve!-Tuple{AbstractLinearSolver, Any}","page":"Solvers","title":"RegularizedLeastSquares.solve!","text":"solve!(solver::AbstractLinearSolver, b; x0 = 0, callbacks = (_, _) -> nothing)\n\nSolves an inverse problem for the data vector b using solver.\n\nRequired Arguments\n\nsolver::AbstractLinearSolver - linear solver (e.g., ADMM or FISTA), containing forward/normal operator and regularizer\nb::AbstractVector - data vector if A was supplied to the solver, back-projection of the data otherwise\n\nOptional Keyword Arguments\n\nx0::AbstractVector - initial guess for the solution; default is zero\ncallbacks - (optionally a vector of) function or callable struct that takes the two arguments callback(solver, iteration) and, e.g., stores, prints, or plots the intermediate solutions or convergence parameters. Be sure not to modify solver or iteration in the callback function as this would jeopardize convergence. The default does nothing.\n\nExamples\n\nThe optimization problem\n\n\targmin_x Ax - b_2^2 + λ x_1\n\ncan be solved with the following lines of code:\n\njulia> using RegularizedLeastSquares\n\njulia> A = [0.831658 0.96717\n 0.383056 0.39043\n 0.820692 0.08118];\n\njulia> x = [0.5932234523399985; 0.2697534345340015];\n\njulia> b = A * x;\n\njulia> S = ADMM(A);\n\njulia> x_approx = solve!(S, b)\n2-element Vector{Float64}:\n 0.5932234523399984\n 0.26975343453400163\n\nHere, we use L1Regularization, which is the default for ADMM. All regularization options can be found in API for Regularizers.\n\nThe following example solves the same problem, but stores the solution x of each iteration in tr:\n\njulia> tr = Dict[]\nDict[]\n\njulia> store_trace!(tr, solver, iteration) = push!(tr, Dict(\"iteration\" => iteration, \"x\" => solver.x, \"beta\" => solver.β))\nstore_trace! 
(generic function with 1 method)\n\njulia> x_approx = solve!(S, b; callbacks=(solver, iteration) -> store_trace!(tr, solver, iteration))\n2-element Vector{Float64}:\n 0.5932234523399984\n 0.26975343453400163\n\njulia> tr[3]\nDict{String, Any} with 3 entries:\n \"iteration\" => 2\n \"x\" => [0.593223, 0.269753]\n \"beta\" => [1.23152, 0.927611]\n\nThe last example demonstrates how to plot the solution at every 10th iteration and store the solver's convergence metrics:\n\njulia> using Plots\n\njulia> conv = StoreConvergenceCallback()\n\njulia> function plot_trace(solver, iteration)\n if iteration % 10 == 0\n display(scatter(solver.x))\n end\n end\nplot_trace (generic function with 1 method)\n\njulia> x_approx = solve!(S, b; callbacks = [conv, plot_trace]);\n\nThe keyword callbacks allows you to pass a (vector of) callable objects that take the arguments solver and iteration and print, store, or plot intermediate results.\n\nSee also StoreSolutionCallback, StoreConvergenceCallback, CompareSolutionCallback for a number of provided callback options.\n\n\n\n\n\n","category":"method"},{"location":"API/solvers/#ADMM","page":"Solvers","title":"ADMM","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.ADMM","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.ADMM","page":"Solvers","title":"RegularizedLeastSquares.ADMM","text":"ADMM(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)\nADMM( ; AHA = , precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)\n\nCreates an ADMM object for the forward operator A or normal operator AHA.\n\nRequired Arguments\n\nA - forward operator\n\nOR\n\nAHA - normal operator (as a keyword argument)\n\nOptional Keyword Arguments\n\nAHA - normal operator is optional if A is supplied\nprecon - preconditioner for the internal CG algorithm\nreg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms\nregTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. 
Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nrho::Real - penalty of the augmented Lagrangian\nvary_rho::Symbol - vary rho to balance primal and dual feasibility; options :none, :balance, :PnP\niterations::Int - maximum number of (outer) ADMM iterations\niterationsCG::Int - maximum number of (inner) CG iterations\nabsTol::Real - absolute tolerance for stopping criterion\nrelTol::Real - relative tolerance for stopping criterion\ntolInner::Real - relative tolerance for CG stopping criterion\nverbose::Bool - print residual in each iteration\n\nADMM differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#CGNR","page":"Solvers","title":"CGNR","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.CGNR","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.CGNR","page":"Solvers","title":"RegularizedLeastSquares.CGNR","text":"CGNR(A; AHA = A' * A, reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), weights = similar(AHA, 0), iterations = 10, relTol = eps(real(eltype(AHA))))\nCGNR( ; AHA = , reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), weights = similar(AHA, 0), iterations = 10, relTol = eps(real(eltype(AHA))))\n\ncreates a CGNR object for the forward operator A or normal operator AHA.\n\nRequired Arguments\n\nA - forward operator\n\nOR\n\nAHA - normal operator (as a keyword argument)\n\nOptional Keyword Arguments\n\nAHA - normal operator is optional if A is supplied\nreg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nweights::AbstractVector - weights for the data term; must be of same length and type as the data term\niterations::Int - maximum number of iterations\nrelTol::Real - tolerance for stopping criterion\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#Kaczmarz","page":"Solvers","title":"Kaczmarz","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.Kaczmarz","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.Kaczmarz","page":"Solvers","title":"RegularizedLeastSquares.Kaczmarz","text":"Kaczmarz(A; reg = L2Regularization(0), normalizeReg = NoNormalization(), weights=nothing, randomized=false, subMatrixFraction=0.15, shuffleRows=false, seed=1234, iterations=10, regMatrix=nothing)\n\nCreates a Kaczmarz object for the forward operator A.\n\nRequired Arguments\n\nA - forward operator\n\nOptional Keyword Arguments\n\nreg::AbstractParameterizedRegularization - regularization 
term\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nrandomized::Bool - randomize the Kaczmarz algorithm\nsubMatrixFraction::Real - fraction of rows used in randomized Kaczmarz algorithm\nshuffleRows::Bool - shuffle the order of rows in the Kaczmarz algorithm\nseed::Int - seed for randomized algorithm\niterations::Int - number of iterations\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#FISTA","page":"Solvers","title":"FISTA","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.FISTA","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.FISTA","page":"Solvers","title":"RegularizedLeastSquares.FISTA","text":"FISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, restart = :none, verbose = false)\nFISTA( ; AHA=, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, restart = :none, verbose = false)\n\ncreates a FISTA object for the forward operator A or normal operator AHA.\n\nRequired Arguments\n\nA - forward operator\n\nOR\n\nAHA - normal operator (as a keyword argument)\n\nOptional Keyword Arguments\n\nAHA - normal operator is optional if A is supplied\nprecon - preconditioner for the internal CG algorithm\nreg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nrho::Real - step size for gradient step\nnormalize_rho::Bool - normalize step size by the largest eigenvalue of AHA\ntheta::Real - parameter for predictor-corrector step\nrelTol::Real - tolerance for stopping criterion\niterations::Int - maximum number of iterations\nrestart::Symbol - :none, :gradient options for restarting\nverbose::Bool - print residual in each iteration\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#OptISTA","page":"Solvers","title":"OptISTA","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.OptISTA","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.OptISTA","page":"Solvers","title":"RegularizedLeastSquares.OptISTA","text":"OptISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, verbose = false)\nOptISTA( ; AHA=, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), rho=0.95, normalize_rho=true, theta=1, relTol=eps(real(eltype(AHA))), iterations=50, verbose = false)\n\ncreates an OptISTA object for the forward operator A or normal operator AHA. OptISTA has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores 2 extra intermediate variables the size of the image compared to FISTA.\n\nReference:\n\nUijeong Jang, Shuvomoy Das Gupta, Ernest K. 
Ryu, \"Computer-Assisted Design of Accelerated Composite Optimization Methods: OptISTA,\" arXiv:2305.15704, 2023, [https://arxiv.org/abs/2305.15704]\n\nRequired Arguments\n\nA - forward operator\n\nOR\n\nAHA - normal operator (as a keyword argument)\n\nOptional Keyword Arguments\n\nAHA - normal operator is optional if A is supplied\nreg::AbstractParameterizedRegularization - regularization term\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nrho::Real - step size for gradient step\nnormalize_rho::Bool - normalize step size by the largest eigenvalue of AHA\ntheta::Real - parameter for predictor-corrector step\nrelTol::Real - tolerance for stopping criterion\niterations::Int - maximum number of iterations\nverbose::Bool - print residual in each iteration\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#POGM","page":"Solvers","title":"POGM","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.POGM","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.POGM","page":"Solvers","title":"RegularizedLeastSquares.POGM","text":"POGM(A; AHA = A'*A, reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), rho = 0.95, normalize_rho = true, theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), iterations = 50, restart = :none, verbose = false)\nPOGM( ; AHA = , reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), rho = 0.95, normalize_rho = true, theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), iterations = 50, restart = :none, verbose = false)\n\nCreates a POGM object for the forward operator A or normal operator AHA. POGM has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores 3 extra intermediate variables the size of the image compared to FISTA. Only gradient restart scheme is implemented for now.\n\nReferences:\n\nA.B. Taylor, J.M. Hendrickx, F. Glineur, \"Exact worst-case performance of first-order algorithms for composite convex optimization,\" Arxiv:1512.07516, 2015, SIAM J. Opt. 2017 [http://doi.org/10.1137/16m108104x]\nKim, D., & Fessler, J. A. (2018). Adaptive Restart of the Optimized Gradient Method for Convex Optimization. Journal of Optimization Theory and Applications, 178(1), 240–263. 
[https://doi.org/10.1007/s10957-018-1287-4]\n\nRequired Arguments\n\nA - forward operator\n\nOR\n\nAHA - normal operator (as a keyword argument)\n\nOptional Keyword Arguments\n\nAHA - normal operator is optional if A is supplied\nreg::AbstractParameterizedRegularization - regularization term\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nrho::Real - step size for gradient step\nnormalize_rho::Bool - normalize step size by the largest eigenvalue of AHA\ntheta::Real - parameter for predictor-corrector step\nsigma_fac::Real - parameter for decreasing γ-momentum ∈ [0,1]\nrelTol::Real - tolerance for stopping criterion\niterations::Int - maximum number of iterations\nrestart::Symbol - :none, :gradient options for restarting\nverbose::Bool - print residual in each iteration\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#SplitBregman","page":"Solvers","title":"SplitBregman","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.SplitBregman","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.SplitBregman","page":"Solvers","title":"RegularizedLeastSquares.SplitBregman","text":"SplitBregman(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterationsOuter = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)\nSplitBregman( ; AHA = , precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterationsOuter = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)\n\nCreates a SplitBregman object for the forward operator A or normal operator AHA.\n\nRequired Arguments\n\nA - forward operator\n\nOR\n\nAHA - normal operator (as a keyword argument)\n\nOptional Keyword Arguments\n\nAHA - normal operator is optional if A is supplied\nprecon - preconditioner for the internal CG algorithm\nreg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms\nregTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.\nnormalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()\nrho::Real - weights for condition on regularized variables; can also be a vector for multiple regularization terms\niterationsOuter::Int - maximum number of outer iterations. Set to 1 for unconstrained split Bregman (equivalent to ADMM)\niterationsInner::Int - maximum number of inner iterations\niterationsCG::Int - maximum number of (inner) CG iterations\nabsTol::Real - absolute tolerance for stopping criterion\nrelTol::Real - relative tolerance for stopping criterion\ntolInner::Real - relative tolerance for CG stopping criterion\nverbose::Bool - print residual in each iteration\n\nThis algorithm solves the constrained problem (Eq. 
(4.7) in Tom Goldstein and Stanley Osher), i.e. ||R(x)||₁ such that ||Ax - b||₂² < σ². In order to solve the unconstrained problem (Eq. (4.8) in Tom Goldstein and Stanley Osher), i.e. ||Ax - b||₂² + λ ||R(x)||₁, you can either set iterationsOuter=1 or use ADMM instead, which is equivalent (iterationsOuter=1 in SplitBregman is implied in ADMM and the SplitBregman variable iterationsInner is simply called iterations in ADMM).\n\nLike ADMM, SplitBregman differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).\n\nSee also createLinearSolver, solve!.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#Miscellaneous-Functions","page":"Solvers","title":"Miscellaneous Functions","text":"","category":"section"},{"location":"API/solvers/","page":"Solvers","title":"Solvers","text":"RegularizedLeastSquares.StoreSolutionCallback\nRegularizedLeastSquares.StoreConvergenceCallback\nRegularizedLeastSquares.CompareSolutionCallback\nRegularizedLeastSquares.linearSolverList\nRegularizedLeastSquares.createLinearSolver\nRegularizedLeastSquares.applicableSolverList\nRegularizedLeastSquares.isapplicable","category":"page"},{"location":"API/solvers/#RegularizedLeastSquares.StoreSolutionCallback","page":"Solvers","title":"RegularizedLeastSquares.StoreSolutionCallback","text":"StoreSolutionCallback(T)\n\nCallback that accumulates the solver's solution per iteration. Results are stored in the solutions field.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#RegularizedLeastSquares.StoreConvergenceCallback","page":"Solvers","title":"RegularizedLeastSquares.StoreConvergenceCallback","text":"StoreConvergenceCallback()\n\nCallback that accumulates the solver's convergence metrics per iteration. Results are stored in the convMeas field.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#RegularizedLeastSquares.CompareSolutionCallback","page":"Solvers","title":"RegularizedLeastSquares.CompareSolutionCallback","text":"CompareSolutionCallback(ref, cmp)\n\nCallback that compares the solver's current solution with the given reference via cmp(ref, solution) per iteration. Results are stored in the results field.\n\n\n\n\n\n","category":"type"},{"location":"API/solvers/#RegularizedLeastSquares.linearSolverList","page":"Solvers","title":"RegularizedLeastSquares.linearSolverList","text":"Return a list of all available linear solvers.\n\n\n\n\n\n","category":"function"},{"location":"API/solvers/#RegularizedLeastSquares.createLinearSolver","page":"Solvers","title":"RegularizedLeastSquares.createLinearSolver","text":"createLinearSolver(solver::AbstractLinearSolver, A; kargs...)\n\nThis method creates a solver. The supported solvers are methods typically used for solving regularized linear systems. All solvers return an approximate solution to Ax = b.\n\nTODO: give a hint what solvers are available\n\n\n\n\n\n","category":"function"},{"location":"API/solvers/#RegularizedLeastSquares.applicableSolverList","page":"Solvers","title":"RegularizedLeastSquares.applicableSolverList","text":"applicable(args...)\n\nList all solvers that are applicable to the given arguments. 
Arguments are the same as for isapplicable without the solver type.\n\nSee also isapplicable, linearSolverList.\n\n\n\n\n\n","category":"function"},{"location":"API/solvers/#RegularizedLeastSquares.isapplicable","page":"Solvers","title":"RegularizedLeastSquares.isapplicable","text":"isapplicable(solverType::Type{<:AbstractLinearSolver}, A, x, reg)\n\nReturn true if a solver of type solverType is applicable to system matrix A, data x and regularization terms reg.\n\n\n\n\n\n","category":"function"},{"location":"#RegularizedLeastSquares.jl","page":"Home","title":"RegularizedLeastSquares.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Solvers for Linear Inverse Problems using Regularization Techniques","category":"page"},{"location":"#Introduction","page":"Home","title":"Introduction","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"RegularizedLeastSquares.jl is a Julia package for solving large-scale linear systems using different types of algorithms. Ill-conditioned problems arise in many areas of practical interest. To solve these problems, one often resorts to regularization techniques and non-linear problem formulations. This package provides implementations for a variety of solvers, which are used in fields such as MPI and MRI.","category":"page"},{"location":"","page":"Home","title":"Home","text":"The implemented methods range from the l_2-regularized CGNR method to more general optimizers such as the Alternating Direction Method of Multipliers (ADMM) or the Split-Bregman method.","category":"page"},{"location":"","page":"Home","title":"Home","text":"For convenience, implementations of popular regularizers, such as l_1-regularization and TV regularization, are provided. On the other hand, hand-crafted regularizers can be used quite easily. For this purpose, a Regularization object needs to be built. The latter mainly contains the regularization parameter and a function to calculate the proximal map of a given input.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Depending on the problem, it becomes infeasible to store the full system matrix at hand. For this purpose, RegularizedLeastSquares.jl allows for the use of matrix-free operators. Such operators can be realized using the interface provided by the package LinearOperators.jl. Other interfaces can be used as well, as long as the product *(A,x) and the adjoint adjoint(A) are provided. A number of common matrix-free operators are provided by the package LinearOperatorCollection.jl.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Within Julia, use the package manager:","category":"page"},{"location":"","page":"Home","title":"Home","text":"using Pkg\nPkg.add(\"RegularizedLeastSquares\")","category":"page"},{"location":"","page":"Home","title":"Home","text":"This adds the latest release of the package. 
To install a different version, please consult the Pkg documentation.","category":"page"},{"location":"#Usage","page":"Home","title":"Usage","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"See Getting Started for an introduction to using the package.","category":"page"},{"location":"gettingStarted/#Getting-Started","page":"Getting Started","title":"Getting Started","text":"","category":"section"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"To get familiar with the different aspects of RegularizedLeastSquares.jl, we will go through a simple example from the field of Compressed Sensing.","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"In addition to RegularizedLeastSquares.jl, we will need the packages LinearOperatorCollection.jl, Images.jl and Random.jl, as well as PyPlot for visualization.","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"using RegularizedLeastSquares, LinearOperatorCollection, Images, PyPlot, Random","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"To get started, let us generate a simple phantom","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"N = 256\nI = shepp_logan(N)","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"In this example, we consider an operator which randomly samples half of the pixels in the image. Such an operator and the corresponding measurement can be generated by calling","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"# sampling operator\nidx = sort( shuffle( collect(1:N^2) )[1:div(N^2,2)] )\nA = SamplingOp(eltype(I), pattern = idx , shape = (N,N))\n\n# generate undersampled data\ny = A*vec(I)","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"To recover the image, we solve the TV-regularized least squares problem","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"argmin_x ½||Ax - y||₂² + λ TV(x)","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"For this purpose, we build a TV regularizer with regularization parameter λ=0.01","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"reg = TVRegularization(0.01; shape=(N,N))","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"To solve the CS problem, the Alternating Direction Method of Multipliers can be used. 
Thus, we build the corresponding solver","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"solver = createLinearSolver(ADMM, A; reg=reg, ρ=0.1, iterations=20)","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"and apply it to our measurement","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"Ireco = solve!(solver,y)\nIreco = reshape(Ireco,N,N)","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"The original phantom and the reconstructed image are shown below","category":"page"},{"location":"gettingStarted/","page":"Getting Started","title":"Getting Started","text":"(Image: Phantom) (Image: Reconstruction)","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"DocTestSetup = quote\n using RegularizedLeastSquares, Wavelets, LinearOperatorCollection\nend","category":"page"},{"location":"regularization/#Regularization","page":"Regularization","title":"Regularization","text":"","category":"section"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"When formulating inverse problems, a regularizer is introduced as an additional term in a cost function, which has to be minimized. Popular optimizers often deal with a regularizer g by computing the proximal map","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"prox_g(x) = argmin_u ½||u - x||² + g(u)","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"In order to implement those kinds of algorithms, RegularizedLeastSquares defines the following type hierarchy:","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"abstract type AbstractRegularization\nprox!(reg::AbstractRegularization, x)\nnorm(reg::AbstractRegularization, x)","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"Here prox!(reg, x) is an in-place function which computes the proximal map on the input-vector x. The function norm computes the value of the corresponding term in the inverse problem. RegularizedLeastSquares.jl provides AbstractParameterizedRegularization and AbstractProjectionRegularization as core regularization types.","category":"page"},{"location":"regularization/#Parameterized-Regularization-Terms","page":"Regularization","title":"Parameterized Regularization Terms","text":"","category":"section"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"This group of regularization terms features a regularization parameter λ that is used during the prox! and norm computations. 
Examples of this regularization group are the L1, L2, or LLR (locally low rank) regularization terms.","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"These terms are constructed by supplying a λ and optionally term-specific keyword arguments:","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"julia> l2 = L2Regularization(0.3)\nL2Regularization{Float64}(0.3)","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"Parameterized regularization terms implement:","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"prox!(reg::AbstractParameterizedRegularization, x, λ)\nnorm(reg::AbstractParameterizedRegularization, x, λ)","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"where λ by default is filled with the value used during construction.","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"Invoking λ on a parameterized term retrieves its regularization parameter. This can be used in a solver to scale and overwrite the parameter as follows:","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"julia> prox!(l2, [1.0])\n1-element Vector{Float64}:\n 0.625\n\njulia> param = λ(l2)\n0.3\n\njulia> prox!(l2, [1.0], param*0.2)\n1-element Vector{Float64}:\n 0.8928571428571428\n","category":"page"},{"location":"regularization/#Projection-Regularization-Terms","page":"Regularization","title":"Projection Regularization Terms","text":"","category":"section"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"This group of regularization terms implements projections, such as a positivity constraint or a projection with a given convex projection function.","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"julia> positive = PositiveRegularization()\nPositiveRegularization()\n\njulia> prox!(positive, [2.0, -0.2])\n2-element Vector{Float64}:\n 2.0\n 0.0","category":"page"},{"location":"regularization/#Nested-Regularization-Terms","page":"Regularization","title":"Nested Regularization Terms","text":"","category":"section"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"Nested regularization terms are terms that act as decorators to the core regularization terms. These terms can be nested around other terms and add functionality to a regularization term, such as scaling λ based on the provided system matrix or applying a transform, such as the Wavelet transform, to x:","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"julia> core = L1Regularization(0.8)\nL1Regularization{Float64}(0.8)\n\njulia> wop = WaveletOp(Float32, shape = (32,32));\n\njulia> reg = TransformedRegularization(core, wop);\n\njulia> prox!(reg, randn(32*32)); # Apply soft-thresholding in Wavelet domain","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"The type of regularization term a nested term can be wrapped around depends on the concrete type of the nested term. In general, however, they can be nested arbitrarily deep, adding new functionality with each layer. Each nested regularization term can return its inner regularization. 
Furthermore, all regularization terms implement the iteration interface to iterate over the nesting. The innermost regularization term of a nested term must be a core regularization term and it can be returned by the sink function:","category":"page"},{"location":"regularization/","page":"Regularization","title":"Regularization","text":"julia> RegularizedLeastSquares.innerreg(reg) == core\ntrue\n\njulia> sink(reg) == core\ntrue\n\njulia> foreach(r -> println(nameof(typeof(r))), reg)\nTransformedRegularization\nL1Regularization","category":"page"}] } diff --git a/previews/PR74/solvers/index.html b/previews/PR74/solvers/index.html index 6a3aae46..a787b26f 100644 --- a/previews/PR74/solvers/index.html +++ b/previews/PR74/solvers/index.html @@ -4,4 +4,4 @@ ... solver = createLinearSolver(ADMM, A; params...)

This notation can be convenient when a large number of parameters are set manually.
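As a concrete illustration, here is a minimal sketch of such a parameter dictionary. The keyword names reg, rho, and iterations are taken from the ADMM documentation; the regularization term and the numerical values are purely illustrative.

params = Dict{Symbol,Any}()
params[:reg] = L1Regularization(0.1)   # example regularization term
params[:rho] = 0.1                     # penalty parameter of the augmented Lagrangian
params[:iterations] = 50               # maximum number of ADMM iterations

solver = createLinearSolver(ADMM, A; params...)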

It is possible to check if a given solver is applicable to the arguments at hand, as not all solvers are applicable to every combination of system matrix type, data (element) type, and regularization terms. This is achieved with the isapplicable function:

isapplicable(Kaczmarz, A, x, [L21Regularization(0.4f0)])
-false
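A minimal sketch of how this check can guard solver construction; the L2 term, the parameter value, and the variable names are only illustrative, and b denotes the data vector:

reg = L2Regularization(0.1)
if isapplicable(Kaczmarz, A, b, [reg])
    solver = createLinearSolver(Kaczmarz, A; reg = reg)
    x_approx = solve!(solver, b)
else
    @warn "Kaczmarz is not applicable to this combination of arguments"
end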

For a given set of arguments, the list of applicable solvers can be retrieved with applicableSolverList.

+false

For a given set of arguments, the list of applicable solvers can be retrieved with applicableSolverList.
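As a minimal sketch, reusing A, x, and the regularization term from the isapplicable example above (the returned list depends on the available solvers, so no output is shown; linearSolverList is assumed to take no arguments here):

# solvers that can handle this combination of arguments
applicableSolverList(A, x, [L21Regularization(0.4f0)])

# all solvers known to the package, regardless of applicability
linearSolverList()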