diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index ab4cb9f..14ac288 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-14T14:00:42","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.1","generation_timestamp":"2024-11-26T15:52:26","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/dev/API/regularization/index.html b/dev/API/regularization/index.html index c8876ae..55475cc 100644 --- a/dev/API/regularization/index.html +++ b/dev/API/regularization/index.html @@ -1,5 +1,5 @@ -Regularization Terms · RegularizedLeastSquares.jl

API for Regularizers

This page contains documentation of the public API of RegularizedLeastSquares.jl. In the Julia REPL, this documentation can be accessed by entering help mode with ?

RegularizedLeastSquares.L21RegularizationType
L21Regularization

Regularization term implementing the proximal map for group-soft-thresholding.

Arguments

  • λ - regularization parameter

Keywords

  • slices=1 - number of elements per group
source
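A minimal usage sketch (assuming λ is passed positionally and slices as a keyword, as listed above):

using RegularizedLeastSquares
reg = L21Regularization(0.1; slices = 2);
x = randn(ComplexF64, 8);
prox!(reg, x); # group soft-thresholding, applied in-place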
RegularizedLeastSquares.LLRRegularizationType
LLRRegularization

Regularization term implementing the proximal map for locally low rank (LLR) regularization using singular-value-thresholding.

Arguments

  • λ - regularization parameter

Keywords

  • shape::Tuple{Int} - dimensions of the image
  • blockSize::Tuple{Int}=(2,2) - size of patches to perform singular value thresholding on
  • randshift::Bool=true - randomly shifts the patches to ensure translation invariance
  • fullyOverlapping::Bool=false - choose between fully overlapping and non-overlapping blocks
source
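A hedged usage sketch (keyword names as listed above; the shape and block size are illustrative):

using RegularizedLeastSquares
reg = LLRRegularization(0.05; shape = (32, 32), blockSize = (4, 4));
x = vec(rand(32, 32));
prox!(reg, x); # singular-value thresholding on local patches, in-place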
RegularizedLeastSquares.NuclearRegularizationType
NuclearRegularization

Regularization term implementing the proximal map for singular value soft-thresholding.

Arguments

  • λ - regularization parameter

Keywords

  • svtShape::NTuple - size of the underlying matrix
source
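A hedged usage sketch (svtShape must match the dimensions of the underlying matrix; values illustrative):

using RegularizedLeastSquares
reg = NuclearRegularization(0.1; svtShape = (16, 16));
x = vec(rand(16, 16));
prox!(reg, x); # soft-thresholds the singular values of reshape(x, 16, 16)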
RegularizedLeastSquares.TVRegularizationType
TVRegularization

Regularization term implementing the proximal map for TV regularization. The proximal map is computed with the Condat algorithm if the TV is applied along a single real-valued dimension, and with the Fast Gradient Projection (FGP) algorithm otherwise.

Reference for the Condat algorithm: https://lcondat.github.io/publis/Condat-fast_TV-SPL-2013.pdf

Reference for the FGP algorithm: A. Beck and T. Teboulle, "Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems", IEEE Trans. Image Process. 18(11), 2009

Arguments

  • λ::T - regularization parameter

Keywords

  • shape::NTuple - size of the underlying image
  • dims - dimension(s) along which the TV is applied. If an Integer, the Condat algorithm is used; otherwise the FGP algorithm.
  • iterationsTV=20 - number of FGP iterations
source
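A usage sketch mirroring the compressed-sensing example further down in this changeset (values illustrative):

using RegularizedLeastSquares
reg = TVRegularization(0.01; shape = (256, 256)); # more than one dimension, so the FGP algorithm is used
x = randn(256 * 256);
prox!(reg, x);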

Projection Regularization

Nested Regularization

RegularizedLeastSquares.innerregMethod
innerreg(reg::AbstractNestedRegularization)

return the inner regularization term of reg. Nested regularization terms also implement the iteration interface.

source
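For instance (innerreg is qualified here in case it is not exported):

using RegularizedLeastSquares
core = PositiveRegularization();
reg = MaskedRegularization(core, [true, false, true, false]);
RegularizedLeastSquares.innerreg(reg) == core # true
foreach(r -> println(nameof(typeof(r))), reg) # iterates the nesting: MaskedRegularization, then PositiveRegularization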

Scaled Regularization

Misc. Nested Regularization

RegularizedLeastSquares.MaskedRegularizationType
MaskedRegularization

Nested regularization term that only applies prox! and norm to elements of x for which the mask is true.

Examples

julia> positive = PositiveRegularization();
 
 julia> masked = MaskedRegularization(positive, [true, false, true, false]);
 
@@ -8,11 +8,11 @@
   0.0
  -1.0
   0.0
- -1.0
source
RegularizedLeastSquares.TransformedRegularizationType
TransformedRegularization(reg, trafo)

Nested regularization term that applies prox! or norm on z = trafo * x and returns (inplace) x = adjoint(trafo) * z.

Example

julia> core = L1Regularization(0.8)
 L1Regularization{Float64}(0.8)
 
 julia> wop = WaveletOp(Float32, shape = (32,32));
 
 julia> reg = TransformedRegularization(core, wop);
 
-julia> prox!(reg, randn(32*32)); # Apply soft-thresholding in Wavelet domain
source
RegularizedLeastSquares.PlugAndPlayRegularizationType
    PlugAndPlayRegularization

Regularization term implementing a given plug-and-play proximal mapping. The actual regularization term is indirectly defined by the learned proximal mapping and as such there is no norm implemented.

Arguments

  • λ - regularization parameter

Keywords

  • model - model applied to the image
  • shape - dimensions of the image
  • input_transform - transform of image before model
source
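A heavily hedged sketch (assuming the keyword constructor implied by the docstring above; the identity function is a hypothetical stand-in for a learned denoiser):

using RegularizedLeastSquares
denoiser = x -> x; # hypothetical stand-in for a trained model
reg = PlugAndPlayRegularization(0.1; model = denoiser, shape = (32, 32));
x = rand(Float32, 32 * 32);
prox!(reg, x);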

Miscellaneous Functions

RegularizedLeastSquares.prox!Method
prox!(reg::AbstractParameterizedRegularization, x)

perform the proximal mapping defined by reg on x. Uses the regularization parameter defined for reg.

source
RegularizedLeastSquares.prox!Method
prox!(regType::Type{<:AbstractParameterizedRegularization}, x, λ; kwargs...)

construct a regularization term of type regType with given λ and kwargs and apply its prox! on x

source
LinearAlgebra.normMethod
norm(reg::AbstractParameterizedRegularization, x)

returns the value of the reg regularization term on x. Uses the regularization parameter defined for reg.

source
LinearAlgebra.normMethod
norm(regType::Type{<:AbstractParameterizedRegularization}, x, λ; kwargs...)

construct a regularization term of type regType with given λ and kwargs and apply its norm on x

source
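Both call styles yield the same result; a small sketch:

using RegularizedLeastSquares, LinearAlgebra
x1 = [1.0, -2.0, 3.0]; x2 = copy(x1);
prox!(L1Regularization(0.5), x1); # instance form, uses the stored λ
prox!(L1Regularization, x2, 0.5); # type form, constructs the term on the fly
x1 == x2 # true
norm(L1Regularization(0.5), x1) # value of the regularization term at x1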
diff --git a/dev/API/solvers/index.html b/dev/API/solvers/index.html index 1d17f4c..8450ba9 100644 --- a/dev/API/solvers/index.html +++ b/dev/API/solvers/index.html @@ -40,10 +40,10 @@ end plot_trace (generic function with 1 method) -julia> x_approx = solve!(S, b; callbacks = [conv, plot_trace]);

The keyword callbacks allows you to pass a (vector of) callable objects that take the arguments solver and iteration and print, store, or plot intermediate results.

See also StoreSolutionCallback, StoreConvergenceCallback, CompareSolutionCallback for a number of provided callback options.

source
RegularizedLeastSquares.init!Method
init!(solver::AbstractLinearSolver, b; kwargs...)

Prepare the solver for iteration based on the given data vector b and kwargs.

source
RegularizedLeastSquares.init!Method
init!(solver::AbstractLinearSolver, state::AbstractSolverState, b::AbstractMatrix; scheduler = SequentialState, kwargs...)

Initialize the solver with each column of b and pass the corresponding states to the scheduler.

source
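A hedged sketch of manual iteration built on init! (assuming solvers implement Julia's iteration protocol once initialized, which is how the callback machinery above is driven):

using RegularizedLeastSquares
A = rand(16, 16); b = A * rand(16);
solver = createLinearSolver(CGNR, A; iterations = 10);
init!(solver, b) # prepare the solver state for this right-hand side
for (iteration, _) in enumerate(solver)
    # inspect intermediate results here, e.g. via solversolution(solver)
end
x_approx = solversolution(solver);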

ADMM

RegularizedLeastSquares.ADMMType
ADMM(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)
-ADMM( ; AHA = ,     precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)

Creates an ADMM object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • precon - preconditioner for the internal CG algorithm
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • regTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - penalty of the augmented Lagrangian
  • vary_rho::Symbol - vary rho to balance primal and dual feasibility; options :none, :balance, :PnP
  • iterations::Int - maximum number of (outer) ADMM iterations
  • iterationsCG::Int - maximum number of (inner) CG iterations
  • absTol::Real - absolute tolerance for stopping criterion
  • relTol::Real - relative tolerance for stopping criterion
  • tolInner::Real - relative tolerance for CG stopping criterion
  • verbose::Bool - print residual in each iteration

ADMM differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).

See also createLinearSolver, solve!.

source
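Following the note above, a TV-type penalty is set up by pairing an L1 term with a gradient operator (sizes illustrative):

using RegularizedLeastSquares
Nx, Ny, Nz = 8, 8, 8;
A = rand(Nx * Ny * Nz, Nx * Ny * Nz); b = A * rand(Nx * Ny * Nz);
solver = createLinearSolver(ADMM, A; reg = L1Regularization(0.1),
                            regTrafo = RegularizedLeastSquares.GradientOp(Float64; shape = (Nx, Ny, Nz)),
                            iterations = 20);
x_approx = solve!(solver, b);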

CGNR

RegularizedLeastSquares.CGNRType
CGNR(A; AHA = A' * A, reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), iterations = 10, relTol = eps(real(eltype(AHA))))
-CGNR( ; AHA = ,       reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), iterations = 10, relTol = eps(real(eltype(AHA))))

creates a CGNR object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • iterations::Int - maximum number of iterations
  • relTol::Real - tolerance for stopping criterion

See also createLinearSolver, solve!.

source
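A minimal sketch, matching the getting-started example included later in this changeset:

using RegularizedLeastSquares
A = rand(32, 16); x = rand(16); b = A * x;
solver = createLinearSolver(CGNR, A; reg = L2Regularization(0.0001), iterations = 32);
x_approx = solve!(solver, b);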

Kaczmarz

RegularizedLeastSquares.KaczmarzType
Kaczmarz(A; reg = L2Regularization(0), normalizeReg = NoNormalization(), randomized=false, subMatrixFraction=0.15, shuffleRows=false, seed=1234, iterations=10)

Creates a Kaczmarz object for the forward operator A.

Required Arguments

  • A - forward operator

Optional Keyword Arguments

  • reg::AbstractParameterizedRegularization - regularization term
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • randomized::Bool - use the randomized Kaczmarz algorithm
  • subMatrixFraction::Real - fraction of rows used in the randomized Kaczmarz algorithm
  • shuffleRows::Bool - shuffle the order of the rows before iterating
  • seed::Int - seed for randomized algorithm
  • iterations::Int - number of iterations

See also createLinearSolver, solve!.

source
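A sketch of the randomized variant (keyword values illustrative):

using RegularizedLeastSquares
A = rand(64, 32); b = A * rand(32);
solver = createLinearSolver(Kaczmarz, A; reg = L2Regularization(0.0001),
                            randomized = true, subMatrixFraction = 0.15, iterations = 100);
x_approx = solve!(solver, b);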

FISTA

RegularizedLeastSquares.FISTAType
FISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), iterations=50, verbose = false, rho = 0.95 / power_iterations(AHA), theta=1, relTol=eps(real(eltype(AHA))), restart = :none)
-FISTA( ; AHA=,     reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), iterations=50, verbose = false, rho = 0.95 / power_iterations(AHA), theta=1, relTol=eps(real(eltype(AHA))), restart = :none)

creates a FISTA object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • precon - preconditioner for the internal CG algorithm
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - step size for gradient step; the default is 0.95 / max_eigenvalue as determined with power iterations.
  • theta::Real - parameter for predictor-corrector step
  • relTol::Real - tolerance for stopping criterion
  • iterations::Int - maximum number of iterations
  • restart::Symbol - :none, :gradient options for restarting
  • verbose::Bool - print residual in each iteration

See also createLinearSolver, solve!.

source
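A minimal sketch with gradient restart enabled (values illustrative):

using RegularizedLeastSquares
A = rand(32, 32); b = A * rand(32);
solver = createLinearSolver(FISTA, A; reg = L1Regularization(0.01), iterations = 50, restart = :gradient);
x_approx = solve!(solver, b);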

OptISTA

RegularizedLeastSquares.OptISTAType
OptISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), iterations=50, verbose = false, rho=0.95 / power_iterations(AHA), theta=1, relTol=eps(real(eltype(AHA))))
-OptISTA( ; AHA=,     reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), iterations=50, verbose = false, rho=0.95 / power_iterations(AHA), theta=1, relTol=eps(real(eltype(AHA))))

creates an OptISTA object for the forward operator A or normal operator AHA. OptISTA has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores two extra intermediate variables of the size of the image compared to FISTA.

Reference:

  • Uijeong Jang, Shuvomoy Das Gupta, Ernest K. Ryu, "Computer-Assisted Design of Accelerated Composite Optimization Methods: OptISTA," arXiv:2305.15704, 2023, [https://arxiv.org/abs/2305.15704]

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • reg::AbstractParameterizedRegularization - regularization term
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - step size for gradient step; the default is 0.95 / max_eigenvalue as determined with power iterations.
  • theta::Real - parameter for predictor-corrector step
  • relTol::Real - tolerance for stopping criterion
  • iterations::Int - maximum number of iterations
  • verbose::Bool - print residual in each iteration

See also createLinearSolver, solve!.

source

POGM

RegularizedLeastSquares.POGMType
POGM(A; AHA = A'*A, reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), iterations = 50, verbose = false, rho = 0.95 / power_iterations(AHA), theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), restart = :none)
-POGM( ; AHA = ,     reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), iterations = 50, verbose = false, rho = 0.95 / power_iterations(AHA), theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), restart = :none)

Creates a POGM object for the forward operator A or normal operator AHA. POGM has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores three extra intermediate variables of the size of the image compared to FISTA. Only the gradient restart scheme is implemented for now.

References:

  • A.B. Taylor, J.M. Hendrickx, F. Glineur, "Exact worst-case performance of first-order algorithms for composite convex optimization," Arxiv:1512.07516, 2015, SIAM J. Opt. 2017 [http://doi.org/10.1137/16m108104x]

  • Kim, D., & Fessler, J. A. (2018). Adaptive Restart of the Optimized Gradient Method for Convex Optimization. Journal of Optimization Theory and Applications, 178(1), 240–263. [https://doi.org/10.1007/s10957-018-1287-4]

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • reg::AbstractParameterizedRegularization - regularization term
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - step size for gradient step; the default is 0.95 / max_eigenvalue as determined with power iterations.
  • theta::Real - parameter for predictor-corrector step
  • sigma_fac::Real - parameter for decreasing γ-momentum ∈ [0,1]
  • relTol::Real - tolerance for stopping criterion
  • iterations::Int - maximum number of iterations
  • restart::Symbol - :none, :gradient options for restarting
  • verbose::Bool - print residual in each iteration

See also createLinearSolver, solve!.

source

SplitBregman

RegularizedLeastSquares.SplitBregmanType
SplitBregman(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterations = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)
-SplitBregman( ; AHA = ,     precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterations = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)

Creates a SplitBregman object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • precon - preconditioner for the internal CG algorithm
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • regTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - weights for condition on regularized variables; can also be a vector for multiple regularization terms
  • iterations::Int - maximum number of outer iterations. Set to 1 for unconstrained split Bregman (equivalent to ADMM)
  • iterationsInner::Int - maximum number of inner iterations
  • iterationsCG::Int - maximum number of (inner) CG iterations
  • absTol::Real - absolute tolerance for stopping criterion
  • relTol::Real - relative tolerance for stopping criterion
  • tolInner::Real - relative tolerance for CG stopping criterion
  • verbose::Bool - print residual in each iteration

This algorithm solves the constrained problem (Eq. (4.7) in Tom Goldstein and Stanley Osher), i.e. ||R(x)||₁ such that ||Ax - b||₂² < σ². In order to solve the unconstrained problem (Eq. (4.8) in Tom Goldstein and Stanley Osher), i.e. ||Ax - b||₂² + λ ||R(x)||₁, you can either set iterations=1 or use ADMM instead, which is equivalent (iterations=1 in SplitBregman is implied in ADMM, and the SplitBregman variable iterationsInner is simply called iterations in ADMM).

Like ADMM, SplitBregman differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).

See also createLinearSolver, solve!.

source
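As with ADMM, a TV-type penalty pairs reg with a gradient-operator regTrafo; a sketch (sizes illustrative):

using RegularizedLeastSquares
A = rand(64, 64); b = A * rand(64);
solver = createLinearSolver(SplitBregman, A; reg = L1Regularization(0.1),
                            regTrafo = RegularizedLeastSquares.GradientOp(Float64; shape = (8, 8)),
                            iterations = 5, iterationsInner = 10);
x_approx = solve!(solver, b);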

Miscellaneous

RegularizedLeastSquares.solverstateFunction
solverstate(solver::AbstractLinearSolver)

Return the current state of the solver

source
RegularizedLeastSquares.solversolutionFunction
solversolution(solver::AbstractLinearSolver)

Return the current solution of the solver

source
solversolution(state::AbstractSolverState)

Return the current solution of the solver's state

source
RegularizedLeastSquares.solverconvergenceFunction
solverconvergence(solver::AbstractLinearSolver)

Return a named tuple of the solver's current convergence metrics

source
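These accessors are typically used from a callback; a sketch using the callbacks keyword documented above:

using RegularizedLeastSquares
A = rand(16, 16); b = A * rand(16);
solver = createLinearSolver(CGNR, A; iterations = 10);
trace = (solver, iteration) -> println(iteration, ": ", solverconvergence(solver));
x_approx = solve!(solver, b; callbacks = [trace]);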
RegularizedLeastSquares.StoreSolutionCallbackType
StoreSolutionCallback(T)

Callback that accumulates the solver's solution per iteration. Results are stored in the solutions field.

source
RegularizedLeastSquares.StoreConvergenceCallbackType
StoreConvergenceCallback()

Callback that accumulates the solver's convergence metrics per iteration. Results are stored in the convMeas field.

source
RegularizedLeastSquares.CompareSolutionCallbackType
CompareSolutionCallback(ref, cmp)

Callback that compares the solver's current solution with the given reference via cmp(ref, solution) per iteration. Results are stored in the results field.

source
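A sketch using StoreConvergenceCallback; the other callback types follow the same pattern:

using RegularizedLeastSquares
A = rand(16, 16); b = A * rand(16);
solver = createLinearSolver(CGNR, A; iterations = 10);
cb = StoreConvergenceCallback();
solve!(solver, b; callbacks = cb);
cb.convMeas # per-iteration convergence metrics, as documented above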
RegularizedLeastSquares.linearSolverListFunction

Return a list of all available linear solvers

source
RegularizedLeastSquares.createLinearSolverFunction
createLinearSolver(solver::AbstractLinearSolver, A; kargs...)

This method creates a solver. The supported solvers are methods typically used for solving regularized linear systems. All solvers return an approximate solution to Ax = b.

TODO: give a hint what solvers are available

source
RegularizedLeastSquares.applicableSolverListFunction
applicable(args...)

list all solvers that are applicable to the given arguments. Arguments are the same as for isapplicable without the solver type.

See also isapplicable, linearSolverList.

source
RegularizedLeastSquares.isapplicableFunction
isapplicable(solverType::Type{<:AbstractLinearSolver}, A, x, reg)

return true if a solver of type solverType is applicable to system matrix A, data x and regularization terms reg.

source
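A sketch (the helper is qualified here in case it is not exported):

using RegularizedLeastSquares
A = rand(16, 16); x = rand(16);
RegularizedLeastSquares.isapplicable(Kaczmarz, A, x, [L2Regularization(0.1)])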
diff --git a/dev/generated/examples/compressed_sensing/2c4e4ab0.png b/dev/generated/examples/compressed_sensing/2c4e4ab0.png new file mode 100644 index 0000000..585ac80 Binary files /dev/null and b/dev/generated/examples/compressed_sensing/2c4e4ab0.png differ diff --git a/dev/generated/examples/compressed_sensing/7fb3e75a.png b/dev/generated/examples/compressed_sensing/7fb3e75a.png new file mode 100644 index 0000000..ef5391f Binary files /dev/null and b/dev/generated/examples/compressed_sensing/7fb3e75a.png differ diff --git a/dev/generated/examples/compressed_sensing/82751606.png b/dev/generated/examples/compressed_sensing/82751606.png deleted file mode 100644 index cad0284..0000000 Binary files a/dev/generated/examples/compressed_sensing/82751606.png and /dev/null differ diff --git a/dev/generated/examples/compressed_sensing/82a9b3c3.png b/dev/generated/examples/compressed_sensing/82a9b3c3.png deleted file mode 100644 index 121f5e3..0000000 Binary files a/dev/generated/examples/compressed_sensing/82a9b3c3.png and /dev/null differ diff --git a/dev/generated/examples/compressed_sensing/index.html b/dev/generated/examples/compressed_sensing/index.html index d95b7aa..c48bec3 100644 --- a/dev/generated/examples/compressed_sensing/index.html +++ b/dev/generated/examples/compressed_sensing/index.html @@ -5,25 +5,25 @@ size(image)
(256, 256)

This produces a 256x256 image of a Shepp-Logan phantom.

In this example, we consider a problem in which we randomly sample a third of the pixels in the image. Such a problem and the corresponding measurement can be constructed with the packages LinearOperatorCollection and Random:

We first randomly shuffle the indices of the image and then select the first third of the indices to sample.

using Random, LinearOperatorCollection
 randomIndices = shuffle(eachindex(image))
 sampledIndices = sort(randomIndices[1:div(end, 3)])
21845-element Vector{Int64}:
+     4
+     7
      9
-    10
-    19
-    24
+    11
+    15
+    18
+    20
     25
     26
-    28
-    35
-    36
-    37
+    27
      ⋮
- 65509
- 65512
- 65514
- 65515
- 65520
- 65521
+ 65492
+ 65500
+ 65516
+ 65522
  65523
- 65527
+ 65524
+ 65528
+ 65530
  65535

Afterwards we build a sampling operator which samples the image at the selected indices.

A = SamplingOp(eltype(image), pattern = sampledIndices , shape = size(image));

Then we apply the sampling operator to the vectorized image to obtain the sampled measurement vector

b = A*vec(image);

To visualize our image we can use CairoMakie:

using CairoMakie
 function plot_image(figPos, img; title = "", width = 150, height = 150)
   ax = CairoMakie.Axis(figPos; yreversed=true, title, width, height)
@@ -36,30 +36,30 @@
 samplingMask[sampledIndices] .= true
 plot_image(fig[1,2], image .* samplingMask, title = "Sampled Image")
 resize_to_layout!(fig)
-fig
Example block output

As we can see in the right image, only a third of the pixels are sampled. The goal of the inverse problem is to recover the original image from this measurement vector.

Solving the Inverse Problem

To recover the image from the measurement vector, we solve the TV-regularized least squares problem:

\[\begin{equation}
\[\begin{equation} \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \lambda\vert\vert\mathbf{x}\vert\vert_{\text{TV}} . \end{equation}\]

For this purpose we build a TV regularizer with regularization parameter $λ=0.01$

using RegularizedLeastSquares
 reg = TVRegularization(0.01; shape=size(image));

We will use the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) to solve our inverse problem. Thus, we build the corresponding solver

solver = createLinearSolver(FISTA, A; reg=reg, iterations=20);

and apply it to our measurement vector

img_approx = solve!(solver,b)
65536-element Vector{Float32}:
- 1.7268512f-18
- 2.767287f-18
- 5.302688f-18
- 1.0522765f-17
- 2.1176763f-17
- 4.4012232f-17
- 9.5682496f-17
- 2.1162977f-16
- 4.494324f-16
- 8.933863f-16
+ 4.475941f-19
+ 9.203197f-19
+ 2.1954707f-18
+ 5.0659254f-18
+ 1.1152266f-17
+ 2.390461f-17
+ 5.0998504f-17
+ 1.070807f-16
+ 2.1490663f-16
+ 4.1222092f-16
  ⋮
- 4.0347837f-16
- 1.8988034f-16
- 8.902884f-17
- 4.230316f-17
- 1.9907461f-17
- 8.953272f-18
- 3.7583915f-18
- 1.524718f-18
- 7.5730536f-19

To visualize the reconstructed image, we need to reshape the result vector to the correct shape. Afterwards we can use CairoMakie again:

img_approx = reshape(img_approx,size(image));
 plot_image(fig[1,3], img_approx, title = "Reconstructed Image")
 resize_to_layout!(fig)
-fig
Example block output

This page was generated using Literate.jl.

diff --git a/dev/generated/examples/computed_tomography/index.html b/dev/generated/examples/computed_tomography/index.html index ae8cc9e..84703ed 100644 --- a/dev/generated/examples/computed_tomography/index.html +++ b/dev/generated/examples/computed_tomography/index.html @@ -62,4 +62,4 @@ 0.0

To visualize the reconstructed image, we need to reshape the result vector to the correct shape. Afterwards we can use CairoMakie again:

img_approx = reshape(img_approx,size(image));
 plot_image(fig[1,4], img_approx, title = "Reconstructed Image")
 resize_to_layout!(fig)
-fig
Example block output

This page was generated using Literate.jl.

diff --git a/dev/generated/examples/getting_started/index.html b/dev/generated/examples/getting_started/index.html index 72e1d0e..57ad664 100644 --- a/dev/generated/examples/getting_started/index.html +++ b/dev/generated/examples/getting_started/index.html @@ -9,4 +9,4 @@ \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \lambda\vert\vert\mathbf{x}\vert\vert^2_2 . \end{equation}\]

The corresponding solver can be built with the $l^2_2$-regularization term:

solver = createLinearSolver(CGNR, A; reg = L2Regularization(0.0001), iterations=32);
 x_approx = solve!(solver, b)
-isapprox(x, x_approx, rtol = 0.001)
true

This page was generated using Literate.jl.

diff --git a/dev/generated/explanations/regularization/index.html b/dev/generated/explanations/regularization/index.html index d54db3c..51b8246 100644 --- a/dev/generated/explanations/regularization/index.html +++ b/dev/generated/explanations/regularization/index.html @@ -55,4 +55,4 @@ plot_image(fig[1,4], img_prox_wavelet, title = "Reg. Wavelet Domain") resize_to_layout!(fig) figExample block output

Generally, regularization terms can be nested arbitrarily deep, adding new functionality with each layer. Each nested regularization term can return its inner regularization term. Furthermore, all regularization terms implement the iteration interface to iterate over the nesting. The innermost regularization term of a nested term must be a core regularization term, and it can be retrieved with the sink function:

RegularizedLeastSquares.innerreg(reg) == core
true
sink(reg) == core
true
foreach(r -> println(nameof(typeof(r))), reg)
TransformedRegularization
-L1Regularization

This page was generated using Literate.jl.

diff --git a/dev/generated/howto/callbacks/32b1ce77.png b/dev/generated/howto/callbacks/32b1ce77.png new file mode 100644 index 0000000..de01f39 Binary files /dev/null and b/dev/generated/howto/callbacks/32b1ce77.png differ diff --git a/dev/generated/howto/callbacks/fda7561b.png b/dev/generated/howto/callbacks/fda7561b.png deleted file mode 100644 index 2249dbd..0000000 Binary files a/dev/generated/howto/callbacks/fda7561b.png and /dev/null differ diff --git a/dev/generated/howto/callbacks/index.html b/dev/generated/howto/callbacks/index.html index e1ec850..aadb904 100644 --- a/dev/generated/howto/callbacks/index.html +++ b/dev/generated/howto/callbacks/index.html @@ -24,24 +24,24 @@ global idx += 1 end end
65536-element Vector{Float32}:
- 6.6228024f-19
- 1.2359009f-18
- 2.6625658f-18
- 5.4949223f-18
- 1.1019618f-17
- 2.2693419f-17
- 4.7480908f-17
- 9.878613f-17
- 2.1226077f-16
- 4.751228f-16
+ 3.5526627f-19
+ 6.1204515f-19
+ 1.3693467f-18
+ 3.3300956f-18
+ 8.007986f-18
+ 1.8703397f-17
+ 4.426772f-17
+ 1.0914947f-16
+ 2.7087171f-16
+ 6.4334863f-16
  ⋮
- 6.87197f-16
- 3.2824552f-16
- 1.4768946f-16
- 6.4488536f-17
- 2.8849184f-17
- 1.2999208f-17
- 5.619139f-18
- 2.414056f-18
- 1.2780352f-18

In the callback we have to copy the solution, as the solver will update it in place. As explained in the solver section, each solver features fields that are intended to be immutable during a solve! call and a state that is modified in each iteration. Depending on the solver's parameters and the measurement input, the state can differ in its fields and their types. Ideally, one avoids accessing the state directly and uses the provided functions to access it.

The resulting figure shows the reconstructed image after 0, 4, 8, 12, 16 and 20 iterations:

resize_to_layout!(fig)
-fig
Example block output

This page was generated using Literate.jl.

diff --git a/dev/generated/howto/efficient_kaczmarz/index.html b/dev/generated/howto/efficient_kaczmarz/index.html index bdb0d0d..3b365f2 100644 --- a/dev/generated/howto/efficient_kaczmarz/index.html +++ b/dev/generated/howto/efficient_kaczmarz/index.html @@ -5,44 +5,44 @@ b = A*x;

The dot_with_matrix_row function calculates the dot product between a row of A and the current approximate solution of x:

row = 1
 isapprox(RegularizedLeastSquares.dot_with_matrix_row(A, x, row), sum(A[row, :] .* x))
true

Since in Julia dense arrays are stored in column-major order, such a row-based operation is quite inefficient. A workaround is to transpose the matrix and then pass it to the Kaczmarz solver.

At = collect(transpose(A))
 A_eff = transpose(At)
256×256 transpose(::Matrix{Float64}) with eltype Float64:
- -0.592006    -1.09356     0.606957   …  -1.6098      0.709633   -1.42421
-  0.0118192   -0.602804   -0.592768      -0.129003    0.979147   -1.00958
-  0.194461    -0.537236   -1.27768       -0.941004    0.482488    0.959035
- -0.880044    -0.690664    0.0199048      0.832869   -0.518061    1.06487
-  1.72536      0.624136   -0.77231       -0.914217   -0.681412   -0.0089963
- -1.24327      0.321073    0.0712493  …   0.603416    0.439568   -0.141753
- -0.621671    -0.989508    1.66463       -0.649635   -1.11376     0.362636
- -0.295115    -1.1093     -0.48793        1.81414     0.522765    0.576341
- -0.00828994  -0.610589    1.32246       -0.220332   -1.92136     0.861899
- -0.150961    -1.53296     0.379674      -0.575321    1.0326      1.19225
-  ⋮                                   ⋱                           ⋮
- -0.743986    -0.436501    0.128749      -0.537145   -0.0333611   1.08457
- -1.99141     -0.966525    0.106357      -0.613577    0.854087   -0.314327
- -0.455865     1.15869    -1.362          0.0713644  -0.188868    1.7383
-  0.299135    -0.0378606  -1.43608    …  -0.133535   -0.192522    0.423759
- -1.19836      0.488826   -0.140327      -2.22238     0.658275   -0.257311
-  0.334698    -1.04745    -0.584311      -1.93016     0.844507   -0.0947162
-  0.916687     0.135363   -1.35232       -0.769221   -1.25826     0.205746
-  3.19631     -1.34659     0.475973       0.527972   -0.0708469  -0.361476
-  0.834983    -0.44712    -0.497614   …   0.179572    0.422279   -1.04092

Note that the transpose function can return a lazy transpose object, so we first collect the transpose into a dense matrix. Then we transpose it again to get the efficient representation of the matrix.

We can compare the performance using the BenchmarkTools.jl package. First for the original matrix:

using BenchmarkTools
+ -1.58848     -0.702876   0.207895   …   0.628348   -0.911277  -2.35128
+ -1.05195      1.20508    1.89204        0.916688   -0.104614   1.00904
+ -0.182419    -0.107532  -0.562823       0.484047    0.473667   0.138092
+ -1.25688     -3.30726   -0.350749      -1.21014    -0.827118   0.842181
+ -0.934613     0.049951   1.35801       -0.507719    0.284878   1.05856
+ -0.0780963   -0.802092  -0.991011   …  -0.480662   -0.398849  -0.00911743
+  0.0624419   -0.264591  -0.636085       1.19338     0.978706   0.161998
+ -1.29355     -0.556279   0.62763       -1.11124    -0.20356    0.588147
+  0.00162896  -0.141933   0.613443       0.804567   -0.771693   0.223116
+ -0.714607     1.37215   -0.348343      -0.0881502  -0.184279   2.26516
+  ⋮                                  ⋱                          ⋮
+  0.0593052    0.150813  -0.418234      -0.363281   -0.126677  -1.031
+  0.315976     0.475341  -0.364566      -1.57088    -1.1108     1.90436
+ -0.222124     1.16131   -0.109632      -0.727029   -0.483203   2.73097
+ -1.32104      1.03128    1.23305    …  -0.749822    0.384353   0.61805
+ -0.607512    -1.30241   -0.119776       0.256326    0.226516  -0.539451
+ -0.690908    -0.605261   0.0299372     -0.102816   -0.65516    1.19076
+  0.96605     -0.742487  -0.231368      -0.541471   -2.23757   -2.95343
+ -1.19494      0.692316   0.193184       0.0976774  -0.570039   0.422988
+  0.0332446   -0.707908  -1.50533    …   0.0489399  -2.65358   -0.622796

Note that the transpose function can return a lazy transpose object, so we first collect the transpose into a dense matrix. Transposing it again yields a matrix that behaves like A but whose rows lie contiguously in memory, which is exactly the access pattern the Kaczmarz solver needs.

We can compare the performance using the BenchmarkTools.jl package. First for the original matrix:

using BenchmarkTools
 solver = createLinearSolver(Kaczmarz, A; reg = L2Regularization(0.0001), iterations=100)
 @benchmark solve!(solver, b) samples = 100
BenchmarkTools.Trial: 100 samples with 1 evaluation.
 Range (min … max):  25.668 ms … 27.919 ms   ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     27.267 ms               ┊ GC (median):    0.00%
 Time  (mean ± σ):   27.245 ms ± 221.822 μs  ┊ GC (mean ± σ):  0.00% ± 0.00%
 

  ▂▁▁▁▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▇▃ ▂
  25.7 ms         Histogram: frequency by time         27.3 ms <
 
  Memory estimate: 17.31 KiB, allocs estimate: 505.

And then for the efficient matrix:

solver_eff = createLinearSolver(Kaczmarz, A_eff; reg = L2Regularization(0.0001), iterations=100)
 @benchmark solve!(solver_eff, b) samples = 100
BenchmarkTools.Trial: 100 samples with 1 evaluation.
 Range (min … max):  1.779 ms … 2.243 ms   ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     1.795 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   1.813 ms ± 68.468 μs  ┊ GC (mean ± σ):  0.00% ± 0.00%
 
  ▆▅▄▂
  █████▅▁▆▁▅▁▁▁▁▁▁▅▁▅▁▁▁▁▁▁▅▁▁▁▅▅▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▅ ▅
  1.78 ms      Histogram: log(frequency) by time     2.17 ms <
 
 Memory estimate: 17.34 KiB, allocs estimate: 507.

We can also combine the efficient matrix with a weighting matrix, as is shown in the Weighting example.
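
As a sketch of that combination (the weights here are illustrative placeholders; ProdOp and WeightingOp are used exactly as in the Weighting example later in this document):

using LinearOperatorCollection
weights = rand(256)
WA = ProdOp(WeightingOp(weights), A_eff)   # weighted, row-efficient operator
solver_w = createLinearSolver(Kaczmarz, WA; reg = L2Regularization(0.0001), iterations=100)
x_w = solve!(solver_w, weights .* b)       # weight the data consistently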

Custom operators need to implement the dot_with_matrix_row function to be used with the Kaczmarz solver. Ideally, such an implementation is allocation-free; a sketch follows below.
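
As an illustration, here is a minimal sketch for a hypothetical RowMajorOp wrapper. The type and its storage layout are invented for this example, the dot_with_matrix_row(A, x, k) signature is assumed to match the methods the package ships for dense matrices, and complex element types would additionally require the package's conjugation conventions:

using RegularizedLeastSquares

# Hypothetical operator storing transpose(A): row k of A is the contiguous column k of data.
struct RowMajorOp{T} <: AbstractMatrix{T}
  data::Matrix{T}
end
Base.size(op::RowMajorOp) = reverse(size(op.data))
Base.getindex(op::RowMajorOp, i::Int, j::Int) = op.data[j, i]

# Allocation-free row dot product for real element types.
function RegularizedLeastSquares.dot_with_matrix_row(op::RowMajorOp{T}, x::AbstractVector{T}, k::Int) where {T <: Real}
  res = zero(T)
  @inbounds @simd for i in eachindex(x)
    res += op.data[i, k] * x[i]
  end
  return res
end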


This page was generated using Literate.jl.

GPU Acceleration

x = gpu(rand(Float32, 16))
b = A*x;

Solvers adapt their states based on the type of the given measurement vector. This means that the solver will automatically switch to GPU acceleration if a GPU array is passed as the measurement vector.

solver = createLinearSolver(CGNR, A; reg = L2Regularization(0.0001), iterations=32);
 x_approx = solve!(solver, b)
16-element JLArrays.JLArray{Float32, 1}:
 0.40141362
 0.37949044
 0.891507
 0.2054995
 0.60271794
 0.8562656
 0.849092
 0.24734385
 0.895575
 0.8785708
 0.43603998
 0.57751924
 0.4279998
 0.49717334
 0.509235
 0.733801

This adaptation does not include the operator. So if we want to compare with the CPU result, we need to construct a new solver with a CPU operator.

solver = createLinearSolver(CGNR, Array(A); reg = L2Regularization(0.0001), iterations=32);
 x_cpu = solve!(solver, Array(b))
 isapprox(Array(x_approx), x_cpu)
true

Matrix-Free Operators

A special case is the usage of matrix-free operators. Since these operators do not have a concrete matrix representation, their GPU support depends on their implementation. Since not all multiplications within a solver are performed in-place, the operator also needs to support the * operation and construct an appropriate result vector. For matrix-free operators based on LinearOperators.jl, this can be achieved by implementing the LinearOperators.storage_type method, as sketched below.
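
A hedged sketch for a simple diagonal operator follows. The operator and sizes are invented for illustration; the five-argument prod! form and the S keyword follow LinearOperators.jl conventions, but the exact constructor signature should be verified against its documentation:

using LinearOperators, JLArrays

T = Float32
d = JLArray(rand(T, 16))   # operator data lives on the (mock) GPU

# In-place product res .= α*D*x + β*res; the β == 0 branch avoids reading
# possibly uninitialized memory in res.
function dprod!(res, x, α, β)
  if β == 0
    res .= α .* d .* x
  else
    res .= α .* d .* x .+ β .* res
  end
end

# Declaring the storage type lets out-of-place products like D * x allocate
# a result vector of the matching GPU array type.
D = LinearOperator(T, 16, 16, true, true, dprod!; S = typeof(d))
LinearOperators.storage_type(D)   # JLArrays.JLArray{Float32, 1}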

In the following, we will take another look at the CS example and execute it on the GPU. Note that for the JLArray example we chose a small phantom, since the JLArray implementation is not optimized for performance:

using ImagePhantoms, ImageGeoms
 N = 32
# ... (phantom, sampling, and operator setup elided) ...
 reg = TVRegularization(0.01; shape=size(image))
 solver = createLinearSolver(FISTA, A; reg=reg, iterations=20)
 img_approx = solve!(solver,b);

To visualize the reconstructed image, we need to reshape the result vector to the correct shape and convert it to a CPU array:

img_approx = reshape(Array(img_approx),size(image))
32×32 Matrix{Float32}:
 0.00128774  0.0012763   0.00128904  …  0.00342892  0.0035108   0.00354592
 0.00145327  0.0014527   0.00148731     0.00355013  0.00363533  0.00367108
 0.00177584  0.00178805  0.00185264     0.00379079  0.00385326  0.00388945
 0.00225845  0.00228468  0.00239746     0.00426484  0.004216    0.00423158
 0.00289706  0.00297168  0.00322066     0.00517835  0.00485316  0.00476427
 0.00370639  0.00392548  0.0044738   …  0.00657066  0.00586181  0.00553439
 0.00485403  0.0053074   0.00626035     0.0080966   0.00720363  0.00660532
 0.00653616  0.00715774  0.00843245     0.00941454  0.00870248  0.00801619
 0.00800757  0.00869598  0.00968901     0.0105354   0.0103189   0.00970467
 0.00838777  0.00907723  0.00957582     0.0114005   0.0117576   0.0114943
 ⋮                                   ⋱              ⋮
 0.0101301   0.00910513  0.00967611     0.00968357  0.00993852  0.0088728
 0.00761723  0.00800098  0.00858205     0.00857169  0.00756542  0.00674794
 0.00535211  0.00580545  0.00654928  …  0.00656933  0.00559808  0.00499241
 0.00387354  0.00414518  0.00475304     0.00469581  0.00398278  0.00362488
 0.00296397  0.00310832  0.00348954     0.00323736  0.00282819  0.00264147
 0.00235898  0.00245359  0.00269084     0.00229174  0.00204469  0.00193922
 0.00194335  0.00201163  0.00217644     0.00167443  0.00150839  0.00143566
 0.00169942  0.00173755  0.00184929  …  0.00128426  0.00117083  0.00111556
 0.00159879  0.00161309  0.00168829     0.00110486  0.00101851  0.000969791

We will again use CairoMakie for visualization:

using CairoMakie
 function plot_image(figPos, img; title = "", width = 150, height = 150)
   ax = CairoMakie.Axis(figPos; yreversed=true, title, width, height)
   hidedecorations!(ax)
  # ... (rest of plot_image and figure creation elided) ...
 plot_image(fig[1,2], image .* samplingMask, title = "Sampled Image")
 plot_image(fig[1,3], img_approx, title = "Reconstructed Image")
 resize_to_layout!(fig)
fig
Example block output

This page was generated using Literate.jl.

Multi-Threading

Operator Based Multi-Threading

This type of multi-threading involves linear operators or proximal maps that can be implemented in parallel. Examples of this include the proximal map of the TV regularization term, which is based on the multi-threaded GradientOp from LinearOperatorCollection; see the sketch below. GPU acceleration also falls under this approach, see GPU Acceleration for more information.
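
As a minimal sketch of what this means in practice (sizes are illustrative; this assumes Julia was started with multiple threads, e.g. julia --threads=auto):

using RegularizedLeastSquares

# The TV proximal map parallelizes internally via the multi-threaded GradientOp,
# so nothing special is required at the call site.
reg_tv = TVRegularization(0.01; shape = (32, 32))
x_tv = vec(rand(32, 32))
prox!(reg_tv, x_tv)   # in-place proximal map, multi-threaded internally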

Measurement Based Multi-Threading

This level of multi-threading applies the same solver (and its parameters) to multiple measurement vectors, or rather a measurement matrix B. This is useful in the case of multiple measurements that can be solved in parallel while reusing the same solver. This approach is not applicable if the operator is stateful.

To use this approach we first build a measurement matrix B and a corresponding solver:

A = first(As)
 B = mapreduce(x -> A*x, hcat, xs)
solver = createLinearSolver(CGNR, A; iterations=32)
CGNR{Matrix{Float64}, Matrix{Float64}, L2Regularization{Float64}, Vector{Any}}([0.13719691443453896 0.5083452641556484 … 0.5021565279811417 0.36725858886855123; 0.9306878352230292 0.801948902910408 … 0.49587277200432334 0.9532699010891921; … ; 0.9909241188775895 0.01223012675201851 … 0.4200537485019362 0.6166588329858235; 0.8439566352217166 0.157731907236769 … 0.9811121887435705 0.6238331008573217], [10.374502436090257 7.510053274280251 … 7.173207732105199 7.20431259815989; 7.510053274280251 11.303971169983539 … 7.0111504440927135 6.875911285298582; … ; 7.173207732105199 7.0111504440927135 … 9.993323958629501 7.261763494333879; 7.20431259815989 6.875911285298582 … 7.261763494333879 8.655313753636896], L2Regularization{Float64}(0.0), Any[], NoNormalization(), 32, RegularizedLeastSquares.CGNRState{Float64, Float64, Vector{Float64}}([9.8e-321, 6.92179221656174e-310, 6.92193306521907e-310, 0.0, 9.8e-321, 6.9219340225057e-310, 5.0e-324, 5.0e-324, 6.9219340225112e-310, 0.0, 9.8e-321, 8.86e-321, 5.0e-324, 9.8e-321, 5.0e-324, 6.92186016220935e-310], [9.8e-321, 6.92179221656174e-310, 6.92193306521907e-310, 0.0, 9.8e-321, 6.9219340225057e-310, 5.0e-324, 5.0e-324, 6.9219340225112e-310, 0.0, 9.8e-321, 8.86e-321, 5.0e-324, 9.8e-321, 5.0e-324, 6.9218601622125e-310], [9.8e-321, 6.92179221656174e-310, 6.92193306521907e-310, 0.0, 9.8e-321, 6.9219340225057e-310, 5.0e-324, 5.0e-324, 6.9219340225112e-310, 0.0, 9.8e-321, 8.86e-321, 5.0e-324, 9.8e-321, 5.0e-324, 6.9218601622157e-310], [9.8e-321, 6.92179221656174e-310, 6.92193306521907e-310, 0.0, 9.8e-321, 6.9219340225057e-310, 5.0e-324, 5.0e-324, 6.9219340225112e-310, 0.0, 9.8e-321, 8.86e-321, 5.0e-324, 9.8e-321, 5.0e-324, 6.921860162222e-310], 0.0, 0.0, 0.0, 0, 2.220446049250313e-16, 0.0))

We can then simply pass the measurement matrix to the solver. The result will be the same as if we passed each column of B separately:

x_approx = solve!(solver, B)
 size(x_approx)
(16, 4)

The previous solve! call was still executed sequentially. To execute it in parallel, we have to specify a multi-threaded scheduler as a keyword argument of the solve! call. RegularizedLeastSquares.jl provides a MultiThreadingState scheduler for this purpose, which is based on the Threads.@threads macro:

x_multi = solve!(solver, B; scheduler = MultiThreadingState)
 x_approx == x_multi
true

Custom Scheduling

It is possible to implement custom scheduling. The following code shows how to implement this for the Threads.@spawn macro. The same pattern can be used to implement multi-threading with a package such as FLoops.jl, or with ThreadPools.jl for thread pinning:

Since most solvers have convergence criteria, they can finish at different iteration numbers; we track this information with flags.

mutable struct SpawnState{S, ST <: AbstractSolverState{S}} <: RegularizedLeastSquares.AbstractMatrixSolverState{S}
  states::Vector{ST}
  # ... (remaining fields, constructor, and the start of the iterate method elided) ...
  end
  return state.active, state
end

Now we can simply use the SpawnState scheduler in the solve! call:

x_custom = solve!(solver, B; scheduler = SpawnState)
x_approx == x_custom
true

This page was generated using Literate.jl.

Normal Operator

solver = createLinearSolver(CGNR, A; AHA = adjoint(A) * A, reg = L2Regularization(0.0001), iterations=32);
x_approx = solve!(solver, b)
16-element Vector{Float64}:
 -0.36114181646403076
  1.301281200446072
 -0.08422842396648203
 -1.0140319289302482
 -0.4630864155023963
 -0.08922523075447159
 -1.0166299649528139
  1.1634757153346789
 -0.1682907870335553
  0.25264331915065014
  0.4787420180283147
 -0.44126974439127664
  2.258761908627233
  1.1287214664750718
 -1.3122420796194265
  2.0340685666394425

The normal operator can also be computed using the normalOperator function from LinearOperatorCollection.jl. This is useful if the normal operator is not directly available or shouldn't be stored in memory. This function is opinionated and attempts to optimize the resulting operator for iterative applications. Specifying a custom method for a custom operator allows one to control this optimization.

An example of such an optimization is a matrix-free weighting of $\mathbf{A}$ as shown in the Weighting example:

using LinearOperatorCollection
 weights = rand(32)
 WA = ProdOp(WeightingOp(weights), A)
 AHA = LinearOperatorCollection.normalOperator(WA)
Linear operator
  ⋮

\[\begin{equation}
  \mathbf{A}^*\tilde{\mathbf{W}}\mathbf{A}
\end{equation}\]

The optimized normal operator can then be passed to the solver:

solver = createLinearSolver(CGNR, WA; AHA = AHA, reg = L2Regularization(0.0001), iterations=32);
 x_approx2 = solve!(solver, weights .* b)
16-element Vector{Float64}:
 -0.36107204936377074
  1.3013554093952597
 -0.08426670465279733
 -1.013960287096054
 -0.4630962222598162
 -0.08906954738075112
 -1.0165494748064565
  1.1634332743979154
 -0.16821219775431212
  0.25253727796842024
  0.47875481510102674
 -0.44131534727830374
  2.258745883409155
  1.128815867556167
 -1.3121973040900439
  2.0339414345243445

Of course it is also possible to optimize a normal operator with other means and pass it to the solver via the AHA keyword argument.

It is also possible to supply only the normal operator to these solvers; however, one then needs to supply $\mathbf{A^*b}$ instead of $\mathbf{b}$, as sketched below.
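
A hedged sketch of this usage follows. Whether a solver accepts nothing in place of A when AHA is supplied is an assumption here and should be checked against the solver's docstring; AHA, WA, weights, and b refer to the example above:

# Construct the solver from the normal operator alone and hand solve! the
# back-projected data A'b. Since A here is the weighted operator WA, that is
# adjoint(WA) * (weights .* b).
solver = createLinearSolver(CGNR, nothing; AHA = AHA, reg = L2Regularization(0.0001), iterations=32)
x_approx3 = solve!(solver, adjoint(WA) * (weights .* b))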


This page was generated using Literate.jl.

Plug-and-Play Regularization

b = A*x;

For the documentation we will just use the identity function as a placeholder for the PnP prior.

model = identity
identity (generic function with 1 method)

In practice, you would replace this with a neural network:

using Flux
 model = Flux.loadmodel!(model, ...)

The model can then be used together with the PnPRegularization term:

reg = PnPRegularization(1.0; model = model, shape = [16]);

Since models often expect a specific input range, we can use the MinMaxTransform to normalize the input:

reg = PnPRegularization(1.0; model = model, shape = [16], input_transform = RegularizedLeastSquares.MinMaxTransform);

Custom input transforms can be implemented by passing something callable as the input_transform keyword argument. For more details see the PnPRegularization documentation.
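
The following sketch is loudly hedged: the ScaleTransform type is invented for this example, and its interface (construction from the input plus transform/inverse_transform methods) is assumed by analogy with MinMaxTransform; consult the PnPRegularization documentation for the authoritative contract.

# Hypothetical transform scaling inputs by the inverse of their largest magnitude.
struct ScaleTransform{T}
  factor::T
end
ScaleTransform(x::AbstractArray) = ScaleTransform(inv(maximum(abs, x)))

# Assumed interface, modeled on MinMaxTransform.
RegularizedLeastSquares.transform(t::ScaleTransform, x) = t.factor .* x
RegularizedLeastSquares.inverse_transform(t::ScaleTransform, x) = x ./ t.factor

reg_custom = PnPRegularization(1.0; model = model, shape = [16], input_transform = ScaleTransform);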

The regularization term can then be used in the solver:

solver = createLinearSolver(Kaczmarz, A; reg = reg, iterations = 32)
 x_approx = solve!(solver, b)
16-element Vector{Float64}:
  0.16317136670084653
 -0.2619900366287182
 -1.1772575027607195
  1.7338538862232973
 -1.2454647840388229
 -0.5715012517558823
  0.7364847843676281
  1.6215605591651427
  1.115314063343094
  1.3251701221056584
  0.9066069527932701
 -0.007474906828228489
 -0.9741295062700638
  0.30400318859360276
  0.01289827728067694
  0.37482475195275455

This page was generated using Literate.jl.

Weighting

P = ProdOp(W, A)
solver = createLinearSolver(Kaczmarz, P; reg = L2Regularization(0.0001), iterations=10)
x_approx2 = solve!(solver, W * b)
isapprox(x_approx, x_approx2)
true

This page was generated using Literate.jl.


Home · RegularizedLeastSquares.jl

RegularizedLeastSquares.jl

Solvers for Linear Inverse Problems using Regularization Techniques

Introduction

RegularizedLeastSquares.jl is a Julia package for solving large linear systems using various types of algorithms. Ill-conditioned problems arise in many areas of practical interest, and regularization techniques and nonlinear problem formulations are often used to solve them. This package provides implementations for a variety of solvers used in areas such as MPI and MRI. In particular, it serves as the optimization backend of the Julia packages MPIReco.jl and MRIReco.jl.

The implemented methods range from the $l^2_2$-regularized CGNR method to more general optimizers such as the Alternating Direction Method of Multipliers (ADMM) or the Split-Bregman method.

For convenience, implementations of popular regularizers, such as $l_1$-regularization and TV regularization, are provided. At the same time, hand-crafted regularizers can be used quite easily.

Depending on the problem, it can become infeasible to store the full system matrix. For such cases, RegularizedLeastSquares.jl allows the use of matrix-free operators. Such operators can be realized using the interface provided by the package LinearOperators.jl. Other interfaces can be used as well, as long as the product *(A,x) and the adjoint adjoint(A) are provided. A number of common matrix-free operators are provided by the package LinearOperatorCollection.jl.

Features

  • Variety of optimization algorithms optimized for least squares problems
  • Support for matrix-free operators
  • GPU support

Usage

See also

Packages:

Organizations:

Solvers

function iterate(solver::Solver, state::VarianteState)
  # Custom iteration
end
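
To make the skeleton concrete, here is one hedged way it is typically fleshed out. The helpers done and step! are hypothetical stand-ins for a solver's convergence check and update rule; the return convention follows Julia's iteration protocol:

function iterate(solver::Solver, state::VarianteState)
  # Returning nothing terminates the iteration once the convergence criterion is met.
  done(solver, state) && return nothing
  step!(solver, state)    # hypothetical helper performing one update of the state
  return state, state     # (item, next state) per Julia's iteration protocol
end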