Regularization Terms · RegularizedLeastSquares.jl

API for Regularizers

This page contains documentation of the public API of RegularizedLeastSquares.jl. In the Julia REPL, one can access this documentation by entering help mode with ?

RegularizedLeastSquares.L21RegularizationType
L21Regularization

Regularization term implementing the proximal map for group-soft-thresholding.

Arguments

  • λ - regularization parameter

Keywords

  • slices=1 - number of elements per group
source
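
A minimal usage sketch (the λ value, array size, and slices value are illustrative):

using RegularizedLeastSquares, LinearAlgebra

reg = L21Regularization(0.01; slices = 4)  # four elements per group
x = randn(ComplexF64, 128)
prox!(reg, x)    # in-place group soft-thresholding
norm(reg, x)     # value of the penalty at x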
RegularizedLeastSquares.LLRRegularizationType
LLRRegularization

Regularization term implementing the proximal map for locally low rank (LLR) regularization using singular-value-thresholding.

Arguments

  • λ - regularization parameter

Keywords

  • shape::Tuple{Int} - dimensions of the image
  • blockSize::Tuple{Int}=(2,2) - size of patches to perform singular value thresholding on
  • randshift::Bool=true - randomly shifts the patches to ensure translation invariance
  • fullyOverlapping::Bool=false - choose between fully overlapping and non-overlapping blocks
source
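
A hedged sketch for a 32×32 image, applied to the vectorized data (sizes and λ are illustrative):

using RegularizedLeastSquares

reg = LLRRegularization(0.05; shape = (32, 32), blockSize = (4, 4))
x = vec(randn(ComplexF32, 32, 32))
prox!(reg, x)    # singular value thresholding on local patches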
RegularizedLeastSquares.NuclearRegularizationType
NuclearRegularization

Regularization term implementing the proximal map for singular value soft-thresholding.

Arguments

  • λ - regularization parameter

Keywords

  • svtShape::NTuple - size of the underlying matrix
source
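
A hedged sketch (matrix size and λ are illustrative):

using RegularizedLeastSquares

reg = NuclearRegularization(0.1; svtShape = (16, 16))
x = vec(randn(ComplexF64, 16, 16))
prox!(reg, x)    # soft-thresholds the singular values of reshape(x, 16, 16)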
RegularizedLeastSquares.TVRegularizationType
TVRegularization

Regularization term implementing the proximal map for TV regularization. Calculated with the Condat algorithm if the TV is calculated only along one real-valued dimension and with the Fast Gradient Projection algorithm otherwise.

Reference for the Condat algorithm: https://lcondat.github.io/publis/Condat-fast_TV-SPL-2013.pdf

Reference for the FGP algorithm: A. Beck and T. Teboulle, "Fast Gradient-Based Algorithms for Constrained Total Variation Image Denoising and Deblurring Problems", IEEE Trans. Image Process. 18(11), 2009

Arguments

  • λ::T - regularization parameter

Keywords

  • shape::NTuple - size of the underlying image
  • dims - Dimension to perform the TV along. If Integer, the Condat algorithm is called, and the FGP algorithm otherwise.
  • iterationsTV=20 - number of FGP iterations
source
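
A sketch of both code paths described above (λ and shapes are illustrative):

using RegularizedLeastSquares

# dims is an Integer and the data is real, so Condat's direct 1D algorithm is used
reg1d = TVRegularization(0.02; shape = (64,), dims = 1)
prox!(reg1d, rand(64))

# TV along two dimensions, so the FGP algorithm is used
reg2d = TVRegularization(0.02; shape = (32, 32), dims = 1:2, iterationsTV = 20)
prox!(reg2d, vec(rand(32, 32)))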

Projection Regularization

Nested Regularization

RegularizedLeastSquares.innerregMethod
innerreg(reg::AbstractNestedRegularization)

return the inner regularization term of reg. Nested regularization terms also implement the iteration interface.

source
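A small sketch (the qualified name is used since innerreg may not be exported):

using RegularizedLeastSquares

core = L1Regularization(0.8)
reg  = MaskedRegularization(core, [true, false, true, false])
RegularizedLeastSquares.innerreg(reg) === core  # unwraps one nesting level
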

Scaled Regularization

Misc. Nested Regularization

RegularizedLeastSquares.MaskedRegularizationType
MaskedRegularization

Nested regularization term that only applies prox! and norm to elements of x for which the mask is true.

Examples

julia> positive = PositiveRegularization();

julia> masked = MaskedRegularization(positive, [true, false, true, false]);

julia> prox!(masked, fill(-1.0, 4))
4-element Vector{Float64}:
  0.0
 -1.0
  0.0
 -1.0

source
RegularizedLeastSquares.TransformedRegularizationType
TransformedRegularization(reg, trafo)

Nested regularization term that applies prox! or norm on z = trafo * x and returns (inplace) x = adjoint(trafo) * z.

Example

julia> core = L1Regularization(0.8)
 L1Regularization{Float64}(0.8)
 
 julia> wop = WaveletOp(Float32, shape = (32,32));
 
 julia> reg = TransformedRegularization(core, wop);
 
julia> prox!(reg, randn(32*32)); # Apply soft-thresholding in Wavelet domain
source
RegularizedLeastSquares.PlugAndPlayRegularizationType
    PlugAndPlayRegularization

Regularization term implementing a given plug-and-play proximal mapping. The actual regularization term is indirectly defined by the learned proximal mapping and as such there is no norm implemented.

Arguments

  • λ - regularization parameter

Keywords

  • model - model applied to the image
  • shape - dimensions of the image
  • input_transform - transform of image before model
source
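
A hedged sketch following the argument list above; the clamping model is a toy placeholder for a learned denoiser, not package API:

using RegularizedLeastSquares

denoiser = x -> clamp.(x, 0, 1)   # toy stand-in for a learned model
reg = PlugAndPlayRegularization(0.1; model = denoiser, shape = (32, 32))
prox!(reg, vec(rand(Float32, 32, 32)))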

Miscellaneous Functions

RegularizedLeastSquares.prox!Method
prox!(reg::AbstractParameterizedRegularization, x)

perform the proximal mapping defined by reg on x. Uses the regularization parameter defined for reg.

source
RegularizedLeastSquares.prox!Method
prox!(regType::Type{<:AbstractParameterizedRegularization}, x, λ; kwargs...)

construct a regularization term of type regType with given λ and kwargs and apply its prox! on x

source
LinearAlgebra.normMethod
norm(reg::AbstractParameterizedRegularization, x)

returns the value of the reg regularization term on x. Uses the regularization parameter defined for reg.

source
LinearAlgebra.normMethod
norm(regType::Type{<:AbstractParameterizedRegularization}, x, λ; kwargs...)

construct a regularization term of type regType with given λ and kwargs and apply its norm on x

source
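
The two calling conventions documented above, side by side (values illustrative):

using RegularizedLeastSquares, LinearAlgebra

x = [1.0, -0.2, 3.0]
reg = L1Regularization(0.5)
prox!(reg, copy(x))   # method on an instantiated term
norm(reg, x)          # penalty value using the term's own λ

prox!(L1Regularization, copy(x), 0.5)   # construct-and-apply convenience form
norm(L1Regularization, x, 0.5)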
Solvers

end
plot_trace (generic function with 1 method)

julia> x_approx = solve!(S, b; callbacks = [conv, plot_trace]);

The keyword callbacks allows you to pass a (vector of) callable objects that take the arguments solver and iteration and print, store, or plot intermediate results.

See also StoreSolutionCallback, StoreConvergenceCallback, CompareSolutionCallback for a number of provided callback options.

source
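A minimal sketch of the callback keyword (S and b as in the fragment above; the printing callback is illustrative):

conv = StoreConvergenceCallback()
print_iter = (solver, iteration) -> println("finished iteration ", iteration)
x_approx = solve!(S, b; callbacks = [conv, print_iter])
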
RegularizedLeastSquares.init!Method
init!(solver::AbstractLinearSolver, b; kwargs...)

Prepare the solver for iteration based on the given data vector b and kwargs.

source
RegularizedLeastSquares.init!Method
init!(solver::AbstractLinearSolver, state::AbstractSolverState, b::AbstractMatrix; scheduler = SequentialState, kwargs...)

Initialize the solver with each column of b and pass the corresponding states to the scheduler.

source
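These functions back the solvers' iteration interface; a hedged sketch of driving the iterations manually (solver and b assumed given):

init!(solver, b)
for (iteration, _) in enumerate(solver)
    # inspect solverstate(solver) or solversolution(solver) here
end
x_approx = solversolution(solver)
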

ADMM

RegularizedLeastSquares.ADMMType
ADMM(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)
ADMM( ; AHA = ,     precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, vary_rho = :none, iterations = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)

Creates an ADMM object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • precon - preconditioner for the internal CG algorithm
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • regTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - penalty of the augmented Lagrangian
  • vary_rho::Symbol - vary rho to balance primal and dual feasibility; options :none, :balance, :PnP
  • iterations::Int - maximum number of (outer) ADMM iterations
  • iterationsCG::Int - maximum number of (inner) CG iterations
  • absTol::Real - absolute tolerance for stopping criterion
  • relTol::Real - relative tolerance for stopping criterion
  • tolInner::Real - relative tolerance for CG stopping criterion
  • verbose::Bool - print residual in each iteration

ADMM differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).

See also createLinearSolver, solve!.

source
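
A hedged sketch of the TV setup described in the note above (operator sizes, λ, and rho are illustrative):

using RegularizedLeastSquares

shape = (32, 32)
A = randn(512, prod(shape))   # illustrative forward operator
regTrafo = RegularizedLeastSquares.GradientOp(Float64; shape = shape)
solver = createLinearSolver(ADMM, A; reg = L1Regularization(0.01),
                            regTrafo = regTrafo, rho = 0.1, iterations = 20)
x_approx = solve!(solver, A * vec(rand(shape...)))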

CGNR

RegularizedLeastSquares.CGNRType
CGNR(A; AHA = A' * A, reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), iterations = 10, relTol = eps(real(eltype(AHA))))
CGNR( ; AHA = ,       reg = L2Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), iterations = 10, relTol = eps(real(eltype(AHA))))

creates a CGNR object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • iterations::Int - maximum number of iterations
  • relTol::Real - tolerance for stopping criterion

See also createLinearSolver, solve!.

source
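
A compact usage sketch (dimensions and λ are illustrative):

using RegularizedLeastSquares

A = randn(ComplexF32, 64, 32)
b = A * randn(ComplexF32, 32)
solver = createLinearSolver(CGNR, A; reg = L2Regularization(0.001f0), iterations = 32)
x_approx = solve!(solver, b)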

Kaczmarz

RegularizedLeastSquares.KaczmarzType
Kaczmarz(A; reg = L2Regularization(0), normalizeReg = NoNormalization(), randomized=false, subMatrixFraction=0.15, shuffleRows=false, seed=1234, iterations=10)

Creates a Kaczmarz object for the forward operator A.

Required Arguments

  • A - forward operator

Optional Keyword Arguments

  • reg::AbstractParameterizedRegularization - regularization term
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • randomized::Bool - randomize the Kaczmarz algorithm
  • subMatrixFraction::Real - fraction of rows used in the randomized Kaczmarz algorithm
  • shuffleRows::Bool - shuffle the order of the rows before iterating
  • seed::Int - seed for randomized algorithm
  • iterations::Int - number of iterations

See also createLinearSolver, solve!.

source
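
A sketch of the randomized variant (fraction and iteration count illustrative; see also the efficient Kaczmarz how-to further below):

using RegularizedLeastSquares

A = randn(256, 256)
b = A * randn(256)
solver = createLinearSolver(Kaczmarz, A; reg = L2Regularization(0.0001),
                            randomized = true, subMatrixFraction = 0.2, iterations = 100)
x_approx = solve!(solver, b)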

FISTA

RegularizedLeastSquares.FISTAType
FISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), iterations=50, verbose = false, rho = 0.95 / power_iterations(AHA), theta=1, relTol=eps(real(eltype(AHA))), restart = :none)
FISTA( ; AHA=,     reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), iterations=50, verbose = false, rho = 0.95 / power_iterations(AHA), theta=1, relTol=eps(real(eltype(AHA))), restart = :none)

creates a FISTA object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • precon - preconditioner for the internal CG algorithm
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - step size for gradient step; the default is 0.95 / max_eigenvalue as determined with power iterations.
  • theta::Real - parameter for predictor-corrector step
  • relTol::Real - tolerance for stopping criterion
  • iterations::Int - maximum number of iterations
  • restart::Symbol - :none, :gradient options for restarting
  • verbose::Bool - print residual in each iteration

See also createLinearSolver, solve!.

source
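
A sketch with gradient-based restart enabled (assuming a forward operator A and measurement b as in the sketches above; λ illustrative):

using RegularizedLeastSquares

solver = createLinearSolver(FISTA, A; reg = L1Regularization(0.01),
                            iterations = 50, restart = :gradient)
x_approx = solve!(solver, b)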

OptISTA

RegularizedLeastSquares.OptISTAType
OptISTA(A; AHA=A'*A, reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), iterations=50, verbose = false, rho=0.95 / power_iterations(AHA), theta=1, relTol=eps(real(eltype(AHA))))
OptISTA( ; AHA=,     reg=L1Regularization(zero(real(eltype(AHA)))), normalizeReg=NoNormalization(), iterations=50, verbose = false, rho=0.95 / power_iterations(AHA), theta=1, relTol=eps(real(eltype(AHA))))

creates an OptISTA object for the forward operator A or normal operator AHA. OptISTA has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores 2 extra intermediate variables the size of the image compared to FISTA.

Reference:

  • Uijeong Jang, Shuvomoy Das Gupta, Ernest K. Ryu, "Computer-Assisted Design of Accelerated Composite Optimization Methods: OptISTA," arXiv:2305.15704, 2023, [https://arxiv.org/abs/2305.15704]

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • reg::AbstractParameterizedRegularization - regularization term
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - step size for gradient step; the default is 0.95 / max_eigenvalue as determined with power iterations.
  • theta::Real - parameter for predictor-corrector step
  • relTol::Real - tolerance for stopping criterion
  • iterations::Int - maximum number of iterations
  • verbose::Bool - print residual in each iteration

See also createLinearSolver, solve!.

source

POGM

RegularizedLeastSquares.POGMType
POGM(A; AHA = A'*A, reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), iterations = 50, verbose = false, rho = 0.95 / power_iterations(AHA), theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), restart = :none)
POGM( ; AHA = ,     reg = L1Regularization(zero(real(eltype(AHA)))), normalizeReg = NoNormalization(), iterations = 50, verbose = false, rho = 0.95 / power_iterations(AHA), theta = 1, sigma_fac = 1, relTol = eps(real(eltype(AHA))), restart = :none)

Creates a POGM object for the forward operator A or normal operator AHA. POGM has a 2x better worst-case bound than FISTA, but actual performance varies by application. It stores 3 extra intermediate variables the size of the image compared to FISTA. Only gradient restart scheme is implemented for now.

References:

  • A.B. Taylor, J.M. Hendrickx, F. Glineur, "Exact worst-case performance of first-order algorithms for composite convex optimization," Arxiv:1512.07516, 2015, SIAM J. Opt. 2017 [http://doi.org/10.1137/16m108104x]

  • Kim, D., & Fessler, J. A. (2018). Adaptive Restart of the Optimized Gradient Method for Convex Optimization. Journal of Optimization Theory and Applications, 178(1), 240–263. [https://doi.org/10.1007/s10957-018-1287-4]

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • reg::AbstractParameterizedRegularization - regularization term
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - step size for gradient step; the default is 0.95 / max_eigenvalue as determined with power iterations.
  • theta::Real - parameter for predictor-corrector step
  • sigma_fac::Real - parameter for decreasing γ-momentum ∈ [0,1]
  • relTol::Real - tolerance for stopping criterion
  • iterations::Int - maximum number of iterations
  • restart::Symbol - :none, :gradient options for restarting
  • verbose::Bool - print residual in each iteration

See also createLinearSolver, solve!.

source
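
FISTA, OptISTA, and POGM share the same construction interface, so switching between the accelerated variants is a one-word change (A and b as in the sketches above):

for alg in (FISTA, OptISTA, POGM)
    solver = createLinearSolver(alg, A; reg = L1Regularization(0.01), iterations = 50)
    solve!(solver, b)
end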

SplitBregman

RegularizedLeastSquares.SplitBregmanType
SplitBregman(A; AHA = A'*A, precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterations = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)
SplitBregman( ; AHA = ,     precon = Identity(), reg = L1Regularization(zero(real(eltype(AHA)))), regTrafo = opEye(eltype(AHA), size(AHA,1)), normalizeReg = NoNormalization(), rho = 1e-1, iterations = 10, iterationsInner = 10, iterationsCG = 10, absTol = eps(real(eltype(AHA))), relTol = eps(real(eltype(AHA))), tolInner = 1e-5, verbose = false)

Creates a SplitBregman object for the forward operator A or normal operator AHA.

Required Arguments

  • A - forward operator

OR

  • AHA - normal operator (as a keyword argument)

Optional Keyword Arguments

  • AHA - normal operator is optional if A is supplied
  • precon - preconditionner for the internal CG algorithm
  • reg::AbstractParameterizedRegularization - regularization term; can also be a vector of regularization terms
  • regTrafo - transformation to a space in which reg is applied; if reg is a vector, regTrafo has to be a vector of the same length. Use opEye(eltype(AHA), size(AHA,1)) if no transformation is desired.
  • normalizeReg::AbstractRegularizationNormalization - regularization normalization scheme; options are NoNormalization(), MeasurementBasedNormalization(), SystemMatrixBasedNormalization()
  • rho::Real - weights for condition on regularized variables; can also be a vector for multiple regularization terms
  • iterations::Int - maximum number of outer iterations. Set to 1 for unconstrained split Bregman (equivalent to ADMM)
  • iterationsInner::Int - maximum number of inner iterations
  • iterationsCG::Int - maximum number of (inner) CG iterations
  • absTol::Real - absolute tolerance for stopping criterion
  • relTol::Real - relative tolerance for stopping criterion
  • tolInner::Real - relative tolerance for CG stopping criterion
  • verbose::Bool - print residual in each iteration

This algorithm solves the constrained problem (Eq. (4.7) in Tom Goldstein and Stanley Osher), i.e. ||R(x)||₁ such that ||Ax - b||₂² < σ². In order to solve the unconstrained problem (Eq. (4.8) in Tom Goldstein and Stanley Osher), i.e. ||Ax - b||₂² + λ ||R(x)||₁, you can either set iterations=1 or use ADMM instead, which is equivalent (iterations=1 in SplitBregman is implied in ADMM, and the SplitBregman variable iterationsInner is simply called iterations in ADMM).

Like ADMM, SplitBregman differs from ISTA-type algorithms in the sense that the proximal operation is applied separately from the transformation to the space in which the penalty is applied. This is reflected by the interface which has reg and regTrafo as separate arguments. E.g., for a TV penalty, you should NOT set reg=TVRegularization, but instead use reg=L1Regularization(λ), regTrafo=RegularizedLeastSquares.GradientOp(Float64; shape=(Nx,Ny,Nz)).

See also createLinearSolver, solve!.

source
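
A hedged sketch mirroring the ADMM example, with the outer/inner iteration split documented above (A, b, and all values illustrative):

using RegularizedLeastSquares

regTrafo = RegularizedLeastSquares.GradientOp(Float64; shape = (32, 32))
solver = createLinearSolver(SplitBregman, A; reg = L1Regularization(0.01),
                            regTrafo = regTrafo, rho = 0.1,
                            iterations = 5, iterationsInner = 10, iterationsCG = 10)
x_approx = solve!(solver, b)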

Miscellaneous

RegularizedLeastSquares.solverstateFunction
solverstate(solver::AbstractLinearSolver)

Return the current state of the solver

source
RegularizedLeastSquares.solversolutionFunction
solversolution(solver::AbstractLinearSolver)

Return the current solution of the solver

source
solversolution(state::AbstractSolverState)

Return the current solution of the solver's state

source
RegularizedLeastSquares.solverconvergenceFunction
solverconvergence(solver::AbstractLinearSolver)

Return a named tuple of the solver's current convergence metrics

source
RegularizedLeastSquares.StoreSolutionCallbackType
StoreSolutionCallback(T)

Callback that accumulates the solver's solution per iteration. Results are stored in the solutions field.

source
RegularizedLeastSquares.StoreConvergenceCallbackType
StoreConvergenceCallback()

Callback that accumulates the solver's convergence metrics per iteration. Results are stored in the convMeas field.

source
RegularizedLeastSquares.CompareSolutionCallbackType
CompareSolutionCallback(ref, cmp)

Callback that compares the solver's current solution with the given reference via cmp(ref, solution) per iteration. Results are stored in the results field.

source
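A sketch combining the callback types above in one solve (solver and b assumed given):

cb_sol  = StoreSolutionCallback(Float64)
cb_conv = StoreConvergenceCallback()
solve!(solver, b; callbacks = [cb_sol, cb_conv])
cb_sol.solutions   # one stored solution per iteration
cb_conv.convMeas   # stored convergence metrics per iteration
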
RegularizedLeastSquares.linearSolverListFunction

Return a list of all available linear solvers

source
RegularizedLeastSquares.createLinearSolverFunction
createLinearSolver(solver::AbstractLinearSolver, A; kargs...)

This method creates a solver. The supported solvers are methods typically used for solving regularized linear systems. All solvers return an approximate solution to Ax = b.

TODO: give a hint what solvers are available

source
RegularizedLeastSquares.applicableSolverListFunction
applicable(args...)

list all solvers that are applicable to the given arguments. Arguments are the same as for isapplicable without the solver type.

See also isapplicable, linearSolverList.

source
RegularizedLeastSquares.isapplicableFunction
isapplicable(solverType::Type{<:AbstractLinearSolver}, A, x, reg)

return true if a solver of type solverType is applicable to system matrix A, data x and regularization terms reg.

source
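A sketch of querying the solver lists (qualified names are used since these helpers may not be exported):

using RegularizedLeastSquares

A = randn(16, 16); x = randn(16); reg = [L2Regularization(0.1)]
RegularizedLeastSquares.linearSolverList()                   # all available solvers
RegularizedLeastSquares.isapplicable(Kaczmarz, A, x, reg)    # true or false
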
Compressed Sensing

size(image)
(256, 256)

This produces a 256x256 image of a Shepp-Logan phantom.

In this example, we consider a problem in which we randomly sample a third of the pixels in the image. Such a problem and the corresponding measurement can be constructed with the packages LinearOperatorCollection and Random:

We first randomly shuffle the indices of the image and then select the first third of the indices to sample.

using Random, LinearOperatorCollection
 randomIndices = shuffle(eachindex(image))
 sampledIndices = sort(randomIndices[1:div(end, 3)])
21845-element Vector{Int64}:
     1
     3
     4
     5
     7
    12
    13
    14
    20
    26
     ⋮
 65519
 65521
 65522
 65524
 65528
 65529
 65532
 65535
 65536

Afterwards we build a sampling operator which samples the image at the selected indices.

A = SamplingOp(eltype(image), pattern = sampledIndices , shape = size(image));

Then we apply the sampling operator to the vectorized image to obtain the sampled measurement vector

b = A*vec(image);

To visualize our image we can use CairoMakie:

using CairoMakie
 function plot_image(figPos, img; title = "", width = 150, height = 150)
   ax = CairoMakie.Axis(figPos; yreversed=true, title, width, height)
 samplingMask[sampledIndices] .= true
 plot_image(fig[1,2], image .* samplingMask, title = "Sampled Image")
 resize_to_layout!(fig)
fig
Example block output

As we can see in the right image, only a third of the pixels are sampled. The goal of the inverse problem is to recover the original image from this measurement vector.

Solving the Inverse Problem

To recover the image from the measurement vector, we solve the TV-regularized least squares problem:

\[\begin{equation}
\underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \lambda\vert\vert\mathbf{x}\vert\vert_{\text{TV}} .
\end{equation}\]

For this purpose we build a TV regularizer with regularization parameter $λ=0.01$

using RegularizedLeastSquares
 reg = TVRegularization(0.01; shape=size(image));

We will use the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) to solve our inverse problem. Thus, we build the corresponding solver

solver = createLinearSolver(FISTA, A; reg=reg, iterations=20);

and apply it to our measurement vector

img_approx = solve!(solver,b)
65536-element Vector{Float32}:
 5.479949f-19
 1.0197881f-18
 2.3187864f-18
 5.21343f-18
 1.11894665f-17
 2.3435446f-17
 4.9866213f-17
 1.07909926f-16
 2.3241126f-16
 5.017118f-16
 ⋮
 4.608231f-16
 2.1284327f-16
 9.824584f-17
 4.5602487f-17
 2.132212f-17
 1.01756814f-17
 4.889565f-18
 2.3502145f-18
 1.3239986f-18

To visualize the reconstructed image, we need to reshape the result vector to the correct shape. Afterwards we can use CairoMakie again:

img_approx = reshape(img_approx,size(image));
 plot_image(fig[1,3], img_approx, title = "Reconstructed Image")
 resize_to_layout!(fig)
fig
Example block output

This page was generated using Literate.jl.

Callbacks

    global idx += 1
  end
end
65536-element Vector{Float32}:
 1.3395445f-18
 2.2707473f-18
 4.587327f-18
 9.4164355f-18
 1.8923826f-17
 3.7065722f-17
 6.92436f-17
 1.2172425f-16
 2.1890205f-16
 4.555531f-16
 ⋮
 5.26216f-16
 2.5074618f-16
 1.1923106f-16
 5.605712f-17
 2.562969f-17
 1.1559917f-17
 5.309755f-18
 2.5478933f-18
 1.452134f-18

In the callback we have to copy the solution, as the solver will update it in place. As is explained in the solver section, each solver features fields that are intended to be immutable during a solve! call and a state that is modified in each iteration. Depending on the solver's parameters and the measurement input, the state can differ in its fields and their type. Ideally, one tries to avoid accessing the state directly and uses the provided functions to access the state.

The resulting figure shows the reconstructed image after 0, 4, 8, 12, 16 and 20 iterations:

resize_to_layout!(fig)
fig
Example block output

This page was generated using Literate.jl.

Efficient Kaczmarz

b = A*x;

The dot_with_matrix_row function calculates the dot product between a row of A and the current approximate solution x:

row = 1
 isapprox(RegularizedLeastSquares.dot_with_matrix_row(A, x, row), sum(A[row, :] .* x))
true

Since dense arrays in Julia are stored in column-major order, such a row-based operation is quite inefficient. A workaround is to transpose the matrix and then pass it to the Kaczmarz solver.

At = collect(transpose(A))
 A_eff = transpose(At)
256×256 transpose(::Matrix{Float64}) with eltype Float64:
 -0.833843   -0.446591    1.95904   …  -0.971519    0.937336    1.03061
  0.214451    0.476693   -1.11859      -1.89836    -1.06377     0.078259
  0.679967   -0.392655   -0.547967      0.114042    0.883773   -0.771914
  0.36605    -0.847445    1.83063       0.316194    0.449842    0.938961
 -0.519502   -0.157082   -1.23943       0.356067    0.226452   -1.62795
  0.955959   -1.31348    -0.639579  …  -0.343428   -0.431315   -1.87556
  0.0401495  -0.461557    0.871555      0.388586    2.86311    -0.329764
 -0.634131   -0.538432    1.04536       0.524279    0.0496082  -0.669369
  0.385215    1.40324    -0.765245      1.64382     1.12077     0.163061
 -1.26645    -2.37651     0.412359     -0.399202    0.966635   -0.0963301
  ⋮                                 ⋱                           ⋮
  1.05448    -0.306465    0.521981     -0.0769038  -0.992576   -0.172011
 -0.661873    0.0892155  -0.868629     -0.227159   -0.510807   -0.185506
  0.909661    0.648055   -0.10962       0.474955    1.46412    -0.86389
 -0.349312    1.46133    -0.601529  …   0.273865   -1.17989     1.29904
 -0.990919    1.43603     1.18969      -0.798069    0.487757   -0.419157
 -0.902652    0.425434   -0.301977     -0.928797    0.696109   -0.363101
 -0.610555   -0.688722    0.43331       0.256582    1.64709    -0.234531
 -0.878338   -0.215431   -0.600092     -1.22417     0.871353    0.539884
  0.541702    1.79816     0.315094  …   0.481882   -0.191007    0.284455

Note that the transpose function can return a lazy transpose object, so we first collect the transpose into a dense matrix. Then we transpose it again to get the efficient representation of the matrix.

We can compare the performance using the BenchmarkTools.jl package. First for the original matrix:

using BenchmarkTools
 solver = createLinearSolver(Kaczmarz, A; reg = L2Regularization(0.0001), iterations=100)
 @benchmark solve!(solver, b) samples = 100
BenchmarkTools.Trial: 100 samples with 1 evaluation.
 Range (min … max):  26.858 ms … 27.106 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     26.941 ms               GC (median):    0.00%
 Time  (mean ± σ):   26.945 ms ± 33.614 μs   GC (mean ± σ):  0.00% ± 0.00%

                      ▃  █▃▃
  ▃▁▁▁▁▁▁▃▅▃▁▁▁▃▃▃▆▃▅██▆████▇█▆▃▁▁▁▃▃▁▁▃▃▅▁▁▁▁▁▁▁▁▁▁▁▃▁▁▃▁▃ ▃
  26.9 ms         Histogram: frequency by time        27.1 ms <
 
  Memory estimate: 17.31 KiB, allocs estimate: 504.

And then for the efficient matrix:

solver_eff = createLinearSolver(Kaczmarz, A_eff; reg = L2Regularization(0.0001), iterations=100)
 @benchmark solve!(solver_eff, b) samples = 100
BenchmarkTools.Trial: 100 samples with 1 evaluation.
 Range (min … max):  1.848 ms … 1.928 ms   GC (min … max): 0.00% … 0.00%
 Time  (median):     1.862 ms              GC (median):    0.00%
 Time  (mean ± σ):   1.867 ms ± 17.352 μs  GC (mean ± σ):  0.00% ± 0.00%

  ▁█  █▃ ▁▁▃   ▁▃
  ██▆▆██▆███▇▆▄▇█▄██▁▇▇▇▁▁▁▄▄▁▁▄▄▄▁▄▁▁▄▆▁▄▁▄▁▄▁▁▁▄▁▁▁▆▁▁▁▄ ▄
  1.85 ms        Histogram: frequency by time        1.92 ms <
 
  Memory estimate: 17.34 KiB, allocs estimate: 506.

We can also combine the efficient matrix with a weighting matrix, as is shown in the Weighting example.

To be used with the Kaczmarz solver, custom operators need to implement the dot_with_matrix_row function. Ideally, such an implementation is allocation-free.
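As a rough sketch for a hypothetical real-valued operator type RowMajorOp (the type and its field are illustrative assumptions; only dot_with_matrix_row itself is the package's interface, and complex eltypes would additionally conjugate the row entries):

struct RowMajorOp{T <: Real}
  rows::Matrix{T}   # hypothetical storage: row k of the operator is the column rows[:, k]
end

function RegularizedLeastSquares.dot_with_matrix_row(op::RowMajorOp{T}, x::AbstractVector{T}, k::Integer) where T
  res = zero(T)
  @inbounds @simd for i in eachindex(x)
    res += op.rows[i, k] * x[i]   # contiguous column access, no temporary allocations
  end
  return res
end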


This page was generated using Literate.jl.

diff --git a/dev/generated/howto/gpu_acceleration/c54799e4.png b/dev/generated/howto/gpu_acceleration/c54799e4.png new file mode 100644 index 0000000..9ba2ddc Binary files /dev/null and b/dev/generated/howto/gpu_acceleration/c54799e4.png differ diff --git a/dev/generated/howto/gpu_acceleration/d4bec6bc.png b/dev/generated/howto/gpu_acceleration/d4bec6bc.png deleted file mode 100644 index 6d922e3..0000000 Binary files a/dev/generated/howto/gpu_acceleration/d4bec6bc.png and /dev/null differ diff --git a/dev/generated/howto/gpu_acceleration/index.html b/dev/generated/howto/gpu_acceleration/index.html index fef5bdb..ebdc9fa 100644 --- a/dev/generated/howto/gpu_acceleration/index.html +++ b/dev/generated/howto/gpu_acceleration/index.html @@ -6,22 +6,22 @@ x = gpu(rand(Float32, 16)) b = A*x;

Solvers adapt their states based on the type of the given measurement vector. This means that the solver will automatically switch to GPU acceleration if a GPU array is passed as the measurement vector.

solver = createLinearSolver(CGNR, A; reg = L2Regularization(0.0001), iterations=32);
 x_approx = solve!(solver, b)
16-element JLArrays.JLArray{Float32, 1}:
+ 0.58974725
+ 0.73969376
+ 0.015459145
+ 0.53007716
+ 0.09761662
+ 0.75303143
+ 0.88292795
+ 0.037307557
+ 0.46172315
+ 0.1712186
+ 0.71143055
+ 0.006434502
+ 0.6263704
+ 0.4439913
+ 0.7746079
+ 0.56623405

This adaptation does not include the operator, so if we want to compare with the CPU result, we need to construct a new solver with a CPU operator.

solver = createLinearSolver(CGNR, Array(A); reg = L2Regularization(0.0001), iterations=32);
 x_cpu = solve!(solver, Array(b))
 isapprox(Array(x_approx), x_cpu)
true

Matrix-Free Operators

A special case is the usage of matrix-free operators. Since these operators do not have a concrete matrix representation, their GPU support depends on their implementation. Since not all multiplications within a solver iteration happen in-place, the operator also needs to support the * operation and construct an appropriate result vector. For matrix-free operators based on LinearOperators.jl, this can be achieved by implementing the LinearOperators.storage_type method.
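A sketch of the latter, with the operator type and its (omitted) fields as illustrative assumptions:

using LinearOperators

struct MyMatrixFreeOp{T, V <: AbstractVector{T}} <: LinearOperators.AbstractLinearOperator{T}
  # operator internals omitted in this sketch
end

# Declare which vector type products with this operator should produce,
# e.g. a GPU array type, so solvers can allocate matching result vectors:
LinearOperators.storage_type(op::MyMatrixFreeOp{T, V}) where {T, V} = V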

In the following, we will take another look at the CS example and execute it on the GPU. Note that for the JLArray example we chose a small phantom, since the JLArray implementation is not optimized for performance:

using ImagePhantoms, ImageGeoms
 N = 32
@@ -35,26 +35,26 @@
 reg = TVRegularization(0.01; shape=size(image))
 solver = createLinearSolver(FISTA, A; reg=reg, iterations=20)
 img_approx = solve!(solver,b);

To visualize the reconstructed image, we need to reshape the result vector to the correct shape and convert it to a CPU array:

img_approx = reshape(Array(img_approx),size(image))
32×32 Matrix{Float32}:
+ 0.00125885  0.00127508  0.00139172  0.00163228  …  0.000828707  0.000790568
+ 0.00150161  0.00154427  0.0016958   0.00196523     0.000901944  0.000835158
+ 0.00198645  0.00206327  0.00226334  0.00260021     0.00103522   0.000932892
+ 0.00277589  0.00286305  0.0031088   0.00359616     0.00124094   0.00110355
+ 0.00398631  0.00407219  0.00440135  0.0051339      0.00159244   0.0013899
+ 0.00566059  0.00582393  0.0063275   0.00722526  …  0.00225012   0.00189464
+ 0.00755877  0.00815058  0.00866721  0.0093824      0.00340907   0.00280875
+ 0.0108847   0.0108183   0.0106889   0.0111873      0.00506914   0.0043712
+ 0.0138547   0.0131071   0.0116323   0.0120399      0.00694845   0.00654249
+ 0.0168558   0.0148462   0.025235    0.0280806      0.00901083   0.00912052
+ ⋮                                               ⋱  ⋮            
+ 0.01093     0.0114882   0.0110311   0.0182192      0.00756706   0.0067902
+ 0.00811898  0.00883295  0.00926203  0.00809161     0.00626245   0.00552272
+ 0.00610862  0.006618    0.00738621  0.00770877  …  0.00511992   0.00456125
+ 0.00480505  0.00506648  0.00565302  0.00641597     0.00403086   0.00371571
+ 0.00399281  0.00413277  0.00447888  0.00504819     0.003093     0.00293022
+ 0.0034413   0.00355579  0.00379588  0.0041462      0.0023614    0.00225731
+ 0.00303429  0.00315307  0.00336581  0.00362357     0.00183862   0.00175991
+ 0.00282224  0.00292535  0.00310514  0.00331119  …  0.00149367   0.0014381
+ 0.00277154  0.0028497   0.00299082  0.00315615     0.0013162    0.00128073

We will again use CairoMakie for visualization:

using CairoMakie
 function plot_image(figPos, img; title = "", width = 150, height = 150)
   ax = CairoMakie.Axis(figPos; yreversed=true, title, width, height)
   hidedecorations!(ax)
@@ -67,4 +67,4 @@
 plot_image(fig[1,2], image .* samplingMask, title = "Sampled Image")
 plot_image(fig[1,3], img_approx, title = "Reconstructed Image")
 resize_to_layout!(fig)
+fig
Example block output

This page was generated using Literate.jl.

diff --git a/dev/generated/howto/multi_threading/index.html b/dev/generated/howto/multi_threading/index.html index 1ee0718..bef6938 100644 --- a/dev/generated/howto/multi_threading/index.html +++ b/dev/generated/howto/multi_threading/index.html @@ -10,7 +10,7 @@ xs_approx[i] = solve!(solver, bs[i]) end

Operator Based Multi-Threading

This type of multi-threading involves linear operators or proximal maps that can be implemented in parallel. Examples of this include the proximal map of the TV regularization term, which is based on the multi-threaded GradientOp from LinearOperatorCollection. GPU acceleration also falls under this approach, see GPU Acceleration for more information.
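For example, the TV proximal map can be applied on its own; with Julia started on multiple threads (e.g. julia --threads=auto), its internal gradient computations run in parallel. A minimal sketch, using the three-argument prox! form:

using RegularizedLeastSquares
reg = TVRegularization(0.01; shape = (32, 32))
x = rand(32 * 32)
prox!(reg, x, 0.01)   # multi-threaded TV proximal map, applied in-place to x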

Measurement Based Multi-Threading

This level of multi-threading applies the same solver (and its parameters) to multiple measurement vectors, i.e. a measurement matrix B. This is useful when multiple measurements can be solved in parallel while reusing the same solver. This approach is not applicable if the operator is stateful.

To use this approach we first build a measurement matrix B and a corresponding solver:

A = first(As)
 B = mapreduce(x -> A*x, hcat, xs)
+solver = createLinearSolver(CGNR, A; iterations=32)
CGNR{Matrix{Float64}, Matrix{Float64}, L2Regularization{Float64}, Vector{Any}}([0.08738742375743858 0.29006270618604657 … 0.6312511004341781 0.8211043457009807; 0.6217012764704901 0.40013117346465765 … 0.99176726177215 0.3826917144063201; … ; 0.7589166471725711 0.9306966230737793 … 0.7497746380751567 0.7859261188072436; 0.13542548720409842 0.18381596295428493 … 0.2160294652478758 0.7626956785980866], [11.748371743140197 7.559301796801621 … 8.177859716477798 9.00873140393033; 7.559301796801621 9.305641182363653 … 7.740738374799199 8.795480384579363; … ; 8.177859716477798 7.740738374799199 … 10.893975558860143 8.523361496462421; 9.00873140393033 8.795480384579363 … 8.523361496462421 11.763175614374632], L2Regularization{Float64}(0.0), Any[], NoNormalization(), 32, RegularizedLeastSquares.CGNRState{Float64, Float64, Vector{Float64}}([3.5e-323, 5.0e-323, 9.4e-323, 1.1e-322, 1.2e-322, 1.24e-322, 1.43e-322, 1.5e-322, 1.53e-322, 1.73e-322, 1.8e-322, 1.9e-322, 1.93e-322, 2.0e-322, 2.1e-322, 2.17e-322], [3.5e-323, 5.0e-323, 9.4e-323, 1.1e-322, 1.2e-322, 1.24e-322, 1.43e-322, 1.5e-322, 1.53e-322, 1.73e-322, 1.8e-322, 1.9e-322, 1.93e-322, 2.0e-322, 2.1e-322, 2.17e-322], [4.4e-323, 6.0e-323, 1.04e-322, 1.2e-322, 1.3e-322, 1.33e-322, 1.53e-322, 1.6e-322, 1.63e-322, 1.83e-322, 1.9e-322, 2.0e-322, 2.03e-322, 2.08e-322, 2.2e-322, 2.27e-322], [4.0e-323, 5.4e-323, 1.0e-322, 1.14e-322, 1.24e-322, 1.3e-322, 1.5e-322, 1.53e-322, 1.6e-322, 1.8e-322, 1.83e-322, 1.93e-322, 2.0e-322, 2.03e-322, 2.17e-322, 2.2e-322], 0.0, 0.0, 0.0, 0, 2.220446049250313e-16, 0.0))

We can then simply pass the measurement matrix to the solver. The result will be the same as if we passed each column of B separately:

x_approx = solve!(solver, B)
 size(x_approx)
(16, 4)

The previous solve! call was still executed sequentially. To execute it in parallel, we have to specify a multi-threaded scheduler as a keyword-argument of the solve! call. RegularizedLeastSquares.jl provides a MultiThreadingState scheduler that can be used for this purpose. This scheduler is based on the Threads.@threads macro:

x_multi = solve!(solver, B; scheduler = MultiThreadingState)
 x_approx == x_multi
true

Custom Scheduling

It is possible to implement custom scheduling. The following code shows how to implement this for the Threads.@spawn macro. Usually one would do this to implement multi-threading with a package such as FLoops.jl or ThreadPools.jl, e.g. for thread pinning:

Since most solvers have convergence criteria, they can finish at different iteration numbers; we track this information with flags.

 mutable struct SpawnState{S, ST <: AbstractSolverState{S}} <: RegularizedLeastSquares.AbstractMatrixSolverState{S}
    states::Vector{ST}
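The full scheduler builds on a state type like the one above. As a rough, hypothetical alternative that skips the state machinery, one can dispatch one task per measurement column with Threads.@spawn (one solver per task, since solvers are stateful):

function solve_spawned(A, B; iterations = 32)
  tasks = map(eachcol(B)) do bi
    Threads.@spawn begin
      s = createLinearSolver(CGNR, A; iterations = iterations)
      solve!(s, Vector(bi))   # copy the column view into a plain Vector
    end
  end
  return reduce(hcat, fetch.(tasks))
end

x_spawn = solve_spawned(A, B)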
diff --git a/dev/generated/howto/normal_operator/index.html b/dev/generated/howto/normal_operator/index.html
index 22a1465..6c273a4 100644
--- a/dev/generated/howto/normal_operator/index.html
+++ b/dev/generated/howto/normal_operator/index.html
@@ -14,22 +14,22 @@
 
 solver = createLinearSolver(CGNR, A; AHA = adjoint(A) * A, reg = L2Regularization(0.0001), iterations=32);
 x_approx = solve!(solver, b)
16-element Vector{Float64}:
+ -0.07159443778413728
+  1.2442958319437942
+ -1.021295347659181
+ -0.7227593802229989
+  0.5701736591131468
+  1.7000651979788393
+ -0.03594255662420675
+ -1.9139239330892601
+  1.5111555272722839
+ -0.7805126463284169
+ -1.3338122349211472
+ -1.4452886499487336
+ -1.2244921758635996
+  0.033749921357981987
+ -0.7393022032861277
+ -1.923626895497195

The normal operator can also be computed using the normalOperator function from LinearOperatorCollection.jl. This is useful if the normal operator is not directly available or shouldn't be stored in memory. This function is opinionated and attempts to optimize the resulting operator for iterative applications. Specifying a custom method for a custom operator allows one to control this optimization.

An example of such an optimization is a matrix-free weighting of $\mathbf{A}$ as shown in the Weighting example:

using LinearOperatorCollection
 weights = rand(32)
 WA = ProdOp(WeightingOp(weights), A)
 AHA = LinearOperatorCollection.normalOperator(WA)
Linear operator
@@ -50,19 +50,19 @@
   \mathbf{A}^*\tilde{\mathbf{W}}\mathbf{A}
 \end{equation}\]

The optimized normal operator can then be passed to the solver:

solver = createLinearSolver(CGNR, WA; AHA = AHA, reg = L2Regularization(0.0001), iterations=32);
 x_approx2 = solve!(solver, weights .* b)
16-element Vector{Float64}:
+ -0.07161546938893568
+  1.2442339469656587
+ -1.0212682074858201
+ -0.7227332642817055
+  0.570167271074179
+  1.6999836863291176
+ -0.03593483995119651
+ -1.9138802486083213
+  1.511036605169281
+ -0.7805003436513653
+ -1.3337332477625856
+ -1.4451925571974404
+ -1.224385439355738
+  0.03374572243324476
+ -0.7393711858345144
+ -1.9236099025579403

Of course it is also possible to optimize a normal operator with other means and pass it to the solver via the AHA keyword argument.

It is also possible to only supply the normal operator to these solvers; however, one then needs to supply $\mathbf{A^*b}$ instead of $\mathbf{b}$.
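A sketch of that variant, with the caveat that passing the system matrix as nothing is an assumption here; consult the solver's constructor for the exact way to omit it:

AHb = adjoint(WA) * (weights .* b)   # supply A^* b in place of b
solver = createLinearSolver(CGNR, nothing; AHA = AHA, reg = L2Regularization(0.0001), iterations = 32)
x_approx3 = solve!(solver, AHb)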


This page was generated using Literate.jl.

diff --git a/dev/generated/howto/plug-and-play/index.html b/dev/generated/howto/plug-and-play/index.html index bd086a9..f5f3203 100644 --- a/dev/generated/howto/plug-and-play/index.html +++ b/dev/generated/howto/plug-and-play/index.html @@ -5,19 +5,19 @@ b = A*x;

For the documentation we will just use the identity function as a placeholder for the PnP prior.

model = identity
identity (generic function with 1 method)

In practice, you would replace this with a neural network:

using Flux
 model = Flux.loadmodel!(model, ...)

The model can then be used together with the PnPRegularization term:

reg = PnPRegularization(1.0; model = model, shape = [16]);

Since models often expect a specific input range, we can use the MinMaxTransform to normalize the input:

reg = PnPRegularization(1.0; model = model, shape = [16], input_transform = RegularizedLeastSquares.MinMaxTransform);

Custom input transforms can be implemented by passing something callable as the input_transform keyword argument. For more details see the PnPRegularization documentation.
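For instance, a hypothetical shift-and-scale transform; the constructor-from-input pattern and the transform/inverse_transform method names below are assumptions modeled on MinMaxTransform, not a documented interface:

struct ShiftScaleTransform{T}
  shift::T
  scale::T
end
ShiftScaleTransform(x::AbstractArray) = ShiftScaleTransform(minimum(x), maximum(x) - minimum(x))

RegularizedLeastSquares.transform(t::ShiftScaleTransform, x) = (x .- t.shift) ./ t.scale
RegularizedLeastSquares.inverse_transform(t::ShiftScaleTransform, x) = x .* t.scale .+ t.shift

reg = PnPRegularization(1.0; model = model, shape = [16], input_transform = ShiftScaleTransform);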

The regularization term can then be used in the solver:

solver = createLinearSolver(Kaczmarz, A; reg = reg, iterations = 32)
 x_approx = solve!(solver, b)
16-element Vector{Float64}:
+ -2.289200924556522
+  0.427379244236632
+  0.35912929503841573
+  1.703029899916221
+ -0.346960790532143
+ -0.6112130086275078
+  1.626356474544742
+  1.1348473573924438
+  0.24385448506375296
+ -0.31118681893618927
+  0.1353377255536592
+ -0.9630979523703582
+ -2.7927411611976445
+  1.3158484851818764
+  0.5848345311084269
+  1.5215449853790237

This page was generated using Literate.jl.

diff --git a/dev/objects.inv b/dev/objects.inv index 7f785a1..e43b0cb 100644 Binary files a/dev/objects.inv and b/dev/objects.inv differ