Merge pull request #92 from JuliaImageRecon/docuUpdate
Documentation Update
nHackel authored Aug 22, 2024
2 parents ce6af00 + 4f551dc commit 7bcd49d
Showing 21 changed files with 794 additions and 181 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -3,5 +3,6 @@
*.jl.mem
docs/build/
docs/site/
docs/src/generated/

Manifest.toml
11 changes: 8 additions & 3 deletions docs/Project.toml
@@ -1,10 +1,15 @@
[deps]
BenchmarkTools = "6e4b80f9-dd63-53aa-95a3-0cdb28fa8baf"
CairoMakie = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
ImageGeoms = "9ee76f2b-840d-4475-b6d6-e485c9297852"
ImagePhantoms = "71a99df6-f52c-4da1-bd2a-69d6f37f3252"
JLArrays = "27aeb0d3-9eb9-45fb-866b-73c2ecf80fcb"
LinearOperatorCollection = "a4a2c56f-fead-462a-a3ab-85921a5f2575"
Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
Literate = "98b081ad-f1c9-55d3-8b20-4c87d4299306"
RadonKA = "86de8297-835b-47df-b249-c04e8db91db5"
RegularizedLeastSquares = "1e9c538a-f78c-5de5-8ffb-0b6dbe892d23"
UnicodePlots = "b8865327-cd53-5732-bb35-84acbb429228"
Wavelets = "29a6e085-ba6d-5f35-a997-948ac2efa89a"

[compat]
Documenter = "1.0, 1.1, 1.2"
Documenter = "1"
34 changes: 29 additions & 5 deletions docs/make.jl
@@ -1,27 +1,51 @@
using Documenter, RegularizedLeastSquares, LinearOperatorCollection, Wavelets
using Documenter, Literate, RegularizedLeastSquares

# Generate examples
OUTPUT_BASE = joinpath(@__DIR__(), "src/generated")
INPUT_BASE = joinpath(@__DIR__(), "src/literate")
for (_, dirs, _) in walkdir(INPUT_BASE)
for dir in dirs
OUTPUT = joinpath(OUTPUT_BASE, dir)
INPUT = joinpath(INPUT_BASE, dir)
for file in filter(f -> endswith(f, ".jl"), readdir(INPUT))
Literate.markdown(joinpath(INPUT, file), OUTPUT)
end
end
end

makedocs(
format = Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://github.com/JuliaImageRecon/RegularizedLeastSquares.jl",
assets=String[],
collapselevel=1,
),
repo="https://github.com/JuliaImageRecon/RegularizedLeastSquares.jl/blob/{commit}{path}#{line}",
modules = [RegularizedLeastSquares],
sitename = "RegularizedLeastSquares.jl",
authors = "Tobias Knopp, Mirco Grosser, Martin Möddel, Niklas Hackelberg, Andrew Mao, Jakob Assländer",
pages = [
"Home" => "index.md",
"Getting Started" => "gettingStarted.md",
"Getting Started" => "generated/examples/getting_started.md",
"Examples" => Any["Compressed Sensing" => "generated/examples/compressed_sensing.md", "Computed Tomography" => "generated/examples/computed_tomography.md"],
"Solvers" => "solvers.md",
"Regularization" => "regularization.md",
"API" => Any["Solvers" => "API/solvers.md",
"Regularization" => "generated/explanations/regularization.md",
"How to" => Any[
"Weighting" => "generated/howto/weighting.md",
"Normal Operator" => "generated/howto/normal_operator.md",
"Multi-Threading" => "generated/howto/multi_threading.md",
"GPU Acceleration" => "generated/howto/gpu_acceleration.md",
"Efficient Kaczmarz" => "generated/howto/efficient_kaczmarz.md",
"Callbacks" => "generated/howto/callbacks.md",
"Plug-and-Play Regularization" => "generated/howto/plug-and-play.md"
],
"API Reference" => Any["Solvers" => "API/solvers.md",
"Regularization Terms" => "API/regularization.md"],

],
pagesonly = true,
checkdocs = :none,
doctest = true,
doctest = false,
doctestfilters = [r"(\d*)\.(\d{4})\d+"]
)

8 changes: 7 additions & 1 deletion docs/src/API/solvers.md
@@ -5,6 +5,9 @@ REPL one can access this documentation by entering the help mode with `?`
## solve!
```@docs
RegularizedLeastSquares.solve!(::AbstractLinearSolver, ::Any)
RegularizedLeastSquares.init!(::AbstractLinearSolver, ::Any)
RegularizedLeastSquares.init!(::AbstractLinearSolver, ::AbstractSolverState, ::AbstractMatrix)
```

## ADMM
@@ -42,8 +45,11 @@ RegularizedLeastSquares.POGM
RegularizedLeastSquares.SplitBregman
```

## Miscellaneous Functions
## Miscellaneous
```@docs
RegularizedLeastSquares.solverstate
RegularizedLeastSquares.solversolution
RegularizedLeastSquares.solverconvergence
RegularizedLeastSquares.StoreSolutionCallback
RegularizedLeastSquares.StoreConvergenceCallback
RegularizedLeastSquares.CompareSolutionCallback
53 changes: 0 additions & 53 deletions docs/src/gettingStarted.md

This file was deleted.

31 changes: 20 additions & 11 deletions docs/src/index.md
@@ -4,25 +4,34 @@

## Introduction

RegularizedLeastSquares.jl is a Julia package for solving large scale linear systems
using different types of algorithms. Ill-conditioned problems arise in many areas of practical interest. To solve these problems, one often resorts to regularization techniques and non-linear problem formulations. This packages provides implementations for a variety of solvers, which are used in fields such as MPI and MRI.
RegularizedLeastSquares.jl is a Julia package for solving large linear systems using various types of algorithms. Ill-conditioned problems arise in many areas of practical interest. Regularization techniques and nonlinear problem formulations are often used to solve these problems. This package provides implementations for a variety of solvers used in areas such as MPI and MRI. In particular, this package serves as the optimization backend of the Julia packages [MPIReco.jl](https://github.com/MagneticParticleImaging/MPIReco.jl) and [MRIReco.jl](https://github.com/MagneticResonanceImaging/MRIReco.jl).

The implemented methods range from the $l_2$-regularized CGNR method to more general optimizers such as the Alternating Direction of Multipliers Method (ADMM) or the Split-Bregman method.
The implemented methods range from the $l^2_2$-regularized CGNR method to more general optimizers such as the Alternating Direction of Multipliers Method (ADMM) or the Split-Bregman method.

For convenience, implementations of popular regularizers, such as $l_1$-regularization and TV regularization, are provided. On the other hand, hand-crafted regularizers can be used quite easily. For this purpose, a `Regularization` object needs to be build. The latter mainly contains the regularization parameter and a function to calculate the proximal map of a given input.
For convenience, implementations of popular regularizers, such as $l_1$-regularization and TV regularization, are provided. On the other hand, hand-crafted regularizers can be used quite easily.
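As a brief, hedged sketch of how such a regularization term plugs into a solver (the `L1Regularization` constructor name is assumed by analogy with the `L2Regularization` term used elsewhere in these docs, and FISTA is one of the solvers that handles non-smooth terms via their proximal maps):

```julia
using RegularizedLeastSquares

# Toy problem: random operator and measurement vector.
A = rand(32, 16)
b = A * rand(16)

# Assumed constructor: an l1 term with regularization parameter λ = 0.01.
reg = L1Regularization(0.01)

# Build a FISTA solver with the chosen regularizer and run it.
solver = createLinearSolver(FISTA, A; reg = reg, iterations = 20)
x_approx = solve!(solver, b)
```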

Depending on the problem, it can become infeasible to store the full system matrix. For this purpose, RegularizedLeastSquares.jl allows for the use of matrix-free operators. Such operators can be realized using the interface provided by the package [LinearOperators.jl](https://github.com/JuliaSmoothOptimizers/LinearOperators.jl).
Other interfaces can be used as well, as long as the product `*(A,x)` and the adjoint `adjoint(A)` are provided. A number of common matrix-free operators are provided by the package [LinearOperatorCollection.jl](https://github.com/JuliaImageRecon/LinearOperatorCollection.jl).
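To illustrate this interface, here is a minimal sketch of a custom matrix-free operator; the `DiagonalOp` type and its field are made up for this example, and only the `*(A,x)` and `adjoint(A)` methods matter:

```julia
using LinearAlgebra

# Illustrative matrix-free operator: applies a diagonal scaling
# elementwise without ever materializing the matrix.
struct DiagonalOp{T}
    d::Vector{T}
end

# The two methods the solvers rely on.
Base.:*(A::DiagonalOp, x::AbstractVector) = A.d .* x
Base.adjoint(A::DiagonalOp) = DiagonalOp(conj.(A.d))

A = DiagonalOp([1.0, 2.0, 4.0])
A * [1.0, 1.0, 1.0]   # -> [1.0, 2.0, 4.0]
```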

## Installation
## Features

Within Julia, use the package manager:
```julia
using Pkg
Pkg.add("RegularizedLeastSquares")
```
This adds the latest release of the package. To install a different version, please consult the [Pkg documentation](https://pkgdocs.julialang.org/dev/managing-packages/#Adding-packages).
* Variety of optimization algorithms optimized for least squares problems
* Support for matrix-free operators
* GPU support

## Usage

* See [Getting Started](@ref) for an introduction to using the package

## See also

Packages:
* [ProximalAlgorithms.jl](https://github.com/JuliaFirstOrder/ProximalAlgorithms.jl)
* [ProximalOperators.jl](https://github.com/JuliaFirstOrder/ProximalOperators.jl)
* [Krylov.jl](https://github.com/JuliaSmoothOptimizers/Krylov.jl)
* [RegularizedOptimization.jl](https://github.com/JuliaSmoothOptimizers/RegularizedOptimization.jl)

Organizations:
* [JuliaNLSolvers](https://github.com/JuliaNLSolvers)
* [JuliaSmoothOptimizers](https://github.com/JuliaSmoothOptimizers)
* [JuliaFirstOrder](https://github.com/JuliaFirstOrder)
69 changes: 69 additions & 0 deletions docs/src/literate/examples/compressed_sensing.jl
@@ -0,0 +1,69 @@
# # Compressed Sensing Example
# In this example we will go through a simple example from the field of Compressed Sensing.
# In addition to RegularizedLeastSquares.jl, we will need the packages [LinearOperatorCollection.jl](https://github.com/JuliaImageRecon/LinearOperatorCollection.jl), [ImagePhantoms.jl](https://github.com/JuliaImageRecon/ImagePhantoms.jl), [ImageGeoms.jl](https://github.com/JuliaImageRecon/ImageGeoms.jl) and Random.jl, as well as [CairoMakie.jl](https://docs.makie.org/stable/) for visualization. We can install them the same way we did RegularizedLeastSquares.jl.

# ## Preparing the Inverse Problem
# To get started, let us generate a simple phantom using the ImagePhantoms and ImageGeom packages:

using ImagePhantoms, ImageGeoms
N = 256
image = shepp_logan(N, SheppLoganToft())
size(image)

# This produces a 256x256 image of a Shepp-Logan phantom.

# In this example, we consider a problem in which we randomly sample a third of the pixels in the image. Such a problem and the corresponding measurement can be constructed with the packages LinearOperatorCollection and Random:

# We first randomly shuffle the indices of the image and then select the first third of the indices to sample.
using Random, LinearOperatorCollection
randomIndices = shuffle(eachindex(image))
sampledIndices = sort(randomIndices[1:div(end, 3)])

# Afterwards we build a sampling operator which samples the image at the selected indices.
A = SamplingOp(eltype(image), pattern = sampledIndices, shape = size(image));

# Then we apply the sampling operator to the vectorized image to obtain the sampled measurement vector
b = A*vec(image);

# To visualize our image we can use CairoMakie:
using CairoMakie
function plot_image(figPos, img; title = "", width = 150, height = 150)
ax = CairoMakie.Axis(figPos; yreversed=true, title, width, height)
hidedecorations!(ax)
heatmap!(ax, img)
end
fig = Figure()
plot_image(fig[1,1], image, title = "Image")
samplingMask = fill(false, size(image))
samplingMask[sampledIndices] .= true
plot_image(fig[1,2], image .* samplingMask, title = "Sampled Image")
resize_to_layout!(fig)
fig

# As we can see in the right image, only a third of the pixels are sampled. The goal of the inverse problem is to recover the original image from this measurement vector.

# ## Solving the Inverse Problem
# To recover the image from the measurement vector, we solve the TV-regularized least squares problem:
# ```math
# \begin{equation}
# \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \lambda\vert\vert\mathbf{x}\vert\vert_{\text{TV}} .
# \end{equation}
# ```

# For this purpose we build a TV regularizer with regularization parameter $λ=0.01$

using RegularizedLeastSquares
reg = TVRegularization(0.01; shape=size(image));

# We will use the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) to solve our inverse problem. Thus, we build the corresponding solver

solver = createLinearSolver(FISTA, A; reg=reg, iterations=20);

# and apply it to our measurement vector
img_approx = solve!(solver,b)

# To visualize the reconstructed image, we need to reshape the result vector to the correct shape. Afterwards we can use CairoMakie again:
img_approx = reshape(img_approx,size(image));
plot_image(fig[1,3], img_approx, title = "Reconstructed Image")
resize_to_layout!(fig)
fig
66 changes: 66 additions & 0 deletions docs/src/literate/examples/computed_tomography.jl
@@ -0,0 +1,66 @@
# # Computed Tomography Example
# In this example we will go through a simple example from the field of Computed Tomography.
# In addition to RegularizedLeastSquares.jl, we will need the packages [LinearOperatorCollection.jl](https://github.com/JuliaImageRecon/LinearOperatorCollection.jl), [ImagePhantoms.jl](https://github.com/JuliaImageRecon/ImagePhantoms.jl), [ImageGeoms.jl](https://github.com/JuliaImageRecon/ImageGeoms.jl)
# and [RadonKA.jl](https://github.com/roflmaostc/RadonKA.jl/tree/main), as well as [CairoMakie.jl](https://docs.makie.org/stable/) for visualization. We can install them the same way we did RegularizedLeastSquares.jl.

# RadonKA is a package for the computation of the Radon transform and its adjoint. It is implemented with KernelAbstractions.jl and supports GPU acceleration. See the GPU acceleration [how-to](../howto/gpu_acceleration.md) for more information.

# ## Preparing the Inverse Problem
# To get started, let us generate a simple phantom using the ImagePhantoms and ImageGeom packages:

using ImagePhantoms, ImageGeoms
N = 256
image = shepp_logan(N, SheppLoganToft())
size(image)

# This produces a 256x256 image of a Shepp-Logan phantom.

using RadonKA, LinearOperatorCollection
angles = collect(range(0, π, 256))
sinogram = Array(RadonKA.radon(image, angles))

# Afterwards we build a Radon operator implementing both the forward and adjoint Radon transform
A = RadonOp(eltype(image); angles, shape = size(image));

# To visualize our image we can use CairoMakie:
using CairoMakie
function plot_image(figPos, img; title = "", width = 150, height = 150)
ax = CairoMakie.Axis(figPos[1, 1]; yreversed=true, title, width, height)
hidedecorations!(ax)
hm = heatmap!(ax, img)
Colorbar(figPos[2, 1], hm, vertical = false, flipaxis = false)
end
fig = Figure()
plot_image(fig[1,1], image, title = "Image")
plot_image(fig[1,2], sinogram, title = "Sinogram")
plot_image(fig[1,3], backproject(sinogram, angles), title = "Backprojection")
resize_to_layout!(fig)
fig

# In the figure we can see our original image, the sinogram, and the backprojection of the sinogram. The goal of the inverse problem is to recover the original image from the sinogram.

# ## Solving the Inverse Problem
# To recover the image from the measurement vector, we solve the $l^2_2$-regularized least squares problem
# ```math
# \begin{equation}
# \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \lambda\vert\vert\mathbf{x}\vert\vert^2_2 .
# \end{equation}
# ```

# For this purpose we build an $l^2_2$ regularization term with regularization parameter $λ=0.001$

using RegularizedLeastSquares
reg = L2Regularization(0.001);

# To solve this inverse problem, the Conjugate Gradient Normal Residual (CGNR) algorithm can be used. This solver is based on the normal operator of the Radon operator and uses both the forward and adjoint Radon transform internally.
# We now build the corresponding solver
solver = createLinearSolver(CGNR, A; reg=reg, iterations=20);

# and apply it to our measurement vector
img_approx = solve!(solver, vec(sinogram))

# To visualize the reconstructed image, we need to reshape the result vector to the correct shape. Afterwards we can use CairoMakie again:
img_approx = reshape(img_approx,size(image));
plot_image(fig[1,4], img_approx, title = "Reconstructed Image")
resize_to_layout!(fig)
fig
51 changes: 51 additions & 0 deletions docs/src/literate/examples/getting_started.jl
@@ -0,0 +1,51 @@
# # Getting Started

# In this example we will go through a simple random inverse problem to get familiar with RegularizedLeastSquares.jl.

# ## Installation

# To install RegularizedLeastSquares.jl, you can use the Julia package manager. Open a Julia REPL and run the following command:

# ```julia
# using Pkg
# Pkg.add("RegularizedLeastSquares")
# ```
# This will download and install the RegularizedLeastSquares.jl package and its dependencies. To install a different version, please consult the [Pkg documentation](https://pkgdocs.julialang.org/dev/managing-packages/#Adding-packages).

# Once the installation is complete, you can import the package with the `using` keyword:


using RegularizedLeastSquares

# RegularizedLeastSquares aims to solve inverse problems of the form:
# ```math
# \begin{equation}
# \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \mathbf{R(x)} .
# \end{equation}
# ```
# where $\mathbf{A}$ is a linear operator, $\mathbf{b}$ is the measurement vector, and $\mathbf{R(x)}$ is an (optional) regularization term.
# The goal is to retrieve an approximation of the unknown vector $\mathbf{x}$. In this first example we will just work with simple random arrays. For more advanced use cases, please refer to the Examples section.

A = rand(32, 16)
x = rand(16)
b = A*x;

# To approximate x from b, we can use the Conjugate Gradient Normal Residual (CGNR) algorithm. We first build the corresponding solver:
solver = createLinearSolver(CGNR, A; iterations=32);

# and apply it to our measurement vector
x_approx = solve!(solver, b)
isapprox(x, x_approx, rtol = 0.001)

# Usually the inverse problems are ill-posed and require regularization. RegularizedLeastSquares.jl provides a variety of regularization terms.
# The CGNR algorithm can solve optimization problems of the form:
# ```math
# \begin{equation}
# \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \lambda\vert\vert\mathbf{x}\vert\vert^2_2 .
# \end{equation}
# ```

# The corresponding solver can be built with the L2 regularization term:
solver = createLinearSolver(CGNR, A; reg = L2Regularization(0.0001), iterations=32);
x_approx = solve!(solver, b)
isapprox(x, x_approx, rtol = 0.001)
