Commit
Fix smaller formating issues in docu
nHackel committed Aug 23, 2024
1 parent 59a3049 commit 12b8ef4
Showing 5 changed files with 9 additions and 8 deletions.
2 changes: 1 addition & 1 deletion docs/src/literate/examples/compressed_sensing.jl
@@ -46,7 +46,7 @@ fig
# To recover the image from the measurement vector, we solve the TV-regularized least squares problem:
# ```math
# \begin{equation}
- # \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \vert\vert\mathbf{x}\vert\vert_{\lambda\text{TV}} .
+ # \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \lambda\vert\vert\mathbf{x}\vert\vert_{\text{TV}} .
# \end{equation}
# ```

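Once the operator `A` and the measurement vector `b` from the collapsed part of this example are in scope, the problem above could be solved along these lines. This is a minimal sketch, not the file's actual code: the regularization weight `0.01` and iteration count are illustrative placeholders, and it assumes `TVRegularization` accepts the image shape as a keyword argument.

```julia
using RegularizedLeastSquares

# Sketch: solve the TV-regularized problem with FISTA.
# Assumes `A` (sampling operator) and `b` (measurements) from the example
# above; λ = 0.01 and the 256×256 shape are illustrative assumptions.
reg = TVRegularization(0.01; shape = (256, 256))
solver = createLinearSolver(FISTA, A; reg = reg, iterations = 20)
img_approx = reshape(solve!(solver, b), 256, 256)
```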
4 changes: 2 additions & 2 deletions docs/src/literate/examples/computed_tomography.jl
@@ -13,7 +13,7 @@ N = 256
image = shepp_logan(N, SheppLoganToft())
size(image)

- # This produces a 64x64 image of a Shepp-Logan phantom.
+ # This produces a 256x256 image of a Shepp-Logan phantom.

using RadonKA, LinearOperatorCollection
angles = collect(range(0, π, 256))
@@ -43,7 +43,7 @@ fig
# To recover the image from the measurement vector, we solve the $l^2_2$-regularized least squares problem
# ```math
# \begin{equation}
- # \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \vert\vert\mathbf{x}\vert\vert^2_2 .
+ # \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \lambda\vert\vert\mathbf{x}\vert\vert^2_2 .
# \end{equation}
# ```

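With the Radon operator `A` and the sinogram `b` from the collapsed part of this example in scope, a sketch of solving this problem with CGNR (mirroring the getting-started example) might look as follows; the regularization weight and iteration count here are illustrative assumptions.

```julia
using RegularizedLeastSquares

# Sketch: CGNR on the l2-regularized problem above. `A` and `b` are the
# Radon operator and sinogram from this example; λ = 0.001 and the
# iteration count are illustrative assumptions.
solver = createLinearSolver(CGNR, A; reg = L2Regularization(0.001), iterations = 20)
img_approx = reshape(solve!(solver, b), 256, 256)
```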
6 changes: 3 additions & 3 deletions docs/src/literate/examples/getting_started.jl
@@ -23,7 +23,7 @@ using RegularizedLeastSquares
# \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \mathbf{R(x)} .
# \end{equation}
# ```
- # where $\mathbf{A}$ is a linear operator, $\mathbf{y}$ is the measurement vector, and $\mathbf{R(x)}$ is an (optional) regularization term.
+ # where $\mathbf{A}$ is a linear operator, $\mathbf{b}$ is the measurement vector, and $\mathbf{R(x)}$ is an (optional) regularization term.
# The goal is to retrieve an approximation of the unknown vector $\mathbf{x}$. In this first example, we will just work with simple random arrays. For more advanced use cases, please refer to the examples section.

A = rand(32, 16)
@@ -41,11 +41,11 @@ isapprox(x, x_approx, rtol = 0.001)
# The CGNR algorithm can solve optimization problems of the form:
# ```math
# \begin{equation}
- # \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \vert\vert\mathbf{x}\vert\vert^2_2 .
+ # \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_2^2 + \lambda\vert\vert\mathbf{x}\vert\vert^2_2 .
# \end{equation}
# ```

- # The corresponding solver can be built with the L2 regularization term:
+ # The corresponding solver can be built with the $l^2_2$-regularization term:
solver = createLinearSolver(CGNR, A; reg = L2Regularization(0.0001), iterations=32);
x_approx = solve!(solver, b)
isapprox(x, x_approx, rtol = 0.001)
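A constructed solver can, assuming `solve!` re-initializes its internal state on each call, be reused for further measurement vectors of the same size without being rebuilt:

```julia
# Reusing the solver for a second measurement vector of the same size
# (assumes solve! resets the solver's internal state on each call).
x2 = rand(16)
b2 = A * x2
x2_approx = solve!(solver, b2)
isapprox(x2, x2_approx, rtol = 0.001)
```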
2 changes: 1 addition & 1 deletion docs/src/literate/howto/weighting.jl
@@ -11,7 +11,7 @@
# In the following, we will solve a weighted least squares problem of the form:
# ```math
# \begin{equation}
- # \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_\mathbf{W}^2 + \vert\vert\mathbf{x}\vert\vert^2_2 .
+ # \underset{\mathbf{x}}{argmin} \frac{1}{2}\vert\vert \mathbf{A}\mathbf{x}-\mathbf{b} \vert\vert_\mathbf{W}^2 + \lambda\vert\vert\mathbf{x}\vert\vert^2_2 .
# \end{equation}
# ```
using RegularizedLeastSquares, LinearOperatorCollection, LinearAlgebra
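For a diagonal weighting matrix $\mathbf{W}$, one way to view this problem is via the identity $\vert\vert\mathbf{A}\mathbf{x}-\mathbf{b}\vert\vert_\mathbf{W}^2 = \vert\vert\sqrt{\mathbf{W}}(\mathbf{A}\mathbf{x}-\mathbf{b})\vert\vert_2^2$: scaling the rows of $\mathbf{A}$ and the entries of $\mathbf{b}$ by the square roots of the weights reduces it to an ordinary $l^2_2$-regularized problem. A sketch of this equivalence, with all names as illustrative assumptions:

```julia
using RegularizedLeastSquares, LinearAlgebra

# Sketch of the whitening equivalence for a diagonal W: scale the rows of A
# and the entries of b by sqrt of the weights, then solve an ordinary
# l2-regularized problem. All names here are illustrative assumptions.
A = rand(32, 16)
x = rand(16)
b = A * x
weights = rand(32) .+ 0.5                  # positive diagonal of W
WA = Diagonal(sqrt.(weights)) * A
Wb = sqrt.(weights) .* b
solver = createLinearSolver(CGNR, WA; reg = L2Regularization(1e-4), iterations = 32)
x_approx = solve!(solver, Wb)
```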
3 changes: 2 additions & 1 deletion docs/src/solvers.md
@@ -82,4 +82,5 @@ SolverVariant(A; kwargs...) = Solver(A, VariantState(kwargs...))

function iterate(solver::Solver, state::VariantState)
# Custom iteration
end
end
```
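As a self-contained illustration of this pattern, a gradient-descent-style solver implementing the Landweber-type update $\mathbf{x} \leftarrow \mathbf{x} + \gamma\mathbf{A}^\intercal(\mathbf{b}-\mathbf{A}\mathbf{x})$ could look like the sketch below. All names here are hypothetical and not part of the package API.

```julia
# Hypothetical minimal solver following the iterate pattern above: a
# Landweber-style iteration x ← x + γ Aᵀ(b - Ax). Names are illustrative
# and not part of the package API.
mutable struct GradientState
    x::Vector{Float64}
    iteration::Int
end

struct GradientSolver{M}
    A::M
    b::Vector{Float64}
    γ::Float64
    iterations::Int
end

Base.length(s::GradientSolver) = s.iterations

function Base.iterate(s::GradientSolver, state = GradientState(zeros(size(s.A, 2)), 0))
    state.iteration >= s.iterations && return nothing
    state.x .+= s.γ .* (s.A' * (s.b .- s.A * state.x))
    state.iteration += 1
    return copy(state.x), state
end

# Iterating the solver yields the intermediate iterates; the last one is
# the approximate solution of Ax = b.
A = [2.0 0.0; 0.0 1.0]
s = GradientSolver(A, [2.0, 1.0], 0.1, 200)
x_final = collect(s)[end]
```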
