Commit 9f48ba7

A few final tweaks to the docs.
kellertuer committed Aug 26, 2024
1 parent 868c02d commit 9f48ba7
Showing 8 changed files with 19 additions and 15 deletions.
Changelog.md (2 changes: 1 addition & 1 deletion)
@@ -5,7 +5,7 @@ All notable Changes to the Julia package `Manopt.jl` will be documented in this
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

- # [0.5.0] unreleased
+ # [0.5.0] – August 29, 2024

This breaking update is mainly concerned with providing a unified experience across all solvers
and with some usability improvements, such that, for example, the different gradient update rules are easier to specify.
docs/src/about.md (12 changes: 6 additions & 6 deletions)
@@ -8,13 +8,13 @@ This Julia package was started and is currently maintained by [Ronny Bergmann](h
Thanks to the following contributors to `Manopt.jl`:

* [Constantin Ahlmann-Eltze](https://const-ae.name) implemented the [gradient and differential `check` functions](helpers/checks.md)
- * [Renée Dornig](https://github.com/r-dornig) implemented the [particle swarm](solvers/particle_swarm.md), the [Riemannian Augmented Lagrangian Method](solvers/augmented_Lagrangian_method.md), the [Exact Penalty Method](solvers/exact_penalty_method.md), as well as the [`NonmonotoneLinesearch`](@ref)
+ * [Renée Dornig](https://github.com/r-dornig) implemented the [particle swarm](solvers/particle_swarm.md), the [Riemannian Augmented Lagrangian Method](solvers/augmented_Lagrangian_method.md), the [Exact Penalty Method](solvers/exact_penalty_method.md), as well as the [`NonmonotoneLinesearch`](@ref). These solvers are also the first ones with modular/exchangeable subsolvers.
* [Willem Diepeveen](https://www.maths.cam.ac.uk/person/wd292) implemented the [primal-dual Riemannian semismooth Newton](solvers/primal_dual_semismooth_Newton.md) solver.
- * [Hajg Jasa](https://www.ntnu.edu/employees/hajg.jasa) implemented the [convex bundle method](solvers/convex_bundle_method.md) and the [proximal bundle method](solvers/proximal_bundle_method.md).
- * Even Stephansen Kjemsås contributed to the implementation of the [Frank Wolfe Method](solvers/FrankWolfe.md) solver
- * Mathias Ravn Munkvold contributed most of the implementation of the [Adaptive Regularization with Cubics](solvers/adaptive-regularization-with-cubics.md) solver
- * [Tom-Christian Riemer](https://www.tu-chemnitz.de/mathematik/wire/mitarbeiter.php) implemented the [trust regions](solvers/trust_regions.md) and [quasi Newton](solvers/quasi_Newton.md) solvers.
- * [Markus A. Stokkenes](https://www.linkedin.com/in/markus-a-stokkenes-b41bba17b/) contributed most of the implementation of the [Interior Point Newton Method](solvers/interior_point_Newton.md)
+ * [Hajg Jasa](https://www.ntnu.edu/employees/hajg.jasa) implemented the [convex bundle method](solvers/convex_bundle_method.md) and the [proximal bundle method](solvers/proximal_bundle_method.md), as well as a default subsolver for each of them.
+ * Even Stephansen Kjemsås contributed to the implementation of the [Frank Wolfe Method](solvers/FrankWolfe.md) solver.
+ * Mathias Ravn Munkvold contributed most of the implementation of the [Adaptive Regularization with Cubics](solvers/adaptive-regularization-with-cubics.md) solver as well as its [Lanczos](@ref arc-Lanczos) subsolver
+ * [Tom-Christian Riemer](https://www.tu-chemnitz.de/mathematik/wire/mitarbeiter.php) implemented the [trust regions](solvers/trust_regions.md) and [quasi Newton](solvers/quasi_Newton.md) solvers as well as the [truncated conjugate gradient descent](solvers/truncated_conjugate_gradient_descent.md) subsolver.
+ * [Markus A. Stokkenes](https://www.linkedin.com/in/markus-a-stokkenes-b41bba17b/) contributed most of the implementation of the [Interior Point Newton Method](solvers/interior_point_Newton.md) as well as its default [Conjugate Residual](solvers/conjugate_residual.md) subsolver
* [Manuel Weiss](https://scoop.iwr.uni-heidelberg.de/author/manuel-weiß/) implemented most of the [conjugate gradient update rules](@ref cg-coeffs)

as well as various [contributors](https://github.com/JuliaManifolds/Manopt.jl/graphs/contributors) providing small extensions, finding small bugs and mistakes and fixing them by opening [PR](https://github.com/JuliaManifolds/Manopt.jl/pulls)s. Thanks to all of you.
docs/src/plans/stepsize.md (6 changes: 5 additions & 1 deletion)
@@ -20,10 +20,14 @@ on the manifold currently under consideration.
Currently, the following step sizes are available

```@docs
+ AdaptiveWNGradient
ArmijoLinesearch
ConstantLength
DecreasingLength
+ NonmonotoneLinesearch
Polyak
+ WolfePowellLinesearch
+ WolfePowellBinaryLinesearch
```
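
For orientation, a hedged sketch of how one of these step sizes might be passed to a solver through the `stepsize=` keyword; the manifold, cost, gradient, and start point below are assumptions for illustration and not part of this commit.

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)^2 / 2      # squared-distance cost to q
grad_f(M, p) = -log(M, p, q)           # its Riemannian gradient

p0 = [1.0, 0.0, 0.0]
# depending on the Manopt.jl version, the line search is constructed with the
# manifold, ArmijoLinesearch(M), or in a keyword-only form, ArmijoLinesearch()
p_opt = gradient_descent(M, f, grad_f, p0; stepsize=ArmijoLinesearch(M))
```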

Some step sizes use the [`max_stepsize`](@ref) function as a rough upper estimate for the trust region size.
@@ -41,7 +45,7 @@ Modules = [Manopt]
Pages = ["plans/stepsize.jl"]
Private = true
Order = [:function, :type]
- Filter = t -> !(t in [Stepsize, ArmijoLinesearch, ConstantLength, DecreasingLength, Polyak])
+ Filter = t -> !(t in [Stepsize, AdaptiveWNGradient, ArmijoLinesearch, ConstantLength, DecreasingLength, NonmonotoneLinesearch, Polyak, WolfePowellLinesearch, WolfePowellBinaryLinesearch ])
```


docs/src/solvers/adaptive-regularization-with-cubics.md (2 changes: 1 addition & 1 deletion)
@@ -21,7 +21,7 @@ AdaptiveRegularizationState

There are several ways to approach the subsolver. The default is the first one.

- ## Lanczos iteration
+ ## [Lanczos iteration](@id arc-Lanczos)

```@docs
Manopt.LanczosState
docs/src/solvers/gradient_descent.md (2 changes: 1 addition & 1 deletion)
@@ -20,10 +20,10 @@ GradientDescentState
A field of the options is the `direction`, a [`DirectionUpdateRule`](@ref), which by default ([`IdentityUpdateRule`](@ref)) just evaluates the gradient, but can be enhanced, for example, to

```@docs
+ AverageGradient
DirectionUpdateRule
IdentityUpdateRule
MomentumGradient
- AverageGradient
Nesterov
```
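
As a hedged illustration (not part of this diff), such a rule is passed through the `direction=` keyword of the gradient descent solver; the manifold, cost, gradient, and start point below are assumed for the example.

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)^2 / 2
grad_f(M, p) = -log(M, p, q)
p0 = [1.0, 0.0, 0.0]

# depending on the Manopt.jl version, the rule may also need the manifold and
# a point to construct, for example MomentumGradient(M, p0)
p_m = gradient_descent(M, f, grad_f, p0; direction=MomentumGradient())
```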

src/solvers/cyclic_proximal_point.jl (4 changes: 2 additions & 2 deletions)
@@ -16,8 +16,8 @@ function show(io::IO, cpps::CyclicProximalPointState)
return print(io, s)
end
_doc_CPPA = """
- cyclic_proximal_point(M, f, proxes_f; kwargs...)
- cyclic_proximal_point(M, mpo; kwargs...)
+ cyclic_proximal_point(M, f, proxes_f, p; kwargs...)
+ cyclic_proximal_point(M, mpo, p; kwargs...)
cyclic_proximal_point!(M, f, proxes_f; kwargs...)
cyclic_proximal_point!(M, mpo; kwargs...)
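
For context, a hedged sketch of calling the updated signature with an explicit start point `p`; the data, cost, and closed-form proximal maps below are assumptions for illustration, not part of this commit.

```julia
using Manopt, Manifolds

M = Sphere(2)
data = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
f(M, p) = sum(distance(M, p, q)^2 / 2 for q in data)
# closed-form proximal map of p ↦ ½ d(p, q)²: move from p towards q by λ/(1+λ)
proxes_f = [(M, λ, p) -> shortest_geodesic(M, p, q, λ / (1 + λ)) for q in data]

p0 = data[1]
p_mean = cyclic_proximal_point(M, f, proxes_f, p0)
```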
src/solvers/proximal_bundle_method.jl (4 changes: 2 additions & 2 deletions)
@@ -216,7 +216,7 @@ with ``X_{q_j} ∈ ∂f(q_j)``, ``p_k`` the last serious iterate,
sub solver, see for example the [`proximal_bundle_method_subsolver`](@ref).
"""
_doc_PBM = """
- proximal_bundle_method(M, f, ∂f, p, kwargs...)
+ proximal_bundle_method(M, f, ∂f, p=rand(M), kwargs...)
proximal_bundle_method!(M, f, ∂f, p, kwargs...)
perform a proximal bundle method ``p^{(k+1)} = $(_tex(:retr))_{p^{(k)}}(-d_k)``,
@@ -259,7 +259,7 @@ $(_note(:OutputSection))

@doc "$(_doc_PBM)"
function proximal_bundle_method(
- M::AbstractManifold, f::TF, ∂f::TdF, p; kwargs...
+ M::AbstractManifold, f::TF, ∂f::TdF, p=rand(M); kwargs...
) where {TF,TdF}
p_star = copy(M, p)
return proximal_bundle_method!(M, f, ∂f, p_star; kwargs...)
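
A hedged usage sketch of the new default start point (the cost and subgradient below are assumptions for illustration): omitting `p` now starts the solver from `rand(M)`.

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)                                # nonsmooth cost
∂f(M, p) = -log(M, p, q) / max(distance(M, p, q), eps())   # one subgradient

p_star = proximal_bundle_method(M, f, ∂f)   # start point defaults to rand(M)
```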
src/solvers/quasi_Newton.jl (2 changes: 1 addition & 1 deletion)
@@ -234,7 +234,7 @@ function quasi_Newton(
M::AbstractManifold,
f::TF,
grad_f::TDF,
-    p;
+    p=rand(M);
evaluation::AbstractEvaluationType=AllocatingEvaluation(),
kwargs...,
) where {TF,TDF}
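
Analogously, a hedged sketch with the cost and gradient assumed for illustration; with the new default, the start point can be omitted.

```julia
using Manopt, Manifolds

M = Sphere(2)
q = [0.0, 0.0, 1.0]
f(M, p) = distance(M, p, q)^2 / 2
grad_f(M, p) = -log(M, p, q)

p_opt = quasi_Newton(M, f, grad_f)   # p defaults to rand(M)
```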
